Thomas Serre
@thomasserre.bsky.social
Computational vision. Deep learning. Center for Computational Brain Science @Brown University. Artificial and Natural Intelligence Toulouse Institute (France). European Laboratory for Learning and Intelligent Systems (ELLIS).
Personal take: Current XAI tools can't yet discover novel mechanisms—they test hypotheses more than reveal the unexpected.

We need better methods NOW, before digital twins become so convincing we stop asking how they work.

📚 Full ref: arxiv.org/abs/2509.17280
📄 doi.org/10.1016/j.neuron.2025.09.039
From Prediction to Understanding: Will AI Foundation Models Transform Brain Science?
Generative pretraining (the "GPT" in ChatGPT) enables language models to learn from vast amounts of internet text without human supervision. This approach has driven breakthroughs across AI by allowing…
arxiv.org
October 24, 2025 at 11:22 AM
Moving beyond prediction means:
- Grounding models in neuroscience/cognitive science theory
- Revealing computations through interpretability/XAI studies
- Generating testable hypotheses to drive experiments

Challenge: turning data-fitting machines into theory-bearing instruments.
October 24, 2025 at 11:22 AM
Yet debate continues: Do high-performing models capture genuine mechanisms, or merely exploit statistical regularities?

Even with perfect predictions, we risk replacing one black box (the brain) with another (a deep neural network).

Explanatory value requires more than fit.
October 24, 2025 at 11:22 AM
Based on successes across AI and science, optimism is growing that scaled models will uncover the true generative processes underlying neural data.

If a model predicts like a brain, has it discovered how the brain works?

Tempting to think so.
October 24, 2025 at 11:22 AM
Join a top interdisciplinary program exploring the intersection of artificial & natural intelligence. Strong ties with the @carneyinstitute.bsky.social, the Center for Computational Brain Science (CCBS), and the new NSF-funded AI Institute (ARIA).
September 23, 2025 at 5:51 PM
📢 Our takeaway: To truly model biological vision, vision science must diverge from conventional AI approaches and develop deep learning methods tailored to the intricacies of biological visual systems.
April 28, 2025 at 1:22 PM
🧠 This divergence suggests that DNNs may adopt visual strategies differing from those used by primates, as highlighted in our previous work on harmonization.
April 28, 2025 at 1:22 PM
🔍 Key finding: As DNNs achieve human or superhuman accuracy, their alignment with primate vision plateaus—and in some cases, deteriorates.
April 28, 2025 at 1:22 PM
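A minimal sketch of what "alignment with primate vision" can mean in practice. The thread does not specify a metric, so this assumes one common choice, neural predictivity: fit a linear (here, ridge) map from model features to recorded neural responses and score the fit on held-out stimuli. All names and the synthetic data below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_predictivity(model_feats, neural_resp, n_train=None, lam=1.0):
    """Held-out R^2 of a ridge map from model features to neural responses.

    Higher scores mean the model's representation linearly explains more
    of the neural variance -- one proxy for "brain alignment".
    """
    n = model_feats.shape[0]
    if n_train is None:
        n_train = n // 2
    Xtr, Xte = model_feats[:n_train], model_feats[n_train:]
    Ytr, Yte = neural_resp[:n_train], neural_resp[n_train:]
    # Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T Y
    d = Xtr.shape[1]
    W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ Ytr)
    pred = Xte @ W
    ss_res = ((Yte - pred) ** 2).sum(axis=0)
    ss_tot = ((Yte - Yte.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))  # mean R^2 over neurons

# Synthetic demo: "neural" responses partially driven by model features.
X = rng.normal(size=(200, 50))                      # 200 stimuli x 50 features
W_true = rng.normal(size=(50, 10))
Y = X @ W_true + 2.0 * rng.normal(size=(200, 10))   # 10 noisy "neurons"
score = neural_predictivity(X, Y)
```

The plateau in the thread's finding would show up here as `score` stalling (or dropping) while a model's classification accuracy keeps improving; benchmarks like Brain-Score apply this style of analysis at scale, typically with cross-validation and noise-ceiling correction omitted from this sketch.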