Shahab Bakhtiari
@shahabbakht.bsky.social
|| assistant prof at University of Montreal || leading the systems neuroscience and AI lab (SNAIL: https://www.snailab.ca/) 🐌 || associate academic member of Mila (Quebec AI Institute) || #NeuroAI || vision and learning in brains and machines
Pinned
So excited to see this preprint released from the lab into the wild.

Charlotte has developed a theory of how learning curriculum influences the generalization of learning.
Our theory makes straightforward neural predictions that can be tested in future experiments. (1/4)

🧠🤖 🧠📈 #MLSky
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
Reposted by Shahab Bakhtiari
We're almost at the end of the year, and that means an end-of-year review! Send me your favorite NeuroAI papers of the year (preprints or published, late last year is fine too).
November 19, 2025 at 4:14 PM
Reposted by Shahab Bakhtiari
I’m really excited about our release of Gemini 3 today, the result of hard work by many, many people in the Gemini team and all across Google! 🎊

blog.google/products/gem...

Gemini 3 performs quite well on a wide range of benchmarks.
November 19, 2025 at 2:53 AM
Reposted by Shahab Bakhtiari
I have so many issues with this podcast with @earlkmiller.bsky.social. I think it nicely shows why I have trouble with such approaches. Let's go through some of the claims.
November 18, 2025 at 8:05 PM
Reposted by Shahab Bakhtiari
🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex
Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...
arxiv.org
November 18, 2025 at 12:37 PM
Reposted by Shahab Bakhtiari
This is an excellent blueprint for a very fascinating use of an AI scientist! And the results are super cool and interesting! 🤩
I have been asked this when talking about our work on using power laws to study representation quality in deep neural networks; glad to have a more concrete answer now! 😃
November 16, 2025 at 10:29 PM
Reposted by Shahab Bakhtiari
paper🚨
When we learn a category, do we learn the structure of the world, or just where to draw the line? In a cross-species study, we show that humans, rats & mice adapt optimally to changing sensory statistics, yet rely on fundamentally different learning algorithms.
www.biorxiv.org/content/10.1...
Different learning algorithms achieve shared optimal outcomes in humans, rats, and mice
Animals must exploit environmental regularities to make adaptive decisions, yet the learning algorithms that enable this flexibility remain unclear. A central question across neuroscience, cognitive science, and machine learning is whether learning relies on generative or discriminative strategies. Generative learners build internal models of the sensory world itself, capturing its statistical structure; discriminative learners map stimuli directly onto choices, ignoring input statistics. These strategies rely on fundamentally different internal representations and entail distinct computational trade-offs: generative learning supports flexible generalisation and transfer, whereas discriminative learning is efficient but task-specific. We compared humans, rats, and mice performing the same auditory categorisation task, where category boundaries and rewards were fixed but sensory statistics varied. All species adapted their behaviour near-optimally, consistent with a normative observer constrained by sensory and decision noise. Yet their underlying algorithms diverged: humans predominantly relied on generative representations, mice on discriminative boundary-tracking, and rats spanned both regimes. Crucially, end-point performance concealed these differences; only learning trajectories and trial-to-trial updates revealed the divergence. These results show that similar near-optimal behaviour can mask fundamentally different internal representations, establishing a comparative framework for uncovering the hidden strategies that support statistical learning.
www.biorxiv.org
November 17, 2025 at 7:18 PM
Reposted by Shahab Bakhtiari
Happy to share my new paper published in @nathumbehav.nature.com: A critical look at statistical power in computational modeling studies, particularly those based on model selection.
www.nature.com/articles/s41...
November 17, 2025 at 6:13 PM
I’m genuinely curious about this. The numbers in the blog are quite impressive.

Has anyone tried it and would like to share their $200 experience?
Today, we're announcing Kosmos, our newest AI Scientist, available today. Kosmos makes fully autonomous scientific discoveries at scale by analyzing datasets and literature, and is the most powerful agent for science so far. Beta users estimate that Kosmos does 6 months of work in a single day.
November 17, 2025 at 4:11 PM
Reposted by Shahab Bakhtiari
Dandi (dandiarchive.org), Brainlife (brainlife.io/about/), etc. are pretty good. But perhaps fostering meaningful interactions between experimentalists and theoreticians is the ultimate solution.
November 17, 2025 at 2:54 PM
Reposted by Shahab Bakhtiari
🧠Our new preprint is out on PsyArXiv!

We study how getting more feedback (seeing what you could have earned) and facing gains vs losses change the way people choose between risky and safe options.
🖇️Link: doi.org/10.31234/osf...

It's a thread🧶:
November 16, 2025 at 12:09 PM
Reposted by Shahab Bakhtiari
Is there an academic/industry divide in attitudes about using AI to support discovery? I noticed this post has 3.6k likes on X but only 6 likes on Bluesky. It deserves more attention here!
Today, we're announcing Kosmos, our newest AI Scientist, available today. Kosmos makes fully autonomous scientific discoveries at scale by analyzing datasets and literature, and is the most powerful agent for science so far. Beta users estimate that Kosmos does 6 months of work in a single day.
November 17, 2025 at 2:32 PM
For any question a theoretical neuroscientist is pondering, there are at least a few relevant datasets out there locked inside individual labs. I also suspect many of those labs would be willing to share their data if there were an easy way to prepare it for public release.
If you try to construct the model to be brain-like, you inevitably face ~100 choices that are severely under-constrained by data, and you just have to muddle through.
November 17, 2025 at 1:07 PM
Reposted by Shahab Bakhtiari
It is actually an incredibly frustrating time to be a theoretical neuroscientist right now imo, for this reason
Same for neuroscience. The lack of ability to measure many neurons’ activity, perturb them, and measure intracellular processes and connections is what limits our understanding of the brain.

The key barriers are not algorithms or AI.

🧪#neuroscience 🧠🤖 #MLSky
November 17, 2025 at 1:23 AM
Reposted by Shahab Bakhtiari
There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.

References should only be used for short-listed candidates for important positions/awards, and should ideally be done via a call to get the most honest opinion possible.
From my discussions with other faculty, the use of generative AI I hear about the most is writing reference letters.

What's the point of having reference letters anymore if everyone is just having them written by machine?
November 14, 2025 at 7:10 PM
Reposted by Shahab Bakhtiari
MiniThread: I was reading this paper and thought it was worth a comment because the results are very counterintuitive to me (and to the authors too)

Miller, G. A., & Selfridge, J. A. (1950). Verbal context and the recall of meaningful material. The American journal of psychology, 63(2), 176-185.
November 13, 2025 at 5:06 PM
Reposted by Shahab Bakhtiari
Want to help shape the SCENE collaboration?! Join us as an executive director: www.cam.ac.uk/jobs/scene-m...
SCENE Manager
The Simons Collaboration on Ecological Neuroscience (SCENE): SCENE is an international consortium of 20 leading researchers in the fields of Computational, Systems and Cognitive Neuroscience, and
www.cam.ac.uk
November 13, 2025 at 5:32 PM
Reposted by Shahab Bakhtiari
Fei-Fei Li’s Worldlabs.ai releases their Marble model and tools. They predict meshes that can be re-styled. Smart. And predictable. It helps solve the predictive impermanence issues with pure pixel-to-pixel world models, and it’s going to work with how game engines already work.
World Labs
World Labs is a spatial intelligence company, building frontier models that can perceive, generate, and interact with the 3D world.
Worldlabs.ai
November 13, 2025 at 4:48 PM
We’re snailposting, post your snails!

I couldn’t let this pass without posting my lab logo for no good reason 🤪 🐌
November 13, 2025 at 3:11 PM
Reposted by Shahab Bakhtiari
Some pretty eye-opening data on the effect of AI coding.

When Cursor added agentic coding in 2024, adopters produced 39% more code merges, with no sign of a decrease in quality (revert rates were the same, bugs dropped) and no sign that the scope of the work shrank. papers.ssrn.com/sol3/papers....
November 13, 2025 at 5:18 AM
Reposted by Shahab Bakhtiari
Excited to share a new preprint, accepted as a spotlight at #NeurIPS2025!

Humans are imperfect decision-makers, and autonomous systems should understand how we deviate from idealized rationality

Our paper aims to address this! 👀🧠✨
arxiv.org/abs/2510.25951

a 🧵⤵️
Estimating cognitive biases with attention-aware inverse planning
People's goal-directed behaviors are influenced by their cognitive biases, and autonomous systems that interact with people should be aware of this. For example, people's attention to objects in their...
arxiv.org
November 13, 2025 at 1:20 PM
Reposted by Shahab Bakhtiari
Following today's political developments I realized there is a disturbing neuro-AI connection to Epstein: apparently Minsky--famed for work on the perceptron--attended Epstein's island and was accused by one of Epstein's victims of r**ing her. Discussed here: en.wikipedia.org/wiki/Marvin_...
November 13, 2025 at 4:31 AM
Reposted by Shahab Bakhtiari
We had a blast observing an EEG experiment at @umontreal.ca École d’optométrie! 🧠

The session showed how the brain responds to natural images, a reminder of the complexity we’re tackling as we build EEG–VR tools.
November 12, 2025 at 7:29 PM
Reposted by Shahab Bakhtiari
Are prompting and activation steering just two sides of the same coin?

The paper formalizes a Bayesian framework for model control: altering a model's "beliefs" over which persona or data source it's emulating. Context (prompting) and internal representations (steering)
November 12, 2025 at 5:42 AM
Reposted by Shahab Bakhtiari
Herzog talks skateboarding
youtu.be/EQLInlnfWUc?...
Discussing Skateboarding with Filmmaker Werner Herzog
YouTube video by jenkemmag
youtu.be
November 12, 2025 at 5:43 AM
Reposted by Shahab Bakhtiari
Breaking: we release SYNTH, a fully synthetic generalist dataset for pretraining, and two new SOTA reasoning models trained exclusively on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 10, 2025 at 5:30 PM