Jean-Rémi King
jeanremiking.bsky.social
Researcher in Neuroscience & AI

CNRS, Ecole Normale Supérieure, PSL
currently detached to Meta
Reposted by Jean-Rémi King
[2/2] 📊 His research uses encoding and decoding approaches to show how modern speech and language models account for brain responses to natural speech, measured with EEG, MEG, iEEG, and fMRI, even in children aged 2 to 12.

📆 November 19–21, 2025.

+info 👇

brainhack-donostia.github.io
September 11, 2025 at 12:39 PM
Congrats Mariya!
September 4, 2025 at 4:49 PM
Thanks to all the great researchers who contributed to this project: Joséphine Raugel, the DINOv3 team, @valentinwyart.bsky.social, FAIR and ENS as well as the open source and open data #NeuroAI communities for making this possible! 🙏
September 3, 2025 at 5:18 AM
Overall, the training of DINOv3 mirrors some striking aspects of brain development: late-acquired representations map onto the cortical areas with e.g. greater expansion and slower timescales, suggesting that DINOv3 spontaneously captures some of the neuro-developmental trajectory.
September 3, 2025 at 5:18 AM
→ Second factor: data type: even models trained only on satellite or cellular images significantly capture brain signals, but the same model trained on natural images yields higher encoding scores across all brain regions.
September 3, 2025 at 5:18 AM
So what are the factors that lead DINOv3 to become brain-like?
→ First factor: model size: bigger models become brain-like faster during training and reach higher brain scores, especially in high-level brain regions.
September 3, 2025 at 5:18 AM
Third, the representations of the visual cortex are typically acquired early on in the training of DINOv3.
By contrast, it requires much more training to learn representations similar to those of the prefrontal cortex.
September 3, 2025 at 5:18 AM
Surprisingly, these encoding, spatial and temporal scores all emerge across training, but at different speeds.
September 3, 2025 at 5:18 AM
Second, DINOv3 learns a representational hierarchy which corresponds to the spatial and temporal hierarchies in the brain.
September 3, 2025 at 5:18 AM
First, we observe that, with training, DINOv3 learns representations that progressively align with those of the human brain.
September 3, 2025 at 5:18 AM
To evaluate how data type, data quantity and model size each lead DINOv3 to more-or-less brain-like activations, we trained and tested several variants:
September 3, 2025 at 5:18 AM
We compare the activations of DINOv3 (ai.meta.com/dinov3/), a SOTA self-supervised computer vision model trained on natural images,
to the activations of the human brain in response to the same images, using both fMRI (naturalscenesdataset.org) and MEG (openneuro.org/datasets/ds0...)
September 3, 2025 at 5:18 AM
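The comparison above relies on encoding models: fitting a linear map from a network's activations to brain responses and scoring how well it predicts held-out data. Here is a minimal sketch of that idea with synthetic data and ridge regression; the variable names (X for DINOv3-like features, Y for fMRI-like voxel responses) and the single fixed alpha are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-image model features and noisy voxel responses
n_images, n_features, n_voxels = 200, 64, 10
X = rng.normal(size=(n_images, n_features))       # "model activations"
W_true = rng.normal(size=(n_features, n_voxels))  # hidden linear mapping
Y = X @ W_true + 0.5 * rng.normal(size=(n_images, n_voxels))  # "brain responses"

# Train/test split
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_features), X_tr.T @ Y_tr)
Y_pred = X_te @ W

def pearson_per_voxel(a, b):
    # Pearson correlation between columns of a and b
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    )

# A simple "brain score": test-set correlation averaged over voxels
brain_score = float(pearson_per_voxel(Y_pred, Y_te).mean())
```

In practice, encoding analyses of this kind typically cross-validate the regularization strength per voxel and compare scores across network layers and training checkpoints, which is how layer-wise and training-time alignment curves like those in the thread are obtained.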
Reposted by Jean-Rémi King
Our first Keynote Speaker this year will be Jean-Rémi King
@jeanremiking.bsky.social (CNRS) who leads the Brain & AI team @metaai.bsky.social. He will be giving an exciting talk on the "Emergence of Language in the Human Brain".
August 22, 2025 at 10:35 AM
Very nice, congrats!
August 16, 2025 at 6:48 AM
Linearly readable information
June 18, 2025 at 6:43 AM