Tim Kietzmann
@timkietzmann.bsky.social
ML meets Neuroscience #NeuroAI, Full Professor at the Institute of Cognitive Science (Uni Osnabrück), prev. @ Donders Inst., Cambridge University
Pinned
We managed to integrate brain scans into LLMs for interactive brain reading and more. Check out Vicky's post below. Super excited about this one!
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n
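For intuition, here is a minimal sketch of the general recipe for fusing brain data into an LLM: project fMRI features into the model's token-embedding space and prepend them as soft tokens. All names, dimensions, and the linear adapter below are illustrative assumptions, not the actual CorText implementation.

```python
# Illustrative sketch (not CorText itself): map an fMRI pattern to a few
# "brain tokens" in the LLM's embedding space and prepend them to the text.
import torch
import torch.nn as nn

class BrainAdapter(nn.Module):
    def __init__(self, n_voxels: int, d_model: int, n_brain_tokens: int = 8):
        super().__init__()
        self.n_brain_tokens = n_brain_tokens
        self.proj = nn.Linear(n_voxels, n_brain_tokens * d_model)

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        # fmri: (batch, n_voxels) -> (batch, n_brain_tokens, d_model)
        b = fmri.shape[0]
        return self.proj(fmri).view(b, self.n_brain_tokens, -1)

# Usage: prepend the brain tokens to the embedded question tokens, then run
# a (typically frozen) LLM on the concatenated sequence.
adapter = BrainAdapter(n_voxels=20000, d_model=4096)   # hypothetical sizes
fmri = torch.randn(1, 20000)
text_embeds = torch.randn(1, 12, 4096)                 # embedded question
llm_input = torch.cat([adapter(fmri), text_embeds], dim=1)
```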
Reposted by Tim Kietzmann
What happens if you hook up an energy-efficiency-optimising RNN to active vision input?

It learns predictive remapping and path integration into allocentric scene coordinates.

Now out in Patterns: www.cell.com/patterns/ful...
November 21, 2025 at 8:01 AM
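A toy version of the kind of objective described above, assuming a vanilla RNN over glimpse sequences, a next-glimpse prediction loss, and an L1 activity penalty as a stand-in for metabolic cost; the paper's actual model and loss may differ.

```python
# Toy energy-efficiency objective for an RNN on active-vision input:
# predict the next glimpse while penalising unit activity ("energy").
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=64, hidden_size=256, batch_first=True)
readout = nn.Linear(256, 64)

glimpses = torch.randn(8, 20, 64)    # (batch, fixations, glimpse features)
h_seq, _ = rnn(glimpses)             # hidden states over the fixation sequence
pred_next = readout(h_seq[:, :-1])   # predict glimpse t+1 from state at t

pred_loss = ((pred_next - glimpses[:, 1:]) ** 2).mean()
energy_loss = h_seq.abs().mean()     # L1 activity penalty as energy proxy
loss = pred_loss + 0.1 * energy_loss
loss.backward()
```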
Reposted by Tim Kietzmann
Excited to share my first paper: Model–Behavior Alignment under Flexible Evaluation: When the Best-Fitting Model Isn’t the Right One (NeurIPS 2025). Link below.
November 20, 2025 at 2:05 PM
Reposted by Tim Kietzmann
Our work reveals a sharp trade-off between predictive accuracy and model identifiability. Flexible mappings maximize predictivity, but blur the distinction between competing computational hypotheses.
November 20, 2025 at 2:05 PM
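The trade-off can be illustrated with a small simulation: under a flexible ridge mapping, a competitor whose features are just a linear transform of the true model's predicts behaviour essentially as well, so predictivity alone cannot distinguish the hypotheses. A sketch, not the paper's analysis.

```python
# Two "models" whose features differ only by a linear transform are nearly
# indistinguishable once a flexible (ridge) mapping is allowed.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
features_a = rng.normal(size=(n, 100))                  # candidate model A
features_b = features_a @ rng.normal(size=(100, 100))   # linearly remixed competitor
behavior = features_a[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n)

for name, X in [("model A", features_a), ("model B", features_b)]:
    score = cross_val_score(Ridge(alpha=1.0), X, behavior, cv=5).mean()
    print(f"{name}: cross-validated R^2 = {score:.3f}")
# Both scores come out high: the flexible mapping absorbs the difference.
```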
Reposted by Tim Kietzmann
🚨 Out in Patterns!

We asked ourselves whether complex neural dynamics like predictive remapping and allocentric coding can emerge from simple physical principles, in this case energy efficiency. Turns out they can!
More information in the 🧵 below.

I am super excited to see this one out in the wild.
November 20, 2025 at 7:47 PM
We went back to the drawing board to think about what information the visual system has available upon which to build scene representations.

The outcome: a self-supervised training objective based on active vision that beats the SOTA on NSD representational alignment. 👇
November 18, 2025 at 2:14 PM
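One plausible member of this family of objectives, sketched under assumptions rather than taken from the paper: treat two glimpses (fixation crops) of the same scene as a positive pair and train a SimCLR-style contrastive loss across scenes.

```python
# Contrastive objective over glimpses: embeddings of two glimpses of the same
# scene should match (diagonal), glimpses of other scenes should not.
import torch
import torch.nn.functional as F

def glimpse_contrastive_loss(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings of paired glimpses of the same scenes
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(z1.shape[0])     # matching glimpse = diagonal entry
    return F.cross_entropy(logits, targets)

loss = glimpse_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
```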
We managed to integrate brain scans into LLMs for interactive brain reading and more. Check out Vicky's post below. Super excited about this one!
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n
November 3, 2025 at 3:21 PM
Reposted by Tim Kietzmann
Figuring out how the brain uses information from visual neurons may require new tools, writes @neurograce.bsky.social. Hear from 10 experts in the field.

#neuroskyence

www.thetransmitter.org/the-big-pict...
Connecting neural activity, perception in the visual system
Figuring out how the brain uses information from visual neurons may require new tools. I asked nine experts to weigh in.
October 13, 2025 at 1:23 PM
Hi, we will have three NeuroAI postdoc openings (3 years each, fully funded) to work with Sebastian Musslick (@musslick.bsky.social), Pascal Nieters, and me on task-switching, replay, and visual information routing.

Reach out if you are interested in any of the above; I'll be at CCN next week!
August 9, 2025 at 8:13 AM
OK, time for a CCN run-up thread. Let me tell you about all the lab’s projects present at CCN this year. #CCN2025
August 8, 2025 at 2:21 PM
A long time coming, now out in @natmachintell.nature.com: Visual representations in the human brain are aligned with large language models.

Check it out (and come chat with us about it at CCN).
August 7, 2025 at 2:16 PM
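For readers new to representational alignment, a generic RSA sketch of how such brain-model comparisons are typically computed (data shapes are illustrative; this is not the paper's exact pipeline): build representational dissimilarity matrices for fMRI patterns and LLM embeddings over the same stimuli, then rank-correlate them.

```python
# Generic representational similarity analysis (RSA) between brain and model.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
brain = rng.normal(size=(100, 5000))   # 100 stimuli x voxels (placeholder data)
llm = rng.normal(size=(100, 768))      # 100 stimuli x LLM embedding dims

rdm_brain = pdist(brain, metric="correlation")  # condensed dissimilarity matrix
rdm_llm = pdist(llm, metric="correlation")
alignment, _ = spearmanr(rdm_brain, rdm_llm)    # rank-correlate the two RDMs
print(f"RSA alignment (Spearman rho): {alignment:.3f}")
```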
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168
July 8, 2025 at 1:04 PM
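The "developmental visual diet" idea lends itself to a curriculum sketch: early training sees blurry, low-contrast input (newborn-like acuity) that gradually sharpens over training. The schedule and parameter values below are illustrative assumptions, not the paper's settings.

```python
# Developmental curriculum as a data transform: blur fades out and contrast
# sensitivity grows as training progresses. Values are illustrative only.
import torchvision.transforms as T

def developmental_transform(progress: float) -> T.Compose:
    # progress in [0, 1]: 0 = newborn-like vision, 1 = adult-like vision
    sigma = max(0.01, 4.0 * (1.0 - progress))   # blur decreases over training
    contrast = 0.3 + 0.7 * progress             # contrast increases over training
    return T.Compose([
        T.ColorJitter(contrast=(contrast, contrast)),
        T.GaussianBlur(kernel_size=9, sigma=sigma),
        T.ToTensor(),
    ])

early_diet = developmental_transform(0.1)   # apply per epoch, e.g. epoch/n_epochs
late_diet = developmental_transform(0.9)
```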
Reposted by Tim Kietzmann
Nice paper by @zejinlu.bsky.social and the group of @timkietzmann.bsky.social, appearing in Nat Human Behav www.nature.com/articles/s41..., showing the properties of a CNN for which you release the weight-sharing constraint. #neuroAI
End-to-end topographic networks as models of cortical map formation and human visual behaviour - Nature Human Behaviour
Lu et al. introduce all-topographic neural networks as a parsimonious model of the human visual cortex.
June 16, 2025 at 6:20 AM
Introducing All-TNNs: Topographic deep neural networks that exhibit ventral-stream-like feature tuning and a better match to human behaviour than the gold standard. Now out in Nature Human Behaviour. 👇
Now out in Nature Human Behaviour @nathumbehav.nature.com : “End-to-end topographic networks as models of cortical map formation and human visual behaviour”. Please check our NHB link: www.nature.com/articles/s41...
June 6, 2025 at 11:00 AM
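Two core ingredients suggested by the abstract, sketched under assumptions rather than taken from the paper's code: a convolution-like layer without weight sharing (every position on the cortical sheet has its own filter), plus a smoothness loss encouraging neighbouring positions to learn similar weights, which is what lets maps form.

```python
# Locally connected (non-weight-shared) layer with a spatial smoothness loss.
import torch
import torch.nn as nn

class LocallyConnected(nn.Module):
    def __init__(self, grid=16, in_dim=27, out_dim=8):
        super().__init__()
        # one weight matrix per position on a grid x grid cortical sheet
        self.w = nn.Parameter(torch.randn(grid, grid, in_dim, out_dim) * 0.01)

    def forward(self, x):  # x: (batch, grid, grid, in_dim)
        return torch.einsum("bhwi,hwio->bhwo", x, self.w)

    def smoothness_loss(self):
        # penalise weight differences between neighbouring sheet positions
        dh = (self.w[1:] - self.w[:-1]).pow(2).mean()
        dw = (self.w[:, 1:] - self.w[:, :-1]).pow(2).mean()
        return dh + dw

layer = LocallyConnected()
out = layer(torch.randn(4, 16, 16, 27))
reg = layer.smoothness_loss()   # add to the task loss during training
```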
Can seemingly complex multi-area computations in the brain emerge from the need for energy-efficient computation? In our new preprint on predictive remapping in active vision, we report on such a case.

Let us take you for a spin. 1/6 www.biorxiv.org/content/10.1...
June 5, 2025 at 1:14 PM
Reposted by Tim Kietzmann
I'd put these on the NeuroAI vision board:

@tyrellturing.bsky.social's Deep learning framework
www.nature.com/articles/s41...

@tonyzador.bsky.social's Next-gen AI through neuroAI
www.nature.com/articles/s41...

@adriendoerig.bsky.social's Neuroconnectionist framework
www.nature.com/articles/s41...
April 28, 2025 at 11:15 PM
Reposted by Tim Kietzmann
🚨 New preprint alert!
Our latest study, led by @DrewLinsley, examines how deep neural networks (DNNs) optimized for image categorization align with primate vision, using neural and behavioral benchmarks.
April 28, 2025 at 1:22 PM
Reposted by Tim Kietzmann
Check out our new paper at #ICLR2025, where we show that multi-task neural decoding is both possible and beneficial.

What's more, the latents of a model trained only on neural activity capture information about brain regions and cell types.

Step by step, we're gonna scale up, folks!

🧠📈 🧪 #NeuroAI
Scaling models across multiple animals was a major step toward building neuro-foundation models; the next frontier is enabling multi-task decoding to expand the scope of training data we can leverage.

Excited to share our #ICLR2025 Spotlight paper introducing POYO+ 🧠

poyo-plus.github.io

🧵
POYO+
POYO+: Multi-session, multi-task neural decoding from distinct cell-types and brain regions
April 25, 2025 at 10:21 PM
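The multi-task decoding pattern in a nutshell, as a generic sketch (the head names and the MLP backbone are hypothetical, not the POYO+ architecture): a shared backbone embeds neural activity into latents, and lightweight task-specific heads decode different targets from those same latents.

```python
# Shared latent backbone with per-task decoding heads.
import torch
import torch.nn as nn

class MultiTaskDecoder(nn.Module):
    def __init__(self, n_channels=128, d_latent=256):
        super().__init__()
        self.backbone = nn.Sequential(        # shared across all tasks
            nn.Linear(n_channels, d_latent), nn.ReLU(),
            nn.Linear(d_latent, d_latent),
        )
        self.heads = nn.ModuleDict({          # hypothetical task heads
            "hand_velocity": nn.Linear(d_latent, 2),
            "cursor_position": nn.Linear(d_latent, 2),
        })

    def forward(self, spikes, task):  # spikes: (batch, n_channels)
        z = self.backbone(spikes)     # shared latents carry the reusable structure
        return self.heads[task](z)

model = MultiTaskDecoder()
vel = model(torch.randn(16, 128), task="hand_velocity")
```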
#CCN2025 abstract acceptances were sent out this morning.

I'll post a summary of each of our projects closer to the conference.

Looking forward to seeing you all in Amsterdam!
April 21, 2025 at 8:40 AM
Neat idea and great insight into the inner workings of MLLMs: they are good at high-level vision but fail at low- and mid-level visual tasks.
If GPT-4o walked into a neuro-ophthalmology clinic, what would it be diagnosed with?

Here we administered 51 tests from 6 clinical and experimental batteries to assess vision in commercial AI models.

Very proud to share this first work from @genetang.bsky.social's PhD!

arxiv.org/abs/2504.10786
Visual Language Models show widespread visual deficits on neuropsychological tests
Visual Language Models (VLMs) show remarkable performance in visual reasoning tasks, successfully tackling college-level challenges that require high-level understanding of images. However, some recen...
April 18, 2025 at 7:33 AM
Reposted by Tim Kietzmann
Top-down feedback is ubiquitous in the brain and computationally distinct, but rarely modeled in deep neural networks. What happens when a DNN has biologically-inspired top-down feedback? 🧠📈

Our new paper explores this: elifesciences.org/reviewed-pre...
Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
April 15, 2025 at 8:11 PM
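One simple way to give a feedforward network top-down feedback, sketched here as an assumption rather than as the paper's connectivity motif: unroll the network over a few timesteps and let the higher layer's activity set a multiplicative gain on the lower layer at the next step.

```python
# Feedforward net unrolled in time, with top-down multiplicative modulation.
import torch
import torch.nn as nn

class TopDownNet(nn.Module):
    def __init__(self, d_in=64, d_low=128, d_high=32, steps=3):
        super().__init__()
        self.steps = steps
        self.low = nn.Linear(d_in, d_low)
        self.high = nn.Linear(d_low, d_high)
        self.feedback = nn.Linear(d_high, d_low)   # top-down pathway

    def forward(self, x):
        gain = torch.ones(x.shape[0], self.low.out_features)
        for _ in range(self.steps):
            low_act = torch.relu(self.low(x)) * gain        # modulated drive
            high_act = torch.relu(self.high(low_act))
            gain = torch.sigmoid(self.feedback(high_act))   # next-step gain
        return high_act

out = TopDownNet()(torch.randn(8, 64))
```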