Jesse Geerts
@jessegeerts.bsky.social
Cognitive neuroscientist and AI researcher
Excited to share that the full programme for the 2025 UCL NeuroAI Annual Conference is now live! 🧠🤖 @ucl-neuroai.bsky.social

Spaces are limited, so register soon:
lnkd.in/ehiKrSaf

Full programme here:
lnkd.in/e8cV3mAZ
October 27, 2025 at 11:15 AM
4. Pre-training ICL models on linear regression tasks changed this outcome. These models then succeeded at transitive inference and didn't rely on induction circuits.
June 6, 2025 at 2:30 PM
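Roughly what an in-context linear regression pre-training episode looks like in setups like this: each sequence interleaves (x, y) pairs drawn from a freshly sampled weight vector, and the model learns to predict each y from the preceding context. A minimal sketch; the dimensions, interleaving scheme, and function name are my own assumptions, not the thread's actual implementation.

```python
import numpy as np

def sample_linear_regression_episode(n_points=16, dim=8, noise_std=0.1, rng=None):
    """Sample one in-context linear regression episode.

    Returns an interleaved (x_1, y_1, ..., x_n, y_n) token sequence plus the
    regression targets; a model would be trained to predict each y_i from the
    preceding context. Shapes and the interleaving are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = rng.normal(size=dim)                      # task-specific weight vector
    xs = rng.normal(size=(n_points, dim))         # query points
    ys = xs @ w + noise_std * rng.normal(size=n_points)

    # Interleave x and y tokens; y is padded to the x width so the whole
    # sequence has a single input dimension for the transformer.
    tokens = np.zeros((2 * n_points, dim))
    tokens[0::2] = xs
    tokens[1::2, 0] = ys
    return tokens, ys

tokens, targets = sample_linear_regression_episode()
print(tokens.shape, targets.shape)   # (32, 8) (16,)
```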
3. Mechanistic analysis revealed why: ICL models developed induction circuits - specialized attention patterns that implement match-and-copy operations rather than encoding hierarchical relationships.
June 6, 2025 at 2:30 PM
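For readers unfamiliar with induction circuits: they can be quantified by how much attention flows from the current token to the positions just after earlier occurrences of that same token, which is the match-and-copy pattern. A rough sketch of such a score, assuming integer token sequences and a per-head attention matrix; the function and scoring details are illustrative, not the analysis from the paper.

```python
import numpy as np

def induction_score(tokens, attn):
    """Score how 'induction-like' one head's attention pattern is.

    tokens: (seq_len,) integer token ids.
    attn:   (seq_len, seq_len) attention weights for one head, where
            attn[i, j] is attention from query position i to key position j.

    An induction head implements match-and-copy: from position i it attends
    to positions j where tokens[j - 1] == tokens[i], i.e. to the token that
    followed a previous occurrence of the current token.
    """
    seq_len = len(tokens)
    score, count = 0.0, 0
    for i in range(1, seq_len):
        # key positions whose *previous* token matches the current token
        match = [j for j in range(1, i + 1) if tokens[j - 1] == tokens[i]]
        if match:
            score += attn[i, match].sum()
            count += 1
    return score / max(count, 1)

# Toy example: on a repeated sequence A B C A B C, an induction head should
# put most attention from the second 'A' onto the first 'B', and so on.
toy = np.array([0, 1, 2, 0, 1, 2])
uniform = np.full((6, 6), 1 / 6)
print(induction_score(toy, uniform))
```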
2. In-context learning models failed to generalize transitively. Despite perfect performance on training pairs, they couldn't infer relationships between non-adjacent items.
June 6, 2025 at 2:29 PM
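To make "non-adjacent" concrete: in an in-context version of the task, the prompt contains the adjacent premise pairs and the query asks about two items that were never paired directly. A toy sketch of that episode layout, with item labels and prompt format assumed purely for illustration.

```python
# One in-context transitive-inference episode: the context lists the adjacent
# premise pairs and the query asks about a held-out, non-adjacent pair.
# Item labels and prompt layout are illustrative assumptions.
items = ["A", "B", "C", "D", "E", "F", "G"]

context = [(items[i], items[i + 1]) for i in range(len(items) - 1)]
query = ("B", "E")   # non-adjacent: the answer follows only by chaining B>C>D>E

prompt = " ".join(f"{a}>{b}" for a, b in context) + f" {query[0]}?{query[1]}"
print(prompt)   # A>B B>C C>D D>E E>F F>G B?E
```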
1. In-weights learning models developed transitive inference despite only seeing adjacent pairs during training. They also showed behavioral patterns consistent with human and animal performance on these tasks.
June 6, 2025 at 2:29 PM
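For context on the task structure behind the thread: training pairs are adjacent items from a ranked list, and the transitive test pairs are all non-adjacent combinations, often grouped by symbolic distance. A small sketch under those assumptions; the seven-item hierarchy is illustrative, not the paper's exact design.

```python
from itertools import combinations

# Transitive-inference task structure: train only on adjacent pairs from a
# ranked list, test on every non-adjacent pair. Seven items and the
# symbolic-distance grouping are illustrative assumptions.
items = list("ABCDEFG")   # A outranks B outranks C ...

train_pairs = [(items[i], items[i + 1]) for i in range(len(items) - 1)]
test_pairs = [p for p in combinations(items, 2) if p not in train_pairs]

# Symbolic distance = separation of the two items in the hierarchy;
# humans and animals are typically more accurate at larger distances.
by_distance = {}
for a, b in test_pairs:
    d = items.index(b) - items.index(a)
    by_distance.setdefault(d, []).append((a, b))

print(train_pairs)
for d, pairs in sorted(by_distance.items()):
    print(d, pairs)
```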