Erin Grant
@eringrant.me
Senior Research Fellow @ ucl.ac.uk/gatsby & sainsburywellcome.org

{learning, representations, structure} in 🧠💭🤖
my work 🤓: eringrant.github.io

not active: sigmoid.social/@eringrant @eringrant@sigmoid.social, twitter.com/ermgrant @ermgrant
Hoping you find out and share! 🤗
October 3, 2025 at 3:36 AM
Congrats Richard!!
September 23, 2025 at 2:03 PM
many thanks to my collaborators, @saxelab.bsky.social and especially Lukas :)
August 13, 2025 at 3:45 PM
I like how Rosa Cao (sites.google.com/site/luosha) & @dyamins.bsky.social speculated about task constraints here (doi.org/10.1016/j.co...). I think the Platonic Representation Hypothesis is a version of their argument, for multi-modal learning.
August 13, 2025 at 2:20 PM
Definitely! Task constraints certainly play a role in determining representational structure, which might interact with what we consider here (efficiency of implementation). We don't explicitly study it. Someone should!
August 13, 2025 at 2:19 PM
Main takeaway: Valid representational comparison relies on implicit assumptions (task-optimization *plus* efficient implementation). ⚠️ More work to do on making these assumptions explicit!

🧠 CCN poster (today): 2025.ccneuro.org/poster/?id=w...

📄 ICML paper (July): icml.cc/virtual/2025/poster/44890
ICML Poster: Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks (ICML 2025)
icml.cc
August 13, 2025 at 11:31 AM
Our theory predicts that representational alignment is consistent with *efficient* implementation of similar function. Comparing representations is ill-posed in general, but becomes well-posed under minimum-norm constraints, which we link to computational advantages (noise robustness).
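A toy numpy check of that link, under a simple additive weight-noise assumption (my illustration here, not the analysis in the paper): among factorizations of the same linear map, the balanced, minimum-norm one degrades least when the weights are perturbed.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 6, 4
F = rng.standard_normal((d_out, d_in))          # target linear map
U, s, Vt = np.linalg.svd(F, full_matrices=False)

W1 = np.diag(np.sqrt(s)) @ Vt                   # balanced, minimum-norm solution
W2 = U @ np.diag(np.sqrt(s))
G = np.diag(rng.uniform(3.0, 6.0, size=len(s))) # unbalanced rescaling
W1_u, W2_u = G @ W1, W2 @ np.linalg.inv(G)      # same function, larger norm

def mean_error(W2_, W1_, sigma=0.01, trials=2000):
    # Average output error under i.i.d. Gaussian perturbations of both layers.
    errs = []
    for _ in range(trials):
        E1 = sigma * rng.standard_normal(W1_.shape)
        E2 = sigma * rng.standard_normal(W2_.shape)
        errs.append(np.linalg.norm((W2_ + E2) @ (W1_ + E1) - F))
    return np.mean(errs)

print(mean_error(W2, W1), mean_error(W2_u, W1_u))  # min-norm degrades less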
August 13, 2025 at 11:31 AM
Function-representation dissociations and the representation-computation link persist in deep nonlinear networks! Using function-invariant reparametrisations (@bsimsek.bsky.social), we break representational identifiability, but in doing so degrade generalization (a computational consequence).
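The simplest instance of such a reparametrisation in a ReLU network is per-unit positive rescaling; a minimal numpy sketch (the family used in the paper may be richer):

import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)
d_in, d_hidden, d_out = 5, 7, 2

W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))

a = rng.uniform(0.1, 10.0, size=d_hidden)       # positive per-unit scales
W1_s, W2_s = a[:, None] * W1, W2 / a[None, :]   # relu(a*z) = a*relu(z) for a > 0

x = rng.standard_normal(d_in)
h, h_s = relu(W1 @ x), relu(W1_s @ x)
assert np.allclose(W2 @ h, W2_s @ h_s)          # same function...
print(np.linalg.norm(h - h_s))                  # ...different representation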
August 13, 2025 at 11:31 AM
We demonstrate that representation analysis and comparison are ill-posed, giving both false negatives and false positives, unless we work with *task-specific representations*. These are interpretable *and* robust to noise (i.e., representational identifiability comes with computational advantages).
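A concrete false negative under one common metric (linear CKA; an assumed choice for illustration): two linear networks compute the identical function, yet their hidden representations score as dissimilar.

import numpy as np

def linear_cka(X, Y):
    # X, Y: (n_samples, n_units) activation matrices, column-centred.
    X, Y = X - X.mean(0), Y - Y.mean(0)
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den

rng = np.random.default_rng(2)
d_in, d_hidden, d_out, n = 5, 8, 3, 500
W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))
G = rng.standard_normal((d_hidden, d_hidden))   # invertible w.h.p.

X = rng.standard_normal((n, d_in))
H1 = X @ W1.T                                   # network 1's hidden reps
H2 = X @ (G @ W1).T                             # network 2: same function
assert np.allclose(H1 @ W2.T, H2 @ (W2 @ np.linalg.inv(G)).T)
print(linear_cka(H1, H2))                       # well below 1: false negative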
August 13, 2025 at 11:31 AM
We parametrised this solution hierarchy to find differences in the handling of task-irrelevant dimensions: Some solutions compress them away (creating task-specific, interpretable representations), while others preserve arbitrary structure in null spaces (creating arbitrary, uninterpretable representations).
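A toy numpy version of the two extremes (an assumed construction for illustration, not our exact parametrisation): extra hidden units can sit silent, or carry arbitrary input structure that the readout's null space hides.

import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, rank, extra = 6, 3, 3, 4

W1 = rng.standard_normal((rank, d_in))
W2 = rng.standard_normal((d_out, rank))
F = W2 @ W1                                     # the task both nets solve

W1_compressed = np.vstack([W1, np.zeros((extra, d_in))])  # extra units silent
A = rng.standard_normal((extra, d_in))                    # arbitrary structure
W1_cluttered = np.vstack([W1, A])                         # extra units active
W2_pad = np.hstack([W2, np.zeros((d_out, extra))])        # readout null space

x = rng.standard_normal(d_in)
assert np.allclose(W2_pad @ (W1_compressed @ x), F @ x)   # same function,
assert np.allclose(W2_pad @ (W1_cluttered @ x), F @ x)    # same function,
print(W1_compressed @ x, W1_cluttered @ x, sep="\n")      # different reps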
August 13, 2025 at 11:31 AM
To analyse this dissociation in a tractable model of representation learning, we characterize *all* task solutions for two-layer linear networks. Within this solution manifold, we identify a solution hierarchy in terms of what implicit objectives are minimized (in addition to the task objective).
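One slice of that solution manifold in numpy (a sketch only: the invertible-reparametrisation family around the balanced SVD solution, assuming total weight norm as the implicit objective):

import numpy as np

rng = np.random.default_rng(4)
d_in, d_out = 6, 4
F = rng.standard_normal((d_out, d_in))          # target linear task
U, s, Vt = np.linalg.svd(F, full_matrices=False)

W1 = np.diag(np.sqrt(s)) @ Vt                   # balanced factorization
W2 = U @ np.diag(np.sqrt(s))
assert np.allclose(W2 @ W1, F)                  # zero task loss

G = np.eye(len(s)) + 0.5 * rng.standard_normal((len(s), len(s)))
W1_g, W2_g = G @ W1, W2 @ np.linalg.inv(G)      # another zero-loss solution
assert np.allclose(W2_g @ W1_g, F)

total_norm = lambda A, B: np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2
print(total_norm(W2, W1), total_norm(W2_g, W1_g))  # balanced sits lowest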
August 13, 2025 at 11:31 AM
Deep networks have parameter symmetries, so we can walk through solution space, changing all weights and representations, while keeping output fixed. In the worst case, function and representation are *dissociated*.

(Networks can have the same function with the same or different representation.)
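In a two-layer linear net, this walk is just an invertible change of basis on the hidden layer; a minimal numpy sketch:

import numpy as np

rng = np.random.default_rng(5)
d_in, d_hidden, d_out = 5, 8, 3

W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))
G = rng.standard_normal((d_hidden, d_hidden))   # invertible w.h.p.

W1_new, W2_new = G @ W1, W2 @ np.linalg.inv(G)  # walk along the symmetry

x = rng.standard_normal(d_in)
h, h_new = W1 @ x, W1_new @ x
assert np.allclose(W2 @ h, W2_new @ h_new)      # output fixed...
print(np.linalg.norm(h - h_new))                # ...representation changed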
August 13, 2025 at 11:31 AM
Want to contribute to this debate at #CCN2025? Please come to our session today, fill out the anonymous survey (forms.gle/yDBBcBZybGjogksC8), and comment on the GAC page (sites.google.com/ccneuro.org/gac2020/gacs-by-year/2025-gacs/2025-1)! Your perspectives will shape our eventual GAC paper. 👥
August 13, 2025 at 7:01 AM
This GAC focuses on three debates/questions around benchmarks in cognitive science (the what, why, and how): (1) Should data or theory come first? (2) Should we focus on replication or exploration? (3) What incentives should we build up, if we choose to invest effort as a community?
August 13, 2025 at 7:01 AM