Ann Huang
@annhuang42.bsky.social
Comp Neuro, ML, Dynamical Systems 🧠🤖 PhD student at Harvard & Kempner Institute. Prev at McGill, Mila, EPFL.
Our results:
- support the contravariance principle (Cao & @dyamins.bsky.social)
- reveal when weight- and dynamics-level variability move together (or in opposite directions)
- give "knobs" for controlling degeneracy, whether you're studying shared mechanisms or individual variability in task-trained RNNs.
4️⃣ Regularization (L1, low-rank)
Both types of structural regularization reduce degeneracy across all levels. Regularization nudges networks toward more consistent, shared solutions.
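A minimal sketch of the two regularizers named above, assuming PyTorch; the class and function names are illustrative, not the paper's code:

```python
import torch
import torch.nn as nn

class LowRankRNNCell(nn.Module):
    """Vanilla RNN cell whose recurrent matrix is constrained to rank r (J = U V^T)."""
    def __init__(self, n_in, n_hidden, rank):
        super().__init__()
        self.W_in = nn.Linear(n_in, n_hidden)
        self.U = nn.Parameter(torch.randn(n_hidden, rank) / n_hidden ** 0.5)
        self.V = nn.Parameter(torch.randn(n_hidden, rank) / n_hidden ** 0.5)

    def forward(self, x, h):
        # Recurrence routed through the rank-r factors instead of a full matrix.
        return torch.tanh(self.W_in(x) + h @ self.V @ self.U.T)

def l1_penalty(model, lam=1e-4):
    """L1 weight regularization: add this term to the task loss before backprop."""
    return lam * sum(p.abs().sum() for p in model.parameters())
```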
3️⃣ Network size
When we fix feature learning (using µP), larger RNNs converge to more consistent solutions at all levels — weights, dynamics, and behavior.
A clean convergence-with-scale effect, demonstrated here directly in task-trained RNNs.
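A minimal sketch of width-aware initialization in the spirit of µP (Yang & Hu), assuming a vanilla RNN in PyTorch; the exponents follow the standard µP prescriptions, and the paper's exact parametrization may differ:

```python
import torch

def mup_init(n_in, n_hidden, n_out):
    """Width-aware initialization in the spirit of µP (Yang & Hu)."""
    W_in  = torch.randn(n_hidden, n_in)                      # input weights: O(1) entries
    W_rec = torch.randn(n_hidden, n_hidden) / n_hidden**0.5  # recurrent: variance 1/width
    W_out = torch.randn(n_out, n_hidden) / n_hidden          # readout: std 1/width
    return W_in, W_rec, W_out

# Under µP with Adam, the hidden/readout learning rates are also scaled
# roughly like 1/width, so feature learning stays comparable as width grows.
```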
2️⃣ Learning regime
We then causally tested feature learning’s effect on degeneracy using µP scaling. Stronger feature learning reduces dynamical degeneracy & increases weight degeneracy (like harder tasks).
It also increases behavioral degeneracy under out-of-distribution (OOD) inputs (likely due to overfitting).
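The paper dials feature learning via µP; as a concrete illustration of the same idea, here is the related lazy-training rescaling of Chizat & Bach, a minimal sketch and not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def scaled_loss(net, x, y, alpha):
    """Train on (1/alpha^2) * L(alpha * f(x), y): large alpha pushes toward the
    lazy/kernel regime (weak feature learning), small alpha toward the rich
    regime (strong feature learning)."""
    return F.mse_loss(alpha * net(x), y) / alpha ** 2
```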
1️⃣ Task complexity
As tasks get harder, we observe less degeneracy in dynamics/behavior, but more degeneracy in the weights.

When trained on harder tasks, RNNs converge to similar neural dynamics and OOD behavior, but their weight configurations diverge. Why?
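For concreteness, a hedged sketch of how weight-level vs. dynamics-level degeneracy could be quantified across a population of trained RNNs; the metric choices (Frobenius distance, linear CKA) are illustrative assumptions, not necessarily the paper's:

```python
import itertools
import numpy as np

def dispersion(items, dist):
    """Mean pairwise distance over a population: a simple degeneracy score."""
    return np.mean([dist(a, b) for a, b in itertools.combinations(items, 2)])

def weight_dist(Wa, Wb):
    # Weight-level: plain Frobenius distance between recurrent matrices.
    return np.linalg.norm(Wa - Wb)

def dynamics_dist(Ha, Hb):
    # Dynamics-level: 1 - linear CKA between hidden-state trajectories
    # (Ha, Hb: (trials*time, units) arrays recorded on identical inputs).
    # CKA ignores rotations of state space, so it compares geometry,
    # not coordinates.
    Ha = Ha - Ha.mean(0)
    Hb = Hb - Hb.mean(0)
    num = np.linalg.norm(Ha.T @ Hb) ** 2
    den = np.linalg.norm(Ha.T @ Ha) * np.linalg.norm(Hb.T @ Hb)
    return 1.0 - num / den
```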
Using 3,400 RNNs across 4 neuroscience-relevant tasks (flip-flop memory, working memory, pattern generation, path integration), we systematically varied (toy sweep sketched after the list):
- task complexity
- learning regime
- network size
- regularization
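A toy sketch of that sweep; every value below is a placeholder, since the thread doesn't give the exact settings:

```python
import itertools

tasks  = ["flip_flop", "working_memory", "pattern_generation", "path_integration"]
widths = [64, 128, 256, 512]         # network size (hypothetical)
regs   = [None, "l1", "low_rank"]    # regularization
alphas = [0.5, 1.0, 4.0]             # feature-learning / richness knob

grid = list(itertools.product(tasks, widths, regs, alphas))
# Several seeds per configuration gets you into the thousands of RNNs.
```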

Our findings:
📍Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!

arxiv.org/pdf/2410.03972

It started from a question I kept running into:

When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️
November 24, 2025 at 4:43 PM