aaditya6284.bsky.social (@aaditya6284.bsky.social)
Huge shoutout to my amazing collaborators: Ted Moskovitz, Sara Dragutinovic, Felix Hill, @scychan.bsky.social, @saxelab.bsky.social. Check out the full paper (arxiv.org/abs/2503.05631) for more (including a cool connection to circuit superposition!). (11/11)
Strategy Coopetition Explains the Emergence and Transience of In-Context Learning
In-context learning (ICL) is a powerful ability that emerges in transformer models, enabling them to learn from context without weight updates. Recent work has established emergent ICL as a transient ...
March 11, 2025 at 7:13 AM
We hope our work deepens the dynamical understanding of how transformers learn, here applied to the emergence and transience of ICL. We're excited to see where else coopetition pops up, and more generally how different strategies influence each other through training. (10/11)
Finally, we carry forward the intuitions from the minimal mathematical model to find a setting where ICL is emergent and persistent. The intervention also works at larger scales, demonstrating the benefits of the improved mechanistic understanding! (9/11)
We propose a minimal model of the joint competitive-cooperative ("coopetitive") interactions, which captures the key transience phenomena. We were pleasantly surprised when the model even captured weird non-monotonicities in the formation of the slower mechanism! (8/11)
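To give a flavour of such a model, here is a deliberately crude sketch in Python (illustrative only, not the paper's actual equations): a fast strategy strength (ICL) and a slow one (CIWL), where the slow strategy gets a cooperative boost from the fast one while measured behaviour is decided by a competitive softmax between them.

```python
import numpy as np

# Crude illustrative dynamics (not the paper's model): "icl" learns fast but
# saturates low; "ciwl" learns slowly, saturates higher, and gets a cooperative
# boost proportional to icl (shared substructure). Measured behaviour is a
# competitive softmax over the two strengths.
def simulate(steps=4000, fast=0.01, slow=0.001, coop=0.004, temp=0.1):
    icl, ciwl, icl_behaviour = 0.0, 0.0, []
    for _ in range(steps):
        icl += fast * (1.0 - icl)                    # fast, saturates at 1
        ciwl += (slow + coop * icl) * (2.0 - ciwl)   # slow + cooperative boost, saturates at 2
        w = np.exp(np.array([icl, ciwl]) / temp)     # competition for behaviour
        icl_behaviour.append(w[0] / w.sum())
    return np.array(icl_behaviour)

b = simulate()
print(f"ICL-like behaviour: start {b[0]:.2f}, peak {b.max():.2f} at step {b.argmax()}, end {b[-1]:.2f}")
# Rises early, then gives way to the slower strategy: emergence, then transience.
```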
But why does ICL emerge in the first place, if only to give way to CIWL? The ICL solution lies close to the path to the CIWL strategy. Since ICL also helps with the task (and CIWL is "slow"), it emerges on the way to the CIWL strategy due to the cooperative interactions. (7/11)
Specifically, we find that Layer 2 circuits (the canonical "induction head") are largely conserved (after an initial phase change), while Layer 1 circuits switch from attending to the previous token to attending to the current token itself, driving the switch from ICL to CIWL. (6/11)
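A rough sketch of the kind of diagnostic this implies (hypothetical helper, not the paper's analysis code): given a Layer 1 attention pattern, compare the mass placed on the previous token against the mass placed on the current token.

```python
import numpy as np

def prev_vs_self(attn):
    """Average attention to the previous token vs. the current token (toy seq x seq pattern)."""
    prev = np.mean([attn[t, t - 1] for t in range(1, len(attn))])
    self_ = np.mean(np.diag(attn))
    return prev, self_

T = 6
early = 0.9 * np.eye(T, k=-1) + 0.1 * np.eye(T)   # ICL phase: mostly previous-token attention
late = 0.1 * np.eye(T, k=-1) + 0.9 * np.eye(T)    # CIWL phase: mostly self attention
print(prev_vs_self(early), prev_vs_self(late))     # ~(0.9, 0.1) vs ~(0.1, 0.9)
```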
This strategy is implemented through attention heads serving as skip-trigram copiers (e.g., … [label] … [query] -> [label]). While seemingly distinct from the induction circuits that lead to ICL, we find remarkably shared substructure! (5/11)
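A toy contrast between the two patterns (illustrative only, not circuit-level analysis): induction-style matching looks up where the query appeared before and copies the token that followed it, while skip-trigram copying attends straight back to a label token from the query position.

```python
def induction_match(sequence):
    """Induction-style: find the query's earlier occurrence and copy the token after it."""
    query = sequence[-1]
    for i in range(len(sequence) - 2, -1, -1):
        if sequence[i] == query:
            return sequence[i + 1]
    return None

def skip_trigram_copy(sequence, label_tokens):
    """Skip-trigram-style: from the query position, attend back to a label token and copy it."""
    labels_seen = [t for t in sequence[:-1] if t in label_tokens]
    return labels_seen[-1] if labels_seen else None

seq = ["x3", "A", "x7", "B", "x3"]         # exemplar, label, exemplar, label, query
print(induction_match(seq))                # -> "A" (the label that followed x3 earlier)
print(skip_trigram_copy(seq, {"A", "B"}))  # -> "B" (a label token copied from context)
```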
We find that the asymptotic mechanism preferred by the model is actually a hybrid strategy, which we term context-constrained in-weights learning (CIWL): the network relies on its exemplar-label mapping from training, but requires the correct label in context. (4/11)
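A toy way to see how the three strategies come apart on the same evaluation sequence (illustrative only; the token names and `train_map` below are made up):

```python
train_map = {"cat_img": "A", "dog_img": "B"}     # memorised exemplar-label mapping from "training"

def icl(context, query):    # copy whatever label the context pairs with the query exemplar
    return dict(context).get(query)

def iwl(context, query):    # ignore the context, use the trained mapping
    return train_map.get(query)

def ciwl(context, query):   # use the trained mapping, but only if that label is present in context
    label = train_map.get(query)
    return label if label in dict(context).values() else None

flipped = [("cat_img", "B"), ("dog_img", "A")]   # OOD context with swapped labels
no_label = [("dog_img", "B")]                    # context missing the query's trained label
for strategy in (icl, iwl, ciwl):
    print(strategy.__name__, strategy(flipped, "cat_img"), strategy(no_label, "cat_img"))
# icl  B None   (follows the context)
# iwl  A A      (pure in-weights)
# ciwl A None   (in-weights answer, but only when "A" appears somewhere in context)
```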
Like prior work, we train on sequences of exemplar-label pairs, which permit in-context and in-weights strategies. We test for these strategies using out-of-distribution evaluation sequences, recovering the classic transience phenomenon (blue). (3/11)
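For a rough sense of the setup (a sketch, not the exact construction from prior work): each sequence is a few exemplar-label pairs followed by a query exemplar, and the OOD evaluations relabel the context so that a context-following (ICL) answer and a memorised (IWL) answer disagree.

```python
import random

def make_sequence(class_to_exemplars, class_to_label, n_pairs=4, relabel=False):
    """Build [exemplar, label, ..., exemplar, label, query_exemplar] plus the query's context label."""
    classes = random.sample(list(class_to_exemplars), n_pairs)
    labels = {c: class_to_label[c] for c in classes}
    if relabel:  # OOD eval: shuffle labels within the context so they can contradict training
        labels = dict(zip(classes, random.sample(list(labels.values()), n_pairs)))
    query_class = random.choice(classes)
    seq = []
    for c in classes:
        seq += [random.choice(class_to_exemplars[c]), labels[c]]
    seq.append(random.choice(class_to_exemplars[query_class]))
    return seq, labels[query_class]

exemplars = {c: [f"x{c}_{i}" for i in range(3)] for c in range(8)}   # toy classes
class_labels = {c: f"y{c}" for c in range(8)}
print(make_sequence(exemplars, class_labels, relabel=True))
```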
We use the transience of emergent in-context learning (ICL) as a case study, first reproducing it in a 2-layer attention-only model to enable mechanistic study. (2/11)
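For readers who want the shape of that model in code, here is a bare-bones sketch of a 2-layer, attention-only transformer (single head, no MLPs or LayerNorm, positional embeddings omitted; the sizes are placeholders, not the ones used in the paper):

```python
import numpy as np

def attn_block(x, Wq, Wk, Wv, Wo):
    """One attention-only block: causal single-head self-attention plus residual connection."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores = np.where(np.tril(np.ones_like(scores)) == 1, scores, -1e9)   # causal mask
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)
    return x + (attn @ v) @ Wo

def two_layer_attention_only(tokens, emb, unembed, layers):
    x = emb[tokens]                       # token embeddings (positional part omitted for brevity)
    for layer in layers:                  # exactly two attention-only blocks
        x = attn_block(x, *layer)
    return x @ unembed                    # logits over the vocabulary

rng = np.random.default_rng(0)
V, D, T = 32, 16, 10                      # placeholder vocab size, width, sequence length
emb, unembed = rng.normal(size=(V, D)), rng.normal(size=(D, V))
layers = [tuple(0.1 * rng.normal(size=(D, D)) for _ in range(4)) for _ in range(2)]
print(two_layer_attention_only(rng.integers(0, V, size=T), emb, unembed, layers).shape)  # (10, 32)
```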
x.com/Aaditya6284/...