Somin W
@sominw.bsky.social
cs phd @ northeastern.
📋 Why it matters: interpretable, controllable compression, with students that learn how the teacher thinks. We also see faster, cleaner training dynamics compared to baselines. Preprint + details: arxiv.org/abs/2509.25002 (4/4)
Circuit Distillation
Model distillation typically focuses on behavioral mimicry, where a student model is trained to replicate a teacher's output while treating its internal computations as a black box. In this work we pr...
arxiv.org
September 30, 2025 at 11:32 PM
📘 How it works (high level): identify the teacher’s task circuit → find functionally analogous student components via ablation → align their internals during training. Outcome: the student learns the same computation, not just the outputs. (3/4)
September 30, 2025 at 11:32 PM
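Reading step three ("align their internals during training") literally, one plausible form is a loss that combines ordinary output distillation with a term pulling the matched student components toward the teacher's circuit activations. Below is a minimal sketch under that reading, assuming HuggingFace-style models that expose hidden states; `matched_pairs`, `proj`, `align_weight`, and the layer-level (rather than head-level) alignment are illustrative simplifications, not the paper's implementation:

```python
# Illustrative sketch only -- the matching and hyperparameters are hypothetical,
# not the released circuit-distillation code.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, batch, matched_pairs, proj, align_weight=1.0):
    """One training step: behavioral KD loss + alignment of matched components.

    matched_pairs: list of (teacher_layer_idx, student_layer_idx) pairs, e.g.
        found by ablating student components and keeping those whose removal
        hurts the task the way removing the teacher's circuit does.
    proj: one learned linear map per pair, from student to teacher hidden size.
    """
    with torch.no_grad():
        t_out = teacher(**batch, output_hidden_states=True)
    s_out = student(**batch, output_hidden_states=True)

    # Standard behavioral distillation: match the teacher's output distribution.
    kd_loss = F.kl_div(
        F.log_softmax(s_out.logits, dim=-1),
        F.softmax(t_out.logits, dim=-1),
        reduction="batchmean",
    )

    # Circuit alignment: pull matched student activations toward the
    # teacher-circuit activations (MSE after a learned projection).
    align_loss = 0.0
    for (t_idx, s_idx), p in zip(matched_pairs, proj):
        align_loss = align_loss + F.mse_loss(
            p(s_out.hidden_states[s_idx]), t_out.hidden_states[t_idx]
        )

    return kd_loss + align_weight * align_loss
```

A head-level variant would instead hook the attention outputs of the specific heads identified by ablation and freeze everything else, which is consistent with the next post's point that only a small fraction of attention heads needs to be updated.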
🍎 On Entity Tracking and Theory of Mind, a student that updates only ~11–15% of attention heads inherits the teacher’s capability and closes much of the gap: targeted transfer beats brute-force fine-tuning. (2/4)
September 30, 2025 at 11:32 PM
5️⃣ Attribution techniques could help trace distillation practices, ensuring compliance with model usage policies & improving transparency in AI systems. [6/6]

🔗 Dive into the full details: arxiv.org/abs/2502.06659
Who Taught You That? Tracing Teachers in Model Distillation
Model distillation -- using outputs from a large teacher model to teach a small student model -- is a practical means of creating efficient models for a particular task. We ask: Can we identify a stud...
arxiv.org
February 11, 2025 at 5:16 PM
4️⃣ Our analysis spans summarization, question answering, and instruction following, using models like Llama, Mistral, and Gemma as teachers. Across tasks, PoS templates consistently outperformed n-grams in distinguishing teachers 📊 [5/6]
February 11, 2025 at 5:16 PM
3️⃣ But here’s the twist: Syntactic patterns (like Part-of-Speech templates) do retain strong teacher signals! Students unconsciously mimic structural patterns from their teacher, leaving behind an identifiable trace 🧩 [4/6]
February 11, 2025 at 5:16 PM
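One way to picture the PoS-template idea: tag each student output, collapse it to part-of-speech sequences, and compare that profile against each candidate teacher's. A rough sketch using NLTK (requires its tokenizer and tagger data); the template length, counting scheme, and cosine comparison are assumptions for illustration, not the paper's exact method:

```python
# Illustrative sketch, not the paper's method: attribute a student to the
# candidate teacher whose part-of-speech template distribution it matches best.
from collections import Counter
import math
import nltk  # needs the 'punkt' and 'averaged_perceptron_tagger' data packages

def pos_templates(texts, n=4):
    """Count length-n part-of-speech sequences ("templates") over a corpus."""
    counts = Counter()
    for text in texts:
        tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
        for i in range(len(tags) - n + 1):
            counts[tuple(tags[i:i + n])] += 1
    return counts

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def guess_teacher(student_outputs, teacher_corpora):
    """Return the candidate teacher whose PoS-template profile is closest."""
    student_profile = pos_templates(student_outputs)
    scores = {name: cosine(student_profile, pos_templates(texts))
              for name, texts in teacher_corpora.items()}
    return max(scores, key=scores.get), scores
```

Usage would look like `guess_teacher(student_texts, {"llama": llama_texts, "mistral": mistral_texts, "gemma": gemma_texts})`, where each corpus is a list of generations on the same prompts; the syntactic profile, unlike surface lexical overlap, is what carries the teacher signal here.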
2️⃣ Simple similarity metrics like BERTScore fail to attribute a student to its teacher. Even perplexity under the teacher model isn’t enough to reliably identify the original teacher. Shallow lexical overlap is just not a strong fingerprint 🔍 [3/6]
February 11, 2025 at 5:16 PM
1️⃣ Model distillation transfers knowledge from a large teacher model to a smaller student model. But does the fine-tuned student reveal clues in its outputs about its origins? [2/6]
February 11, 2025 at 5:16 PM