Leshem (Legend) Choshen @EMNLP
@lchoshen.bsky.social
🥇 LLMs together (co-created model merging, BabyLM, textArena.ai)
🥈 Spreading science over hype in #ML & #NLP
Proud shareLM💬 Donor

@IBMResearch & @MIT_CSAIL
To the point: in a big world🌏, a real world, nothing you learn can fully represent the world; it is so complex you can't hold it in mind in any way.
There are atoms, physics, you are a tiny speck on Earth, etc.
You need a model that abstracts
December 3, 2025 at 5:37 PM
“Today, I have a vision, a vision of superintelligence from experience”

Presented in his humble way, Rich Sutton shares his vision of what AI needs
General, experiential, discovering its own abstractions, and not bitter🤢
#NeurIPS2025 #NeurIPS
🤖📈🧠
December 3, 2025 at 5:37 PM
Moreover, models stop learning once training ends. Why?
They already interact and use more compute at inference anyway.
Yes, some scenarios require learning conflicting things (e.g. personalization).
OK, so let's start training models that fit our needs, but also share some of that knowledge across them?
December 2, 2025 at 11:22 PM
LLMs do not learn from experience
LLMs do not learn from explicit corrections
LLMs do not learn from being told the answer
LLMs do not learn from being shown how to solve it
We study Machine Learning, these are opportunities!
A gold mine of research.
December 2, 2025 at 11:22 PM
"Hey dude, look. What is this button doing?"
⚡️BzZzZz⚡️
"Hey dude,..."
Would you press the button again?
Would an LLM?

Evolving LLMs, diverse open LLMs, and their evaluation are on my mind.
Before I share more, I encourage you to say hi here or in #NeurIPS 🤖📈🧠
December 2, 2025 at 11:22 PM
Join us at NeurIPS 2025 for the MindGames Challenge Workshop!
Explore theory of mind, game intelligence, and multi-agent LLMs in interactive game environments.
🗓 Sunday, December 7
⏰ 8:00–10:45 AM
📍 San Diego Convention Center, Ballroom 6CF
🤖📈🧠
November 29, 2025 at 4:14 PM
The golden pacifiers are ready
See you soon at BabyLM (EMNLP)
November 1, 2025 at 3:43 AM
They did it for images, video, text and it all compresses really, really well.
October 6, 2025 at 4:47 PM
So on average we get short codes to represent sentences. To decode them, we run the model again, get the probabilities, and use those to decide which next word to feed back to the model.
October 6, 2025 at 4:47 PM
LLMs, VLMs, ... can compress data:
3x over JPEG/PNG etc.
6x over zlib, gzip etc.
How?
We all know they provide a probability over data, which is all classical compression needs
(arithmetic coding, see below)
Understanding is compressing, but this time not by the weights themselves
🤖📈🧠
#AI #compress #data
October 6, 2025 at 4:47 PM
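To make the arithmetic-coding idea concrete, here is a minimal sketch in Python. `toy_model` is a hypothetical stand-in for an LLM; anything that returns P(next token | prefix) slots in the same way, and real coders use integer arithmetic with renormalization instead of this toy's floats.

```python
def toy_model(prefix):
    # Hypothetical stand-in for an LLM's next-token distribution.
    vocab = ["the", "cat", "sat", "<eos>"]
    if not prefix:
        return dict(zip(vocab, [0.7, 0.1, 0.1, 0.1]))
    if prefix[-1] == "the":
        return dict(zip(vocab, [0.05, 0.8, 0.1, 0.05]))
    if prefix[-1] == "cat":
        return dict(zip(vocab, [0.05, 0.05, 0.8, 0.1]))
    return dict(zip(vocab, [0.1, 0.1, 0.1, 0.7]))

def encode(tokens, model):
    # Narrow the interval [low, high) by each token's probability mass.
    low, high = 0.0, 1.0
    prefix = []
    for tok in tokens:
        probs, span, cum = model(prefix), high - low, 0.0
        for v, p in probs.items():
            if v == tok:
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
        prefix.append(tok)
    # Any number in [low, high) names the sequence; writing it down takes
    # about -log2(P(sequence)) bits, which is the compression claim.
    return (low + high) / 2

def decode(code, n_tokens, model):
    # Rerun the model and pick whichever token's interval contains the code.
    low, high = 0.0, 1.0
    prefix = []
    for _ in range(n_tokens):
        probs, span, cum = model(prefix), high - low, 0.0
        for v, p in probs.items():
            lo, hi = low + span * cum, low + span * (cum + p)
            if lo <= code < hi:
                prefix.append(v)
                low, high = lo, hi
                break
            cum += p
    return prefix

msg = ["the", "cat", "sat", "<eos>"]
code = encode(msg, toy_model)
assert decode(code, len(msg), toy_model) == msg
```

Decoding is exactly what the reply above describes: run the model again, get probabilities, and let the code decide which next word to give the model.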
Thus, a "feature" is defined by the sparse activations we find.
And these are shifting quite rapidly at a certain part in training
September 26, 2025 at 3:27 PM
How can we do it?
Crosscoders map activations into a sparse representation and decode it back into the activations (classic compress-decompress).
A single crosscoder is then trained to map the activations of all pretraining checkpoints, creating a shared space
September 26, 2025 at 3:27 PM
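A minimal sketch of that recipe, assuming PyTorch; the class name, widths, and the ReLU+L1 sparsity choice are illustrative, not taken from the papers.

```python
import torch
import torch.nn as nn

class Crosscoder(nn.Module):
    """One sparse code shared across all pretraining checkpoints."""
    def __init__(self, n_checkpoints, d_act, d_code):
        super().__init__()
        # One encoder/decoder pair per checkpoint, one shared code space.
        self.encoders = nn.ModuleList(
            nn.Linear(d_act, d_code) for _ in range(n_checkpoints))
        self.decoders = nn.ModuleList(
            nn.Linear(d_code, d_act) for _ in range(n_checkpoints))

    def forward(self, acts):  # acts: one (batch, d_act) tensor per checkpoint
        # Compress: sum per-checkpoint encodings into one sparse code.
        code = torch.relu(sum(enc(a) for enc, a in zip(self.encoders, acts)))
        # Decompress: decode the shared code back into each checkpoint's space.
        return code, [dec(code) for dec in self.decoders]

model = Crosscoder(n_checkpoints=3, d_act=512, d_code=4096)
acts = [torch.randn(8, 512) for _ in range(3)]
code, recons = model(acts)
# Training: reconstruction error plus a sparsity penalty on the code.
loss = sum(((r - a) ** 2).mean() for r, a in zip(recons, acts)) \
       + 1e-3 * code.abs().mean()
loss.backward()
```

A "feature" is then a coordinate of `code` that activates sparsely, and because the code space is shared, the same coordinate can be tracked across checkpoints.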
Employing mechanistic interpretability to study how models learn, not just where they end up
2 papers find:
There are phase transitions where features emerge and stay throughout learning
🤖📈🧠
alphaxiv.org/pdf/2509.17196
@amuuueller.bsky.social @abosselut.bsky.social
alphaxiv.org/abs/2509.05291
September 26, 2025 at 3:27 PM
We also hope that attentive readers recognize our section titles are organized as a step-by-step plan!
September 24, 2025 at 6:08 PM
They found that it is really hard to predict what is helpful (I wonder if that's because helpfulness itself is quite noisy; how predictable is it in general, even with the best information?)
But also that plans, even bad ones, help LLMs' and humans' performance (though they slow them down)
September 24, 2025 at 6:08 PM
The authors tasked many people with solving complicated questions based on information from step-by-step plans, and checked which plans help more, accounting for solver strength with IRT (Item Response Theory).

arxiv.org/abs/2509.18632
@nbalepur.bsky.social
September 24, 2025 at 6:08 PM
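For readers unfamiliar with IRT, here is a minimal Rasch-style sketch of the idea on synthetic data; the paper's actual model and setup may differ.

```python
import numpy as np

# Synthetic data: did solver i answer question j correctly?
rng = np.random.default_rng(0)
n_solvers, n_items = 50, 20
correct = rng.integers(0, 2, size=(n_solvers, n_items)).astype(float)

# Rasch / 1-parameter IRT: P(correct) = sigmoid(ability_i + easiness_j).
ability = np.zeros(n_solvers)
easiness = np.zeros(n_items)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(ability[:, None] + easiness[None, :])))
    grad = correct - p  # gradient of the Bernoulli log-likelihood
    ability += lr * grad.mean(axis=1)
    easiness += lr * grad.mean(axis=0)

# Questions (or plans) that come out "easier" after controlling for solver
# ability are the ones that genuinely helped, not just the ones that
# happened to be answered by strong solvers.
print(easiness.round(2))
```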
Helpfulness is what we are after, and we test it by asking humans for preferences, or reward models.
And they fail😆

They show that humans are bad at predicting what is helpful, and so are reward models (all close to chance).
Reward models don't even predict what helps LLMs
RL🤔
🤖📈🧠
#AI #LLM
September 24, 2025 at 6:08 PM
Good luck with the @iclr_conf writing!
Know anyone who needs tips?
Want a graph checklist?
Know any good tips you wanna add?

The writing guide:
docs.google.com/document/d/1...
September 17, 2025 at 5:43 PM
This is obviously not sustainable, and it kills the internet (see other papers by @shaynelongpre.bsky.social and @stellaathena.bsky.social).
They also foresee that the amount of unpaid labour will continue to grow with the demand for data.
arxiv.org/pdf/2504.12427
September 12, 2025 at 2:20 PM
The most expensive part of training is the data, not the compute.
Nikhil Kandpal & Colin Raffel calculate a really low bar for how much it would cost to produce LLM training data at $3.80/h.
Well, it comes out several orders of magnitude more than the compute.
Luckily (?), companies don't pay for the data
🤖📈🧠
September 12, 2025 at 2:20 PM
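A hypothetical back-of-envelope (my numbers, not the paper's) of why paying for data dwarfs paying for compute:

```python
# Assumptions (illustrative): a ~15T-token corpus, ~0.75 words per token,
# a sustained 500 words/hour of writing, and the post's $3.80/hour wage.
tokens = 15e12
words = tokens * 0.75
hours = words / 500
cost = hours * 3.80
print(f"${cost:,.0f}")  # ~ $85,500,000,000 under these assumptions
```

Set against training-compute bills reported in the tens to hundreds of millions of dollars, that is indeed several orders of magnitude more.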
A dataset of ancient Chinese writings to study with LLMs, including 170K sentences for pretraining
and a 10K-word lexicon mapping to modern words (when applicable)
There are so many fascinating questions out there
www.arxiv.org/abs/2508.15791
August 25, 2025 at 8:09 PM
ChatGPT agrees with you ...
August 15, 2025 at 8:36 PM
Probability is well correlated with more training, and so is the loss... This is just like perplexity, and doesn't test knowledge/bias/...
As support: the probabilities of the wrong answers are highly correlated with that of the right answer, so most of the signal comes from the sentence and its form, not knowledge.
August 14, 2025 at 8:15 PM
When we get further away from next-token prediction, we get side effects that lower the correlation between training FLOPs and score.
For example, the wrong answers can be reranked among themselves and change whether the right answer is picked, or accuracy can ignore a 49-51 confidence split.
August 14, 2025 at 8:15 PM
Given a dataset of multiple-choice questions, we can compute
🔻(log)probability of the right answer
🔻Probability of the right answer normalized by the probability of the rest of the answers
🔻A metric such as accuracy or Brier
Each step gets us further from next-token prediction (a minimal sketch below).
August 14, 2025 at 8:15 PM
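A minimal sketch of those three steps for a single question, assuming we already have the model's per-answer log-probabilities (the numbers are made up):

```python
import numpy as np

logprobs = np.array([-2.1, -2.3, -4.0, -5.2])  # hypothetical; index 0 is correct
correct = 0

# 1) (log)probability of the right answer: closest to next-token prediction.
logp_right = logprobs[correct]

# 2) Probability normalized over just the candidate answers.
probs = np.exp(logprobs - logprobs.max())
probs /= probs.sum()
p_right_norm = probs[correct]

# 3) Discrete / calibration metrics: accuracy and Brier score.
accuracy = float(np.argmax(probs) == correct)
onehot = np.eye(len(probs))[correct]
brier = float(((probs - onehot) ** 2).sum())

print(logp_right, p_right_norm, accuracy, brier)
# Step 3 is where a 49-51 split collapses to 0 or 1 and where reranking
# among wrong answers can flip the score: the side effects described above.
```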