Taken together, our results reveal that co-activation of unsaid alternatives is a computational principle shared across human comprehension, human production, and artificial language modeling. Huge thanks to my co-authors: Daniel Friedman, Adeen Flinker and Ariel Goldstein 🙏
November 11, 2025 at 8:41 AM
Finally, inspired by theories of shared mechanisms between comprehension and production, we tested encoding models trained on comprehension data on production data (and vice versa). They generalized successfully, preserving rank order and suggesting a shared neural code.
November 11, 2025 at 8:41 AM
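A minimal sketch of what such a cross-task test could look like: fit an encoding model on one task's trials and evaluate it on the other's, in both directions. The Ridge estimator, Pearson-r metric, and array layout are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: train an encoding model on comprehension trials and evaluate it on
# production trials (and vice versa). Estimator and metric are assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def cross_task_generalization(X_comp, y_comp, X_prod, y_prod, alpha=100.0):
    """Fit on one task, score (Pearson r) on the other, in both directions.

    X_* : (n_words, n_features) feature matrices (e.g., averaged embeddings)
    y_* : (n_words,) activity of one electrode around each word onset
    """
    comp_to_prod = Ridge(alpha=alpha).fit(X_comp, y_comp)
    prod_to_comp = Ridge(alpha=alpha).fit(X_prod, y_prod)
    r_c2p, _ = pearsonr(comp_to_prod.predict(X_prod), y_prod)
    r_p2c, _ = pearsonr(prod_to_comp.predict(X_comp), y_comp)
    return r_c2p, r_p2c
```

Repeating this per electrode and comparing the within-task and cross-task scores is one way the "preserved rank order" observation could be checked.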
Even more surprisingly, although a speaker presumably knows what they're about to say, we saw similar results in production: the brain still encodes multiple alternatives. This was true even when the actual next word was not among the LLM's top-3 predictions.
November 11, 2025 at 8:41 AM
Using ECoG recordings from language areas (IFG, STG), encoding models trained on static embeddings (e.g., GloVe) of top-ranked LLM predictions significantly predicted neural activity. Going further, averaging the embeddings of the top-k predictions improved encoding performance up to k = 100! 🤯
November 11, 2025 at 8:41 AM
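A hedged sketch of the averaging-over-top-k idea: take the mean static embedding of the model's top-k predicted words and map it to electrode activity with a regularized linear encoding model. The ridge estimator, cross-validation scheme, and data shapes are assumptions for illustration.

```python
# Sketch: encoding model predicting electrode activity from the averaged
# static embeddings of the LLM's top-k next-word predictions.
# Ridge regression, 5-fold CV, and array shapes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def average_topk_embedding(topk_words, glove, dim=300):
    """Mean GloVe vector over the top-k predicted words (skipping OOV words)."""
    vecs = [glove[w] for w in topk_words if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def encoding_score(topk_per_word, glove, neural):
    """Cross-validated R^2 between predicted and observed activity.

    topk_per_word : list of top-k word lists, one per word onset
    neural        : (n_words,) activity of one electrode around each onset
    """
    X = np.stack([average_topk_embedding(tk, glove) for tk in topk_per_word])
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    return cross_val_score(model, X, neural, cv=5).mean()
```

Sweeping k (e.g., 1, 3, 10, 30, 100) and re-running `encoding_score` would trace the kind of curve described in the post, where performance keeps improving up to k ≈ 100.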
We found that top-ranked LLM predictions are both recognized faster in a pre-registered priming experiment and produced with shorter word gaps in free speech generation, indicating that the brain pre-activates those alternatives. We then turned to neural 🧠 data.
November 11, 2025 at 8:41 AM
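A minimal sketch of the word-gap comparison, assuming a time-aligned transcript and a per-word set of top-k LLM predictions; the data format, alignment, and statistical test are assumptions, not the pre-registered analysis itself.

```python
# Sketch: compare the pause before words that were vs. were not among the
# LLM's top-k predictions. Data format and test choice are assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

def word_gap_comparison(words, onsets, offsets, topk_per_word):
    """words/onsets/offsets: aligned free-speech transcript (lists of equal length);
    topk_per_word: top-k prediction lists, one per word, from the preceding context."""
    gaps = np.array(onsets[1:]) - np.array(offsets[:-1])   # pause before each word
    in_topk = np.array([w.lower() in {t.lower() for t in tk}
                        for w, tk in zip(words[1:], topk_per_word[1:])])
    stat, p = mannwhitneyu(gaps[in_topk], gaps[~in_topk], alternative="less")
    return gaps[in_topk].mean(), gaps[~in_topk].mean(), p
```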
We first needed a way to estimate the set of possible alternatives at each point in the conversation. We used the top-ranked predictions of LLMs and validated their cognitive relevance with two behavioral experiments.
November 11, 2025 at 8:41 AM
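A minimal sketch of how such an alternative set could be estimated with an off-the-shelf causal language model; the specific model (GPT-2 via Hugging Face transformers), the value of k, and the token-to-word simplification are assumptions for illustration.

```python
# Sketch: estimate likely next-word alternatives at a point in a conversation
# from a causal LM's next-token distribution. Note this operates on subword
# tokens; treating them as words is a simplification for this sketch.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_k_alternatives(context: str, k: int = 10):
    """Return the k most probable next tokens and their probabilities."""
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # distribution over the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(idx)).strip(), prob.item())
            for idx, prob in zip(top.indices, top.values)]

print(top_k_alternatives("I'll meet you at the", k=5))
```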