Anna (Anya) Ivanova
@neuranna.bsky.social
Language and thought in brains and in machines. Assistant Prof @ Georgia Tech Psychology. Previously a postdoc @ MIT Quest for Intelligence, PhD @ MIT Brain and Cognitive Sciences. She/her

https://www.language-intelligence-thought.net
You get a few free runs if you register with an .edu email. I tried once, and it was OK - I think a truly successful approach would require some prompt iteration, so multiple runs. And of course the result will very much depend on the problem at hand.
November 17, 2025 at 4:27 PM
Much remains to be done on this front, ideas are welcome!

6/end
October 23, 2025 at 4:21 PM
Finally, we wanted to ensure that what we're showing are LLM patterns, not TunedLens patterns - see the Appendix for the control analyses we run! (+ a comparison with LogitLens) 5/
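(For anyone who wants to poke at this themselves: a minimal LogitLens sketch - the classic baseline, not our TunedLens pipeline. GPT-2 and the prompt are purely illustrative choices; each layer's hidden state gets pushed through the model's own final layer norm and unembedding.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states: tuple of (n_layers + 1) tensors, each [batch, seq, d_model]
for layer, h in enumerate(out.hidden_states):
    # LogitLens: final layer norm + unembedding applied to an intermediate state
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    print(f"layer {layer:2d} -> {tokenizer.decode(logits.argmax(-1))!r}")
```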
October 23, 2025 at 4:20 PM
We then apply this approach to 3 case studies - prediction by part-of-speech, multi-token fact recall, and fixed-response question answering. Check the paper & Akshat's thread for details!

(look at this model predicting positive/negative sentiment - such a clear pattern!)

4/
October 23, 2025 at 4:20 PM
Most early-layer predictions get overturned! It appears that these are statistical guesses, made before the model has processed enough contextual information.

(flip rate = how often the model's final prediction differs from the current layer's prediction)

3/
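(If you want to reproduce this, the metric is simple to compute - a sketch under assumed shapes, with `layer_top1` an [n_layers, n_tokens] array of per-layer argmax token ids, e.g. collected with the LogitLens loop above:)

```python
import numpy as np

def flip_rate(layer_top1: np.ndarray) -> np.ndarray:
    """Per-layer fraction of top-1 predictions overturned by the final layer."""
    final = layer_top1[-1]                     # final-layer predictions
    return (layer_top1 != final).mean(axis=1)  # compare each layer to final

# Toy example: 3 layers x 4 token positions
preds = np.array([[1, 2, 3, 4],
                  [1, 9, 3, 4],
                  [1, 2, 3, 5]])
print(flip_rate(preds))  # [0.25, 0.5, 0.0]: early layers flip, the final never does
```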
October 23, 2025 at 4:19 PM
In early layers, the most frequent tokens (e.g., "the") dominate predictions, whereas infrequent tokens only gradually gain ground in later layers.

2/
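(A rough way to check this yourself - a generic sketch, not the exact analysis in the paper: measure how often each layer's top-1 prediction falls among the K most frequent tokens of a reference corpus. `layer_top1` is the same [n_layers, n_tokens] array as in the flip-rate sketch above; `top_k_ids` is an assumed set of frequent-token ids.)

```python
import numpy as np

def frequent_token_share(layer_top1: np.ndarray, top_k_ids) -> np.ndarray:
    """Per-layer fraction of top-1 predictions drawn from the frequent set."""
    return np.isin(layer_top1, list(top_k_ids)).mean(axis=1)

# If the pattern above holds, this share starts high in early layers and
# drops in later layers as rarer, context-specific tokens take over.
```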
October 23, 2025 at 4:18 PM
Perhaps! But language contains a lot of information about the physical world, so in principle these are learnable from language alone.
October 21, 2025 at 12:01 PM
Omg adorable! Thanks! :)
October 21, 2025 at 3:28 AM
We have started with naturalistic audio stories and fMRI data, but the setup can be extended to other input modalities and types of brain data - feel free to contribute!
github.com/GT-LIT-Lab/l...

2/
GitHub - GT-LIT-Lab/litcoder_core: LITcoder: A modular Python library for building, training, and benchmarking fMRI encoding models from language and speech features.
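(For anyone new to encoding models, the core recipe is short - a generic scikit-learn sketch of the standard approach, not LITcoder's actual API; the shapes and random data are placeholders:)

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.normal(size=(800, 768))    # TRs x stimulus features (e.g., LLM embeddings)
Y_train = rng.normal(size=(800, 1000))   # TRs x voxels (placeholder data)
X_test = rng.normal(size=(200, 768))
Y_test = rng.normal(size=(200, 1000))

# Ridge regression from stimulus features to all voxels at once
model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Score: voxelwise Pearson correlation between predicted and observed responses
Yp = (Y_pred - Y_pred.mean(0)) / Y_pred.std(0)
Yo = (Y_test - Y_test.mean(0)) / Y_test.std(0)
r = (Yp * Yo).mean(0)
print(f"median held-out voxel correlation: {np.median(r):.3f}")
```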
September 29, 2025 at 5:34 PM