Saurabh
@saurabhr.bsky.social
Ph.D. in Psychology | Currently on Job Market | saurabhr.github.io
Brain-Score is a prominent and essential benchmark for AI models. But why do these models have such low behavioral scores even though they reach the ceiling on neural scores?

If an AI model has perfect neural as well as behavioral scores, would that model be a model of consciousness?
November 12, 2025 at 5:37 AM
A fun backstory for my paper: a Blade Runner-style test to distinguish humans from AI using only language. We used network science to probe their imagination and internal world models. This pic is from our first LLM tests.

#AI #BladeRunner #NetworkScience #NLP
October 10, 2025 at 12:13 AM
2. Clustering Alignment: LLM imagination networks often lacked the characteristic clustering seen in human data, frequently collapsing into a single cluster, and their cluster structure did not align with the human networks. 🧵6/n
October 7, 2025 at 2:06 PM
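A minimal sketch of the clustering-alignment comparison described in the post above, assuming Louvain community detection and the adjusted Rand index; the paper's actual estimator may differ (e.g., exploratory graph analysis on regularized partial correlations), and the adjacency matrices here are random placeholders.

```python
import numpy as np
import networkx as nx
from sklearn.metrics import adjusted_rand_score

def communities(adj, seed=0):
    """Louvain communities of a weighted network, as an array of node labels."""
    G = nx.from_numpy_array(np.abs(np.asarray(adj, dtype=float)))
    parts = nx.community.louvain_communities(G, weight="weight", seed=seed)
    labels = np.empty(G.number_of_nodes(), dtype=int)
    for k, part in enumerate(parts):
        for node in part:
            labels[node] = k
    return labels

def clustering_alignment(adj_human, adj_llm):
    """Cluster counts for each network plus the adjusted Rand index between them."""
    h, m = communities(adj_human), communities(adj_llm)
    return {"n_clusters_human": len(set(h)),
            "n_clusters_llm": len(set(m)),
            "ari": adjusted_rand_score(h, m)}

# Toy example: two random symmetric 12-node weighted networks (placeholders).
rng = np.random.default_rng(1)
H = rng.random((12, 12)); H = (H + H.T) / 2; np.fill_diagonal(H, 0)
L = rng.random((12, 12)); L = (L + L.T) / 2; np.fill_diagonal(L, 0)
print(clustering_alignment(H, L))
```

On this reading, a human-like LLM network would show a comparable cluster count and an ARI well above chance, while a network collapsed into one cluster would not.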
My results showed that human IWMs were consistently organized, exhibiting highly significant correlations across local (Expected Influence, Strength) and global (Closeness) centrality measures. This suggests a general property of how IWMs are structured across human populations. 🧵4/n
October 7, 2025 at 2:05 PM
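For readers unfamiliar with the centrality measures named above, here is a minimal sketch of how Strength, Expected Influence, and Closeness can be computed for weighted networks and correlated across two of them, assuming networkx/scipy and Spearman correlations. This is illustrative only, not the paper's pipeline.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

def centralities(adj):
    """Strength, Expected Influence, and Closeness for one weighted network."""
    A = np.asarray(adj, dtype=float)
    strength = np.abs(A).sum(axis=1)      # sum of absolute edge weights
    expected_influence = A.sum(axis=1)    # signed sum of edge weights
    G = nx.from_numpy_array(np.abs(A))
    for _, _, d in G.edges(data=True):    # closeness on 1/|weight| distances
        d["dist"] = 1.0 / d["weight"]
    close = nx.closeness_centrality(G, distance="dist")
    closeness = np.array([close[i] for i in range(len(A))])
    return strength, expected_influence, closeness

def centrality_alignment(adj_a, adj_b):
    """Spearman correlation of each centrality profile across two networks."""
    names = ("strength", "expected_influence", "closeness")
    return {n: spearmanr(a, b)[0]
            for n, a, b in zip(names, centralities(adj_a), centralities(adj_b))}

# Toy example: two noisy versions of the same random 10-node network.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
B = A + 0.1 * rng.normal(size=(10, 10)); B = (B + B.T) / 2; np.fill_diagonal(B, 0)
print(centrality_alignment(A, B))
```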
In this paper, we used imagination vividness ratings and network analysis to measure the properties of internal world models in natural and artificial cognitive agents.
(The first three columns from the left in the pic are imagination networks for the VVIQ-2; the next three are for the PSIQ.) 🧵3/n
October 7, 2025 at 2:03 PM
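A rough sketch of the general idea in this post: nodes are questionnaire items (e.g., VVIQ-2 / PSIQ items), and edge weights come from inter-item associations estimated across respondents. The Spearman correlations and the ratings.csv layout below are assumptions for illustration; the paper's networks may instead be estimated with a regularized partial-correlation method.

```python
import numpy as np
import pandas as pd

# Hypothetical input: rows = participants (or LLM runs), columns = vividness items.
ratings = pd.read_csv("ratings.csv")

corr = ratings.corr(method="spearman").to_numpy()
np.fill_diagonal(corr, 0.0)   # no self-loops; each item is a node
adjacency = corr              # weighted, signed imagination network
print(adjacency.shape)
```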