Saurabh
@saurabhr.bsky.social
60 followers 240 following 12 posts
Ph.D. in Psychology | Currently on Job Market | Pursuing Consciousness, Reality Monitoring, World Models, Imagination with my life force. saurabhr.github.io
Reposted by Saurabh
handle.invalid
New paper in Imaging Neuroscience by Viviana Greco, Penelope A. Lewis, et al:

Disarming emotional memories using targeted memory reactivation during rapid eye movement sleep

doi.org/10.1162/IMAG...
Reposted by Saurabh
matanmazor.bsky.social
Consciousness science as a marketplace of rationalizations

my commentary on @smfleming.bsky.social and @matthiasmichel.bsky.social's thought-provoking BBS paper, and more generally about the field.

osf.io/preprints/ps...
Reposted by Saurabh
emollick.bsky.social
This paper shows that you can predict actual purchase intent (90% accuracy) by asking an off-the-shelf LLM to impersonate a customer with a demographic profile, giving it a product image & having it give its impressions, which another AI rates.

No fine-tuning or training & beats classic ML methods.
Reposted by Saurabh
Seeing all the consciousness and non-humans/AI (NhA) theories coming up, I feel like there will be a point of a "Consciousness Pascal's Wager".

1/n
"Betting on Consciousness's existence in NhA because believing in Consciousness offers potentially infinite gains (like humanity) and minimal losses if Consciousness doesn't exist, ...

2/n
Whereas believing Consciousness doesn't exist risks infinite loss (lack of humanity) if Consciousness does exist."

3/End-of-Post
A fun backstory for my paper: a Blade Runner-style test to distinguish humans from AI using only language. We used network science to probe their imagination and internal world models. This pic is from our first LLM tests.

#AI #BladeRunner #NetworkScience #NLP
Reposted by Saurabh
drlaschowski.bsky.social
Imagine a brain decoding algorithm that could generalize across different subjects and tasks. Today, we’re one step closer to achieving that vision.

Introducing the flagship paper of our brain decoding program: www.biorxiv.org/content/10.1...
#neuroAI #compneuro @utoronto.ca @uhn.ca
Reposted by Saurabh
mariamaly.bsky.social
Are you an early career scholar interested in learning more about peer review?

Join us for our virtual @reviewerzero.bsky.social workshop! We will help you understand how peer review works and give advice on responding to reviewer comments.

9-10:30am PT / 12-1:30pm ET on October 30th. Register👇🏼
Welcome! You are invited to join a meeting: Peer Review 101. After registering, you will receive a confirmation email about joining the meeting.
northwestern.zoom.us
The study was based on the idea that imagination may be involved in accessing internal world models, a concept previously proposed by leading AI researchers, such as Yutaka Matsuo and Yann LeCun. 🧵2/n
In this paper, we utilized imagination vividness ratings and network analysis to measure the properties of internal world models in natural and artificial cognitive agents.
(first three columns from left in the pic are imagination networks for VVIQ-2, next three columns for PSIQ) 🧵3/n
My results showed that human IWMs were consistently organized, exhibiting highly significant correlations across local (Expected Influence, Strength) and global (Closeness) centrality measures. This suggests a general property of how IWMs are structured across human populations. 🧵4/n
But LLMs? They demonstrate a fundamental structural failure:
1. Inconsistent Importance: LLM centrality correlations with humans were inconsistent and rarely survived statistical corrections. 🧵5/n
2. Clustering Alignment: LLM imagination networks often lacked the characteristic clustering seen in human data, frequently collapsing into a single cluster, and showed little clustering alignment with humans. 🧵6/n
These structural differences confirm that human and LLM agents possess distinct internal world models. Despite their linguistic capacity, LLMs lack the phenomenological structures reflected in human minds.
Keep watching this space for more cool stuff in the upcoming weeks!!
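The kind of analysis this thread describes — building an item-item network from vividness ratings, then reading off node centrality ("Strength") and clustering — can be sketched as follows. This is a minimal illustration on simulated ratings with an illustrative edge threshold; it is not the paper's pipeline, and the numbers are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(100, 8))   # 100 participants x 8 imagery items (1-5 scale)
corr = np.corrcoef(ratings.T)                 # item-item correlation matrix

W = np.abs(corr).copy()
np.fill_diagonal(W, 0.0)                      # drop self-correlations

# "Strength" centrality: weighted degree of each item in the network.
strength = W.sum(axis=1)

# Clustering: global transitivity of a thresholded binary network
# (threshold of 0.1 is an illustrative choice, not from the paper).
A = (W > 0.1).astype(int)
triangles = np.trace(A @ A @ A) / 6           # each triangle counted 6x in the trace
triples = ((A @ A).sum() - np.trace(A @ A)) / 2  # paths of length 2
transitivity = 3 * triangles / triples if triples else 0.0
```

Comparing `strength` rankings and clustering values between human- and LLM-derived networks is the sort of structural comparison the thread's points 1 and 2 refer to.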
Reposted by Saurabh
jorge-morales.bsky.social
Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵
Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models
This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks which may be included in th...
arxiv.org
Reposted by Saurabh
sampendu.bsky.social
Long time in the making: our preprint of a survey study on the diversity in how people seem to experience #mentalimagery. It suggests #aphantasia should be redefined as the absence of depictive thought, not merely "not seeing". Some more take-home messages:
#psychskysci #neuroscience

doi.org/10.1101/2025...
Reposted by Saurabh
biorxiv-neursci.bsky.social
Tension shapes memory: Computational insights into neural plasticity https://www.biorxiv.org/content/10.1101/2025.08.20.671220v1
Reposted by Saurabh
mehr.nz
samuel mehr @mehr.nz · Aug 23
While we're on the subject of coffee, one of the espresso influencer gearheads posted this informative video about why different espresso drinks are called what they're called