Tobias Gerstenberg
@tobigerstenberg.bsky.social
3.3K followers · 630 following · 150 posts
Tea drinking assistant professor of cognitive psychology at Stanford. https://cicl.stanford.edu
tobigerstenberg.bsky.social
This project was expertly led by David Rose (davdrose.github.io) in collaboration with @siyingzhg.bsky.social, Sophie Bridgers, Hyo Gweon, and myself.

📄 doi.org/10.31234/osf...
🔗https://github.com/cicl-stanford/counterfactual_development
tobigerstenberg.bsky.social
Getting more precise estimates about when counterfactual thinking develops allows us to better understand how it impacts other cognitive capacities.

We hope that the "dropping things" task will be used and adapted by others to study counterfactual thinking and its development 👍
tobigerstenberg.bsky.social
So what did we find? We tested 480 children and 91 adults online. Participants saw 4 (Exp 1) or 6 different scenarios. We find that children perform above chance when they're around 5 years of age. And we find a marked shift in performance around 7 years of age (where most children seem to get it).
tobigerstenberg.bsky.social
Three experiments rule out simpler explanations:

1️⃣ Different objects; children might answer based on preference.
2️⃣ Same objects; children might anticipate what would happen (hypothetical thinking).
3️⃣ Same objects, outcome revealed later; children need genuine counterfactual thinking.
tobigerstenberg.bsky.social
The "dropping things" task removes language and tests genuine counterfactual thinking.

Granny drops two objects: an 🥚 and a 🏀. Two friends catch them. Granny would like to thank them but only has one sticker. Who should she give it to? Not catching the 🥚 would have been worse, so the sticker should go to the friend who caught it (Suzy)!
tobigerstenberg.bsky.social
Estimates of when counterfactual thinking develops range from 2-12 years. Two potential reasons: language & reasoning.

💬 A question like “Where would Peter have been if there hadn’t been a fire?” is difficult to understand!

🤔 Counterfactual and hypothetical thinking are different!
tobigerstenberg.bsky.social
🚨New Preprint: We develop a novel task that probes counterfactual thinking without using counterfactual language, and that teases apart genuine counterfactual thinking from related forms of thinking. Using this task, we find that the ability for counterfactual thinking emerges around 5 years of age.
tobigerstenberg.bsky.social
Thanks Evan Orticio (orticio.com) for sharing your fascinating work with us on how children and adults form beliefs without direct evidence.

In one super cool study, he shows how children become more diligent fact checkers in less reliable environments.

📃 orticio.com/assets/Ortic...
tobigerstenberg.bsky.social
I had a wonderful time visiting UC Irvine to give a talk in the cognitive science colloquium. Thank you @annaleshinskaya.bsky.social for being a fantastic host, and to all the other faculty, students, and postdocs I got to meet during my visit 🙏
Reposted by Tobias Gerstenberg
yangxiang.bsky.social
Now out in Cognition, work with the great @gershbrain.bsky.social @tobigerstenberg.bsky.social on formalizing self-handicapping as rational signaling!
📃 authors.elsevier.com/a/1lo8f2Hx2-...
tobigerstenberg.bsky.social
Yes, the physics engine will always simulate a full trajectory -- although we see in people's eye-movements that they sometimes only consider partial trajectories, and that they jump quickly from one critical point in the trajectory (e.g. an obstacle collision) to another point.
tobigerstenberg.bsky.social
Simulating world models supports strong multimodal inferences. Prior work modeled multimodal inference as optimal averaging. But in the "Sound + Ball Occluded" condition each modality alone is useless (only hearing sounds, or only seeing obstacles). Combining both sources reveals what happened!
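A toy illustration of this point (my own numbers, not the paper's): when each modality on its own leaves several drop holes tied, multiplying the per-modality likelihoods concentrates the posterior on the single hole consistent with both.

```python
# Each modality alone is ambiguous between two holes (toy likelihoods):
sound  = {"hole1": 0.5, "hole2": 0.5, "hole3": 0.0}   # heard collisions fit holes 1 and 2
vision = {"hole1": 0.0, "hole2": 0.5, "hole3": 0.5}   # visible obstacles fit holes 2 and 3

# Conditioning on both sources = multiply likelihoods, then normalize.
combined = {h: sound[h] * vision[h] for h in sound}
total = sum(combined.values())
posterior = {h: p / total for h, p in combined.items()}

print(posterior)  # all posterior mass lands on hole2
```

Only hole2 is consistent with both the sounds and the scene, so the joint posterior is no longer ambiguous even though neither modality alone could decide.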
tobigerstenberg.bsky.social
The Sequential Sampler accurately captures people's judgments and eye-movements across the three inference conditions.
tobigerstenberg.bsky.social
The model also predicts eye-movements. It assumes that people look at visual features of the scene, but also at dynamic features that are the consequence of mentally simulating how the ball would fall and collide with the obstacles and walls if it were dropped into the different holes.
tobigerstenberg.bsky.social
We develop a sequential sampling model which assumes that people simulate different possibilities proportional to their plausibility. The model performs Bayesian inference by conditioning on the available evidence step-by-step.
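A minimal sketch of the step-by-step conditioning idea (not the authors' code; the actual model samples simulations in proportion to their plausibility, whereas this sketch enumerates all hypotheses and does exact likelihood weighting; the `likelihood` scoring function is an assumed stand-in for running a physics simulation):

```python
def sequential_sampler(holes, evidence, likelihood):
    """Approximate P(hole | evidence) by conditioning on each
    piece of evidence in turn (likelihood weighting).

    `likelihood(hole, e)` is assumed to return P(e | hole),
    e.g. how well a simulated trajectory from that hole
    explains a heard collision or the ball's resting position.
    """
    weights = {h: 1.0 for h in holes}           # uniform prior over drop holes
    for e in evidence:                          # step-by-step conditioning
        for h in holes:
            weights[h] *= likelihood(h, e)      # down-weight inconsistent holes
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}
```

For example, `sequential_sampler([1, 2, 3], ["clang"], lambda h, e: 0.8 if h == 2 else 0.1)` puts most of the posterior on hole 2.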
tobigerstenberg.bsky.social
In the inference task, participants get different combinations of evidence:

(1) No Sound + Ball Visible: Only see the ball.
(2) Sound + Ball Visible: Hear collisions, then see the ball.
(3) Sound + Ball Occluded: Hear collisions, don't see the ball.

We record judgments ⚖️ and eye-movements 👀.
tobigerstenberg.bsky.social
In the prediction task, participants click 10 times where the ball will land, allowing them to express their uncertainty in a structured way. Their predictions are very well explained (r=0.99) by a physics simulation model that assumes that people are unsure about how the ball drops and how it collides.
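A toy sketch of that idea (my own simplified 1D version, not the paper's simulator): each of the 10 clicks corresponds to one run of a physics simulation in which the release point and every collision are perturbed by noise; the contact radius and noise levels are made-up parameters.

```python
import random

def simulate_landing(hole_x, obstacles, drop_noise=0.5, collision_noise=1.0, rng=None):
    """One noisy simulation run: perturb the release point, then
    perturb the ball's path at each obstacle it contacts."""
    rng = rng or random.Random()
    x = hole_x + rng.gauss(0.0, drop_noise)         # uncertainty about how the ball drops
    for obs_x in obstacles:
        if abs(x - obs_x) < 2.0:                    # hypothetical contact radius
            x += rng.gauss(0.0, collision_noise)    # uncertainty about how it collides
    return x

def predict_clicks(hole_x, obstacles, n_clicks=10, seed=0):
    """10 clicks = 10 independent samples from the noisy simulator."""
    rng = random.Random(seed)
    return [simulate_landing(hole_x, obstacles, rng=rng) for _ in range(n_clicks)]
```

The spread of the 10 sampled landing positions then serves as a structured expression of the participant's uncertainty.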
tobigerstenberg.bsky.social
We created "Plinko" - a physics reasoning task where people:

🔮 PREDICT where a ball will land (forward reasoning)
🕵️ INFER where a ball came from using visual + auditory cues (backward reasoning)
tobigerstenberg.bsky.social
🚨 NEW PREPRINT: Multimodal inference through mental simulation.

We examine how people figure out what happened by combining visual and auditory evidence through mental simulation.

Paper: osf.io/preprints/ps...
Code: github.com/cicl-stanfor...
tobigerstenberg.bsky.social
This is an epic paper!

I very much enjoyed chatting with @dyamins.bsky.social about the connections between world models and counterfactual simulation.
dyamins.bsky.social
Here is our best thinking about how to make world models. I would apologize for it being a massive 40-page behemoth, but it's worth reading. arxiv.org/pdf/2509.09737
Reposted by Tobias Gerstenberg
rachitdubey.bsky.social
My lab at UCLA is hiring 1-2 PhD students this cycle!

Join us to work at the intersection of cognitive science and AI applied to pressing societal challenges like climate change.

More info about me: rachit-dubey.github.io

My lab: ucla-cocopol.github.io

Please help repost/spread the word!