Stefano Palminteri
@stepalminteri.bsky.social
Computational cognitive scientist interested in learning and decision-making in humans and machines
Research director of the Human Reinforcement Learning team
Ecole Normale Supérieure (ENS)
Institut National de la Santé et de la Recherche Médicale (INSERM)
Also found in the old sci-fi stash recently purchased in Bologna
The plot’s crux is an illustration of the alignment problem (an all-powerful AI with wildly misaligned goals). Basically, the paperclip maximiser has gone rogue.
(but do not expect great writing and depth of reflection)
November 10, 2025 at 8:25 AM
🇪🇺 I am a bit late for this, but it is important:

R.I.P. Sofia Corradi (1934–2025), the beautiful mind behind the ERASMUS project, one of the most successful and beloved EU programmes.

It has changed the lives (and minds) of ~15 million Europeans (including mine).

en.wikipedia.org/wiki/Sofia_C...
October 27, 2025 at 8:17 PM
Just read this old-school sci-fi gem I found in a vintage bookstore in Bologna, where a Practical Philosopher Corps is deployed across the galaxy to assess sentience and cognition in alien species.
I guess the dream job for @birchlse.bsky.social @petergs.bsky.social
October 26, 2025 at 4:22 PM
At a time when prominent thinkers like @anilseth.bsky.social and Ned Block advocate a "strategic withdrawal" toward biologism in considering consciousness beyond the human case, our contrarian proposal is a methodological behaviourist computationalism.
www.linkedin.com/posts/stefan...
October 26, 2025 at 1:22 PM
I think this is what we would have observed in Germain's and Constance's papers respectively if decay were true
October 20, 2025 at 3:37 PM
Exhibit #3: back to "stable" tasks, @constancedestais.bsky.social conditioned learning rates on confidence over time, and showed that the asymmetry is still there. Indeed, it increases over time. Note that the model structure would have perfectly allowed for a "symmetric decaying" pattern 4/n
October 19, 2025 at 8:22 AM
Exhibit #2: learning rate bias has been reported (by us and other groups) in volatile tasks or conditions where, normatively, learning rates should not decay and, perhaps more importantly, empirically they indeed do not decay; otherwise, accuracy would not be above chance 3/n
doi.org/10.1016/j.ti...
October 19, 2025 at 8:22 AM
Exhibit #1: I was aware of this possibility since our first paper on the topic, and this is why we fitted separate learning rates in the first half and the second half of the learning phase. We found no evidence of decay and robust bias in both phases 2/n

www.nature.com/articles/s41...
October 19, 2025 at 8:22 AM
Thought experiments such as the Blockhead and Super-Super Spartans are often taken as “definitive” arguments against behavior-based inference of cognitive processes.
In our review -with @thecharleywu.bsky.social- we argue they may not be as definitive as originally thought.
October 9, 2025 at 12:34 PM
New (revised) preprint with @thecharleywu.bsky.social
We rethink how to assess machine consciousness: not by code or circuitry, but by behavioral inference—as in cognitive science.
Extraordinary claims still need extraordinary evidence.
👉 osf.io/preprints/ps...
#AI #Consciousness #LLM
October 8, 2025 at 9:02 AM
This book by @anilananth.bsky.social is great — perfect for those, like me, who have an intuitive and geometric grasp of math but unfortunately no formal training. Highly recommended!
October 1, 2025 at 3:47 PM
Braitenberg's Vehicles arrived yesterday and I'm already halfway through it. An amazingly funny, clear, and lucid treatment of the question of attributing higher cognitive functions to artificial systems. Obviously very timely for current debates in AI
September 11, 2025 at 8:01 AM
This is the link to the previous study that served as the basis for our recent @pnas.org study on the optimality of choice-confirmation bias and perseveration.

"Choice-Confirmation Bias and Gradual Perseveration in Human Reinforcement Learning"

Open here:
www.researchgate.net/publication/...
September 9, 2025 at 11:46 AM
After 6 wonderful years together, it’s time to say goodbye. Farewell to Magdalena Soukupova – brilliant scientist, pillar of the HRL team, and amazing human being (sic.). Whatever lab you join next will be very lucky to have you
August 26, 2025 at 6:54 AM
5/
🏛 Part 3 – RL in public policy
Despite being central in education, therapy & even marketing, RL is oddly underused in behavioral public policy compared to “nudges” or “boosts.”
We argue history & misconceptions are partly to blame.
August 12, 2025 at 7:50 AM
🎯 Part 2 – Reinforcement learning in depth
From the basics of action–outcome learning to the fine details of biases like:

Relative valuation (context-dependent outcome encoding)

Positivity bias (learning more from good than bad news)
August 12, 2025 at 7:50 AM
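(For readers unfamiliar with the mechanism: the "positivity bias" above can be captured by a Rescorla-Wagner-style value update with separate learning rates for positive and negative prediction errors. The sketch below is my own minimal illustration, not code from the paper; the parameter values are arbitrary.)

```python
# Minimal sketch of an asymmetric (positivity-biased) value update:
# good news (positive prediction error) is weighted by alpha_pos,
# bad news (negative prediction error) by a smaller alpha_neg.
def asymmetric_update(value, outcome, alpha_pos=0.4, alpha_neg=0.2):
    pe = outcome - value                      # prediction error
    alpha = alpha_pos if pe > 0 else alpha_neg
    return value + alpha * pe

# Toy run over a reward sequence: the estimate drifts upward faster
# after rewards (1) than it drifts downward after omissions (0).
v = 0.0
for outcome in [1, 0, 1, 1, 0]:
    v = asymmetric_update(v, outcome)
```

With alpha_pos > alpha_neg, the long-run value estimate overshoots the true reward rate, which is the signature the "Exhibit" thread above argues cannot be explained away by simple learning-rate decay.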
🧠 Part 1 – What’s a cognitive bias?
We propose a computational, value-free definition of bias — not as “errors” but as systematic deviations between reality and internal representation, which can sometimes help decision-making.
We also propose a taxonomy of biases within the RL framework
August 12, 2025 at 7:50 AM
🚨 New paper out in Mind & Society!
Human reinforcement learning processes and biases: computational characterization and possible applications to behavioral public policy
🔗 link.springer.com/article/10.1...
August 12, 2025 at 7:50 AM
Last few days of #vacation in #sicily
July 29, 2025 at 6:53 AM
Lucky for you, lazy people at #RLDM2025, two of the best posters have apparently been put side-by-side: go check @maevalhotellier.bsky.social and @constancedestais.bsky.social posters!
June 11, 2025 at 9:20 AM
I am old enough to be quite confident that finishing the 3.5 km open-water Montecristo challenge in Marseille will be my greatest personal achievement of 2025 🏊🐟
race.ip-links.net/DEFI25/Resul...
June 10, 2025 at 8:25 AM
For those around, see you in London next week! 🎡
www.fil.ion.ucl.ac.uk/events/
May 16, 2025 at 11:29 AM
Check out @nicolasyax.bsky.social
thread about our paper (co-supervised by @pyoudeyer.bsky.social) where we show that evolutionary tree reconstruction can be successfully applied to LLMs to map their relations and predict their performance! Currently at @iclr-conf.bsky.social
April 29, 2025 at 11:38 AM
No single "litmus test" will decide machine consciousness.
Instead, we must build evidence over time, evolving our standards as AI systems themselves evolve.
Machine consciousness is not a yes/no switch — it's a growing body of behavioral clues and consensus about what behaviors matter.
April 14, 2025 at 9:04 AM
Our "Behavioral Inference Principle" suggests:
➡️ If a system behaves as if it's conscious, we should seriously consider that it might be.
Like in human cognitive science, observable behavior is our best (only) window into hidden mental states.
If you do not trust us, trust them
April 14, 2025 at 9:04 AM