Iris van Rooij 💭
@irisvanrooij.bsky.social
16K followers 1.1K following 1.6K posts
Professor of Computational Cognitive Science | @AI_Radboud | @[email protected] on 🦣 | http://cognitionandintractability.com | she/they 🏳️‍🌈
Posts Media Videos Starter Packs
Pinned
irisvanrooij.bsky.social
NEW paper! 💭🖥️

“Combining Psychology with Artificial Intelligence: What could possibly go wrong?”

— Brief review paper by @olivia.science & myself, highlighting traps to avoid when combining Psych with AI, and why this is so important. Check out our proposed way forward! 🌟💡

osf.io/preprints/ps...
Table 1
Typology of traps, how they can be avoided, and what goes wrong if not avoided. Note that all traps in a sense constitute category errors (Ryle & Tanney, 2009) and the success-to-truth inference (Guest & Martin, 2023) is an important driver in most, if not all, of the traps.
Reposted by Iris van Rooij 💭
spavel.bsky.social
There is a cult of action at the heart of tech.

This cult says: don't mind that the systems are broken. Don't try and fix them. You can just do things, using your ubermensch will.

AI has plugged into this cult to promise 10x-ing your action. But instead your will becomes subservient to the machine.
"Just doing things" is not a path to value
Action for the sake of action feels good, but the path of least resistance leads you to surrender your own agency.
productpicnic.beehiiv.com
Reposted by Iris van Rooij 💭
david-j-hensley.bsky.social
I am in a Facebook group where people ridicule AI slop. Someone shared a Sora video in which Hitler was giving a TED Talk about how he was forced into World War II by the Poles & the Brits. I reported it & FB said it was not "hateful." Also, Sora apparently has no qualms producing vids with Hitler.
Reposted by Iris van Rooij 💭
anthonymoser.com
Environmental racism includes poisoning Latine neighborhoods with chemical weapons
lyndab08.bsky.social
The deployment of tear gas throughout the Chicago area is a growing health concern. ICE is deploying tear gas in residential neighborhoods, near children, playgrounds, stores, etc. What are the health implications on our communities?
irisvanrooij.bsky.social
We naturally tend to project intentionality and agency onto things that look as if they have those, even if they clearly do not. This classic experiment provides a nice example m.youtube.com/watch?v=VTNm...
Heider and Simmel (1944) animation
YouTube video by Kenjirou
m.youtube.com
Reposted by Iris van Rooij 💭
billkristolbulwark.bsky.social
Pope Leo quotes Hannah Arendt:

“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction and the distinction between true and false no longer exist.”

www.cbsnews.com/chicago/news...
Pope Leo calls for news agencies to stand as bulwark against "post-truths," lies and manipulation
Pope Leo XIV has encouraged international news agencies to stand firm as a bulwark against the "ancient art of lying" and manipulation.
www.cbsnews.com
Reposted by Iris van Rooij 💭
olivia.science
A Dutch newspaper is literally holding a poll on whether a colleague, another professor, should be fired because he's pro-Palestine. Academic freedom is dying many deaths here. www.gelderlander.nl/home/ontslag...
Poll results: 84% in favour of firing him
irisvanrooij.bsky.social
I do not really understand why …
Reposted by Iris van Rooij 💭
irisvanrooij.bsky.social
We used to speak of ‘sloppy science’ when there were QRPs. Now we have slop science 😔
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
Cover page, Table 1, and Table 2 of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Reposted by Iris van Rooij 💭
irisvanrooij.bsky.social
AI “summaries” are not summaries but slop
irisvanrooij.bsky.social
Psychologist here. Indeed they are.
Reposted by Iris van Rooij 💭
fcracoalitionep.bsky.social
Science absolutely must be a cornerstone of democracy.
Reposted by Iris van Rooij 💭
rdrouyn.bsky.social
Portland Frog - Watercolor

Me
irisvanrooij.bsky.social
I know it is my limitation, but I just cannot wrap my head around someone like Alice Weidel
Reposted by Iris van Rooij 💭
olivia.science
Without Critical AI Literacy (CAIL) in psychology (doi.org/10.31234/osf...) we risk the following:

1️⃣ misunderstanding statistical models, thinking correlation is causation;

2️⃣ confusing statistical models and cognitive models, undermining theory;

3️⃣ going against stated open science norms.

4/
The three aforementioned related themes sketched out in this section will play out in the AI-social psychology relationships we will examine, namely:
a. misunderstanding of the statistical models which constitute contemporary AI, leading to, inter alia, thinking that correlation implies causation (Guest, 2025; Guest & Martin, 2023, 2025a, 2025b; Guest, Scharfenberg, & van Rooij, 2025; Guest, Suarez, et al., 2025);
b. confusion between statistical versus cognitive models when it comes to their completely non-overlapping roles when mediating between theory and observations (Guest & Martin, 2021; Morgan & Morrison, 1999; Morrison & Morgan, 1999; van Rooij & Baggio, 2021);
c. anti-open science practices, such as closed source code, stolen and opaque collection and use of data, obfuscated conflicts of interest, and lack of accountability for models' architectures, i.e. statistical methods and input-output mappings are not well documented (Barlas et al., 2021; Birhane & McGann, 2024; Birhane et al., 2023; Crane, 2021; Gerdes, 2022; Guest & Martin, 2025b; Guest, Suarez, et al., 2025; Liesenfeld & Dingemanse, 2024; Liesenfeld et al., 2023; Mirowski, 2023; Ochigame, 2019; Thorne, 2009).
Being able to detect and counteract all three of these together comprises the bedrock of skills in research methods in a time when AI is used uncritically (see Table 1). The inverse: not noticing these are at play, or even promoting them, could be seen as engaging in questionable research practices (QRPs; Brooker & Allum, 2024; Neoh et al., 2023; Rubin, 2023). Therefore, in the context of critical AI literacy for social psychology, and indeed the cognitive, neuro-, and psychological sciences in general, the three points above serve as totemic touchstones, as litmus tests for checking somebody's literacy in AI (Guest, 2024; Guest & Martin, 2021, 2025a, 2025b; Guest, Scharfenberg, & van Rooij, 2025; Guest, Suarez, et al., 2025; Suarez et al., 2025; van Rooij & Baggio, 2021; van Rooij & Guest, 2025; van Rooij et al., 2024b). To wit, if somebody is able to minimally articulate these three related issues, how they manifest, and why they matter to our science, we can rest easy that they know the basics of how to critically evaluate AI products in science.
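A minimal simulation makes point (a) above concrete: a statistical model can fit correlated observational data very well while carrying no causal, let alone cognitive, information. The sketch below is illustrative only and not taken from the paper; the confounder setup, variable names, and numbers are invented for this example.

```python
# Hedged illustration of point (a): correlation without causation.
# All variables and numbers here are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden confounder drives both the observed predictor and the outcome.
confounder = rng.normal(size=n)
x = confounder + 0.1 * rng.normal(size=n)   # observed "input"
y = confounder + 0.1 * rng.normal(size=n)   # observed "outcome"

# An ordinary least-squares fit of y on x looks excellent...
slope, intercept = np.polyfit(x, y, deg=1)
r_obs = np.corrcoef(x, y)[0, 1]
print(f"observational r = {r_obs:.2f}, fit: y ~ {slope:.2f}*x + {intercept:.2f}")

# ...yet intervening on x (setting it independently of the confounder)
# removes the association entirely: the fit predicted, but explained nothing.
x_intervened = rng.normal(size=n)
y_unchanged = confounder + 0.1 * rng.normal(size=n)
print(f"r under intervention = {np.corrcoef(x_intervened, y_unchanged)[0, 1]:.2f}")
```

The same limitation scales up: a large predictive model trained on observational data inherits exactly this gap between predictive success and causal or cognitive claims, which is one face of the success-to-truth inference mentioned in the pinned post.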
Reposted by Iris van Rooij 💭
olivia.science
Table 1 in Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf... and above gives an overview of what kinds of statements could be encountered within psychology and AI use and how to react or reframe them, e.g. if you see the below, refer to section 2 for more info.

3/
Extract from Table 1 and Section 2 (continued) of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Reposted by Iris van Rooij 💭
olivia.science
First off, we present the concurrently unfolding so-called replication crises and intertwined historical events that led to where we are in both social psychology (and psychology generally, of course, as well as other fields) and artificial intelligence. Against this backdrop is the present...

2/
Introduction of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Reposted by Iris van Rooij 💭
homebrewandhacking.bsky.social
You assume that LLMs will get better. The science is that, even in better-than-ideal conditions, they need more information, unpolluted by AI, than there are atoms in the universe.

That seems hard to me, and explains their desperation a) for our work and b) for acceptance.

bsky.app/profile/iris...
irisvanrooij.bsky.social
🚨Our paper ‘Reclaiming AI as a theoretical tool for cognitive science’ is now forthcoming in the journal Computational Brain & Behavior. (Preprint: osf.io/preprints/ps...)

Below a thread summary 🧵1/n

#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems; and, the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.
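The "more data than atoms in the universe" point above can be made tangible with a rough back-of-envelope. This is an illustrative sketch, not the paper's formal proof: the paper's result concerns intrinsic computational intractability, and the specific 2^n growth rate and the ~10^80 atom figure below are stock assumptions chosen only to show how quickly exponential requirements exceed any physical resource.

```python
# Back-of-envelope sketch: if required resources (data, samples, compute)
# grow exponentially in problem size n, even modest n exceeds physical limits.
# The 2**n rate and the ~1e80 atom estimate are assumptions for illustration,
# not quantities taken from the paper.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # common rough estimate

def smallest_n_exceeding_atoms(base: int = 2) -> int:
    """Smallest n such that base**n exceeds the rough atom count."""
    n, needed = 0, 1
    while needed <= ATOMS_IN_OBSERVABLE_UNIVERSE:
        n += 1
        needed *= base
    return n

print(smallest_n_exceeding_atoms())  # 266 -> 2**266 already exceeds ~1e80
```

Under these assumptions, no incremental engineering gain changes the picture unless it changes the growth rate itself, which is the sense in which the abstract's claim is about what is possible in principle rather than about current hardware.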
Reposted by Iris van Rooij 💭
homebrewandhacking.bsky.social
Thank you for this article. A real insight into the driving force, fear, behind this foolish adoption.

Your intuition that AI has peaked is good btw. A researcher has ball-parked it and, unless there is a paradigm-shifting breakthrough, it's not possible to get to AGI. Mathematically speaking. 😀
irisvanrooij.bsky.social
🚨Our paper ‘Reclaiming AI as a theoretical tool for cognitive science’ is now forthcoming in the journal Computational Brain & Behavior. (Preprint: osf.io/preprints/ps...)

Below a thread summary 🧵1/n

#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
Reposted by Iris van Rooij 💭
homebrewandhacking.bsky.social
Ooh very nice.

You might also be interested in the mathematical proof that LLMs can't achieve even dog-like intelligence without an unimaginable breakthrough or more data than atoms in the universe.

bsky.app/profile/iris...
irisvanrooij.bsky.social
🚨Our paper ‘Reclaiming AI as a theoretical tool for cognitive science’ is now forthcoming in the journal Computational Brain & Behavior. (Preprint: osf.io/preprints/ps...)

Below a thread summary 🧵1/n

#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
Reposted by Iris van Rooij 💭
olivia.science
Oh, better thread on makeism below. It is relevant to Feynman, and in fact both papers are directly so: modern alchemy above and this: bsky.app/profile/iris...
irisvanrooij.bsky.social
Start of ACT 2: "Based on our analysis [in Act 1], we reject the view and associated project that we term ‘makeism’. See Box 2 for a definition." 16/n
Box 2 — What is makeism?
Makeism: The view that computationalism implies that (a) it is possible to (re)make cognition computationally; (b) if we (re)make cognition then we can explain and/or understand it; and possibly (c) explaining and/or understanding cognition requires (re)making cognition itself.
The methodology endorsed by makeists has been referred to as the synthetic methodology or understanding by design and building (Bisig & Pfeifer, 2008; Pfeifer & Scheier, 2001); also see formal realism (Chirimuuta, 2021). A well-known quote from Feynman (1988), “what I cannot create, I do not understand”, is often used to support the idea of makeism in AI (e.g. Karpathy et al., 2016).
Note that it is especially easy for makeists to fall into map-territory confusion—mistaking their modeling artefacts for cognition itself—due to the view that the made thing could be cognition.
Reposted by Iris van Rooij 💭
olivia.science
And also my paper here where I critique this methodology from a different angle bsky.app/profile/oliv...
olivia.science
I've felt for a while that a mainstream method, reverse engineering, in cognitive science & AI is incompatible w computationalism‼️ So I wrote "Modern Alchemy: Neurocognitive Reverse Engineering" w the wonderful Natalia S. & @irisvanrooij.bsky.social to elaborate: philsci-archive.pitt.edu/25289/
1/n
Abstract and title page of the PDF; Table 1; Table 2
Reposted by Iris van Rooij 💭