Iris van Rooij 💭
@irisvanrooij.bsky.social
16K followers 1.1K following 1.5K posts
Professor of Computational Cognitive Science | @AI_Radboud | @[email protected] on 🦣 | http://cognitionandintractability.com | she/they 🏳️‍🌈
Pinned
irisvanrooij.bsky.social
NEW paper! 💭🖥️

“Combining Psychology with Artificial Intelligence: What could possibly go wrong?”

— Brief review paper by @olivia.science & myself, highlighting traps to avoid when combining Psych with AI, and why this is so important. Check out our proposed way forward! 🌟💡

osf.io/preprints/ps...
Table 1
Typology of traps, how they can be avoided, and what goes wrong if not avoided. Note that all traps in a sense constitute category errors (Ryle & Tanney, 2009) and the success-to-truth inference (Guest & Martin, 2023) is an important driver in most, if not all, of the traps.
Reposted by Iris van Rooij 💭
olivia.science
Without Critical AI Literacy (CAIL) in psychology (doi.org/10.31234/osf...) we risk the following:

1️⃣ misunderstanding statistical models, thinking correlation is causation;

2️⃣ confusing statistical models and cognitive models, undermining theory;

3️⃣ going against stated open science norms.

4/
The three aforementioned related themes sketched out in this section will play out in the AI-social psychology relationships we will examine, namely:

a. misunderstanding of the statistical models which constitute contemporary AI, leading to, inter alia, thinking that correlation implies causation (Guest, 2025; Guest & Martin, 2023, 2025a, 2025b; Guest, Scharfenberg, & van Rooij, 2025; Guest, Suarez, et al., 2025);

b. confusion between statistical versus cognitive models when it comes to their completely non-overlapping roles when mediating between theory and observations (Guest & Martin, 2021; Morgan & Morrison, 1999; Morrison & Morgan, 1999; van Rooij & Baggio, 2021);

c. anti-open science practices, such as closed source code; stolen and opaque collection and use of data; obfuscated conflicts of interest; and lack of accountability for models' architectures, i.e. statistical methods and input-output mappings that are not well documented (Barlas et al., 2021; Birhane & McGann, 2024; Birhane et al., 2023; Crane, 2021; Gerdes, 2022; Guest & Martin, 2025b; Guest, Suarez, et al., 2025; Liesenfeld & Dingemanse, 2024; Liesenfeld et al., 2023; Mirowski, 2023; Ochigame, 2019; Thorne, 2009).

Being able to detect and counteract all three of these together comprises the bedrock of skills in research methods in a time when AI is used uncritically (see Table 1). The inverse, not noticing these are at play, or even promoting them, could be seen as engaging in questionable research practices (QRPs; Brooker & Allum, 2024; Neoh et al., 2023; Rubin, 2023). Therefore, in the context of critical AI literacy for social psychology, and indeed cognitive, neuro-, and psychological sciences in general, the three points above serve as totemic touchstones, as litmus tests for checking somebody's literacy in AI (Guest, 2024; Guest & Martin, 2021, 2025a, 2025b; Guest, Scharfenberg, & van Rooij, 2025; Guest, Suarez, et al., 2025; Suarez et al., 2025; van Rooij & Baggio, 2021; van Rooij & Guest, 2025; van Rooij et al., 2024b). To wit, if somebody is able to minimally articulate these three related issues, how they manifest, and why they matter to our science, we can rest easy that they know the basics of how to critically evaluate AI products in science.
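A minimal simulation can make trap (a) concrete. The sketch below is not from the paper; the variables, numbers, and names are our own illustrative choices. A hidden confounder drives two variables, so they correlate strongly even though neither causes the other:

```python
# Minimal sketch (our illustration, not the paper's): a hidden confounder Z
# drives both X and Y, so X and Y correlate strongly with no causal link.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                 # hidden confounder
x = z + rng.normal(scale=0.5, size=n)  # X depends on Z, not on Y
y = z + rng.normal(scale=0.5, size=n)  # Y depends on Z, not on X

print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # ~0.80, yet no causation

# Conditioning on the confounder removes the association: subtract Z out of
# both variables and the correlation collapses toward zero.
x_resid = x - z
y_resid = y - z
print(f"corr(X, Y | Z) = {np.corrcoef(x_resid, y_resid)[0, 1]:.2f}")  # ~0.00
```

A model fitted to X and Y alone would "successfully" predict one from the other, which is exactly the success-to-truth inference the traps warn against.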
Reposted by Iris van Rooij 💭
olivia.science
Table 1 in Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf... and above gives an overview of what kinds of statements could be encountered within psychology and AI use and how to react or reframe them, e.g. if you see the below, refer to section 2 for more info.

3/
[Images: extract from Table 1 and Section 2 (two panels) of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1]
Reposted by Iris van Rooij 💭
olivia.science
First off, we present the concurrently unfolding so-called replication crises and the intertwined historical events that led to where we are in both social psychology (and psychology generally, of course, as well as other fields) and artificial intelligence. Against this backdrop is the present...

2/
[Images: introduction of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1]
Reposted by Iris van Rooij 💭
homebrewandhacking.bsky.social
You assume that LLMs will get better. The science is that, even in better-than-ideal conditions, they need more information, unpolluted by AI, than there are atoms in the universe.

That seems hard to me, and explains their desperation a) for our work and b) for acceptance.

bsky.app/profile/iris...
irisvanrooij.bsky.social
🚨Our paper `Reclaiming AI as a theoretical tool for cognitive science' is now forthcoming in the journal Computational Brain & Behaviour. (Preprint: osf.io/preprints/ps...)

Below a thread summary 🧵1/n

#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems; and, the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.
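A back-of-envelope sketch can show why "more atoms than the universe" claims follow from intractability arguments of this kind. This is purely illustrative arithmetic under our own assumptions (an exponential resource demand and the common ~10^80 order-of-magnitude estimate for atoms in the observable universe); it is not the paper's formal proof:

```python
# Illustrative arithmetic only (our toy assumption of exponential resource
# demand; not the paper's proof): polynomial gains in hardware or data cannot
# close an exponential gap, which blows past physical limits very quickly.
import math

ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80  # common order-of-magnitude estimate

def smallest_n_exceeding(base: float) -> int:
    """Smallest integer n with base**n strictly greater than 10**80."""
    return math.floor(80 / math.log10(base)) + 1

for base in (2.0, 10.0):
    n = smallest_n_exceeding(base)
    print(f"demand {base}**n exceeds ~1e80 atoms already at n = {n}")
# base 2:  n = 266
# base 10: n = 81
```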
Reposted by Iris van Rooij 💭
homebrewandhacking.bsky.social
Thank you for this article. A real insight into the driving force, fear, behind this foolish adoption.

Your intuition that AI has peaked is good btw. A researcher has ball-parked it and, unless there is a paradigm-shifting breakthrough, it's not possible to get to AGI. Mathematically speaking. 😀
irisvanrooij.bsky.social
🚨Our paper `Reclaiming AI as a theoretical tool for cognitive science' is now forthcoming in the journal Computational Brain & Behaviour. (Preprint: osf.io/preprints/ps...)

Below a thread summary 🧵1/n

#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
[abstract screenshot, as quoted above]
Reposted by Iris van Rooij 💭
homebrewandhacking.bsky.social
Ooh very nice.

You might also be interested in the mathematical proof that LLMs can't achieve even dog-like intelligence without an unimaginable breakthrough or more data than atoms in the universe.

bsky.app/profile/iris...
irisvanrooij.bsky.social
🚨Our paper `Reclaiming AI as a theoretical tool for cognitive science' is now forthcoming in the journal Computational Brain & Behaviour. (Preprint: osf.io/preprints/ps...)

Below a thread summary 🧵1/n

#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
[abstract screenshot, as quoted above]
Reposted by Iris van Rooij 💭
olivia.science
Oh, better thread on makeism below, relevant to Feynman; in fact both papers are directly so: modern alchemy above, and this: bsky.app/profile/iris...
irisvanrooij.bsky.social
Start of ACT 2: "Based on our analysis [in Act 1], we reject the view and associated project that we term ‘makeism’. See Box 2 for a definition." 16/n
Box 2 — What is makeism?
Makeism: The view that computationalism implies that (a) it is possible to (re)make cognition computationally; (b) if we (re)make cognition then we can explain and/or understand it; and possibly (c) explaining and/or understanding cognition requires (re)making cognition itself.
The methodology endorsed by makeists has been referred to as the synthetic methodology or understanding by design and building (Bisig & Pfeifer, 2008; Pfeifer & Scheier, 2001); also see formal realism (Chirimuuta, 2021). A well-known quote from Feynman (1988), "what I cannot create, I do not understand", is often used to support the idea of makeism in AI (e.g. Karpathy et al., 2016).
Note that it is especially easy for makeists to fall into map-territory confusion (mistaking their modeling artefacts for cognition itself) due to the view that the made thing could be cognition.
Reposted by Iris van Rooij 💭
olivia.science
And also my paper here where I critique this methodology from a different angle bsky.app/profile/oliv...
olivia.science
I've felt for a while that a mainstream method, reverse engineering, in cognitive science & AI is incompatible w computationalism‼️ So I wrote "Modern Alchemy: Neurocognitive Reverse Engineering" w the wonderful Natalia S. & @irisvanrooij.bsky.social to elaborate: philsci-archive.pitt.edu/25289/
1/n
[Images: abstract and title page of the PDF; Table 1; Table 2]
Reposted by Iris van Rooij 💭
irisvanrooij.bsky.social
“When we engage with the public, we notice people think that AI, as a field or a technology, appeared on the scene in the last 3 years. And they experience confusion … when they discover the field and the technologies have existed for decades (…)”

zenodo.org/records/1706...

11/🧵
Against the Uncritical Adoption of 'AI' Technologies in Academia
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...
zenodo.org
Reposted by Iris van Rooij 💭
irisvanrooij.bsky.social
“(…) jargon infused with technology industry hype, such as shown in Table 1, does not meaningfully explain. (…) We strive to remain critical of the vocabulary the technology industry coopts and deploys, and to remain respectful of scientific terminology.”

zenodo.org/records/1706...

8/🧵
Reposted by Iris van Rooij 💭
olivia.science
important on LLMs for academics:

1️⃣ LLMs are usefully seen as lossy content-addressable systems

2️⃣ we can't automatically detect plagiarism

3️⃣ LLMs automate plagiarism & paper mills

4️⃣ we must protect literature from pollution

5️⃣ LLM use is a CoI

6️⃣ prompts do not cause output in authorial sense
5 Ghostwriter in the Machine
A unique selling point of these systems is conversing and writing in a human-like way. This is eminently understandable, although wrong-headed, when one realises these are systems that essentially function as lossy content-addressable memory: when input is given, the output generated by the model is text that stochastically matches the input text. The reason text at the output looks novel is because by design the AI product performs an automated version of what is known as mosaic or patchwork plagiarism (Baždarić, 2013): due to the nature of input masking and next-token prediction, the output essentially uses similar words in similar orders to what it has been exposed to. This makes the automated flagging of plagiarism unlikely, which is also true when students or colleagues perform this type of copy-paste and then thesaurus trick, and true when so-called AI plagiarism detectors falsely claim to detect AI-produced text (Edwards, 2023a). This aspect of LLM-based AI products can be seen as an automation of plagiarism and especially of the research paper mill (Guest, 2025; Guest, Suarez, et al., 2025; van Rooij, 2022): the "churn[ing] out [of] fake or poor-quality journal papers" (Sanderson, 2024; Committee on Publication Ethics, …). […] Either way, even if the courts decide in favour of companies, we should not allow these companies with vested interests to write our papers (Fisher et al., 2025), or to filter what we include in our papers. This is because we do not operate based only on legal precedents, but also on our own ethical values and scientific integrity codes (ALLEA, 2023; KNAW et al., 2018), and we have a direct duty to protect the literature from pollution, as with previous crises and in general. In other words, the same issues as in previous sections play out here, where essentially every paper produced using chatbot output must now declare a conflict of interest, since the output text can be biased in subtle or direct ways by the company who owns the bot (see Table 2).
Seen in the right light, with AI products understood as content-addressable systems, we see that framing the user, the academic in this case, as the creator of the bot's output is misplaced. The input does not cause the output in an authorial sense, much like input to a library search engine does not cause relevant articles and books to be written (Guest, 2025). The respective authors wrote those, not the search query!
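The "lossy content-addressable memory" framing can be illustrated with a toy sketch. This is our own minimal bigram model, not the systems the paper discusses; real LLMs are vastly larger, but the point carries: generated text is stitched from memorised fragments of the training corpus, so it looks novel while every local transition is copied.

```python
# Toy sketch of "lossy content-addressable memory" (our illustration, not the
# authors' model): a bigram table maps each word to the words that followed it
# in the corpus, i.e. the context is the address and the continuations are the
# stored content. Sampling stitches memorised fragments together.
import random
from collections import defaultdict

corpus = ("the model predicts the next word and the next word follows "
          "the previous word in the training text").split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(seed: str, length: int = 10) -> str:
    """Emit up to `length` words by looking up the last word's continuations."""
    words = [seed]
    for _ in range(length):
        options = table.get(words[-1])
        if not options:          # context never seen: the memory has no entry
            break
        words.append(random.choice(options))
    return " ".join(words)

random.seed(0)
print(generate("the"))
# Output looks "new", yet every adjacent word pair occurs verbatim in the
# corpus: patchwork recombination, not authorship.
```

This is also why the search-engine analogy above holds: the query (prompt) selects and recombines stored material; it does not author it.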
Reposted by Iris van Rooij 💭
olivia.science
Yeah, there's AI and AI, exactly bsky.app/profile/oliv...
olivia.science
I split AI into 3 non-mutually exclusive types (see Table 1 above): displacement (harmful), enhancement (beneficial), and/or replacement (neutral) of human cognitive labour. More later possibly, but see Tables 2 to 4 (attached or here: arxiv.org/pdf/2507.19960) for the worked through examples. 2/n
[Images: Tables 2, 3, and 4 from https://arxiv.org/pdf/2507.19960]
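As a reading aid, the non-mutually-exclusive typology can be encoded as a small data structure. This is our own hypothetical encoding, not code from the paper; the flag names and the example tagging are illustrative only:

```python
# Sketch (our own hypothetical encoding, not code from the paper): the three
# non-mutually-exclusive categories as flags, so one tool can carry several.
from enum import Flag, auto

class CognitiveLabour(Flag):
    DISPLACEMENT = auto()  # harmful: labour taken away from people
    ENHANCEMENT = auto()   # beneficial: labour supported or extended
    REPLACEMENT = auto()   # neutral: labour handed over by choice

# Non-mutual exclusivity: a hypothetical tool tagged with two categories.
example_tool = CognitiveLabour.ENHANCEMENT | CognitiveLabour.REPLACEMENT
print(bool(example_tool & CognitiveLabour.ENHANCEMENT))   # True
print(bool(example_tool & CognitiveLabour.DISPLACEMENT))  # False
```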
Reposted by Iris van Rooij 💭
irisvanrooij.bsky.social
We used to speak of ‘sloppy science’ when there were QRPs. Now we have slop science 😔
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
[Images: cover page, Table 1, and Table 2 of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1]
Reposted by Iris van Rooij 💭
davedun.bsky.social
Fantastic thread (and pre-print) on Critical AI Literacy in Psychology.

The final line from the introduction is brutal: “Ultimately, current AI is research malpractice.”
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
[Images: cover page, Table 1, and Table 2, as above]
Reposted by Iris van Rooij 💭
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
[Images: cover page, Table 1, and Table 2, as above]
Reposted by Iris van Rooij 💭
irisvanrooij.bsky.social
AI “summaries” are not summaries but slop
Reposted by Iris van Rooij 💭
shampshire.bsky.social
I have endless circular arguments about this.

Them: “It’s written a summary.”

Me: “No, it’s written something tuned to look like a summary.”

Them: “But it looks like a summary.”

Me: <sigh>

We’re not used to computers lying to us.
Reposted by Iris van Rooij 💭
mathienz.bsky.social
🎯💯 Science is political whether you like it or not 👇🏼 #StandUpForScience
Reposted by Iris van Rooij 💭
olivia.science
Yes, you get it. And the table is useful, I believe, for this analysis, the Pygmalion lens: doi.org/10.31235/osf...
Reposted by Iris van Rooij 💭
patmat.bsky.social
2/2 It feels less like progress and more like a kind of ‘womb envy’ that replaces rather than values that essential human bond. The very idea of replacing that relationship feels almost like a form of child abuse, because it would inevitably affect every child involved.
Reposted by Iris van Rooij 💭
patmat.bsky.social
1/2 Concerning. Your paper made me think of how this dehumanization also appears in projects like artificial wombs or robotic babysitters — attempts to mechanize creation and care, downplaying the importance of the bonding needed to form a human being and properly wire its brain.