Walter Quattrociocchi
@walter4c.bsky.social

Full Professor of Computer Science @Sapienza University of Rome.
Data Science, Complex Systems

New study on LLMs shows that while LLMs & humans converge on similar judgments of reliability of news media, they rely on very different underlying processes.

In delegating, are we confusing linguistic plausibility with epistemic reliability?

The age of "epistemia"

www.pnas.org/doi/epdf/10....

📢Research Highlights out today! We highlight work by @walter4c.bsky.social, @matteocinelli.bsky.social, and colleagues on how LLMs generate judgments about reliability and political bias, and how their procedures compare to human evaluation. www.nature.com/articles/s43... #cssky
How LLMs generate judgments - Nature Computational Science

How LLMs generate judgments

www.nature.com/articles/s43...

"driven by lexical and statistical associations rather than deliberative reasoning"

Data changed the info business model: confirmation → echo chambers → infodemics
LLMs drop the cost of “knowledge-like” content to zero.
Result: Epistemia — when language sounds like knowledge.
Outsourcing shifts decisions from evidence → plausibility
PNAS: https://www.pnas.org/doi/10.1073/pnas.1517441113

Grokipedia is not the problem.
It’s the signal.
What we’re seeing isn’t about AI or neutrality — it’s the rise of the post-epistemic web.
The question isn’t: is it true?
The question is: who made the model?

Together, these papers suggest a transformation:
→ Knowledge is no longer verified, but simulated
→ Platforms no longer host views, they shape belief architectures
→ Truth is not disappearing. It’s being automated, fragmented, and rebranded

Paper 2 — Ideological Fragmentation of the Social Media Ecosystem
We analyzed 117M posts from 9 platforms (Facebook, Reddit, Parler, Gab, etc.).
Some now function as ideological silos — not just echo chambers, but echo platforms.
www.nature.com/articles/s41...
Ideology and polarization set the agenda on social media - Scientific Reports
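A minimal sketch of how platform-level ideological homogeneity could be quantified, using made-up per-user leaning scores in [-1, 1]; this only illustrates the "echo platform" idea and is not the pipeline used in the paper (the platform names, scores, and threshold are hypothetical):

```python
# Sketch: measuring platform-level ideological skew from hypothetical
# per-user leaning scores in [-1, 1] (left to right). Illustration only.
from statistics import mean

# Hypothetical data: platform -> list of user leaning scores
platform_leanings = {
    "platform_a": [-0.8, -0.6, -0.7, -0.5, -0.9],    # strongly skewed
    "platform_b": [-0.3, 0.2, 0.4, -0.1, 0.0, 0.3],  # mixed
}

def platform_skew(leanings):
    """Average leaning of a platform's users: values near +1 or -1 suggest
    an ideologically homogeneous user base (an 'echo platform' signature)."""
    return mean(leanings)

def homogeneity(leanings, threshold=0.5):
    """Share of users on the platform's majority side with leaning
    magnitude above `threshold`."""
    majority_sign = 1 if platform_skew(leanings) >= 0 else -1
    aligned = [x for x in leanings if x * majority_sign > threshold]
    return len(aligned) / len(leanings)

for name, scores in platform_leanings.items():
    print(name, round(platform_skew(scores), 2), round(homogeneity(scores), 2))
```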

Paper 1 — The Simulation of Judgment in LLMs
We benchmarked 6 large language models against experts and humans.
They often agree on outputs — but not on how they decide.
Models rely on lexical shortcuts, not reasoning.
We called this epistemia.
www.pnas.org/doi/10.1073/...
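A minimal sketch of the output-level comparison, assuming hypothetical expert and model reliability scores for the same outlets; rank agreement is one way to check convergence on outputs, and it says nothing about whether the underlying process is the same:

```python
# Sketch: rank agreement between hypothetical expert reliability scores and
# one model's scores for the same outlets. Data are made up for illustration.
from scipy.stats import spearmanr

outlets = ["outlet_a", "outlet_b", "outlet_c", "outlet_d", "outlet_e"]
expert_scores = [95, 80, 40, 62, 15]   # e.g., professional ratings, 0-100
model_scores  = [90, 85, 35, 55, 20]   # e.g., scores elicited from an LLM

rho, pval = spearmanr(expert_scores, model_scores)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
# High rank correlation means the model agrees on outputs; it does not show
# that the model reached those outputs the way experts do.
```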

We studied both, in two recent papers in @PNASNews and @PNASNexus:
Epistemia — the illusion of knowledge when LLMs replace reasoning with surface plausibility
Echo Platforms — when whole platforms, not just communities, become ideologically sealed

Two structural shifts are unfolding right now:
Platforms are fragmenting into echo platforms — entire ecosystems aligned around ideology.
LLMs are being used to simulate judgment — plausible, fluent, unverifiable.

#Grokipedia just launched.
An AI-built encyclopedia, pitched as a “neutral” alternative to Wikipedia.
But neutrality is not the point.
What happens underneath is.
👇

Timely, considering Grokipedia and all the related implications.
One of the most-viewed PNAS articles in the last week is “The simulation of judgment in LLMs.” Explore the article here: https://ow.ly/7m2o50Xj6l1

For more trending articles, visit https://ow.ly/6hok50Xj6l3.

I don’t know your approach.
Ours assumes that to understand the perturbation, you first need to operationalize the task and compare how humans and models diverge.
That’s the empirical ground — not a belief about what LLMs “are.”

“LLMs don’t understand.”
Of course. That was never the point.
The point is: we’re already using them as if they do —
to moderate, to classify, to prioritize, to decide.
That’s not a model problem.
It’s a systemic one.
The shift from verification to plausibility is real.
Welcome to Epistemia.

Coming from misinfo/polarization,
we’re not asking what LLMs are.
We’re asking: what happens when users start trusting them as if they were search engines?
We compare LLMs and humans on how reliability and bias are judged.
That’s where the illusion we call epistemia begins.

Yes, we include recent works on evaluation heuristics and bias in LLMs.
Our focus is on how LLM outputs simulate judgment.
We compare LLMs and humans directly, under identical pipelines, on the same dataset.
“May rely” is empirical caution.
The illusion of reasoning is the point (not the premise).
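A minimal sketch of the "identical pipelines" idea: the same task specification and the same code path are applied to whichever rater produces the scores, so any divergence reflects the rater, not the pipeline. The function names here are hypothetical placeholders, not the study's actual code:

```python
# Sketch: running humans and LLMs through one shared rating pipeline.
# `human_rater` and `llm_rater` are hypothetical placeholders: each maps a
# task prompt to a numeric reliability score for one outlet.
from typing import Callable, Dict, List

TASK = "Rate the reliability of the news outlet '{outlet}' from 0 (low) to 100 (high)."

def run_pipeline(outlets: List[str], rate: Callable[[str], float]) -> Dict[str, float]:
    """Apply a single rating function to every outlet.
    The same code path serves human annotations and each model, so
    differences in the resulting scores come from the rater itself."""
    return {outlet: rate(TASK.format(outlet=outlet)) for outlet in outlets}

# Usage (hypothetical raters):
# human_scores = run_pipeline(outlets, human_rater)
# model_scores = run_pipeline(outlets, llm_rater)
# ...then compare the two score sets with the same agreement metric.
```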

Absolutely, we build on that line.
What we address is how these dynamics unfold now, at scale, where reliability is operationalized.
The novelty isn’t saying “LLMs aren’t agents.”
It’s showing how and when humans treat them as if they were.
Plausibility replacing reliability. Epistemia.

Thank you for sharing.
We explore the perturbation introduced when judgment is delegated to LLMs.
We study how the concept of reliability is operationalized in practice (moderation, policy, ranking).
Epistemia is a name for judgment without grounding.
IMHO it is already here.
(a new layer of the infodemic).

LLMs can mirror expert judgment but often rely on word patterns rather than reasoning. A new study introduces epistemia, the illusion of knowledge that occurs when surface plausibility replaces verification. In PNAS: https://ow.ly/ry7S50Xcv9b