Sander van der Linden
@profsanderlinden.bsky.social

Professor of Social Psychology in Society at the University of Cambridge and Author of FOOLPROOF: Why We Fall for Misinformation and How to Build Immunity (2023) + The Psychology of Misinformation (2024). Bad News Game.

www.sandervanderlinden.com

Sander L. van der Linden is a Dutch social psychologist and author who is Professor of Social Psychology at the University of Cambridge. He studies the psychology of social influence, risk, human judgment, and decision-making. He is known for his research on the psychology of social issues, such as fake news, COVID-19 conspiracy theories, and climate change denial.

It's disturbing to see many of the major figures in moral cognition all over the Epstein files

He was friendly with a huge number of the leading figures in the field, even giving millions to their labs, long after he pled guilty to sexually abusing young girls
www.justice.gov/epstein

The University of Cambridge has an exciting Assistant, Associate, and Full Professorship of Public Policy open in the new Bennett School of Public Policy. It's a fantastic group with great people, and they have a focus on digital policy too! Area open.

www.cam.ac.uk/jobs/term/De...
Here’s what @profsanderlinden.bsky.social has to say about my forthcoming book, Energy is Life. Thank you, Prof!

Released on Feb 3rd.

Pre-order a paperback or e-book in Europe >
amzn.eu/d/3UKW65E

Everywhere else >
www.amazon.com/Energy-Life-...
We are pleased to announce that a keynote speaker for the April Cambridge Disinformation Summit is the 2001 Nobel laureate in Economics, Professor Joseph E. Stiglitz.

Professor Stiglitz will discuss systemic economic risks from disinformation and from platforms that amplify or monetize disinformation.
@profsanderlinden.bsky.social and I debunked this narrative a while ago.

I have trained and evaluated jet pilots and there is no evidence to support this rage bait.

Our article is linked here:

www.promarket.org/2024/02/06/h...
BREAKING: Chief Judge Schiltz sets a contempt hearing for Friday in one of the hundreds of habeas cases in Minnesota pending as a result of Operation Metro Surge, ordering acting ICE Director Todd Lyons personally to appear in court and face contempt. storage.courtlistener.com/recap/gov.us...

I think it will be very useful for those seeking to undermine democratic discourse

Our latest paper in @science.org warns about malicious AI swarms, agents capable of adaptive influence campaigns at scale. We already observed some in the wild (picture). AI is a real threat to democracy.
#SciencePolicyForum #ScienceResearch 🧪
Paper: doi.org/10.1126/scie...

In theory yes

Thanks! We do have a recent paper implementing this in Swedish schools www.tandfonline.com/doi/full/10.... but we didn't do longitudinal follow-ups. In general, we do know that people need "booster" shots to maintain the effect. How many before permanently "immunized" is an open question...
Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques
Although the serious game Bad News has been used to inoculate citizens against misinformation, it has not been formally evaluated in traditional classrooms. We therefore evaluated its impact on 516...

Thanks for sharing!

Haha thanks and love that idea!
Prebunking misinformation techniques in social media feeds: Instagram field study misinforeview.hks.harvard.edu/article/preb...

"Instagram users in treatment group were significantly & substantially better than the control group in correctly identifying emotional manipulation in a news headline."
Prebunking misinformation techniques in social media feeds: Results from an Instagram field study | HKS Misinformation Review
Boosting psychological defences against misleading content online is an active area of research, but transition from the lab to real-world uptake remains a challenge. We developed a 19-second prebunki...

Lots of caveats of course, including opt-in biases to the campaign and quiz so operational challenges remain but we hope it's a useful guide for rolling out and evaluating inoculation campaigns on social media feeds! Made possible by the great work of our partner Reality Team @deblavoy.bsky.social!

Key findings:

✅ Inoculated users showed a 21 ppt increase in their ability to spot emotional manipulation (baseline ability was low)

✅ These effects remained detectable after 5 months.

✅ We also increased information-seeking behavior: treatment users were 3x more likely to click on the ad.

3/3

We assigned users to a treatment or control group using birth month as a pseudo-randomization mechanism (assuming those born in e.g. May aren't more or less capable of spotting misinfo than those born in e.g. Dec). Those who engaged with the 19-sec ad were then targeted with a quiz vs control 2/3
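The birth-month pseudo-randomization described above can be sketched in a few lines of Python. This is purely illustrative: the specific month-to-arm mapping (odd months to treatment) and the `assign_arm` helper are assumptions for the sketch, not the study's actual assignment rule.

```python
# Hypothetical sketch of birth-month pseudo-randomization.
# The odd/even split below is an assumption for illustration;
# the study's real month-to-arm mapping may differ.

TREATMENT_MONTHS = {1, 3, 5, 7, 9, 11}  # assumed: odd months -> treatment

def assign_arm(birth_month: int) -> str:
    """Assign a user to 'treatment' or 'control' based on birth month."""
    if not 1 <= birth_month <= 12:
        raise ValueError("birth_month must be 1-12")
    return "treatment" if birth_month in TREATMENT_MONTHS else "control"

# Toy user list: one user per birth month.
users = [{"id": i, "birth_month": (i % 12) + 1} for i in range(12)]
arms = {u["id"]: assign_arm(u["birth_month"]) for u in users}

# Because birth month is roughly uniform in the population, the two
# arms end up balanced without any user-level random draw.
print(sum(1 for a in arms.values() if a == "treatment"))  # 6 of 12 here
```

The appeal of this design is that assignment is deterministic and observable (an ad platform can target by a proxy of birth month) while still being plausibly unrelated to the outcome, which is what makes it a quasi-experimental substitute for true randomization.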
New real-world field study *inoculating* against misinformation in live social media scroll feeds out in Harvard Misinfo Review @misinforeview.bsky.social

We targeted 375k+ users with a short ad on Insta using a novel quasi-experimental method (1/3)

misinforeview.hks.harvard.edu/article/preb...
Tremendously important work...👇

How malicious AI swarms can threaten democracy www.science.org/doi/10.1126/... @daniel-thilo.bsky.social @kunstjonas.bsky.social et al.

AI swarms can be used to engineer a "synthetic consensus" that manipulates public opinion and perceptions.
NEW:

You think disinformation is bad now?

Wait until you hear about "AI swarms" and how they have the potential to upend democracy across the globe

www.wired.com/story/ai-pow...
AI-Powered Disinformation Swarms Are Coming for Democracy
Advances in artificial intelligence are creating a perfect storm for those seeking to spread disinformation at unprecedented speed and scale. And it’s virtually impossible to detect.
We have a new paper in Science today on how malicious AI swarms can threaten democracy.

AI systems can already coordinate autonomously, infiltrate communities, and fabricate social consensus.

www.science.org/doi/10.1126/...

Led by @kunstjonas.bsky.social & @daniel-thilo.bsky.social!

I think Kai mentioned Science Magazine does not have such a policy but Joe said PNAS does, which was news to me.

Yeah I think that’s fair

Well but as the other colleague mentioned anything can be relevant to the company as they cover nearly all information flows so the scope becomes unwieldy.

p.s. I think it's more common in medicine, and adopting their norms is probably a good idea. I've been excluded from things several times because I declared funding/collabs, but then there were colleagues on the same committee who I know also receive funding, so transparency norms clearly differ.

Yeah I’m not sure if past research funding from a social media company on say misinformation has anything to do with unrelated (unfunded) future papers on misinformation so I guess people can debate what’s reasonable but if the field decides this should be a norm I’m all for maximum transparency.

Agreed - I think it’s a good warning sign and Jon and I will be taking some of these recommendations on board for sure as good practice beyond what we do already - it’s a small effort and even though we don’t perceive COI on most work it’s always best to let the reader decide.

Reposted by Dean Eckles

To be fair I had never heard of the practice of disclosing past collaboration on future totally unrelated papers (I don’t know anyone who does that) but I’m not against it, I’m all for greater transparency. My bigger concern is with all the data that doesn’t even get published b/c of corporate COI..