Sander van der Linden
@profsanderlinden.bsky.social

Professor of Social Psychology in Society at the University of Cambridge and Author of FOOLPROOF: Why We Fall for Misinformation and How to Build Immunity (2023) + The Psychology of Misinformation (2024). Bad News Game.

www.sandervanderlinden.com

Sander L. van der Linden is a Dutch social psychologist and author who is Professor of Social Psychology at the University of Cambridge. He studies the psychology of social influence, risk, human judgment, and decision-making. He is known for his research on the psychology of social issues, such as fake news, COVID-19 conspiracy theories, and climate change denial.


In theory yes

Thanks! We do have a recent paper implementing this in Swedish schools www.tandfonline.com/doi/full/10.... but we didn't do longitudinal follow-ups. In general, we do know that people need "booster" shots to maintain the effect. How many before permanently "immunized" is an open question...
Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques
Although the serious game Bad News has been used to inoculate citizens against misinformation, it has not been formally evaluated in traditional classrooms. We therefore evaluated its impact on 516...
www.tandfonline.com

Thanks for sharing!

Haha thanks and love that idea!
Prebunking misinformation techniques in social media feeds: Instagram field study misinforeview.hks.harvard.edu/article/preb...

"Instagram users in treatment group were significantly & substantially better than the control group in correctly identifying emotional manipulation in a news headline."
Prebunking misinformation techniques in social media feeds: Results from an Instagram field study | HKS Misinformation Review
Boosting psychological defences against misleading content online is an active area of research, but transition from the lab to real-world uptake remains a challenge. We developed a 19-second prebunki...
misinforeview.hks.harvard.edu

Lots of caveats of course, including opt-in biases to the campaign and quiz, so operational challenges remain, but we hope it's a useful guide for rolling out and evaluating inoculation campaigns on social media feeds! Made possible by the great work of our partner Reality Team @deblavoy.bsky.social!

Key findings:

✅ Inoculated users showed a 21 percentage-point increase in their ability to spot emotional manipulation (baseline ability was low)

✅ These effects remained detectable 5 months later.

✅ We also increased information-seeking behavior: treatment users were 3x more likely to click on the ad.

3/3

We assigned users to a treatment or control group using birth month as a pseudo-randomization mechanism (assuming those born in, e.g., May aren't more or less capable of spotting misinfo than those born in, e.g., December). Those who engaged with the 19-sec ad were then targeted with a quiz vs. control. 2/3
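A minimal sketch (my own illustration, not the study's code) of how birth-month pseudo-randomization could work; the specific month-to-group split here is a hypothetical assumption:

```python
# Hypothetical sketch of birth-month pseudo-randomization, as described above.
# Assumption: which months map to treatment is illustrative, not from the paper.

TREATMENT_MONTHS = {1, 3, 5, 7, 9, 11}  # assumed split: odd months -> treatment

def assign_group(birth_month: int) -> str:
    """Assign a user to 'treatment' or 'control' by birth month.

    Validity rests on birth month being unrelated to the outcome,
    i.e., people born in May are no better or worse at spotting
    misinfo than people born in December.
    """
    if not 1 <= birth_month <= 12:
        raise ValueError("birth_month must be between 1 and 12")
    return "treatment" if birth_month in TREATMENT_MONTHS else "control"

# Example: treatment users see the 19-sec prebunking ad; those who
# engage are later shown the follow-up quiz, vs. the control group.
print(assign_group(5))   # -> treatment
print(assign_group(12))  # -> control
```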
New real-world field study *inoculating* against misinformation in live social media scroll feeds out in Harvard Misinfo Review @misinforeview.bsky.social

We targeted 375k+ users with a short ad on Insta using a novel quasi-experimental method (1/3)

misinforeview.hks.harvard.edu/article/preb...
Tremendously important work...👇

How malicious AI swarms can threaten democracy www.science.org/doi/10.1126/... @daniel-thilo.bsky.social @kunstjonas.bsky.social et al.

AI swarms can be used to engineer a "synthetic consensus" that can manipulate public opinion and perceptions.
NEW:

You think disinformation is bad now?

Wait until you hear about "AI swarms" and how they have the potential to upend democracy across the globe

www.wired.com/story/ai-pow...
AI-Powered Disinformation Swarms Are Coming for Democracy
Advances in artificial intelligence are creating a perfect storm for those seeking to spread disinformation at unprecedented speed and scale. And it’s virtually impossible to detect.
www.wired.com
We have a new paper in Science today on how malicious AI swarms can threaten democracy.

AI systems can already coordinate autonomously, infiltrate communities, and fabricate social consensus.

www.science.org/doi/10.1126/...

Led by @kunstjonas.bsky.social & @daniel-thilo.bsky.social!

I think Kai mentioned Science Magazine does not have such a policy but Joe said PNAS does, which was news to me.

Yeah I think that’s fair

Well, but as the other colleague mentioned, anything can be relevant to the company since they cover nearly all information flows, so the scope becomes unwieldy.

p.s. I think it's more common in medicine, and adopting their norms is probably a good idea. I've been excluded from things several times because I declared funding/collabs, but then there were colleagues on the same committee who I know also receive funding, so transparency norms clearly differ.

Yeah, I'm not sure if past research funding from a social media company on, say, misinformation has anything to do with unrelated (unfunded) future papers on misinformation, so I guess people can debate what's reasonable, but if the field decides this should be a norm, I'm all for maximum transparency.

Agreed - I think it's a good warning sign, and Jon and I will be taking some of these recommendations on board for sure as good practice beyond what we do already - it's a small effort, and even though we don't perceive COI on most work, it's always best to let the reader decide.

Reposted by Dean Eckles

To be fair I had never heard of the practice of disclosing past collaboration on future totally unrelated papers (I don’t know anyone who does that) but I’m not against it, I’m all for greater transparency. My bigger concern is with all the data that doesn’t even get published b/c of corporate COI..

Yes agreed, I also mentioned to Kai that the majority of research on social media is highly damning in its conclusions. My understanding was that a decent % are not disclosing basic COI - I also think that going beyond that is a grey area, though in principle I don't mind disclosing prior work.

I agree with this (there was no other way), but the finding that many colleagues are not disclosing basic funding seemed like a red flag to me that deserves some education (assuming the non-disclosure was a benign oversight).
@profsanderlinden.bsky.social told me the definition was not unreasonable, but that norms on conflicts of interest are less established in social science than in biomedicine. “Maybe scientists who work on social media research need better education about the importance of transparency and COI.”
One of the issues that has come up again and again in my reporting on misinformation and social media is the massive influence social media companies have on research in the field.

Last night a preprint dropped that tries to get at this with some numbers. My piece in @science.org (and 🧪🧵 coming):
Nearly a third of social media research has undisclosed ties to industry, preprint claims
Industry-linked studies were also more likely to focus on particular topics, suggesting these ties may be skewing the field
www.science.org

Important paper, well done!
“While we cannot definitively show that industry funding in this area is redirecting attention from products to consumers, our results are consistent with this possibility.”

Preprint suggests “industry influence in social media research is extensive, impactful, and often opaque.”🧪
📺 OMG, who made this?! Noah Wyle has a message for all you vaccine skeptics…

Share with a friend who needs to hear it.

100%

It is quite something to even suggest that regulating an app that nudifies women and children on demand is an attack on free speech.
Musk says outcry over X's Grok service is 'excuse for censorship'
The government is urging Ofcom to use all its powers – up to and including an effective ban – against X.
www.bbc.co.uk