Yara Kyrychenko
yarakyrychenko.bsky.social
PhD candidate @Cambridge @TheAlanTuringInstitute | Hope to make human-technology interactions more constructive | intergroup conflict, AI & LLMs, misinfo, social media | yarakyrychenko.github.io
Pinned
🚨New in Nature Computational Science! 🚨

Do large language models (LLMs) exhibit social identity biases like humans?

(co-led by @tiancheng.bsky.social, together with @steverathje.bsky.social, Nigel Collier, @profsanderlinden.bsky.social, and Jon Roozenbeek)
1/
www.nature.com/articles/s43...
Generative language models exhibit social identity biases - Nature Computational Science
Researchers show that large language models exhibit social identity biases similar to humans, having favoritism toward ingroups and hostility toward outgroups. These biases persist across models, trai...
www.nature.com
Excited to share that I’ve been shortlisted for the Women of the Future Awards in Artificial Intelligence! 🎉

awards.womenofthefuture.co.uk/our-alumni-c...
2025 - Women of the Future Awards
awards.womenofthefuture.co.uk
September 25, 2025 at 11:06 AM
Reposted by Yara Kyrychenko
Congratulations to @yarakyrychenko.bsky.social for being shortlisted as a finalist for the Women of the Future Awards in AI! Yara is a massively brilliant scholar — so proud 👏 🥳

"For eighteen years, the awards have shone a light on trailblazing women"

awards.womenofthefuture.co.uk/our-alumni-c...
September 25, 2025 at 9:22 AM
Reposted by Yara Kyrychenko
Today (w/ @ox.ac.uk @stanford @MIT @LSE) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues.

We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more! 

🧵:
July 21, 2025 at 4:20 PM
Reposted by Yara Kyrychenko
The combination of #artificialintelligence and #socialmedia poses a threat to democracy.

Our new paper explains how AI swarms can fabricate grassroots consensus, fragment shared reality, engage in mass harassment, interfere with elections, and erode institutional trust: osf.io/preprints/os...
June 3, 2025 at 5:03 PM
Excited to be part of the Riga StratCom Dialogue this year!
May 30, 2025 at 9:29 AM
Reposted by Yara Kyrychenko
We wrote an article about the rise of AI slop and the 'enshittification' of the internet for @theconversation.com with the brilliant @yarakyrychenko.bsky.social

One of these days, these bots are gonna walk all over you!

theconversation.com/what-is-ai-s...
What is AI slop? Why you are seeing more fake photos and videos in your social media feeds
Cheap, low-quality AI-generated content is still extremely attention-grabbing – and thus lucrative for both creators and platforms.
theconversation.com
May 29, 2025 at 8:31 AM
Reposted by Yara Kyrychenko
Today at #TheWebConf: C3AI – Crafting & Evaluating Constitutions for AI

How should we actually write rules for AI?

C3AI provides a way to:
1. Design effective constitutions using psychology and public input.
2. Evaluate how well fine-tuned models actually follow the rules.
May 1, 2025 at 8:49 AM
Reposted by Yara Kyrychenko
Does communicating the scientific consensus on climate change inspire support for action? In a new meta-analysis of the Gateway Belief Model (GBM; n = 12,975), we find that communicating the scientific consensus increases support for climate action, directly and indirectly, across the political spectrum!

www.sciencedirect.com/science/arti...
April 11, 2025 at 7:57 AM
Reposted by Yara Kyrychenko
Who is susceptible to misinformation? We looked at >60,000 people from 24 countries who took our MIST test. Key results: Gen Z and those on the extreme right are more susceptible (but they don't know it).

Led by the brill @yarakyrychenko.bsky.social & Fritz Götz

authors.elsevier.com/sd/article/S...
April 8, 2025 at 7:30 AM
Reposted by Yara Kyrychenko
Here’s my account on Flashes, a photo sharing app. All my followers can automatically find me as soon as they log in!

This is what we mean when we say Bluesky is open: your identity and followers belong to you. It took 30s to sign up for this new, independent app, and everything is there.
March 29, 2025 at 10:17 PM
Reposted by Yara Kyrychenko
Are you bored this weekend?

Can I interest you in some of the freshest thinking coming out of the academy today?

Try ePODstemology!

Our latest episode is Cambridge's Yara Kyrychenko on socially responsible AI.
March 30, 2025 at 11:49 AM
Had a great time on ePODstemology talking with @markfabian.bsky.social about my research on socially responsible AI. Tune in! 🎙️🤖
March 27, 2025 at 4:59 PM
Reposted by Yara Kyrychenko
Great to see the Climate Change Committee (CCC) calling for public figures to “lead by example” on climate change, citing our research.

CCC says leading by example increases public buy-in and behaviour change @thecccuk.bsky.social

🚨So, we've made this new infographic showing how it works...
🧵
February 26, 2025 at 9:52 AM
Reposted by Yara Kyrychenko
Out in @naturehumbehav.bsky.social

Can people tell true from false news?

Yes! Our meta-analysis shows that people rate true news as more accurate than false news (d = 1.12) and are better at spotting false news than at recognizing true news (d = 0.32).

www.nature.com/articles/s41...
February 21, 2025 at 10:33 AM
Reposted by Yara Kyrychenko
Last year, we published a paper showing that AI models can "debunk" conspiracy theories via personalized conversations. That paper raised a major question: WHY are the human<>AI convos so effective? In a new working paper, we have some answers.

TLDR: facts

osf.io/preprints/ps...
February 18, 2025 at 4:30 PM
Reposted by Yara Kyrychenko
Absolute honor to receive the Vice-Chancellor's Research Impact Award. Thank you, Prof Prentice and the whole panel. Our lab at Cambridge has spent many years empowering people to spot misinformation, reaching over half a billion people. Thanks for this recognition 🙏 www.cam.ac.uk/public-engag...
February 13, 2025 at 12:25 PM
Reposted by Yara Kyrychenko
Systematic evidence will generate higher-quality behavioral insights that could be integrated into climate policy, life cycle assessments, and economic models.

We offered some suggestions for this in our paper.

@cameronbrick.bsky.social @colognaviktoria.bsky.social
www.nature.com/articles/s41...
Realizing the full potential of behavioural science for climate change mitigation - Nature Climate Change
Behavioural science offers valuable insights for mitigating climate change, but existing work focuses mostly on consumption and lacks coordination across disciplines. In this Perspective, the authors ...
www.nature.com
February 5, 2025 at 10:00 AM
Reposted by Yara Kyrychenko
TikTok boosted Republican messages 11.8% more than Democratic messages during the 2024 U.S. presidential race, according to this new study, which employed bots to view ~394,000 videos.
February 1, 2025 at 8:38 PM
Reposted by Yara Kyrychenko
“There is no theory crisis in psychological science”

A few quotes… 🧵
January 25, 2025 at 1:09 PM
Reposted by Yara Kyrychenko
🧪 "Humanity's Last Exam" sets a new benchmark for AI: 3,000 expert-crafted questions spanning 100+ subjects. Current LLMs perform poorly, revealing a gap in expert-level knowledge and calibration, but it would be difficult to build a harder test. 🩺💻 #MLSky
Humanity's Last Exam
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam, a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. The dataset consists of 3,000 challenging questions across over a hundred subjects. We publicly release these questions, while maintaining a private test set of held out questions to assess model overfitting.
lastexam.ai
January 23, 2025 at 5:22 PM
Reposted by Yara Kyrychenko
🚨 My team is hiring an undergraduate intern (UK universities only) - a great opportunity, if I can say so myself!

Please share widely.

Want more info? See here: www.linkedin.com/feed/update/...
January 8, 2025 at 8:45 AM
Reposted by Yara Kyrychenko
In this new article in American Psychologist, we respond to critics in detail and clarify two key points for the field:

(1) The prevalence of misinformation in society is substantial when properly defined.

(2) Misinformation causally impacts attitudes and behaviors.

psycnet.apa.org/fulltext/202...
December 16, 2024 at 11:16 AM
Reposted by Yara Kyrychenko
📢 @savcisens.com discusses a recent study that shows that LLMs exhibit social identity biases similar to humans. www.nature.com/articles/s43...

🔓https://rdcu.be/d5owe
Large language models act as if they are part of a group - Nature Computational Science
An extensive audit of large language models reveals that numerous models mirror the ‘us versus them’ thinking seen in human behavior. These social prejudices are likely captured from the biased conten...
www.nature.com
January 2, 2025 at 2:58 PM
Thank you, @savcisens.com, for the insightful review of our recent paper! A great New Year's present 🎉
January 3, 2025 at 6:09 PM
Reposted by Yara Kyrychenko
New work from my team at Anthropic in collaboration with Redwood Research. I think this is plausibly the most important AGI safety result of the year. Cross-posting the thread below:
December 18, 2024 at 5:47 PM