@merielcd.bsky.social
& Silvan Baier 🧵👇
With @mattgrizz.bsky.social @andyluttrell.bsky.social @chasmonge.bsky.social
www.nature.com/articles/s41...
www.pnas.org/doi/10.1073/...
“Why Reform Stalls: Justifications of Force Are Linked to Lower Outrage and Reform Support.”
Why do some cases of police violence spark reform while others fade? We look at how people explain them—through justification or outrage.
osf.io/preprints/ps...
Examining news on 7 platforms:
1) Right-leaning platforms = lower-quality news
2) Echo platforms: right-leaning news gets more engagement on right-leaning platforms, and vice versa for left-leaning news
3) Low-quality news gets more engagement EVERYWHERE - even Bluesky!
www.pnas.org/doi/10.1073/...
Humans are imperfect decision-makers, and autonomous systems should understand how we deviate from idealized rationality
Our paper aims to address this! 👀🧠✨
arxiv.org/abs/2510.25951
a 🧵⤵️
Two processes explained this shift:
(1) within-user increases in moral language over time
(2) highly moralized users became more active while less moralized users disengaged
osf.io/preprints/ps...
😡 The most partisan users — those who love their party and despise the other — are more likely to post about politics
🥊 The result? A loud, angry minority dominates online politics, which can itself drive polarization (see doi.org/10.1073/pnas...)
t.co/UDZwJCqDw5
osf.io/preprints/ps...
IMO, thinking about identity in an instrumental way helps explain a lot of behavior that seems otherwise baffling.
osf.io/preprints/ps...
There we explore how social media companies and other online information-technology firms can manipulate scientific research on the effects of their products.
👉 theory-driven modeling: Hyowon Gweon
👉 data-driven discovery: @clemensstachl.bsky.social
👉 application: me
👉 panel: @steveread.bsky.social, Sandra Matz, @markthornton.bsky.social, Wil Cunningham
Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.
Yet, people preferred sycophantic chatbots and viewed them as unbiased!
osf.io/preprints/ps...
Thread 🧵
We find that AI sources are preferred over ingroup and outgroup sources - even when people know both are equally accurate (N = 1,600+): osf.io/preprints/ps...
Because super-users are so active, they dominate our collective impression of the internet www.theguardian.com/books/2025/j...
osf.io/preprints/ps...
In a sample of ~2 billion comments, social media discourse becomes more negative over time.
Archival and experimental findings suggest this is a byproduct of people trying to differentiate themselves.
Led by @hongkai1.bsky.social in the 1st year (!) of his PhD.
Graduating to a hard top next session.
It’s fun to learn new things! Even if you look like a goober at the beginning. 🏄🏽♀️
This study reveals that LLM-based peer review relies heavily on author institution in its decisions.
arxiv.org/abs/2509.15122