William J. Brady
@williambrady.bsky.social
Assistant prof @ Kellogg School of Management, Northwestern University. Studying emotion, morality, social networks, psych of tech. #firstgen college graduate
Pinned
👀 New preprint! In 3 prereg experiments we study how engagement-based algorithms amplify ingroup, moral and emotional (IME) content in ways that disrupt social norm learning (and test one solution!) w/ @joshcjackson.bsky.social and my amazing lab managers @merielcd.bsky.social & Silvan Baier 🧵👇
Reposted by William J. Brady
Out now in Scientific Reports! Despite high correlations, ChatGPT models failed to replicate human moral judgments. We propose tests beyond correlation to compare LLM data and human data.

With @mattgrizz.bsky.social @andyluttrell.bsky.social @chasmonge.bsky.social

www.nature.com/articles/s41...
ChatGPT does not replicate human moral judgments: the importance of examining metrics beyond correlation to assess agreement - Scientific Reports
www.nature.com
November 24, 2025 at 3:51 PM
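A minimal sketch of the underlying point (illustrative only, with made-up ratings; not the paper's code or data): a model whose scores track human moral judgments in rank order but are shifted and compressed can show near-perfect correlation while agreement metrics, such as mean absolute error or Lin's concordance correlation coefficient, reveal the mismatch.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
human = rng.uniform(1, 7, size=200)                    # hypothetical human ratings, 1-7 scale
model = 0.5 * human + 3.0 + rng.normal(0, 0.2, 200)    # shifted, compressed "model" ratings

r, _ = pearsonr(human, model)                          # correlation: near-perfect
mae = np.mean(np.abs(human - model))                   # absolute disagreement: large

# Lin's concordance correlation coefficient penalizes shifts in location and scale
ccc = 2 * np.cov(human, model, bias=True)[0, 1] / (
    human.var() + model.var() + (human.mean() - model.mean()) ** 2
)
print(f"Pearson r = {r:.2f}, MAE = {mae:.2f}, CCC = {ccc:.2f}")
```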
Reposted by William J. Brady
So there you have it, twin study estimates were greatly inflated, and molecular data sets the record straight. I walk through possible counter-arguments, but ultimately the uncomfortable truth is that genes contribute to traits much less than we always thought.
November 21, 2025 at 10:34 PM
Great work by @natematias.bsky.social & Megan Price: public involvement in AI is an important part of rigorous science. AI systems are sociotechnical, meaning that the lived experience of the public is essential for validation, etc.

www.pnas.org/doi/10.1073/...
How public involvement can improve the science of AI | PNAS
As AI systems from decision-making algorithms to generative AI are deployed more widely, computer scientists and social scientists alike are being ...
www.pnas.org
November 18, 2025 at 4:20 PM
Reposted by William J. Brady
New preprint out 📄
“Why Reform Stalls: Justifications of Force Are Linked to Lower Outrage and Reform Support.”

Why do some cases of police violence spark reform while others fade? We look at how people explain them—through justification or outrage.

osf.io/preprints/ps...
November 17, 2025 at 4:32 PM
Reposted by William J. Brady
🚨Out in PNAS🚨
Examining news on 7 platforms:
1) Right-leaning platforms host lower-quality news
2) Echo platforms: right-leaning news gets more engagement on right-leaning platforms, and vice versa for left-leaning news
3) Low-quality news gets more engagement EVERYWHERE - even on Bluesky!
www.pnas.org/doi/10.1073/...
November 14, 2025 at 2:35 PM
Reposted by William J. Brady
Excited to share a new preprint, accepted as a spotlight at #NeurIPS2025!

Humans are imperfect decision-makers, and autonomous systems should understand how we deviate from idealized rationality

Our paper aims to address this! 👀🧠✨
arxiv.org/abs/2510.25951

a 🧵⤵️
Estimating cognitive biases with attention-aware inverse planning
People's goal-directed behaviors are influenced by their cognitive biases, and autonomous systems that interact with people should be aware of this. For example, people's attention to objects in their...
arxiv.org
November 13, 2025 at 1:20 PM
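A toy sketch of the general idea (assumptions mine, not the authors' model: softmax choices among options with two features, and a single attention weight that down-weights the second feature): simulate an agent with limited attention, then recover a posterior over the attention parameter from its observed choices.

```python
import numpy as np

rng = np.random.default_rng(1)
options = rng.uniform(0, 1, size=(50, 3, 2))   # 50 trials, 3 options, 2 features each

def choice_probs(opts, attention, beta=5.0):
    """Softmax choice rule; feature 2 is scaled by the attention weight."""
    utilities = opts[..., 0] + attention * opts[..., 1]
    centered = beta * (utilities - utilities.max(axis=-1, keepdims=True))
    exp_u = np.exp(centered)
    return exp_u / exp_u.sum(axis=-1, keepdims=True)

# Simulate an agent who largely ignores the second feature
true_attention = 0.2
probs = choice_probs(options, true_attention)
choices = np.array([rng.choice(3, p=p) for p in probs])

# Invert the choice model: grid posterior over the attention parameter (flat prior)
grid = np.linspace(0, 1, 101)
log_lik = np.array([
    np.log(choice_probs(options, a)[np.arange(len(choices)), choices]).sum()
    for a in grid
])
posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()
print("Posterior mean attention:", round(float((grid * posterior).sum()), 2))
```

Inverting the choice model in this way is what lets a system infer how far behavior deviates from full-information rationality, rather than assuming the agent weighs everything it can see.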
✨New preprint! Why do people express outrage online? In 4 studies we develop a taxonomy of online outrage motives, test what motives people report, what they infer for in- vs. out-partisans, and how motive inferences shape downstream intergroup consequences. Led by @felix-chenwei.bsky.social 🧵👇
November 11, 2025 at 4:34 PM
Reposted by William J. Brady
There has been a sharp rise in moralized language on social media

Two processes explained this shift:
(1) within-user increases in moral language over time
(2) highly moralized users became more active while less moralized users disengaged
osf.io/preprints/ps...
November 5, 2025 at 4:12 PM
Reposted by William J. Brady
Posting is correlated with affective polarization:
😡 The most partisan users — those who love their party and despise the other — are more likely to post about politics
🥊 The result? A loud angry minority dominates online politics, which itself can drive polarization (see doi.org/10.1073/pnas...)
October 30, 2025 at 8:09 AM
Reminder to apply to the DRRC postdoc fellowship! Deadline is this week.
Are you interested in topics related to conflict and intergroup relations *broadly construed*? Come join us as a postdoc in the Dispute Resolution Research Center! This position is up to 3 years, comes with your own research funding, and a phenomenal network of past DRRC postdocs.
Apply now for Kellogg’s DRRC Postdoc Fellowship, which supports outstanding research in conflict and cooperation, offering dedicated time for scholarship, access to exceptional resources, and a vibrant academic community. Deadline: Nov 1.
t.co/UDZwJCqDw5
October 28, 2025 at 4:56 PM
Reposted by William J. Brady
Re-posting this because I really like it and I think we need to understand identity from a functionalist perspective more than ever.
osf.io/preprints/ps...
I wrote a chapter on a functionalist account of social identity.

IMO, thinking about identity in an instrumental way helps explain a lot of behavior that seems otherwise baffling.
osf.io/preprints/ps...
October 27, 2025 at 8:11 PM
Reposted by William J. Brady
1. We ( @jbakcoleman.bsky.social, @cailinmeister.bsky.social, @jevinwest.bsky.social, and I) have a new preprint up on the arXiv.

There we explore how social media companies and other online information technology firms are able to manipulate scientific research about the effects of their products.
October 24, 2025 at 12:47 AM
Reposted by William J. Brady
Great piece on the absurdity of brute force multiverse analyses.

www.pnas.org/doi/10.1073/...
Robustness is better assessed with a few thoughtful models than with billions of regressions | PNAS
www.pnas.org
October 22, 2025 at 5:29 PM
Reposted by William J. Brady
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…
www.sciencedirect.com
October 21, 2025 at 8:24 PM
Last call for data-blitz and poster submissions for the Computational Psychology preconference @spspnews.bsky.social! See thread below for details and hope to see you in Chicago!
The computational psych preconference is back @spspnews.bsky.social for a full day! This year's lineup:

👉 theory-driven modeling: Hyowon Gweon
👉 data-driven discovery: @clemensstachl.bsky.social
👉 application: me
👉 panel: @steveread.bsky.social, Sandra Matz, @markthornton.bsky.social, Wil Cunningham
October 20, 2025 at 3:04 PM
Reposted by William J. Brady
🚨 New preprint 🚨

Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.

Yet, people preferred sycophantic chatbots and viewed them as unbiased!

osf.io/preprints/ps...

Thread 🧵
October 1, 2025 at 3:16 PM
Reposted by William J. Brady
Our new paper finds that AI can overcome partisan #bias

We find that AI sources are preferred over ingroup and outgroup sources--even when people know both are equally accurate (N = 1,600+): osf.io/preprints/ps...
September 30, 2025 at 1:13 PM
The computational psych preconference is back @spspnews.bsky.social for a full day! This year's lineup:

👉 theory-driven modeling: Hyowon Gweon
👉 data-driven discovery: @clemensstachl.bsky.social
👉 application: me
👉 panel: @steveread.bsky.social, Sandra Matz, @markthornton.bsky.social, Wil Cunningham
September 29, 2025 at 4:57 PM
Reposted by William J. Brady
Only a small % of people engage in toxic activity online, but they’re responsible for a disproportionate share of hostile or misleading content on nearly every platform

Because super-users are so active, they dominate our collective impression of the internet www.theguardian.com/books/2025/j...
Are a few people ruining the internet for the rest of us?
Why does the online world seem so toxic compared with normal life? Our research shows that a small number of divisive accounts could be responsible – and offers a way out
www.theguardian.com
July 13, 2025 at 3:32 PM
Reposted by William J. Brady
New dataset that describes social media activity of a very large group of US elected officials: www.nature.com/articles/s41...
The digitally accountable public representation database: online communication by U.S. officials - Scientific Data
www.nature.com
September 28, 2025 at 5:10 PM
And here we are
September 28, 2025 at 5:06 PM
Really cool work led by @hongkai1.bsky.social ✨ Observational and experimental studies find that differentiation helps explain the evolution of negative discourse online!
🚨New preprint🚨

osf.io/preprints/ps...

In a sample of ~2 billion comments, social media discourse becomes more negative over time

Archival and experimental findings suggest this is a byproduct of people trying to differentiate themselves

Led by @hongkai1.bsky.social in the 1st year (!) of his PhD
September 27, 2025 at 12:06 PM
@aoc.bsky.social come hang out at Northwestern and we'll take you surfing on the Great Lakes!
I have been taking surfing lessons. Still in the beginner kook trenches, but celebrating my last time on a foam board 😎

Graduating to a hard top next session.

It’s fun to learn new things! Even if you look like a goober at the beginning. 🏄🏽‍♀️
September 26, 2025 at 2:50 PM
Reposted by William J. Brady
I have been taking surfing lessons. Still in the beginner kook trenches, but celebrating my last time on a foam board 😎

Graduating to a hard top next session.

It’s fun to learn new things! Even if you look like a goober at the beginning. 🏄🏽‍♀️
September 26, 2025 at 1:07 AM
Reposted by William J. Brady
Yet again, machine learning — even gussied up via the transformer architecture — encodes and reinforces societal biases.

This study reveals that LLM-based peer review relies heavily on author institution in its decisions.

arxiv.org/abs/2509.15122
Prestige over merit: An adapted audit of LLM bias in peer review
Large language models (LLMs) are playing an increasingly integral, though largely informal, role in scholarly peer review. Yet it remains unclear whether LLMs reproduce the biases observed in human de...
arxiv.org
September 22, 2025 at 6:11 AM