Joe Bak-Coleman
@jbakcoleman.bsky.social
Research Scientist at the University of Washington based in Brooklyn. Also: SFI External Applied Fellow, Harvard BKC affiliate. Collective Behavior, Statistics, etc.
In any event, this is the point of the preprint below. The asymmetry here makes it all too easy to distract, delay and bias science without even needing to corrupt a single scientist.

arxiv.org/abs/2510.19894
The Risks of Industry Influence in Tech Research
Emerging information technologies like social media, search engines, and AI can have a broad impact on public health, political institutions, social dynamics, and the natural world. It is critical to ...
arxiv.org
November 19, 2025 at 2:02 PM
As George Carlin said, "You don't need a formal conspiracy when interests converge."

A host of top-journal papers and the only real means of running these experiments are invariably going to be persuasive to earnest scientists. The rigor they need is the design bias Facebook needs.
November 19, 2025 at 1:59 PM
It all looks independent! So independent, in fact, that the authors defending the collaboration with Meta affirmatively declare no competing interests. Even when a core argument they're making is that they were great about declaring competing interests...

www.pnas.org/doi/10.1073/...
November 19, 2025 at 1:57 PM
At the end of the day, the key question we might have (could Meta shift elections?) is embroiled in a debate playing out in Science/PNAS, stuck in a liminal space where no one is positioned to push that estimate either way. The academic scientists argue it out while Meta steps aside.
November 19, 2025 at 1:57 PM
So where do we find ourselves? People point it out, and the academic co-authors go on the defense. The academics aren't corrupt: who wouldn't defend five years of effort published in top journals? Meta obfuscates about who knew what. Absent the data, we can't figure out how it impacted (say) vote shifts.
November 19, 2025 at 1:57 PM
Issues come up... such as the fact that Meta altered its algorithm in ways that undoubtedly biased things further towards goose eggs. This wasn't disclosed in peer review. The academics said they didn't know; Meta said they did.

www.science.org/doi/10.1126/...
Context matters in social media
Does the information that people see on social media influence their political views? Is it making people politically more divided? In July 2023, Science published three papers on an unprecedented stu...
www.science.org
November 19, 2025 at 1:57 PM
It all looks like rigor, but it's design bias. From the jump, Meta's got big papers with likely goose eggs in the pipeline. You're not paying the academics involved (but see below), so you can claim independence. Of course, they're all getting a lifetime of papers in top journals.
November 19, 2025 at 1:57 PM
The FDR correction they use tries to keep false positives at 5%. The study has maybe 40-50% power to detect a change in vote share this big, with corrections. Sounds good, but it implicitly values Meta's incentives (minimizing false positives) over societal ones (minimizing false negatives) at a rate of ~10:1.
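The implied asymmetry can be sketched in a few lines; the exact power figure here is an assumption taken from the 40-50% range above, not a number reported in the papers:

```python
# Error rates implied by the design (power value assumed, per the
# 40-50% range discussed above).
alpha = 0.05        # false-positive rate the FDR correction targets
power = 0.45        # assumed power to detect a ~2.5pp vote-share shift
beta = 1 - power    # false-negative rate: chance of missing a real shift
cost_ratio = beta / alpha
print(f"implied false-negative : false-positive tolerance ~ {cost_ratio:.0f}:1")
```

With power anywhere in that range, the design tolerates missing a real effect roughly ten times more readily than flagging a spurious one.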
November 19, 2025 at 1:57 PM
Let's take what (imo) is the focal question of the whole collaboration: could it shift the vote? The work suggests a ~2.5% shift, enough to swing an election, yet non-significant after multiple-comparisons corrections. What do those multiple-comparisons corrections encode?

www.pnas.org/doi/10.1073/...
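For intuition, here is a minimal sketch of a Benjamini-Hochberg-style FDR procedure, one standard choice for this kind of correction (whether the papers used exactly this variant is an assumption here):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses whose
    p-values rank at or below the largest k with p_(k) <= (k/m) * q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            max_k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= max_k
    return reject

# The same p-value that is significant on its own fails once it is
# one of many tested outcomes:
print(benjamini_hochberg([0.03]))                  # [True]
print(benjamini_hochberg([0.03] + [0.5] * 19)[0])  # False
```

That is what the correction encodes: the more outcomes tested, the stronger any single effect must be to survive, which systematically trades false positives for false negatives.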
November 19, 2025 at 1:57 PM
Given the unique data/experimental access, these papers were *bound* to wind up in Nature/Science/PNAS and grab our collective attention. And, however it came about, these studies are remarkably design-biased towards finding nulls for individual/societal effects, but not for platform exposure.
Moving towards informative and actionable social media research
Social media is nearly ubiquitous in modern life, raising concerns about its societal impacts-from mental health and polarization to violence and democratic disruption. Yet research on its causal effe...
arxiv.org
November 19, 2025 at 1:57 PM
And even if polling results aren’t altered… anyone can say they are!
November 19, 2025 at 12:19 PM
There’s little reason not to: it doesn’t preclude publication, and it might pull one paper off of your peer-review pile.

Failing to do so can result in anything from a correction to re-review or retraction.
November 18, 2025 at 11:30 PM
It isn’t really a vibes-based thing in the sense that you’re assessing whether it impacted your judgement. It’s just a transparent declaration of facts that enables others to make that assessment. Especially with things like coauthors, it’s just straightforward.
November 18, 2025 at 11:30 PM
OOOF
November 17, 2025 at 1:52 PM
archive.is
November 16, 2025 at 11:58 PM
Also on the ethics front… LLM persuasion papers often seem to have some pretty serious externalities for participants (convincing them of conspiracy theories), but only the benefits come into focus in the discussion.
November 16, 2025 at 6:41 PM
Digging into the literature for a project trying to quantify this, it certainly seems to be the case, and the folks who were close to social media companies are moving close to AI firms. You see it in the methods, conclusions, framing, etc.
November 16, 2025 at 6:38 PM
And my seeming empathy above is there because understanding what produces the Watsons of science is essential for ensuring we don’t get the next one. It’s not deference to him; he needs to be excised.
November 16, 2025 at 3:12 AM
If you find your scientific identity tied to a finding, let it go or try to prove it wrong.
November 16, 2025 at 2:36 AM
I think there’s probably value in reflecting on the way the hubris required to try to understand the world breeds arrogance, indifference, and inflexibility. It weds us to our ideas as scientists in ways we need in order to find them, but must divorce ourselves from once they’re found.
November 16, 2025 at 2:35 AM