William Gunn
@metasynthesis.net
Reposted by William Gunn
Join our Replicability Project: Health Behavior!

We have 55 replication studies underway; our target is 65-70.

We are only recruiting for secondary data replications, i.e., using existing data to test the original question.

Here's a list of studies we think could be feasible.

If interested...
Replications Sourcing Sheet
docs.google.com
December 18, 2025 at 2:29 PM
Reposted by William Gunn
Are you passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like? Come do a PhD with us.

Closing Date: 10 February 2026

Apply here aial.ie/hiring/phd-a...
December 17, 2025 at 6:52 PM
Single-blind!
December 17, 2025 at 8:14 PM
If you optimize for engagement and there's nobody empowered & willing to say, "We'll take the engagement hit & not launch this," you're going to get sycophancy. It's going to hit some people really hard, and then you'll get hit with worse regulation than if you had shown any restraint or taste.
If we are banning cell phones for kids, we need to be talking about banning chatbots for boomers
www.persuasion.community/p/my-chatgpt...
December 17, 2025 at 7:44 PM
Whenever you see an article that prints a quote from a spokesperson and then says something like "to better understand, we spoke to 18 current and former employees", you know they're about to throw down.
December 17, 2025 at 7:39 PM
Browser makers, particularly Chrome and Safari, have long wanted to remove the URL bar and disallow manual typing of URLs. They want the browser to become TV. I don't care if 99.99% of users never type a URL. They'll have to pry it from my cold dead hands.
December 16, 2025 at 5:35 PM
Reposted by William Gunn
AI-assisted coding: 10 simple rules to maintain scientific rigor www.thetransmitter.org/artificial-i... - my latest in @thetransmitter.bsky.social
AI-assisted coding: 10 simple rules to maintain scientific rigor
These guidelines can help researchers ensure the integrity of their work while accelerating progress on important scientific questions.
www.thetransmitter.org
December 16, 2025 at 1:48 PM
This feed has totally replaced the Discover feed for me. It seems to take popularity less into account and "show less like this" actually works.
Enjoying the For You feed? Give it a like ♡ to help more people discover it: bsky.app/profile/did:...

The more people use it -> the more feedback we get -> the better we can make it for you.
December 16, 2025 at 5:19 PM
Reposted by William Gunn
Nice write-up of the first wave of the AI Safety Red Team Clinic at @cornelltech.bsky.social.

We help nonprofits & public sector orgs stress-test their AI tools before they go live through a *free* 6-week adversarial testing exercise.

We're looking for our next client! Please help spread the word.
‘Red team’ students stress-test NYC health department’s AI | Cornell Chronicle
People usually strive to be their true, authentic selves, but this fall, five master’s students at Cornell Tech adopted not only alter egos but also “bad intent,” in an effort to make AI safer for hea...
news.cornell.edu
December 16, 2025 at 4:36 PM
I've always thought these systems are bizarre. I know people want to be able to analyze search performance in ways they're accustomed to, but it's like going to all the trouble to make a car, then taking out the engine and bolting the frame to a horse. If you already have an index, I get it, but...
Insight: I've been more negative than many librarians about those simple "push natural language input to LLM to generate Boolean & run" systems, mostly because I have higher expectations than they do. (1)
December 16, 2025 at 5:16 PM
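For readers who haven't seen these systems, here is a minimal sketch of the pattern being criticized, with a hypothetical llm_complete() standing in for the model call and a toy in-memory corpus in place of a real index; none of the names come from any actual product.

```python
# Minimal sketch of the "NL question -> LLM -> Boolean query -> run"
# pattern. llm_complete() is a hypothetical stand-in for a model call,
# and the dict of docs is a toy corpus, not a real search index.

def llm_complete(prompt: str) -> str:
    # A real system would send `prompt` to an LLM; we return a canned
    # query so the sketch runs end to end.
    return '"heart disease" AND (prevention OR screening)'

def to_boolean_query(question: str) -> str:
    prompt = ("Rewrite as a Boolean search query using AND/OR and "
              f"quoted phrases: {question}")
    return llm_complete(prompt)

def search(query: str, docs: dict) -> list:
    # Toy evaluator: split on AND; a clause matches a document if any
    # of its OR-alternatives appears in the text.
    clauses = [c.strip(" ()") for c in query.split(" AND ")]
    hits = []
    for doc_id, text in docs.items():
        text_l = text.lower()
        if all(any(alt.strip(' "').lower() in text_l
                   for alt in clause.split(" OR "))
               for clause in clauses):
            hits.append(doc_id)
    return hits

docs = {1: "Screening programs for heart disease.",
        2: "A short history of cardiology."}
query = to_boolean_query("How can heart disease be prevented?")
print(query, "->", search(query, docs))  # -> [1]
```

The sketch runs end to end only because the "LLM" returns a canned query; the criticism above is about what happens when a real model's Boolean output is run blindly against a real index.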
I wouldn't go as far as the author and say measurement itself is the problem (historians have been particularly ill-served by the institutionalization of citations) but the increased rate at which LLMs can create the structure but not the substance of a scientific publication is concerning.
Absolutely essential piece by @kevinbaker.bsky.social critiquing AI automation of science by pulling back the curtain on what modern science actually is. LLMs already replicate and accelerate its irrational bureaucratic structure.
Context Widows
or, of GPUs, LPUs, and Goal Displacement
artificialbureaucracy.substack.com
December 15, 2025 at 7:38 PM
Keep fighting the good fight, folks. It shouldn't be this hard.
December 15, 2025 at 6:35 PM
I'm glad Australia is doing an experiment for us.
A new PNAS paper finds that polarization increased immediately after the invention of smartphones and the advent of social media, which both appeared around the same year, 2008.
www.pnas.org/doi/10.1073/...
December 15, 2025 at 6:24 PM
Reposted by William Gunn
In the Messy Middle: Observations from the Front Line at the UKSG Forum - The Scholarly Kitchen
The UKSG Forum is "an entire 2-3 day conference stripped back to bare essentials and completed in just one day". Here are the key takeaways.
scholarlykitchen.sspnet.org
December 15, 2025 at 12:30 PM
Reposted by William Gunn
We're hiring interns in the Computational Social Science group at Microsoft Research NYC!

If you're interested in designing AI‑based systems and understanding their impact at both individual and societal scales, apply here by Jan 9, 2026: apply.careers.microsoft.com/careers/job/...
Research Intern - Computational Social Science | Microsoft Careers
Research Interns put inquiry and theory into practice. Alongside fellow doctoral candidates and some of the world's best researchers, Research Interns learn, collaborate, and network for life. Researc...
apply.careers.microsoft.com
December 15, 2025 at 4:33 PM
We can get the benefits while minimizing the harms if just a few key participants would stop enabling it. If the AI industry can't get @hf.co & CivitAI to do literally anything at all to stop deepfakes, they're going to bring far more onerous regulation down on everyone.
🧵🧵🧵 In the past few months, I have looked at hundreds, maybe thousands, of AI porn images/videos (for science).

Here's what I learned from our investigation of over 50 platforms, sites, apps, Discords, etc., while writing this paper.

papers.ssrn.com/sol3/papers...
December 15, 2025 at 6:14 PM
Reposted by William Gunn
Download our large database of postdoc fellowships in all fields of research.

Database freely available to all; 281 fellowships.

Download here: research.jhu.edu/rdt/funding-...
December 13, 2025 at 1:38 PM
Serious conversations that are representative do happen. They mostly don't happen in public online fora.
Online political discussions are characterized by a minority of users dominating the conversation, while most remain silent.

Those who perceive a discussion as toxic/polarized tend to remain silent, but toxicity engages power users (namely men interested in politics): www.science.org/doi/10.1126/...
December 11, 2025 at 6:27 PM
Reposted by William Gunn
Christmas came early this year! Very happy to see our paper out in Science Advances. Led by @lfoswaldo.bsky.social, we ran a unique collective field experiment on Reddit to better understand who is participating in online debates and why.

Paper: www.science.org/doi/10.1126/...

And more below 👇
December 10, 2025 at 9:32 PM
Reposted by William Gunn
UK AISI is hiring for a technical research role on open-weight model safeguards.

www.aisi.gov.uk/careers
December 11, 2025 at 2:00 PM
Popularity is orthogonal to taste, and nowhere is that more true than with food: laurenleek.substack.com/p/how-google... The price of having a decent algorithmic social feed is constantly pressing "show less like this" on ragebait. The price of getting good food recommendations is ... ?
How Google Maps quietly allocates survival across London’s restaurants - and how I built a dashboard to see through it
I wanted a dinner recommendation and got a research agenda instead. Using 13000+ restaurants, I rebuild its ratings with machine learning and map how algorithmic visibility actually distributes power.
laurenleek.substack.com
December 9, 2025 at 7:35 PM
The big question: will "provably safe RL" in self-driving generalize? Are they solving the right problems with the right methods based on an accurate model of the world?
December 9, 2025 at 7:16 PM
Bold strategy from India: a nationwide license to copyrighted content for AI training in exchange for payments into a national collections body. techcrunch.com/2025/12/09/i... The simplicity and straightforwardness of the approach are a bit refreshing, I gotta say. Will it work for creators?
India proposes charging OpenAI, Google for training AI on copyrighted content | TechCrunch
India has given OpenAI, Google, and other AI firms 30 days to respond to its proposed royalty system for training on copyrighted content.
techcrunch.com
December 9, 2025 at 7:12 PM
Reposted by William Gunn
I wanted dinner recommendations so I scraped 13,000+ London restaurants and accidentally discovered Google Maps is running a shadow economy. Anyway here's a dashboard and a political economy thesis: open.substack.com/pub/laurenle...
How Google Maps quietly allocates survival across London’s restaurants - and how I built a dashboard to see through it
I wanted a dinner recommendation and got a research agenda instead. Using 13000+ restaurants, I rebuild its ratings with machine learning and map how algorithmic visibility actually distributes power.
open.substack.com
December 9, 2025 at 7:53 AM
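For the curious, here is a rough sketch of what "rebuilding ratings with machine learning" could look like; the author's actual features and model aren't reproduced here, so the review-count and price-level columns below are illustrative assumptions, fit on synthetic data.

```python
# Hypothetical sketch: rebuild a "displayed rating" from listing
# features and inspect what drives it. Feature names and data are
# illustrative assumptions, not the author's published pipeline.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500  # synthetic stand-in for the 13,000+ scraped listings

review_count = rng.integers(1, 2000, n)
price_level = rng.integers(1, 5, n)  # 1 = cheap, 4 = expensive
# Synthetic target: review volume nudges the score up independently
# of anything taste-related, mimicking a popularity effect.
rating = (3.5 + 0.0004 * review_count - 0.05 * price_level
          + rng.normal(0, 0.2, n)).clip(1, 5)

X = np.column_stack([review_count, price_level])
model = LinearRegression().fit(X, rating)

print("R^2:", round(model.score(X, rating), 3))
for name, coef in zip(["review_count", "price_level"], model.coef_):
    print(f"{name}: {coef:+.5f}")
```

If the fitted review_count coefficient comes out positive, that is the shadow-economy point in miniature: review volume (visibility), not just quality, tracks the displayed score.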