William Gunn
@metasynthesis.net
If I could tell everyone anything about social media, I would say:
#1. Take it WAY less seriously, no matter how much attention something is getting.
#2. Any recommendation feed will run out of good matches for your specific interests and just start showing you stuff that's popular.
November 28, 2025 at 3:46 AM
This stuff never makes sense until you realize that making sense is not what it's intended to do.
I thought people were exaggerating when they said so-called leftists on here were getting mad about lab grown meat, but nope.

When you become indistinguishable from the conspiratorial freaks I grew up around in the south, you don’t get to call yourself a leftist anymore.
November 28, 2025 at 3:33 AM
What You Don't Know About AI Use Of Water Will Shock You!
Debunking AI’s Environmental Panic - with Andy Masley pca.st/episode/54e370… #AI #environment (good episode)
November 28, 2025 at 3:26 AM
Reposted by William Gunn
Debunking AI’s Environmental Panic - with Andy Masley pca.st/episode/54e370… #AI #environment (good episode)
November 28, 2025 at 2:27 AM
Reposted by William Gunn
Do you want to fund AI alignment research?

The AISI Alignment Team and I have reviewed >800 Alignment Project Applications from 42 countries, and we have ~100 that are very promising. Unfortunately, this means we have a £13-17M funding gap! Thread with details! 🧵
I am very excited that AISI is announcing over £15M in funding for AI alignment and control, in partnership with other governments, industry, VCs, and philanthropists!

Here is a 🧵 about why it is important to bring more independent ideas and expertise into this space.

alignmentproject.aisi.gov.uk
The Alignment Project by AISI — The AI Security Institute
The Alignment Project funds groundbreaking AI alignment research to address one of AI’s most urgent challenges: ensuring advanced systems act predictably, safely, and for society’s benefit.
alignmentproject.aisi.gov.uk
November 27, 2025 at 6:25 PM
When people mimic the "stochastic parrots" they're trying to criticize...
Reminds me of a webinar where someone dropped a "comment, not a question" saying we should distinguish neural search from generative AI because only the former can do evidence retrieval? Er, the webinar had literally just shown screening with LLMs! I know people want to villainize "gen AI", but can they think before speaking?
November 27, 2025 at 6:26 PM
Reposted by William Gunn
METRICS is accepting applications for the 2026–27 postdoctoral fellowship in meta-research at Stanford. Deadline: Feb 15, 2026. Start date will be around Oct 1, 2026 (+/- 2 month flexibility). See: metrics.stanford.edu/postdoctoral... #MetaResearch #postdoc
Postdoctoral Fellowship Announcement 2026-27
metrics.stanford.edu
November 26, 2025 at 10:05 PM
Sometimes people say things because they think they're true, sometimes they say them to indicate which side they're on. If you confuse which is which, you get what you see in this thread.
Is this platform still massively against AI or has it moved more towards acceptance?
November 26, 2025 at 5:06 PM
Reposted by William Gunn
🤔💭What even is reasoning? It's time to answer the hard questions!

We built the first unified taxonomy of 28 cognitive elements underlying reasoning

Spoiler—LLMs commonly employ sequential reasoning, rarely self-awareness, and often fail to use correct reasoning structures🧠
November 25, 2025 at 6:26 PM
Reposted by William Gunn
Agentic AI systems can plan, take actions, and interact with external tools or other agents semi-autonomously. New paper from CSA Singapore & FAR.AI highlights why conventional cybersecurity controls aren’t enough and maps agentic security frameworks & some key open problems. 👇
November 25, 2025 at 7:41 PM
Reposted by William Gunn
I’m pleased to share the Second Key Update to the International AI Safety Report, which outlines how AI developers, researchers, and policymakers are approaching technical risk management for general-purpose AI systems.
(1/6)
November 25, 2025 at 12:06 PM
Reposted by William Gunn
Interestingly, high self-reported confidence is associated with lower accuracy. This runs counter to most of the literature.

In our setting, we can track individual forecasters over time. And thus we can observe: this result is driven by overconfident forecasters.
November 24, 2025 at 3:43 PM
Reposted by William Gunn
🏆 Institutional: The Brazilian Reproducibility Initiative is a nationwide effort to evaluate research results in laboratory biology & the largest coordinated replication effort in the field worldwide, showcasing the potential of country-level research improvements. @redebrrepro.bsky.social (3/5)
November 24, 2025 at 10:00 AM
William's Law: Any content moderation plan that doesn't account for motivated reasoning by the people in charge of the plan will eventually expose them to risk. How long it takes depends on what senior leadership signals they want to hear.
November 24, 2025 at 1:41 AM
Good thread. You should judge people by more important actions (and also understand that many people won't have that history and will use their opinion of your appearance to form a first impression, whether they should or not).
But on issues such as respectability and morality, I think you should judge people by their deeper, more important actions. That doesn't mean how they dress, but how they treat others on a more meaningful level.

I will end with something I wrote five years ago about the messy nature of dress codes.
November 24, 2025 at 1:31 AM
Reposted by William Gunn
It’s really hard to defend industry-academic collaborations with Meta as earnest if they’re internally burying evidence of harm.

www.reuters.com/sustainabili...
November 23, 2025 at 6:22 PM
Aaron is not a hype guy.
Google NotebookLM is the most impressive Google product in years (the last one was Google Photos). If you're still one of those who think "AI" is all hype, please try Google NotebookLM.
November 23, 2025 at 2:27 AM
I've been saying forever that microscopes, plate readers, etc. need to digitally sign their output. It's not just about verification; reproducibility would also be way easier if machine-generated metadata came with the data (rough sketch below).
I tried an even harder example on Gemini Pro image generation and this is quite scary/amazing. I asked for a microscopy image of around 20 HeLa cells, GFP tagged 20% nuclear, 10% membrane, +1 nuclear staining, + overlap. Image below and prompt in the following post.
November 23, 2025 at 2:26 AM
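A minimal sketch of what signed instrument output could look like, assuming the Python `cryptography` package. The key handling, function names, and metadata fields are hypothetical illustrations of the idea from the post above, not any vendor's actual implementation.

```python
# Sketch: an instrument bundles machine-generated metadata with its raw output
# and signs both, so downstream users can verify provenance and reuse the metadata.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice this key would be provisioned per device, not generated ad hoc.
instrument_key = Ed25519PrivateKey.generate()


def _payload(image_bytes: bytes, metadata: dict) -> bytes:
    # Canonical payload: hash of the raw data plus sorted-key JSON metadata.
    digest = hashlib.sha256(image_bytes).hexdigest().encode()
    return digest + json.dumps(metadata, sort_keys=True).encode()


def package_output(image_bytes: bytes, metadata: dict) -> dict:
    """Return a bundle containing metadata, data hash, and a signature over both."""
    return {
        "metadata": metadata,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "signature": instrument_key.sign(_payload(image_bytes, metadata)).hex(),
    }


def verify_output(image_bytes: bytes, bundle: dict, public_key) -> bool:
    """Recompute the payload and check the instrument's signature."""
    try:
        public_key.verify(
            bytes.fromhex(bundle["signature"]),
            _payload(image_bytes, bundle["metadata"]),
        )
        return True
    except InvalidSignature:
        return False


# Hypothetical usage: metadata the acquisition software already knows.
image = b"...raw TIFF bytes from the microscope..."
meta = {"instrument": "confocal-01", "objective": "63x", "exposure_ms": 120}
bundle = package_output(image, meta)
assert verify_output(image, bundle, instrument_key.public_key())
```

Signing the hash of the data together with the canonicalized metadata means neither can be altered after acquisition without the check failing, which is what makes the metadata trustworthy enough to lean on for reproducibility.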
AGI inevitability is really just "manifesting" for AI enthusiasts. It's their form of Prosperity Gospel. If they believe hard enough, AGI will come and solve all the problems. They'd probably be happier if they just hired a dom. Doms for e/accs - new EA cause area?
November 23, 2025 at 2:22 AM
Reposted by William Gunn
I tried the updated Gemini image generator on science-related image prompts that have failed in the past and I am really impressed by the quality of the outputs. The first is drawing a diagram for a pocket-prediction algorithm using Voronoi diagrams, Delaunay triangulation, and alpha shapes.
November 21, 2025 at 1:14 PM
It really got the postures of everyone spot on.
This is nano banana pro, honestly I thought it would block this.
November 21, 2025 at 2:14 AM
Reposted by William Gunn
holy heck, there's really no other word for it
November 20, 2025 at 6:32 PM
Reposted by William Gunn
It was a three-man team that took the Grok hardpoint that day, and two of us were just there to guard the poet. Whistler was a sonnet slinger fresh from the Guatemalan meter wars, still waking up with the taste of a bloody metonym on his tongue.
November 20, 2025 at 8:29 PM
Reposted by William Gunn
Reimagining Scholarly Publishing Workflow: A High-Level Map of What Changes Next - The Scholarly Kitchen
Rather than just bolting AI onto existing publication workflows, there is a real opportunity to rethink and redesign them for human–AI collaboration. Some thoughts on what that looks like in practice.
scholarlykitchen.sspnet.org
November 20, 2025 at 12:30 PM