PauseAI
@pauseai.bsky.social
Community of volunteers who work together to mitigate the risks of AI. We want an international pause on the development of superhuman AI until it's safe.

https://pauseai.info
The American public are worried about us losing control to superhuman AI.
November 14, 2025 at 1:12 PM
Two news stories from last week.

The public understand we're barrelling towards disaster with the race to build superintelligence, and yet Big Tech continues to lobby against regulation.

Politicians need to listen to us, not AI companies.
November 12, 2025 at 4:43 PM
Hank Green discusses the letter signed by over 60 UK politicians demanding Google address their violation of the Frontier AI Safety Commitments.
November 7, 2025 at 6:30 PM
"Nobody wants their life, their family, their world to be destroyed."

The global movement to stop the development of superintelligent AI is growing – but it needs to grow faster. We don't know how much time we have.
November 5, 2025 at 4:55 PM
Bernie Sanders and Steve Bannon agree on about two things.

1) 2+2=4.
2) The AI industry's race to replace humans needs to be stopped.
November 4, 2025 at 6:06 PM
Applications are still open for PauseCon Brussels! (details below)
October 29, 2025 at 10:21 AM
Americans understand that racing to build superintelligence with no guardrails could end in disaster.
October 27, 2025 at 12:37 PM
Huge news this week!

Experts, Nobel laureates, politicians, artists, and tens of thousands of ordinary people have called for a ban on the development of superintelligent AI, at least until there is broad scientific consensus that it will be done safely and controllably, and there is public buy-in.
October 24, 2025 at 5:42 PM
MI5 Director General says it would be "reckless" to ignore the danger of AI systems that may evade human control.
October 16, 2025 at 4:50 PM
Over 240 of you are saying NO to the race to build superintelligent AI.
October 16, 2025 at 1:50 PM
At these events, we're asking people to Say No to Superintelligent AI by adding their face to the growing collage showcasing a unified stance against unregulated AI development.

We're almost at 200 people already! Take part here: pauseai.info/sayno
October 4, 2025 at 5:41 PM
Today, as part of our ongoing campaign, two more readings of If Anyone Builds It, Everyone Dies will take place.
October 4, 2025 at 5:41 PM
Frontier AI companies will be required to publicly disclose their safety measures, now that a historic AI safety bill has passed in California.
October 1, 2025 at 5:30 PM
We say NO to the race to build superintelligent AI.

In just 5 days, over 150 of you have joined our campaign showcasing a united stance against reckless AI development.
September 30, 2025 at 3:35 PM
A Vatican roundtable on artificial intelligence concluded that no one should be allowed to develop superintelligent AI until there's a consensus that it will be safe.
September 29, 2025 at 2:19 PM
Two new signatories: 63 UK politicians have now signed PauseAI's open letter on Google DeepMind's broken promises.
September 26, 2025 at 4:18 PM
PauseAI UK spoke about their recent campaigns, including the letter signed by over 60 politicians calling on Google DeepMind to address their violation of the Frontier AI Safety Commitments.
September 24, 2025 at 12:04 PM
ControlAI spoke about how the growing political movement against unregulated frontier AI development emerged from frustration with the AI safety field's reluctance to engage politicians and the public.
September 24, 2025 at 12:04 PM
We read a few sections from the book, including Chapter 14: "Where There's Life, There's Hope", which calls on ordinary people to use their vote, attend protests, and discuss the extinction threat posed by superintelligent AI with friends and family.
September 24, 2025 at 12:04 PM
London's unofficial launch party for If Anyone Builds It, Everyone Dies.
September 24, 2025 at 12:04 PM
That's what over 200 Nobel laureates, AI experts, and former world leaders are urging. They warn that our "window for meaningful intervention" is closing as AI could "soon far surpass human capabilities".
September 23, 2025 at 5:30 PM
Governments must enforce AI red lines before the end of 2026 to prevent "universally unacceptable risks".
September 23, 2025 at 5:30 PM
We can't just close our eyes and hope this problem will go away by itself. We need to act now.
September 22, 2025 at 5:30 PM
New Gallup poll: 80% of Americans want the government to prioritise AI safety over improving AI capabilities as quickly as possible 📊
September 19, 2025 at 3:26 PM
The time is right for a broad social movement against superintelligent AI, says physicist Max Tegmark.
September 19, 2025 at 9:21 AM