PauseAI
@pauseai.bsky.social
720 followers · 34 following · 230 posts
Community of volunteers who work together to mitigate the risks of AI. We want to internationally pause the development of superhuman AI until it's safe. https://pauseai.info
MI5 Director General says it would be "reckless" to ignore the danger of AI systems that may evade human control.
To help us grow to 300, simply upload a selfie using the link below. It only takes 30 seconds!

👉 pauseai.info/sayno
Stop Superintelligence
Join the photo petition to say no to the race to build superintelligent AI
pauseai.info
Over 240 of you are saying NO to the race to build superintelligent AI.
At these events, we're asking people to Say No to Superintelligent AI by adding their face to the growing collage showcasing a unified stance against unregulated AI development.

We're almost at 200 people already! Take part here: pauseai.info/sayno
Today, as part of our ongoing campaign, two more readings of If Anyone Builds It, Everyone Dies will take place.
Frontier AI companies will be required to publicly disclose their safety measures, after a historic AI safety bill was passed in California.
We're asking people to say no to superintelligent AI at our events celebrating the release of the New York Times bestseller If Anyone Builds It, Everyone Dies. This week, we have book readings coming up in San Francisco and Berlin.

See the full list of book events here 👉 pauseai.info/if-anyone-bu...
If Anyone Builds It, Everyone Dies
PauseAI events in support of If Anyone Builds It, Everyone Dies
pauseai.info
To take part, simply upload an image of yourself and join the growing movement in support of international regulation.

Take 30 seconds to stand up to AI companies.👉 pauseai.info/sayno
We say NO to the race to build superintelligent AI.

In just 5 days, over 150 of you have joined our campaign showcasing a united stance against reckless AI development.
The working group, which included AI experts such as Stuart Russell and Yoshua Bengio, wrote a global appeal calling for an international treaty and an end to the 'irresponsible' AI race.

Read the 'Fraternity in the age of AI' report here: coexistence.global
Fraternity in the age of AI - Coexistence
Fraternity in the age of AI. Our global appeal for peaceful human coexistence and shared responsibility. Rome, September 12, 2025. To His Holiness Pope Leo XIV, to all Global Leaders, to all People of Good…
coexistence.global
A Vatican roundtable on artificial intelligence concluded that no one should be allowed to develop superintelligent AI until there's a consensus that it will be safe.
George Freeman MP and Delyth Jewell MS are calling on Google to address their violation of the Frontier AI Safety Commitments.

Read Time's article on our open letter here: time.com/7313320/goog...
Exclusive: 60 U.K. Lawmakers Accuse Google of Breaking AI Safety Pledge
The cross-party group warns that Google’s conduct “sets a dangerous precedent.”
time.com
Two new signatories: 63 UK politicians have now signed PauseAI's open letter on Google DeepMind's broken promises.
This event in London was the first of many. We're holding several events over the coming weeks to celebrate the launch of the book, and to turn concern into action.

You can find more details on our Luma page. luma.com/PauseAI
PauseAI Events (Global) · Events Calendar
View and subscribe to events from PauseAI Events (Global) on Luma. Also includes events from adjacent organizations. Add yours using the "+ Add Event" button below!
luma.com
PauseAI UK spoke about their recent campaigns, including the open letter signed by over 60 politicians calling on Google DeepMind to address its violation of the Frontier AI Safety Commitments.
ControlAI spoke about how the growing political movement against unregulated frontier AI development emerged from frustration with the AI safety field's reluctance to engage politicians and the public.
We read a few sections from the book, including Chapter 14: "Where There's Life, There's Hope", which calls on ordinary people to use their vote, attend protests, and discuss the extinction threat posed by superintelligent AI with friends and family.
London's unofficial launch party for If Anyone Builds It, Everyone Dies.
That's what over 200 Nobel laureates, AI experts, and former world leaders are urging. They warn that our "window for meaningful intervention" is closing as AI could "soon far surpass human capabilities".
Governments must enforce AI red lines before the end of 2026 to prevent "universally unacceptable risks".
From an article by David written two years ago, but just as relevant today.
www.newscientist.com/article/2369...
We can't just close our eyes and hope this problem will go away by itself. We need to act now.