ControlAI
@controlai.com
controlai.com
We work to keep humanity in control.

Subscribe to our free newsletter: https://controlai.news

Join our discord at: https://discord.com/invite/ptPScqtdc5
AI godfather Geoffrey Hinton explains why he signed the call to ban superintelligence, which now has over 120,000 supporters.

"If we know we can't do it safely, we should stop.

And maybe if that knowledge is widely percolated to the public, we will be able to stop."
November 14, 2025 at 2:35 PM
Microsoft AI CEO Mustafa Suleyman says recursively self-improving AIs that can specify their own goals could be built in just a few years' time.

In the same interview, he said such AIs would be "very dangerous" and should never be built.
November 12, 2025 at 5:52 PM
"The reason I'm so concerned about AI safety is that one of the possibilities is the Terminator scenario."
— Elon Musk
November 12, 2025 at 3:45 PM
Microsoft AI CEO Mustafa Suleyman says that smarter-than-human AIs capable of self-improvement, complete autonomy, or independent goal setting would be "very dangerous" and should never be built.

He says others in the field "just hope" that such an AI would not harm us.
November 11, 2025 at 11:22 AM
Microsoft AI CEO Mustafa Suleyman says he's seeing lots of indications that people want to build superintelligence to replace or threaten our species.
November 10, 2025 at 4:43 PM
Earlier this week, King Charles gave Nvidia CEO Jensen Huang a copy of his 2023 AI Safety Summit speech.

In his speech, the King said that there is a "clear imperative" to ensure that AI remains safe and secure, and that countries need to work together to ensure this.
November 9, 2025 at 5:35 PM
There's a very simple argument for why developing superintelligence ends badly.

Conjecture CEO Connor Leahy: "If you make something that is smarter than all humans, you don't know how to control it, how exactly does that turn out well for humans?"
November 7, 2025 at 4:07 PM
"No one can deny that this is real. "

Conjecture CEO Connor Leahy says the coalition calling for a ban on the development of superintelligence makes it harder and harder to ignore the danger of smarter-than-human AI.
November 6, 2025 at 9:16 PM
AI godfather and Nobel Prize winner Geoffrey Hinton says AI companies are much more concerned with racing each other than ensuring that humanity actually survives.
November 5, 2025 at 5:17 PM
AI godfather Geoffrey Hinton says countries will collaborate to prevent AI from taking over.

"On AI taking over they will collaborate 'cause nobody wants that. The Chinese Communist Party doesn't want AI to take over. Trump doesn't want AI to take over. They can collaborate on that."
November 5, 2025 at 11:09 AM
Why is AI different from other technologies?

AI godfather Geoffrey Hinton points out that humans were always in charge.

" We control the steam engine. This isn't like that."

Hinton also says that AI will soon be smarter than humans.
November 4, 2025 at 2:52 PM
AI researcher Nate Soares says developing an AI is much more like growing an organism than writing code.
November 3, 2025 at 7:25 PM
Senator Bernie Sanders: AI is like a meteor coming to this planet.

Sanders adds that he's worried about the development of superintelligence, which we could lose control of.
November 2, 2025 at 5:40 PM
OpenAI's Chief Scientist Jakub Pachocki says superintelligence could be developed in less than a decade.

Superintelligent AI would be vastly smarter than humans across virtually all cognitive domains, and experts warn it could lead to human extinction.
November 1, 2025 at 10:26 AM