Existential Risk Observatory
@xrobservatory.bsky.social
Reducing existential risk by informing the public debate. We propose a Conditional AI Safety Treaty: https://time.com/7171432/conditional-ai-safety-treaty-trump/
Two weeks ago, Geoffrey Hinton told a New Zealand audience that AI could kill their children. The presenter introduced the segment with: "They call it p(doom), don't they, the probability that AI could wipe us out. On the BBC recently you gave it a 10-20% chance."
June 11, 2025 at 10:13 PM
It is now public knowledge that multiple LLMs significantly larger than GPT-4 have been trained, but they have not performed much better. That means scaling laws have broken down. What are the implications for existential risk?
November 22, 2024 at 1:23 PM
Today, we propose the Conditional AI Safety Treaty in TIME as a solution to AI's existential risks.

AI poses a risk of human extinction, but this problem is not unsolvable. The Conditional AI Safety Treaty is a proposed global response to avoid losing control over AI.

How does it work?
November 22, 2024 at 12:21 PM