Tolga Bilge
@tolgabilge.bsky.social
AI policy researcher @controlai.com | aitreaty.org & taisc.org | Superforecaster

linkedin.com/in/tolga-bilge
newsletter.tolgabilge.com
Reposted by Tolga Bilge
Top AI experts say AI poses an extinction risk on par with nuclear war.

Prohibiting the development of superintelligence can prevent this risk.

We’ve just launched a new campaign to get this done.
September 12, 2025 at 2:09 PM
The AI plateau:
August 12, 2025 at 1:07 AM
Reposted by Tolga Bilge
The future is not set, nor are commitments made by AI companies.

We've been compiling a growing list of examples of AI companies saying one thing, and doing the opposite:
controlai.news/p/art...
Artificial Guarantees 2: Judgment Day
The future is not set, nor are commitments made by AI companies.
controlai.news
February 14, 2025 at 4:07 PM
Reposted by Tolga Bilge
UK POLITICIANS DEMAND REGULATION OF POWERFUL AI

TODAY: Politicians across the UK political spectrum back our campaign for binding rules on dangerous AI development.

This is the first time a coalition of parliamentarians has acknowledged the extinction threat posed by AI.
1/6
February 6, 2025 at 12:56 PM
Did Sam Altman lie to President Trump?

What are the facts?
— Trump announced Stargate
— Elon Musk says they don’t have the money
— Nadella says his $80b is for Azure
— Trump doesn’t know if they have it
— Reporting suggests they may only have $52b

newsletter.tolgabilge.com/p/stargate-g...
Stargate-gate: Did Sam Altman Lie to President Trump?
Jobs, risks, and whether they have the money.
newsletter.tolgabilge.com
February 1, 2025 at 12:23 AM
Reposted by Tolga Bilge
We've just launched an open call for binding rules on dangerous AI development.

Top AI scientists, and even the CEOs of the biggest AI companies themselves, have warned that AI threatens human extinction.

The time for action is now. Sign below 👇
controlai.com/public...
Public Statement | ControlAI
At ControlAI we are fighting to keep humanity in control.
controlai.com
January 24, 2025 at 6:21 PM
Reposted by Tolga Bilge
Know them by their deeds, not their words.

AI companies often say one thing and do the opposite. We’ve been watching closely, and have been compiling a list of examples:

controlai.news/p/art...
Artificial Guarantees
Shifting baselines and shattered promises.
controlai.news
January 16, 2025 at 6:11 PM
Reposted by Tolga Bilge
We need a treaty to establish common redlines on AI.

AI development is advancing rapidly, and we may soon have AI systems that surpass humans in intelligence, yet we have no way to control them. Our very existence is at stake.

This could be the biggest deal in history.

🧵
January 14, 2025 at 5:40 PM
Reposted by Tolga Bilge
Google DeepMind's Chief AGI Scientist says there's a 50% chance that AGI will be built in the next 3 years.

This was in reference to a prediction he made back in 2011. He also thought there was a 5 to 50% chance of human extinction within a year of human-level AI being built!
January 10, 2025 at 5:48 PM
Reposted by Tolga Bilge
The New Year is upon us, and it is a time when many are making predictions about how AI will continue to develop.

We've collected some predictions for AI in 2025, by Elon Musk, Sam Altman, Dario Amodei, Gary Marcus, and Eli Lifland.

Get them in our free weekly newsletter 👇
controlai.news/p/the...
The Unknown Future: Predicting AI in 2025
Artificial General Intelligence, workforce disruption, and dangerous capabilities.
controlai.news
January 9, 2025 at 5:19 PM
Reposted by Tolga Bilge
Last year, OpenAI's chief lobbyist said that OpenAI is not aiming to build superintelligence.

Her boss, Sam Altman, is now bragging about how OpenAI is rushing to create superintelligence.
January 7, 2025 at 2:21 PM
Reposted by Tolga Bilge
Two years of AI politics — where we started, where we stand, and where we’re heading:

newsletter.tolgabilge.com/p/two-years-of-ai-politics-past-present
Two Years of AI Politics: Past, Present, and Future
Despite early success, the situation has worsened, and it’s probably going to get even worse.
newsletter.tolgabilge.com
December 31, 2024 at 5:11 AM
Reposted by Tolga Bilge
📩 ControlAI Weekly Roundup: Time to Unplug?

1️⃣ Voters back AI policy focus on preventing extreme risks
2️⃣ Meta asks the government to block OpenAI's for-profit switch
3️⃣ Eric Schmidt warns there's a time to unplug AI

Get our free newsletter:
controlai.news/p/con...
ControlAI Weekly Roundup #9: Time to Unplug?
Voters back an AI policy focus on preventing extreme risks, Meta asks the government to block OpenAI switching to a for-profit, and Eric Schmidt warns there’s a time to consider unplugging AI systems.
controlai.news
December 19, 2024 at 7:53 PM
Reposted by Tolga Bilge
One of the weird things about the world today is that the idea of 'AGI' is now regularly being talked about in e.g. policy contexts. But it seems very clear that most policymakers' notion of AGI and its implications is vastly underpowered compared to that of the people trying to build AGI.
December 16, 2024 at 10:40 AM
Reposted by Tolga Bilge
📩 ControlAI Weekly Roundup: Sneaky Machines

1️⃣ OpenAI launches o1, in tests tries to avoid shutdown
2️⃣ Google DeepMind launches Gemini 2.0
3️⃣ Comments by incoming AI czar David Sacks on AGI threat resurface

Get our free newsletter here 👇
controlai.news/p/sub...
ControlAI Weekly Roundup #8: Sneaky Machines
OpenAI launches o1, which in tests tried to avoid shutdown, Google DeepMind launches Gemini 2.0, and comments by incoming US AI czar David Sacks expressing concern about the threat from AGI resurface.
controlai.news
December 12, 2024 at 5:54 PM
Reposted by Tolga Bilge
Current AI research leads to extinction by godlike AI.

Creating AGI depends simply on enabling AI to perform the intellectual tasks that we can.

Once AI can do that, we are on a path to godlike AI — systems so beyond our reach that they pose the risk of human extinction.
🧵
December 10, 2024 at 4:38 PM
Reposted by Tolga Bilge
📩 ControlAI Weekly Roundup: AI Accelerates Cyberattacks

1️⃣ AI assists hackers in mining sensitive data
2️⃣ Google DeepMind predicts weather more accurately than the leading system
3️⃣ xAI plans massive expansion of its Memphis supercomputer

Get our free newsletter here 👇
controlai.news/p/con...
ControlAI Weekly Roundup #7: AI Accelerates Cyberattacks
AI is assisting hackers in mining sensitive data for phishing attacks, Google DeepMind predicts weather more accurately than the leading system, and xAI plans a massive expansion of its Memphis supercomputer.
controlai.news
December 5, 2024 at 6:03 PM
Reposted by Tolga Bilge
Recent polling by the AI Policy Institute — clear majorities of Americans say:
⬥ AI labs can't police themselves, more regulation is needed
⬥ They support AI Safety Institute testing of AI models, and this should be mandatory
⬥ AI safety testing is more important than US-China competition
November 28, 2024 at 12:40 PM
Reposted by Tolga Bilge
We're starting to see people wake up to the risks. Serious people, who aren't talking their own books, and who are oath-sworn to do the best for their countries, and who feel compelled to speak out.
Lord Knight of Weymouth, speaking in the House of Lords, warns of the threat from superintelligent AI: "AI could pose an extinction risk to humanity, as recognised by world leaders, AI scientists, and leading AI company CEOs themselves."
November 26, 2024 at 6:50 PM
Reposted by Tolga Bilge
📩 ControlAI Weekly Roundup: US-China Detente or AGI Suicide Race?

1️⃣ Biden and Xi agree AI shouldn’t control nuclear weapons
2️⃣ A US government commission recommends a race to AGI
3️⃣ Bengio writes about advances in the ability of AI to reason

controlai.news/p/con...
ControlAI Weekly Roundup #5: US-China Detente or AGI Suicide Race?
Biden and Xi agree AI shouldn’t control nuclear weapons, a US government commission recommends a race to AGI, and Yoshua Bengio writes about advances in the ability of AI to reason.
controlai.news
November 21, 2024 at 6:08 PM
psa: likes are public here
November 18, 2024 at 10:23 PM