Michael Huang
@michaelhuang.bsky.social
Reduce extinction risk by pausing frontier AI unless provably safe @pauseai.bsky.social and banning AI weapons @stopkillerrobots.bsky.social | Reduce suffering @postsuffering.bsky.social

https://keepthefuturehuman.ai
This is great, but will SB 53 be Congress-proof?
July 10, 2025 at 10:46 AM
There was someone even more pessimistic than the pessimist…
December 1, 2024 at 2:24 PM
“The participants tried to drill down further into the technical aspects of monitoring those red lines – if they made their way into a treaty, how could they be enforced… They talked about the need for ‘early-warning’ mechanisms, and about how to set the thresholds for safe design of AI models.”
November 30, 2024 at 12:55 PM
“They also discussed prevention mechanisms, such as automatic audits of all AI models above a certain size, requiring developers to publish mathematical proofs that their AI couldn’t breach the red lines, and programming AIs to obey certain rules.”
November 30, 2024 at 9:21 AM
“AI shouldn’t have the capacity to improve autonomously or replicate itself, or have the capability to seek more power in pursuit of its goal. AI shouldn’t be used to develop weapons of mass destruction, chemical or biological agents; nor should it be used to execute cyber-attacks.”
November 30, 2024 at 9:20 AM
“Computer scientists are reaching out across the geopolitical divide to try to stop an apocalypse.”

Inside the AI back-channel between China and the West www.economist.com/1843/2024/11...
November 30, 2024 at 9:19 AM