Michael J. D. Vermeer
@mjdvermeer.bsky.social
Sr. Physical Scientist @RAND.org
Research Sci & Tech policy and NatSec implications of emerging technologies.
Opinions are my own.
https://www.rand.org/about/people/v/vermeer_michael_j_d.html
Bottom line: all is not lost. Even if perfect containment is impossible, layered safeguards grounded in computation, information theory, and thermodynamics could likely give humanity time and tools to respond to even threats from a superintelligent AI adversary. (3/3)
October 13, 2025 at 6:10 PM
Many assume a superintelligent AI could bypass any human-imposed limits. We challenge that assumption. Using standard security engineering approaches – threat modeling, protocols, primitives, and layered defenses – humans can impose real costs and barriers. (2/3)
October 13, 2025 at 6:10 PM
Thanks! Depends on the type of risk you're talking about! For catastrophic risks, I think very long-term. There are plenty of other lesser risks that are short-term or already here though.
September 26, 2025 at 2:46 PM
I also wrote a more accessible discussion of the report that was just published in @sciam.bsky.social called "Could AI Really Kill Off Humans?" Take a look!
www.scientificamerican.com/article/coul...
Could AI Really Kill Off Humans?
Many people believe AI will one day cause human extinction. A little math tells us it wouldn’t be that easy
May 9, 2025 at 7:36 PM