Caroline Baxter
@convergingriskslab.bsky.social
Director, Converging Risks Lab, within the Council on Strategic Risks.
Former DoD and RAND policy wonk.
Come for the foresight, stay for the fun.
Yes, I took that photo. It was even cooler than you think.

https://councilonstrategicrisks.org/crl/
Reposted by Caroline Baxter
This is happening because OpenAI - a company that does not have anything even approaching a viable business model and is setting money on fire - used borrowed money to advance-purchase all of the raw materials for RAM production so competitors couldn't get them
this RAM crisis feels like it’s going to ruin consumer electronics over the next decade. Bloomberg reports that Sony is considering pushing the PS6 release to 2028 or even 2029 www.bloomberg.com/news/article...
February 16, 2026 at 1:01 PM
At last - an email sign-off that conveys the full spectrum of my personality.
February 16, 2026 at 1:42 AM
“The senior administration official argued there is considerable gray area […] and that it's unworkable for the Pentagon to have to negotiate individual use-cases.”

www.axios.com/2026/02/15/c...
Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute
Anthropic has not agreed to the Pentagon's terms and defense officials are getting fed up after months of difficult negotiations.
www.axios.com
February 15, 2026 at 9:19 PM
One of my research obsessions is state capacity and the concept of capacity tipping points. I was mulling over a framework idea tonight and this tweet I saw - and screencapped - three years ago popped back into my head. And framework aside, this is a pretty solid foundation.
February 15, 2026 at 3:48 AM
When I said on a big plenary panel stage a few years ago “Let’s not out-China China,” this is partly what I had in mind.
Hegseth to US military: if you get into MIT, we won't let you attend because of the risk that you will be exposed to ideas hostile to MAGA when you study there.
February 14, 2026 at 11:01 PM
Roses are red
Harvard is woke
The Pentagon buys
What the Pentagon broke
Roses are red
Violets are less so
I once saw an Osprey
Fly over a Costco
February 14, 2026 at 6:30 PM
Roses are red
Violets are less so
I once saw an Osprey
Fly over a Costco
February 14, 2026 at 6:26 PM
The current commander of the 31st MEU went through the program at JHU I oversaw—and Mattis created—when I was DASD. DoD gets asked all the time to prove they’re educating steely-eyed killers—I testified on the Hill about this!—but the proof isn’t found in a smaller list of approved schools. /1
February 13, 2026 at 11:20 PM
Beyond the vomitous amoralism (“We will launch during a dynamic political environment where civil society groups [are] focused on other concerns") lies a helluva intel collection tool. If randos can pull up invisible dossiers, who cares if you remove your work badge.

www.nytimes.com/2026/02/13/t...
Meta Plans to Add Facial Recognition Technology to Its Smart Glasses
www.nytimes.com
February 13, 2026 at 4:41 PM
Have we forgotten how COVID torched military readiness, cost $12 billion in lost labor in 2022 alone, and made a generation of American children backslide in education? That the flu costs the economy about $7 billion/year in lost labor? That pandemics crash the stock market? What a stupid own goal.
February 13, 2026 at 2:02 PM
If you believe that faulty intelligence lies in bad assumptions more than bad data, then pushing AI into intel tradecraft holds risk. A good analyst knows the limits of their conclusions - but a flip-flopping LLM is just trying to deliver what the user wants.

www.randalolson.com/2026/02/07/t...
The "Are You Sure?" Problem: Why Your AI Keeps Changing Its Mind
Ask your AI 'are you sure?' and watch it flip. Models fold 60% of the time because we trained them to please, not push back. The fix isn't better prompts.
www.randalolson.com
February 13, 2026 at 1:39 PM
Nick “I’ve never had so much fun losing in my life” Baumgartner is the kind of American we need more of. #Olympics
February 13, 2026 at 2:03 AM
Tag yourself - I'm the fourth one
Happy birthday to one of my favourite haters, Charles Darwin
February 12, 2026 at 7:55 PM
Nothing starts the day on a better note than having to explain a TEL to your five year old because he saw one of your challenge coins. “Well, kiddo, when two nations don’t like each other very much…” 😵‍💫
February 12, 2026 at 1:15 PM
Reposted by Caroline Baxter
NEW: OpenAI has disbanded its Mission Alignment team and transferred employees to other teams. Joshua Achiam, a leading voice on safety issues at the company, will become its chief futurist www.platformer.news/openai-missi...
Exclusive: OpenAI disbanded its mission alignment team
Joshua Achiam will become the company's chief futurist
www.platformer.news
February 11, 2026 at 4:26 PM
Not to be a scold, but if you had a hand in building the 21st century's most consequential technology and are now really worried about humanity, demand to be impaneled at a live televised Senate hearing about AI. Don't just give us a poem and wish us all good luck.

futurism.com/artificial-i...
Anthropic Researcher Quits in Cryptic Public Letter
An Anthropic safety researcher just announced his resignation from the company in a letter warning of a world "in peril."
futurism.com
February 11, 2026 at 7:40 PM
So, not great. But it's super not great for national security.

Why? Three reasons.

1) Our military training ranges are in the hottest areas of the US, and our force generation frameworks depend on 24/7/365 outdoor access. Too hot to be outside? Spend $ to train indoors or reduce end strength.
npr.org NPR @npr.org · 5d
The Environmental Protection Agency is eliminating a Clean Air Act finding from 2009 that is the basis for much of the federal government's actions to rein in climate change. n.pr/4asvw30
Trump's EPA plans to end a key climate pollution regulation
The Environmental Protection Agency is eliminating a Clean Air Act finding from 2009 that is the basis for much of the federal government's actions to rein in climate change.
n.pr
February 11, 2026 at 3:37 PM
If the Trump Administration cares about America being “first” and all that connotes, then forcing the military to source its electricity from dying coal plants isn’t the way to go. The last time coal made up more than 40% of annual electricity generation was 2011. It can’t keep up with demand.
February 11, 2026 at 3:26 AM
Americans have tended to believe the US spends far more on foreign aid than it actually does, precisely because our soft power was so effective (surely we must be spending more than 1% of our budget to get this result). It was the cheapest form of influence we had. Everything else is more expensive and less durable.
people born in america probably don’t realize how cool america used to be. the death of US soft power is going to have long term consequences we are only beginning to see
February 11, 2026 at 12:35 AM
It may be small, but it's one of the consequences I wrote about last year. "It is as impossible to perform only important experiments as it is impossible to only play winning lottery numbers. Trial and error isn't waste - it's the work."

councilonstrategicrisks.org/2025/02/26/t...
February 10, 2026 at 9:39 PM
Bluesky! Hello! I direct the Converging Risks Lab, part of the @councilonstrategicrisks.org. I work at the intersection of emerging tech, climate change, grand strategy, and national security/defense policy. You can read my latest below. Let's talk convergence.

www.justsecurity.org/121289/ai-hi...
AI’s Hidden National Security Cost
The same AI tools marketed as efficiency boosters could undermine the United States' ability to think critically and respond rapidly.
www.justsecurity.org
February 10, 2026 at 9:18 PM
A whole lot of people who don't understand AI telling a whole lot of other people to use AI - all the time, without guardrails - is a recipe for disaster. DoD's approach of "any lawful use" without any constraints is particularly worrying.

www.washingtonpost.com/technology/2...
Trump set off a surge of AI in the federal government. See what happened.
The Trump administration is accelerating AI adoption across government, embedding the technology in policing, health care, defense and science.
www.washingtonpost.com
February 10, 2026 at 9:08 PM