Ollie Stephenson
technolliegist.bsky.social
Associate Director of AI and Emerging Technology Policy, @scientistsorg.bsky.social. Views are my own.
If you want to help shape how the U.S. anticipates and governs advanced AI, and help make a future that's safer for everyone, I'd encourage you to take a look.

Apply by December 15. More info and application form available here:
fas.org/career/senio...
Senior Manager, AI Safety and Security Policy
As Senior Manager, AI Safety and Security Policy, you will drive ambitious efforts to turn cutting-edge technical insights into real policy impact—shaping how the U.S. anticipates and manages the chal...
fas.org
November 18, 2025 at 5:16 PM
11/n At FAS we’ll keep working with scientists & policymakers to craft AI policy that serves everyone.
July 25, 2025 at 3:46 AM
10/n 🔎 Bottom Line: To reap AI’s benefits we must trust it—we need more research, careful adoption & strong guardrails for high‑risk uses. The plan has bright spots but backslides on bias & climate and collides with deep staffing/funding cuts in government.
July 25, 2025 at 3:46 AM
9/n Also disappointing: deleting climate‑change references. AI uses a lot of energy and we can’t manage what we don’t measure. Our AI & Energy Policy Sprint shows how to track AI’s footprint and use AI to fight climate change: fas.org/accelerator/...
POLICY SPRINT: AI & Energy
From using AI to optimize power grids to accelerating clean energy R&D, AI holds huge potential, while also introducing new challenges related to climate, equity, infrastructure, security, and sustain...
fas.org
July 25, 2025 at 3:46 AM
8/n ❌ The Ugly:
AI bias is real & measurable. Yet the plan tells NIST to drop “diversity, equity & inclusion” from its AI Risk Management Framework and requires federal models to be “free from ideological bias.” Much depends on implementation, but this hides real problems rather than fixing them.
July 25, 2025 at 3:46 AM
7/n Without national regs, state experiments are how we learn what responsible AI looks like. A regulatory Wild West won’t build public trust.
July 25, 2025 at 3:46 AM
6/n ⚠️ The Bad
Last month the Senate stripped a clause from OBBBA that would have restricted state AI rules. The plan tries again to block state guardrails even as Congress sets no federal standard.
July 25, 2025 at 3:46 AM
5/n ➡️ Focused Research Organizations (FROs): They tackle narrow, high‑impact problems that are a poor fit for startups. FAS first championed FROs in 2020, and we think this marks their first federal embrace. We've published a list of promising FRO ideas here: fas.org/initiative/f...
Focused Research Organizations - Federation of American Scientists
Not all scientific challenges can be met by academia and industry. This is where Focused Research Organizations can bridge the gap.
fas.org
July 25, 2025 at 3:46 AM
4/n ➡️ Security measures: Steps on cybersecurity, biosecurity, secure‑by‑design AI & incident response aim to stop harms before they freeze innovation.
July 25, 2025 at 3:46 AM
3/n ➡️ Broad R&D agenda: Beyond interpretability, the plan backs research on robustness, controllability, new AI paradigms & an evaluation ecosystem.
July 25, 2025 at 3:46 AM
2/n 🚀 The Good
➡️ Interpretability: We need to see inside AI's black box. With FAS AI Fellow Matteo Pistillo, we've drafted a federal roadmap to advance AI interpretability: fas.org/publication/...
Accelerating AI Interpretability
If AI systems are not always reliable and secure, this could inhibit their adoption, especially in high-stakes scenarios, potentially compromising American AI leadership.
fas.org
July 25, 2025 at 3:46 AM