Adrian Thinnyun
adrianthinnyun.com
@adrianthinnyun.com
Horizon Junior Fellow, Center for Security and Emerging Technology (CSET)

https://cset.georgetown.edu/staff/adrian-thinnyun/
My last recommendation is to support the development of evaluations for AI capabilities and risks. The AI Action Plan already includes this, but it should go one step further and consider restricting models that fail to meet industry-standard safety thresholds. (7/7)
July 28, 2025 at 3:13 PM
My second recommendation is to push AI companies to share safety-relevant knowledge with each other and other relevant stakeholders. This would involve mandating reporting requirements, disclosure of unexpected capabilities in new models, and sharing threat intelligence. (6/7)
July 28, 2025 at 3:13 PM
My first recommendation is to require AI companies to adhere to their own risk management plans. Companies like OpenAI and Anthropic have already published frameworks describing their planned risk mitigations, but these need to be made legally binding to have any effect. (5/7)
July 28, 2025 at 3:13 PM
At the same time, AI is advancing too rapidly for government to keep up with traditional regulation. The solution is to promote industry self-regulation – make AI companies figure out the best way to keep their products safe and then make sure they actually follow through. (4/7)
July 28, 2025 at 3:13 PM
It's true that we need to increase AI adoption, but quite simply – people don't want to use things that aren't guaranteed to work! There are still too many hallucinations, security concerns, and other liabilities for companies to feel confident relying on AI for important tasks. (3/7)
July 28, 2025 at 3:13 PM
These standards, if implemented, would go a long way toward mitigating the potential risks of AI and increasing public trust and confidence in using it, allowing us to realize its benefits sooner than we otherwise could. [3/4]
March 17, 2025 at 8:34 PM
Specifically, AISI should develop standards on topics such as model training, pre-release internal & external security testing, cybersecurity practices, if-then commitments, AI risk assessments, and processes for testing and re-testing systems as they change over time. [2/4]
March 17, 2025 at 8:34 PM