The Midas Project Watchtower
@safetychanges.bsky.social
We monitor AI safety policies from companies and governments for substantive changes.

Anonymous submissions: https://forms.gle/3RP2xu2tr8beYs5c8

Run by
@TheMidasProject.bsky.social
On the whole, it's good that Google is continuing to update its risk management policies, and it seems to treat the issue with much more seriousness than some competitors do.

Read the full diff at our website: www.themidasproject.com/watchtower/g...
Google | The Midas Project
Updated their Frontier Safety Framework
www.themidasproject.com
September 25, 2025 at 6:10 PM
Remember that in 2024 Google promised to *define* specific risk thresholds, not explore illustrative examples.
September 25, 2025 at 6:10 PM
Additionally, as pointed out by Zach Stein-Perlman of AI Lab Watch, the CCLs for misalignment, which used to be a concrete (albeit initial) approach, are now described as "exploratory" and "illustrative."
September 25, 2025 at 6:10 PM
Similarly, for ML R&D, models that "can" accelerate AI development no longer require RAND SL 3. Only models that have been used for this purpose count. But this is a strange ordering -- shouldn't the safeguards precede the deployment (and even the training) of such a model?
September 25, 2025 at 6:10 PM
But it's weakened in other ways.

Critical capability levels, which previously focused on capabilities (e.g. "can be used to cause a mass casualty event"), now seem to rely on anticipated outcomes (e.g. "resulting in additional expected harm at severe scale").
September 25, 2025 at 6:10 PM
In their blog post, Google describes this as a strengthening of the policy.

And in some ways, it is: they define a new harmful manipulation risk category, and they even soften the claim from v2 that they would only follow their promise if every other company does so as well.
September 25, 2025 at 6:10 PM
(Removed)
March 5, 2025 at 4:56 AM
(Removed)
March 5, 2025 at 4:56 AM
The smaller changes made to Anthropic's practices:

(Added)
March 5, 2025 at 4:56 AM
The good news is that the details they provide on internal practices have changed very little (select screenshots included in rest of thread).

Now all they need to do is provide transparency on *all* the commitments they've made + when they are choosing to abandon any.
March 5, 2025 at 4:56 AM
Most surprisingly, there is now no record of the former commitments on Anthropic's transparency center, a web resource they launched to track their compliance with voluntary commitments and which they describe as "raising the bar on transparency."
March 5, 2025 at 4:56 AM
In fact, post-election, multiple tech companies confirmed their commitments hadn't changed.

Perhaps they understood that the commitments were not contingent on which way the political winds blow, but were made to the public at large.

fedscoop.com/voluntary-ai...
March 5, 2025 at 4:56 AM
While there is a new administration in office, nothing in the commitments suggested that the promise was (1) time-bound or (2) contingent on the party affiliation of the sitting president.
March 5, 2025 at 4:56 AM
The White House Voluntary Commitments, made in 2023, were a pledge to conduct pre-deployment testing, share information on AI risk management frameworks, invest in cybersecurity, implement bug bounties, and publicly report capabilities and limitations.

bidenwhitehouse.archives.gov/briefing-roo...
FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI | The White House
Voluntary commitments – underscoring safety, security, and trust – mark a critical step toward developing responsible AI. Biden-Harris Administration will…
bidenwhitehouse.archives.gov
March 5, 2025 at 4:56 AM
To view all of the policies released by AI companies, along with their scorecards, check out our full report at

www.seoul-tracker.org
Seoul Commitment Tracker
Tracking the progress of voluntary commitments made at the 2024 AI Safety Summit in Seoul, South Korea
www.seoul-tracker.org
February 14, 2025 at 11:17 PM
Company: xAI
Date: February 10, 2025
Change: Released Risk Management Framework draft
URL: x.ai/documents/20...

xAI's policy is stronger than others in its use of specific benchmarks, but it lacks threshold details and provides no mitigations.
February 14, 2025 at 11:17 PM
Company: Amazon
Date: February 10, 2025
Change: Released their Frontier Model Safety Framework
URL: amazon.science/publications...

Like Microsoft's, Amazon's policy goes through the motions while setting vague thresholds that aren't clearly connected to specific mitigations.
February 14, 2025 at 11:17 PM
Company: Microsoft
Date: February 8, 2025
Change: Released their Frontier Governance Framework
URL: cdn-dynmedia-1.microsoft.com/is/content/m...

Microsoft's policy is an admirable effort but, as with others, it needs further specification. Mitigations should also be connected to specific thresholds.
February 14, 2025 at 11:17 PM
Company: Cohere
Date: February 7, 2025
Change: Released their "Secure AI Frontier Model Framework"
URL: cohere.com/security/the...

Cohere's framework mostly neglects the most important risks. Like G42, Cohere is not developing frontier models, which makes this more understandable.
February 14, 2025 at 11:17 PM
Company: G42
Date: February 6, 2025
Change: Released their "Frontier AI Framework"
URL: g42.ai/application/...

G42's policy is surprisingly strong for a non-frontier lab. Its biggest issues are a lack of specificity and a failure to define future thresholds for catastrophic risks.
February 14, 2025 at 11:17 PM
Company: Google
Date: February 4, 2025
Change: Released v2 of their Frontier Safety Framework
URL: deepmind.google/discover/blo...

v2 of the framework improves Google's policy in some areas while weakening it in others, most notably by no longer promising to adhere to it if other companies do not.
February 14, 2025 at 11:17 PM