Mackenzie Jackson
@advocatemack.bsky.social
DevRel @AikidoSecurity - Kiwi living in the Netherlands
Kudos for the quick disclosure. I can tell you how rare this is.
July 19, 2025 at 9:38 AM
The takeaway?
We’re not getting rid of API keys — we’re just raising the stakes.
And when those keys unlock your most intelligent systems, the risks become existential.
May 13, 2025 at 1:49 PM
🔐 So, what can you do?
Use dynamic secrets (e.g., HashiCorp Vault, SPIFFE)
Continuously monitor Git repos
Apply zero-trust access to model endpoints
AI security isn't optional. It's foundational.
May 13, 2025 at 1:49 PM
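The dynamic-secrets recommendation is the most concrete of the three, so here is a minimal sketch of what it looks like in practice: requesting short-lived database credentials from HashiCorp Vault via the hvac Python client instead of shipping a static key. The Vault address, mount point, and role name below are placeholders for illustration, not anything from the xAI incident.

```python
# Minimal sketch: fetch short-lived credentials from HashiCorp Vault (hvac)
# instead of hard-coding a static key. Assumes a reachable Vault server, a
# token in the environment, and a database secrets engine mounted at
# "database" with a role named "app-ro" (placeholder names).
import os

import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],
)

# Each call mints a fresh credential pair with its own TTL, so there is no
# long-lived secret to leak into a Git repo, and the lease can be revoked
# centrally if one does slip out.
creds = client.secrets.database.generate_credentials(
    name="app-ro",
    mount_point="database",
)
username = creds["data"]["username"]
password = creds["data"]["password"]
print(f"issued credentials valid for {creds['lease_duration']} seconds")
```

Because every credential carries a lease, a secret that does end up in a repo expires on its own rather than staying live for months.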
💡 The real problem:
Compromising AI models = Compromising internal systems.
Yesterday's breach was about server access.
Today's breach? It's through your AI layer.
May 13, 2025 at 1:49 PM
🧠 Private AI models are now the crown jewels.
They know your data. They are your data.
And yet, we’re still protecting them with static API keys casually left in public repos.
May 13, 2025 at 1:49 PM
This wasn’t just a leak — it’s a preview of future breaches.
When your internal LLMs are trained on proprietary data, they become an attack vector.
Just like breaching a company’s internal network — but smarter.
May 13, 2025 at 1:49 PM
🔓 Access to 60+ fine-tuned AI models — including unreleased versions of Grok.
Models trained for Tesla, SpaceX, and X (Twitter) were exposed.
That means: sensitive corporate data was just a few prompts away.
May 13, 2025 at 1:49 PM
Security researcher Philippe Caturegli and GitGuardian uncovered an API key publicly exposed on GitHub — by an xAI developer.
The key stayed live for 2 months. What it unlocked? 😬
May 13, 2025 at 1:49 PM
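On the "continuously monitor Git repos" point, this is roughly what secret scanners such as GitGuardian automate: walking a repository's full history and flagging strings shaped like API keys. The sketch below uses only the Python standard library; the key patterns are illustrative assumptions (the "xai-" prefix is not a documented format), not a real detection ruleset.

```python
# Hypothetical sketch: scan a local repo's full history for strings that look
# like API keys. Patterns are illustrative, not any vendor's actual format.
import re
import subprocess

KEY_PATTERNS = [
    re.compile(r"xai-[A-Za-z0-9]{32,}"),   # assumed xAI-style prefix
    re.compile(r"sk-[A-Za-z0-9]{32,}"),    # common "sk-" style secrets
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
]

def scan_history(repo_path: str) -> list[tuple[str, str]]:
    """Return (diff line, matched secret) pairs found in `git log -p --all`."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for line in log.splitlines():
        for pattern in KEY_PATTERNS:
            for match in pattern.finditer(line):
                hits.append((line.strip(), match.group(0)))
    return hits

if __name__ == "__main__":
    for context, secret in scan_history("."):
        print(f"possible leaked key {secret!r} in: {context[:80]}")
```

In practice you would run this, or a dedicated scanner, in CI and on a schedule, since a key can sit unnoticed in history for months, as this incident shows.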
View the full report here -> www.aikido.dev/blog/meet-in...
Aikido Intel, AI-powered open-source security feed
Aikido launches Intel, the AI-powered open-source security threat feed that identifies vulnerabilities in projects before they are disclosed
www.aikido.dev
December 17, 2024 at 2:23 PM