When an agent sees a trigger image it's instructed to execute malicious code and then share the image on social media to trigger other users' agents
This is a chance to talk about agent security 👇
Our latest research exposes critical security risks in AI assistants. An attacker can hijack them by simply posting an image on social media and waiting for it to be captured. [1/6] 🧵
When an agent sees a trigger image it's instructed to execute malicious code and then share the image on social media to trigger other users' agents
This is a chance to talk about agent security 👇
Our latest research exposes critical security risks in AI assistants. An attacker can hijack them by simply posting an image on social media and waiting for it to be captured. [1/6] 🧵
Our latest research exposes critical security risks in AI assistants. An attacker can hijack them by simply posting an image on social media and waiting for it to be captured. [1/6] 🧵
Introducing 𝗚-𝗡𝗟𝗟, a theoretically grounded and highly efficient uncertainty estimate, perfect for scalable LLM applications 🚀
Dive into the paper: arxiv.org/abs/2412.15176 👇
Introducing 𝗚-𝗡𝗟𝗟, a theoretically grounded and highly efficient uncertainty estimate, perfect for scalable LLM applications 🚀
Dive into the paper: arxiv.org/abs/2412.15176 👇
Introducing 𝗚-𝗡𝗟𝗟, a theoretically grounded and highly efficient uncertainty estimate, perfect for scalable LLM applications 🚀
Dive into the paper: arxiv.org/abs/2412.15176 👇