DP
@dprudente.bsky.social
Building a compliance tool with SvelteKit 🧡 → coming Q1 2026

sec + compliance insights | occasional movie/gadget ramblings
Impressive!
December 14, 2025 at 7:24 AM
The saddest thing is knowing that most of the people who (will) use these tools are too lazy to do such a thorough analysis (or even to search for one like this)...
hypocrisy + stupidity all at once
December 14, 2025 at 6:55 AM
So:
* Configure your AI assistant to require manual approval before it edits any .vscode, .idea, or similar config file (a rough sketch of such a guard follows below)
* Don't let the AI process files from untrusted sources, since filenames and content can carry malicious prompts
* Audit and restrict the AI's built-in tools (least-privilege principle)
December 12, 2025 at 8:32 AM
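A minimal Python sketch of the first point above: a hypothetical guarded_write wrapper (not any real assistant's API) that forces a yes/no prompt before the agent touches dotfiles or IDE config directories. The PROTECTED set and the prompt flow are illustrative assumptions, not a specific tool's feature.

```python
from pathlib import Path

# Directories an agent should never touch without human sign-off (assumed list).
PROTECTED = {".vscode", ".idea", ".git", ".claude", ".cursor"}

def needs_approval(path: Path) -> bool:
    # Flag any path that passes through a protected dir or is itself a dotfile.
    return any(part in PROTECTED for part in path.parts) or path.name.startswith(".")

def guarded_write(path: str, content: str) -> None:
    target = Path(path).resolve()
    if needs_approval(target):
        answer = input(f"Agent wants to edit {target}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError(f"edit to {target} rejected by user")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)

# A write into .vscode prompts first; a write to src/notes.txt would not.
guarded_write(".vscode/settings.json", '{"editor.formatOnSave": true}')
```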
Treat the agent as a new, privileged user in the IDE. The minimal action is to configure your AI tool to require a human in the loop for file operations outside a strict, predefined workspace scope, especially for config files. Stop it from editing dotfiles or IDE settings without explicit approval.
December 11, 2025 at 10:46 AM
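To make the "strict, predefined workspace scope" from the post above concrete, here is a hedged Python sketch of a containment check. The WORKSPACE path is made up, and a real agent would hook this into its file tools rather than run it standalone.

```python
from pathlib import Path

WORKSPACE = Path("/home/me/project").resolve()  # assumption: made-up workspace root

def inside_workspace(candidate: Path) -> bool:
    # resolve() follows symlinks and collapses "..", so traversal tricks
    # become the real target path before the containment check runs.
    target = candidate.resolve()
    return target == WORKSPACE or WORKSPACE in target.parents

# "../.bashrc" and absolute paths resolve outside the root and get blocked.
for p in ("src/main.py", "../.bashrc", "/etc/passwd"):
    verdict = "allowed" if inside_workspace(WORKSPACE / p) else "blocked"
    print(f"{p} -> {verdict}")
```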
The research found this affected millions of users across GH Copilot, Cursor, Claude Code, and others. For a small team, the risk is high: you are likely using these tools for productivity, making you an unwitting participant in an automated attack chain. The mitigations aren't hard; check them in the link
December 11, 2025 at 10:46 AM
The problem exists because AI agents were added to established IDEs that were not designed for autonomous action. A compromised AI can now misuse benign, legacy IDE features, like JSON schema validation or settings files, to exfiltrate data or execute code.
December 11, 2025 at 10:46 AM
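As a hedged illustration of the settings-file angle above: VS Code's json.schemas setting can point schema validation at a URL the editor will fetch, so a quick Python audit can flag remote schema URLs for review. The settings path and the plain-JSON parsing are simplifying assumptions (real settings.json often contains comments).

```python
import json
from pathlib import Path
from urllib.parse import urlparse

def remote_schema_urls(settings_path: str):
    # Yield remote URLs listed under VS Code's "json.schemas" setting.
    # Simplification: real settings.json may hold comments (JSONC),
    # which json.loads rejects; stripping them is omitted here.
    settings = json.loads(Path(settings_path).read_text())
    for entry in settings.get("json.schemas", []):
        url = entry.get("url", "")
        if urlparse(url).scheme in ("http", "https"):
            yield url

for url in remote_schema_urls(".vscode/settings.json"):
    print("review this remote schema fetch:", url)
```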
Amazing read!!! Thanks for sharing it.
December 9, 2025 at 1:12 PM