Dominik Kundel
banner
dkundel.com
Dominik Kundel
@dkundel.com
Formerly Emerging Tech & AI @ Twilio / ex-DevRel🥑 - JavaScript Hacker - 🎓 MBA Berkeley Haas - 🥃📷 http://instagram.com/cocktail.and.code - he/him - Opinions my own
Thank you 😊
February 19, 2025 at 7:42 PM
📌
December 1, 2024 at 5:08 PM
Incredible photos 👏👏
November 29, 2024 at 11:53 PM
For example labelers like the Pronoun labeler or the GitHub Contributor labeler are fun:
bsky.app/profile/pron...
bsky.app
November 28, 2024 at 5:36 PM
Bluesky is an excellent rabbit hole of potential 😂
November 28, 2024 at 5:35 PM
I just use the pinned feed :) it's not great but works 😅

blueskydirectory.com/feeds/pins
November 28, 2024 at 5:29 PM
Have you looked into custom feeds yet? They are great as well! I love the Quiet Posters one for example and I love that you can build your own
November 27, 2024 at 12:01 AM
💸 No. 10 - Unbounded Consumption

Unrestricted LLM usage can lead to denial-of-service attacks or excessive costs.

🔐 Mitigation tips: Implement rate limits, monitor resource usage, and throttle requests.
November 21, 2024 at 8:34 PM
🤥 No. 9 - Misinformation

LLMs may propagate false or harmful content from biased or unverified sources.

🔐 Mitigation tips: Use fact-checking workflows and curate trusted data sources.
November 21, 2024 at 8:34 PM
🔢 No. 8 - Vector and Embedding Weaknesses

Unsecured embeddings may expose models to poisoning or unauthorized access.

🔐 Mitigation tips: Encrypt embeddings, validate inputs, and restrict database access.
November 21, 2024 at 8:34 PM
🔍 No. 7 - System Prompt Leakage

Exposure of system prompts can reveal application logic or sensitive information.

🔐 Mitigation tips: Avoid storing sensitive data in prompts; encrypt or obfuscate key instructions.
November 21, 2024 at 8:34 PM
🪓 No. 6 - Excessive Agency

Granting too much autonomy to LLMs can enable harmful or unintended actions.

🔐 Mitigation tips: Limit permissions, add user oversight, and enforce action constraints.
November 21, 2024 at 8:34 PM
🤝 No. 5 - Improper Output Handling

Unsanitized outputs can lead to XSS, SQL injection, or system-level attacks.

🔐 Mitigation tips: Sanitize outputs and enforce encoding based on context (HTML, SQL, etc.).
November 21, 2024 at 8:34 PM
☠️ No. 4 - Data and Model Poisoning

Compromised datasets or tampered models lead to biased outputs or hidden backdoors.

🔐 Mitigation tips: Vet datasets, track transformations, and validate outputs against trusted sources.
November 21, 2024 at 8:34 PM
⛓️ No. 3 - Supply Chain Risks

Third-party dependencies or tampered models can introduce vulnerabilities in LLMs.

🔐 Mitigation tips: Audit dependencies, enforce provenance checks, and validate model integrity.
November 21, 2024 at 8:34 PM
🤐 No. 2 - Sensitive Information Disclosure

LLMs can leak private data or proprietary information via crafted queries or poor sanitization.

🔐 Mitigation tips: Mask sensitive data, restrict access, and monitor logs for leaks.
November 21, 2024 at 8:34 PM
💉 No. 1 - Prompt Injection

Attackers manipulate LLM prompts to alter behavior, bypass security, or gain unauthorized control.

🔐 Mitigation tips: Use input validation, output constraints, and enforce principle of least privilege.
November 21, 2024 at 8:34 PM
Moin
November 20, 2024 at 8:02 PM