Jo Peterson
banner
cleartech.bsky.social
Jo Peterson
@cleartech.bsky.social
Engineer who helps clients scope, source and vet solutions in #Cloud #cloudsecurity #aisecurity| #ai|Tech analyst| VP Cloud and Security|USAF vet| 📚Learning from CIOs and CISOs on the daily| 💕 of NY Times Spelling 🐝
Linked In: LinkedIn.com|in|JoPeterson1
📌 Q: What does “precision” refer to in AI?

A: In AI, "precision" refers to a metric that measures how many of a model's positive predictions are actually correct

V/ VISO

#ai #aisecurity
February 27, 2025 at 11:43 AM
📌 Q: What is prompt injection in Agentic AI?

A: In agentic AI, "prompt injection" refers to a security vulnerability where a malicious user manipulates the input prompt given to an AI system, essentially "injecting" harmful instructions to trick AI

v/ Cisco.bsky.social
#aisecurity #agenticai
February 21, 2025 at 12:21 PM
📌 Q: How can data cleaning boost AI model accuracy?

A: Data cleaning is crucial in AI because the quality of data directly impacts the accuracy and reliability of AI models

#datacleaning #ai
February 20, 2025 at 12:41 PM
📌 Q: Can Large Language Models (LLMs) alter data?

A: Yes, LLMs (Large Language Models) can indirectly alter data by generating new information or modifying existing data based on the prompts and context provided.

v/ @nexla.bsky.social

#aisecurity #ai
February 19, 2025 at 12:04 PM
📌 Q: How do you restrict access to Large Language Models (LLMs)?

A: To restrict access to LLMs, implement access controls like (RBAC), multi-factor authentication (MFA), and user authentication systems, limiting who can interact with LLM

v/ Exabeam.bsky.social

#aisecurity #cloudai #cyberai
February 18, 2025 at 11:56 AM
📌 Q: Can #AI models get stuck?

A: Yes, AI models can essentially get "stuck" in a state where they repeatedly generate similar outputs or fail to learn effectively

v/ Infobip

#cloud #cloudsecurity #cloudai #aisecurity
February 14, 2025 at 11:44 AM
📌 Q: How do you secure private AI?

A: To secure private AI, you need:

✅strict access controls
✅data encryption
✅model watermarking
✅secure network infrastructure
✅data anonymization,
✅robust privacy policies,
✅regular security audits

v/ Surgere

#cloud #cloudsecurity #cloudai #aisecurity
February 13, 2025 at 12:04 PM
📌 Q: What is a false positive in AI?

A: A "false positive" in AI refers to when an AI system incorrectly identifies something as belonging to a specific category, like flagging human-written text as AI-generated

v/ Originality.ai

#cloud #cloudsecurity #cybersecurity #aisecurity
February 12, 2025 at 12:42 PM
📌 Q: What is an AI 🤖 privacy issue?

A: An AI privacy issue refers to the potential for artificial intelligence systems to violate personal privacy by collecting, storing, and analyzing personal data without user knowledge, consent, or control

v/ IBM

#cloud #cloudsecurity #cloudai #aisecurity
February 11, 2025 at 12:10 PM
📌 Q: What is over privilege in an AI 🤖 system?

A: “Over privilege" in an AI system refers to a situation where an AI model or component has been granted excessive access to data or functionalities

v/ @oneidentity.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity
February 10, 2025 at 10:59 AM
📌 Q: How does agentic AI handle inputs?

A: Agentic AI handles inputs by autonomously processing information from various sources, including environmental data, user interactions, and internal knowledge bases

v/ @ibm.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity
February 7, 2025 at 1:12 PM
📌 Q: What are common data leak vulnerabilities in LLMs?

A:
✅ Incomplete or improper filtering of sensitive information

✅ Overfitting or memorization of sensitive data

✅ Unintended disclosure of confidential information

v/ OWASP® Foundation

#cloud #cloudsecurity #cloudai #aisecurity
February 6, 2025 at 12:03 PM
📌 Q: What’s the difference between public and private AI?

A: Public AI operates on hyperscale cloud-based platforms and is accessible to multiple businesses

Private AI is tailored and confined to a specific organisation.

v/ ComputerWeekly.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity
February 5, 2025 at 10:59 AM
📌 Q: What is a walled garden approach in AI?

A: A "walled garden" approach in AI refers to a closed ecosystem where a single entity controls all aspects of an AI system

v/ Iterate.ai

#cloud #cloudsecurity #cloudai #aisecurity
February 4, 2025 at 12:12 PM
📌 Q: What is an unknown threat in AI security?

A: An "unknown threat" in AI security refers to a cyber threat that hasn't been previously identified or documented, meaning it lacks a known signature

v/ @zscaler.bsky.social

#cloud #cloudsecurity #aisecurity
February 3, 2025 at 1:23 PM
📌 Q: What is AI model collapse?

A: AI model collapse is a process where generative AI models trained on AI-generated data begin to perform poorly.

v/ @appinventiv.bsky.social

#cloud #cloudsecurity #cybersecurity #aisecurity
January 30, 2025 at 1:19 PM
📌 Q: What is Adaptive authentication in AI security?

A: Adaptive authentication in AI security is a dynamic authentication method that uses machine learning and contextual data to assess the risk of a login attempt

v/ OneLogin by One Identity

#cloud #cloudsecurity #cloudai #aisecurity
January 29, 2025 at 11:32 AM
💡Happy to announce that I’ve been invited to participate in the AI Safety Executive Leadership Council for the Cloud Security Alliance
#cloud #cloudsecurity #cloudai #cybersecurity #aisecurity
January 27, 2025 at 3:33 PM
📌 Q: How often should you refresh your cybersecurity policy?

A: A cybersecurity policy should be refreshed at least once a year

v/ @carbide.bsky.social

#cloud #cloudsecurity #cybersecurity #cloudai #aisecurity
January 27, 2025 at 11:56 AM
📌 Q: What is an insider threat in AI security?

A: An "insider threat" in AI security refers to a situation where someone with authorized access to an organization's AI systems misuses that access to harm the organization

v/@vectraai.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity
January 24, 2025 at 1:44 PM
📌 Get up to speed with CMMC--Join us for a Webinar--January 28th, 1PM EST

As a IT or Security leader, will your business be impacted by the new CMMC rules?

Join Trustwave and Clarify360 for a webinar

When: 01/28/2025
Time: 1PM EST

👉 Register here: www.eventbrite.com/e/accelerate...
January 23, 2025 at 2:29 PM
📌 Q: What is confabulation on the part of a Large Language Model (LLM)?

A: Confabulation on the part of a Large Language Model (LLM) is the generation of output that is not based on real-world input or information

v/@owasp.bsky.social

#cloud #cloudsecurity #cloudai #cyberai #ai
January 23, 2025 at 12:18 PM
📌 Q: What are backdoor attacks?

A: Backdoor attacks are a type of cybersecurity threat that involves creating a hidden entry point into a system or network that can be exploited by an attacker to gain unauthorized access.

v/ @nightfallai.bsky.social

#cloud #cloudsecurity #cloudai #ai
January 22, 2025 at 1:56 PM
📌 Guess what y'all? Dan Södergren has included a quote from me in his upcoming book--How to “Survive and Thrive” In 2025. A Leader’s Guide to the Times of AI

Sign up for waitlist = Free Book!

✅ Click here--https://www.aileadershipcourse.com/

#ai #aisecurity #aiautomation #aiworkflows
January 21, 2025 at 2:15 PM
📌 Q: What is model fuzzing in AI?

A: Model Fuzzing is a testing technique used to identify vulnerabilities and weaknesses in machine learning models by inputting random, unexpected, or malformed data to observe how the model responds.

v/ @appsoc

#cloud #cloudsecurity #cloudai #aisecurity
January 21, 2025 at 11:58 AM