Linked In: LinkedIn.com|in|JoPeterson1
A: In AI, "precision" refers to a metric that measures how many of a model's positive predictions are actually correct
V/ VISO
#ai #aisecurity
A: In AI, "precision" refers to a metric that measures how many of a model's positive predictions are actually correct
V/ VISO
#ai #aisecurity
A: In agentic AI, "prompt injection" refers to a security vulnerability where a malicious user manipulates the input prompt given to an AI system, essentially "injecting" harmful instructions to trick AI
v/ Cisco.bsky.social
#aisecurity #agenticai
A: In agentic AI, "prompt injection" refers to a security vulnerability where a malicious user manipulates the input prompt given to an AI system, essentially "injecting" harmful instructions to trick AI
v/ Cisco.bsky.social
#aisecurity #agenticai
A: Data cleaning is crucial in AI because the quality of data directly impacts the accuracy and reliability of AI models
#datacleaning #ai
A: Data cleaning is crucial in AI because the quality of data directly impacts the accuracy and reliability of AI models
#datacleaning #ai
A: Yes, LLMs (Large Language Models) can indirectly alter data by generating new information or modifying existing data based on the prompts and context provided.
v/ @nexla.bsky.social
#aisecurity #ai
A: Yes, LLMs (Large Language Models) can indirectly alter data by generating new information or modifying existing data based on the prompts and context provided.
v/ @nexla.bsky.social
#aisecurity #ai
A: To restrict access to LLMs, implement access controls like (RBAC), multi-factor authentication (MFA), and user authentication systems, limiting who can interact with LLM
v/ Exabeam.bsky.social
#aisecurity #cloudai #cyberai
A: To restrict access to LLMs, implement access controls like (RBAC), multi-factor authentication (MFA), and user authentication systems, limiting who can interact with LLM
v/ Exabeam.bsky.social
#aisecurity #cloudai #cyberai
A: Yes, AI models can essentially get "stuck" in a state where they repeatedly generate similar outputs or fail to learn effectively
v/ Infobip
#cloud #cloudsecurity #cloudai #aisecurity
A: Yes, AI models can essentially get "stuck" in a state where they repeatedly generate similar outputs or fail to learn effectively
v/ Infobip
#cloud #cloudsecurity #cloudai #aisecurity
A: To secure private AI, you need:
✅strict access controls
✅data encryption
✅model watermarking
✅secure network infrastructure
✅data anonymization,
✅robust privacy policies,
✅regular security audits
v/ Surgere
#cloud #cloudsecurity #cloudai #aisecurity
A: To secure private AI, you need:
✅strict access controls
✅data encryption
✅model watermarking
✅secure network infrastructure
✅data anonymization,
✅robust privacy policies,
✅regular security audits
v/ Surgere
#cloud #cloudsecurity #cloudai #aisecurity
A: A "false positive" in AI refers to when an AI system incorrectly identifies something as belonging to a specific category, like flagging human-written text as AI-generated
v/ Originality.ai
#cloud #cloudsecurity #cybersecurity #aisecurity
A: A "false positive" in AI refers to when an AI system incorrectly identifies something as belonging to a specific category, like flagging human-written text as AI-generated
v/ Originality.ai
#cloud #cloudsecurity #cybersecurity #aisecurity
www.askwoody.com/2025/back-to...
#programming #Learntocode #Computing @AskWoody
www.askwoody.com/2025/back-to...
#programming #Learntocode #Computing @AskWoody
A: An AI privacy issue refers to the potential for artificial intelligence systems to violate personal privacy by collecting, storing, and analyzing personal data without user knowledge, consent, or control
v/ IBM
#cloud #cloudsecurity #cloudai #aisecurity
A: An AI privacy issue refers to the potential for artificial intelligence systems to violate personal privacy by collecting, storing, and analyzing personal data without user knowledge, consent, or control
v/ IBM
#cloud #cloudsecurity #cloudai #aisecurity
A: “Over privilege" in an AI system refers to a situation where an AI model or component has been granted excessive access to data or functionalities
v/ @oneidentity.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
A: “Over privilege" in an AI system refers to a situation where an AI model or component has been granted excessive access to data or functionalities
v/ @oneidentity.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
A: Agentic AI handles inputs by autonomously processing information from various sources, including environmental data, user interactions, and internal knowledge bases
v/ @ibm.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
A: Agentic AI handles inputs by autonomously processing information from various sources, including environmental data, user interactions, and internal knowledge bases
v/ @ibm.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
A:
✅ Incomplete or improper filtering of sensitive information
✅ Overfitting or memorization of sensitive data
✅ Unintended disclosure of confidential information
v/ OWASP® Foundation
#cloud #cloudsecurity #cloudai #aisecurity
A:
✅ Incomplete or improper filtering of sensitive information
✅ Overfitting or memorization of sensitive data
✅ Unintended disclosure of confidential information
v/ OWASP® Foundation
#cloud #cloudsecurity #cloudai #aisecurity
A: Public AI operates on hyperscale cloud-based platforms and is accessible to multiple businesses
Private AI is tailored and confined to a specific organisation.
v/ ComputerWeekly.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
A: Public AI operates on hyperscale cloud-based platforms and is accessible to multiple businesses
Private AI is tailored and confined to a specific organisation.
v/ ComputerWeekly.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
It’s an honor and I’m in amazing company
Read: 📚 www.engati.com/blog/linkedi...
#cloud #cloudsecurity #cloudai #aisecurity
It’s an honor and I’m in amazing company
Read: 📚 www.engati.com/blog/linkedi...
#cloud #cloudsecurity #cloudai #aisecurity
A: A "walled garden" approach in AI refers to a closed ecosystem where a single entity controls all aspects of an AI system
v/ Iterate.ai
#cloud #cloudsecurity #cloudai #aisecurity
A: A "walled garden" approach in AI refers to a closed ecosystem where a single entity controls all aspects of an AI system
v/ Iterate.ai
#cloud #cloudsecurity #cloudai #aisecurity
A: An "unknown threat" in AI security refers to a cyber threat that hasn't been previously identified or documented, meaning it lacks a known signature
v/ @zscaler.bsky.social
#cloud #cloudsecurity #aisecurity
A: An "unknown threat" in AI security refers to a cyber threat that hasn't been previously identified or documented, meaning it lacks a known signature
v/ @zscaler.bsky.social
#cloud #cloudsecurity #aisecurity
Interested in knowing where the Platform as a Service (PaaS) space is headed?
When: 2/12
Time: 12PM EST
👉 Register here: www.brighttalk.com/webcast/1998...
🛎️ Subscribe: www.brighttalk.com/channel/19985
#cloud #cloudsecurity #aisecurity
Interested in knowing where the Platform as a Service (PaaS) space is headed?
When: 2/12
Time: 12PM EST
👉 Register here: www.brighttalk.com/webcast/1998...
🛎️ Subscribe: www.brighttalk.com/channel/19985
#cloud #cloudsecurity #aisecurity
A: AI model collapse is a process where generative AI models trained on AI-generated data begin to perform poorly.
v/ @appinventiv.bsky.social
#cloud #cloudsecurity #cybersecurity #aisecurity
A: AI model collapse is a process where generative AI models trained on AI-generated data begin to perform poorly.
v/ @appinventiv.bsky.social
#cloud #cloudsecurity #cybersecurity #aisecurity
A: Adaptive authentication in AI security is a dynamic authentication method that uses machine learning and contextual data to assess the risk of a login attempt
v/ OneLogin by One Identity
#cloud #cloudsecurity #cloudai #aisecurity
A: Adaptive authentication in AI security is a dynamic authentication method that uses machine learning and contextual data to assess the risk of a login attempt
v/ OneLogin by One Identity
#cloud #cloudsecurity #cloudai #aisecurity
A: Adversarial machine learning (AML) is a technique that uses malicious inputs to trick or mislead a machine learning (ML) model.
v/ @crowdstrike.bsky.social
#cloud #cloudsecurity #cybersecurity #cloudai
A: Adversarial machine learning (AML) is a technique that uses malicious inputs to trick or mislead a machine learning (ML) model.
v/ @crowdstrike.bsky.social
#cloud #cloudsecurity #cybersecurity #cloudai
#cloud #cloudsecurity #cloudai #cybersecurity #aisecurity
#cloud #cloudsecurity #cloudai #cybersecurity #aisecurity
A: A cybersecurity policy should be refreshed at least once a year
v/ @carbide.bsky.social
#cloud #cloudsecurity #cybersecurity #cloudai #aisecurity
A: A cybersecurity policy should be refreshed at least once a year
v/ @carbide.bsky.social
#cloud #cloudsecurity #cybersecurity #cloudai #aisecurity
A: An "insider threat" in AI security refers to a situation where someone with authorized access to an organization's AI systems misuses that access to harm the organization
v/@vectraai.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
A: An "insider threat" in AI security refers to a situation where someone with authorized access to an organization's AI systems misuses that access to harm the organization
v/@vectraai.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
As a IT or Security leader, will your business be impacted by the new CMMC rules?
Join Trustwave and Clarify360 for a webinar
When: 01/28/2025
Time: 1PM EST
👉 Register here: www.eventbrite.com/e/accelerate...
As a IT or Security leader, will your business be impacted by the new CMMC rules?
Join Trustwave and Clarify360 for a webinar
When: 01/28/2025
Time: 1PM EST
👉 Register here: www.eventbrite.com/e/accelerate...