AI & SaaS Security Glossary

AI Data Leakage
The unintended exposure of sensitive, confidential, or proprietary data through the use of AI systems, including prompts, outputs, or integrations.

AI Governance
The frameworks, policies, and controls used to ensure AI systems are deployed and used securely, responsibly, and in compliance with internal and external requirements.

AI Security Posture Management (AI-SPM)
A security approach focused on discovering, assessing, and mitigating risks associated with enterprise AI usage, configurations, and data flows.

API Security
The practice of protecting application programming interfaces from unauthorized access, abuse, and data exposure.

Cloud Access Security Broker (CASB)
A security solution that provides visibility and control over data and user activity across cloud and SaaS applications.

Data Classification
The process of identifying and categorizing data based on sensitivity, value, and regulatory requirements to apply appropriate security controls.

Data Exfiltration
The unauthorized transfer of data from an organization’s environment to an external system or user.

Data Loss Prevention (DLP)
Security technologies and policies designed to detect and prevent the unauthorized sharing or leakage of sensitive data.Enterprise AIAI tools and platforms adopted by organizations for business use, often involving proprietary data, workflows, and integrations.

Generative AI (GenAI)
A class of AI models that generate new content—such as text, code, images, or audio—based on patterns learned during training.

Identity and Access Management (IAM)
Systems and processes used to manage user identities, authentication, and access rights across applications and data.Insider RiskSecurity risks that originate from within an organization, whether intentional or accidental, involving employees, contractors, or partners.

Least Privilege
A security principle that limits user and system access rights to only what is necessary to perform assigned tasks.

LLM (Large Language Model)
A type of AI model trained on large volumes of text data to understand and generate human-like language.

Model Drift
The degradation of an AI model’s performance or reliability over time due to changes in data, usage patterns, or context.

Prompt
The input provided to an AI system that guides how it generates a response or output.

Prompt Injection
A technique in which malicious or unintended instructions are embedded in prompts to manipulate AI behavior or outputs.

Regulatory Compliance
The process of meeting legal and industry requirements related to data protection, privacy, and security, such as GDPR or SOC 2.

Risk Posture
An organization’s overall exposure to security risks based on its controls, behaviors, and threat environment.

SaaS Sprawl
The uncontrolled growth of SaaS applications within an organization, often leading to visibility, security, and compliance challenges.

SaaS Security
The practice of securing data, users, configurations, and integrations across Software-as-a-Service applications.

Shadow AI
The use of AI tools or models by employees without formal approval, governance, or visibility from security teams.

Sensitive Data
Information that requires protection due to legal, regulatory, or business impact, such as PII, financial data, source code, or intellectual property.

Security Misconfiguration
Incorrect or suboptimal settings in applications or systems that create security vulnerabilities or increase risk.

SOC 2
A compliance framework that evaluates an organization’s controls related to security, availability, confidentiality, and privacy.

Supply Chain Risk
Security risks introduced through third-party vendors, tools, or integrations that have access to organizational data or systems.Threat VectorThe path or method used by an attacker or risk event to compromise systems, data, or users.

Usage Visibility
The ability to see how applications, tools, and data are being accessed and used across the organization.

User Behavior Analytics (UBA)
Security techniques that analyze user activity patterns to detect anomalies or potential threats.Zero TrustA security model that assumes no user or system should be trusted by default, requiring continuous verification of identity and access.

Build Your AI Guardrails Now

Gain the visibility and control you need to guide AI use with confidence.

Harmonic Security company logo
As every employee adopts AI in their work, organizations need control and visibility. Harmonic delivers AI Governance and Control (AIGC), the intelligent control layer that secures and enables the AI-First workforce. By understanding user intent and data context in real time, Harmonic gives security leaders all they need to help their companies innovate at pace.
© 2026 Harmonic Security