Glossary of AI Terms
We’ve curated here a glossary of key terms of relevance to Knostic to aid in understanding Enterprise AI Search and AI identity and Access Management (AI IAM)
Anomalous Interactions
Unexpected or irregular behaviors detected in systems, potentially signaling security threats.
Data Leakage
The unauthorized transmission or exposure of sensitive information to unintended recipients.
Dynamic Policies
Access control rules that adapt to changing roles, permissions, and data flows within an organization.
Exposure Mapping
A Knostic tool that traces data access boundaries, identifying oversharing and undersharing to refine permissions.
Flowbreaking
An AI attack class where malicious inputs disrupt expected LLM outputs by attacking the wider application surrounding the model, potentially leading to security risks.
Identity and Access Management (IAM)
Systems and frameworks that manage user identities and control access to organizational resources.
Jailbreaking
Jailbreaking in the context of generative AI (GenAI) refers to techniques that exploit vulnerabilities in AI systems to bypass their safety mechanisms, ethical guidelines, or restrictions. This is achieved through carefully crafted prompts, adversarial inputs, or roleplay scenarios that manipulate the model into generating outputs it is designed to avoid. Common examples include the “DAN” (Do Anything Now) prompt and multi-shot jailbreaks. Jailbreaking can result from architectural or training flaws and poses risks such as spreading misinformation, producing harmful content, or enabling malicious activities
Knowledge Segmentation
The process of dividing organizational data into contextual boundaries to control access and enhance security, similar in concept to network segmentation.
LLM
Large language models (LLMs) process and generate human-like text based on input.
Built on deep learning architectures, particularly transformer networks, they are trained on vast datasets comprising text from the internet, books, articles, and other written materials. This extensive training enables them to understand context, grasp nuances in language, and respond to prompts with coherence and relevance. LLMs can perform a wide range of tasks, from answering questions and writing essays to generating code. Their ability to understand and produce language has made them valuable tools for applications in natural language processing, content creation, customer service, and education.
Machine Learning (ML)
A subset of AI focusing on algorithms that enable systems to learn from data and improve performance over time.
Need-to-Know Principle
A security concept ensuring users access only the data necessary for their roles, reducing exposure risks.
Oversharing
Allowing excessive or unauthorized access to sensitive information, increasing the risk of security breaches.
Prompt
Manual query via the UI of an LLM or LLM-based tool like Microsoft O365 Copilot.
Prompt Injection
Prompt injection is a prompt where attackers use carefully crafted inputs to manipulate a large language model (LLM) into behaving in an undesired manner. This manipulation tricks the LLM into executing the attacker's intentions. This threat becomes particularly concerning when the LLM is integrated with other tools such as internal databases, APIs, or code interpreters, creating a new attack surface.
Question
Most common form of a prompt. Usually a plain english query sent as a Prompt to the LLM-based tool. All Questions are Prompts, but not all Prompts are Questions.
Subtopic
A subtopic is a topic subject to, dependent, or otherwise hierarchically related to a parent topic. Example: Compensation is a subtopic of the Finance topic.
Topic
A topic is any high-level descriptor or name of a type of business activity or concept. For example, HR and Finance are topics. See also Subtopics.
Toxicity
A jailbroken large language model (LLM) behaving unpredictably can pose significant risks, potentially endangering an organization, its employees, and its customers. Repercussions range from embarrassing social media posts to negative customer experiences, and may even include legal complications. To safeguard against such issues, it’s crucial to implement protective measures. These measures are often referred to as toxicity detection.
Key Concerns:
Toxicity: Preventing harmful or offensive content
Bias: Ensuring fair and impartial interactions
Racism: Avoiding racially insensitive or discriminatory content
Brand Reputation: Maintaining a positive public image
Inappropriate Content: Filtering out unsuitable material which could be offensive.
Undersharing
Restricting access to critical data, which can hinder productivity and operational efficiency.
Learn more
For all digital transformation projects, LLM access and enterprise AI search are top priorities.
Let’s talk about your use case.