Anomalous Interactions
Unexpected or irregular behaviors detected in systems, potentially signaling security threats.
Exposure Mapping
A Knostic tool that traces data access boundaries, identifying oversharing and undersharing to refine permissions.
Identity and Access Management (IAM)
Systems and frameworks that manage user identities and control access to organizational resources.
Jailbreaking
Jailbreaking in the context of generative AI (GenAI) refers to techniques that exploit vulnerabilities in AI systems to bypass their safety mechanisms, ethical guidelines, or restrictions. This is achieved through carefully crafted prompts, adversarial inputs, or roleplay scenarios that manipulate the model into generating outputs it is designed to avoid. Common examples include the “DAN” (Do Anything Now) prompt and multi-shot jailbreaks. Jailbreaking can result from architectural or training flaws and poses risks such as spreading misinformation, producing harmful content, or enabling malicious activities.
Knowledge Segmentation
The process of dividing organizational data into contextual boundaries to control access and enhance security, similar in concept to network segmentation.
LLM
Large language models (LLMs) process and generate human-like text based on input. Built on deep learning architectures, particularly transformer networks, they are trained on vast datasets comprising text from the internet, books, articles, and other written materials. This extensive training enables them to understand context, grasp nuances in language, and respond to prompts with coherence and relevance. LLMs can perform a wide range of tasks, from answering questions and writing essays to generating code. Their ability to understand and produce language has made them valuable tools for applications in natural language processing, content creation, customer service, and education.
Oversharing
A subset of AI focusing on algorithms that enable systems to learn from data and improve performance over time.
Prompt
Manual query via the UI of an LLM or LLM-based tool like Microsoft O365 Copilot.
Prompt Injection
Prompt injection is a prompt where attackers use carefully crafted inputs to manipulate a large language model (LLM) into behaving in an undesired manner. This manipulation tricks the LLM into executing the attacker's intentions. This threat becomes particularly concerning when the LLM is integrated with other tools such as internal databases, APIs, or code interpreters, creating a new attack surface.
Question
Most common form of a prompt. Usually a plain english query sent as a Prompt to the LLM-based tool. All Questions are Prompts, but not all Prompts are Questions.
Topic
A topic is any high-level descriptor or name of a type of business activity or concept. For example, HR and Finance are topics. See also Subtopics.
Toxicity
A jailbroken large language model (LLM) behaving unpredictably can pose significant risks, potentially endangering an organization, its employees, and its customers. Repercussions range from embarrassing social media posts to negative customer experiences, and may even include legal complications. To safeguard against such issues, it’s crucial to implement protective measures. These measures are often referred to as toxicity detection.
Key Concerns:
Toxicity: Preventing harmful or offensive content
Bias: Ensuring fair and impartial interactions
Racism: Avoiding racially insensitive or discriminatory content
Brand Reputation: Maintaining a positive public image
Inappropriate Content: Filtering out unsuitable material which could be offensive.
Knostic is the comprehensive impartial solution to stop data leakage.
Copyright © 2025. All rights reserved