 
AI – Cybersecurity Glossary: 12 Terms IT Pros Wish You Knew
In 2023 we published the Artificial Intelligence Glossary: The Top 15 Definitions You Need To Know, and promised to update the list as AI evolved. With cybercrime predicted to grow 15% annually over the next five years, arming yourself and your stakeholders with the relevant terms is your first line of defense in keeping your business protected. That brings us to our last post for Cybersecurity Awareness Month: a new list of a dozen terms IT pros wish you knew about AI and cybersecurity. Let’s dive in:
1. Adversarial Machine Learning (AML)
Adversarial Machine Learning (AML) is a subfield within artificial intelligence and machine learning (ML). The field encompasses both the methods for creating adversarial attacks and the design of defenses to protect against them. AML aims to understand vulnerabilities and develop models that are more robust to attacks. Source: DataCamp
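To make the idea concrete, here is a minimal sketch of one well-known adversarial attack, the fast gradient sign method (FGSM), run against a toy logistic-regression model. The data, model, and attack strength (epsilon) are illustrative assumptions for this post, not something taken from the source above.

```python
# Minimal FGSM-style adversarial example against a toy logistic-regression model.
# Everything here (data, weights, epsilon) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Train a simple logistic regression with gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y) / len(y))  # gradient of log loss w.r.t. weights
    b -= 0.1 * np.mean(p - y)            # gradient of log loss w.r.t. bias

def predict(x):
    return (1 / (1 + np.exp(-(x @ w + b))) > 0.5).astype(int)

# FGSM: nudge an input in the direction that increases the loss the most.
x_clean = np.array([-2.0, -2.0])         # a representative class-0 input
y_true = 0.0
p = 1 / (1 + np.exp(-(x_clean @ w + b)))
grad_x = (p - y_true) * w                # gradient of log loss w.r.t. the input
epsilon = 2.5                            # attack strength (illustrative)
x_adv = x_clean + epsilon * np.sign(grad_x)

print("clean prediction:      ", predict(x_clean), " true label:", int(y_true))
print("adversarial prediction:", predict(x_adv))
```

The perturbed input looks almost identical to the original, yet the model's prediction flips. Defenses such as adversarial training, where the model is retrained on perturbed examples like x_adv, are one common way to make models more robust to this class of attack.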
2. Agentic AI
agentic
adjective | uh-JEN-tik
Able to accomplish results with autonomy, used especially in reference to artificial intelligence. Source: Merriam-Webster
Agentic AI is an autonomous AI system that can act independently to achieve pre-determined goals. These systems can perform complex tasks (without constant human oversight) while making independent contextual decisions. AI agents can communicate with each other and other software systems to automate existing business processes while learning from their environment and adapting to changing conditions. Source: AWS
Cyber Security Connection: The opportunity with an AI agent that can plan, act, reason and adapt is powerful, but with this autonomy comes risk. McKinsey & Company does an excellent job of outlining why security cannot be an afterthought when deploying Agentic AI solutions.
3. AI Attack Surface
AI attack surface is the collection of potential ways an AI system can be breached or manipulated. This includes data, infrastructure, applications, users, and every component that interacts with or powers AI, such as training data, models, APIs, pipelines, and more. Attack surfaces can scale unpredictably because these systems are developed with input from multiple teams and deployed quickly. Source: Wiz.io
4. AI Data Poisoning
AI Data Poisoning is a type of adversarial attack that deliberately corrupts training datasets to compromise machine learning models. Unlike conventional cyberattacks targeting operational systems, data poisoning occurs during the AI development lifecycle, making detection exceptionally challenging. These attacks exploit the fundamental learning mechanisms of machine learning models, enabling threat actors to manipulate AI behavior from within. Source: Abnormal AI
Software Developer Tip: NIST has released a guideline paper meant to assist developers in mitigating the risks associated with adversarial machine learning.
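For a concrete, if simplified, picture of what data poisoning can look like, the sketch below flips a fraction of training labels on a toy dataset and compares the resulting model to one trained on clean data. The dataset, model choice, and 30% poisoning rate are illustrative assumptions using scikit-learn, not a reproduction of any particular real-world attack.

```python
# Illustrative label-flipping poisoning on a toy dataset (scikit-learn).
# The dataset, poisoning rate, and model choice are assumptions for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
clean_acc = accuracy_score(y_test, clean_model.predict(X_test))

# "Poison" the training set: flip the labels of 30% of the training samples,
# simulating an attacker who corrupts data during the development lifecycle.
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
poisoned_acc = accuracy_score(y_test, poisoned_model.predict(X_test))

print(f"accuracy trained on clean labels:    {clean_acc:.3f}")
print(f"accuracy trained on poisoned labels: {poisoned_acc:.3f}")
```

A real attacker would typically be more selective, for example flipping labels for only one class or for inputs near the decision boundary, which is harder to spot and more damaging than random noise. The NIST guidance mentioned above covers these attack variants and possible mitigations.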
5. AI Governance
AI Governance encompasses the policies, procedures, and ethical considerations required to oversee the development, deployment, and maintenance of AI systems. Governance erects guardrails, ensuring that AI operates within legal and ethical boundaries while aligning with organizational values and societal norms. An AI governance framework provides a structured approach to addressing transparency, accountability, and fairness, as well as setting standards for data handling, model explainability, and decision-making processes. Source: Palo Alto Networks
Cyber Security Connection: Through AI Governance, organizations facilitate responsible AI innovation while mitigating risks related to bias, privacy breaches, and security threats.
6. AI Security Platforms
AI Security Platforms provide a unified way to secure third-party and custom-built AI applications. These systems centralize visibility, enforce usage policies, and protect against AI-specific threats. In an age of prompt injection attacks, data leakage, and rogue agent actions, AI Security Platforms ensure CIOs can monitor AI activity with consistent guardrails. Source: Gartner
7. Anomaly Detection
AI Anomaly Detection (AD) is the process of identifying data patterns that deviate from anticipated behavior. These irregularities (anomalies or outliers) can indicate fraud, technical failures, or peculiar changes in user behavior. Modern approaches use artificial intelligence with machine learning and deep learning techniques rather than the fixed rules of traditional detection. AD systems analyze historical data to establish what “normal” looks like and then compare new inputs against that baseline, making it possible to uncover subtle deviations. Source: TechMagic
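As a minimal sketch of that baseline-and-compare idea, the snippet below fits an Isolation Forest to synthetic "historical" data and then flags new observations that deviate from it. The feature values, contamination rate, and choice of algorithm are illustrative assumptions, not a recommendation from the source.

```python
# Baseline-and-compare anomaly detection with an Isolation Forest (scikit-learn).
# The historical data, new inputs, and contamination rate are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Historical data establishing what "normal" looks like,
# e.g. two features such as logins and MB transferred per hour.
historical = rng.normal(loc=[50, 200], scale=[5, 20], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=1).fit(historical)

# New observations: two typical, one clearly unusual.
new_inputs = np.array([
    [52, 195],   # looks normal
    [48, 210],   # looks normal
    [50, 900],   # unusual spike in the second feature
])

# predict() returns 1 for inliers and -1 for anomalies.
for row, label in zip(new_inputs, detector.predict(new_inputs)):
    status = "anomaly" if label == -1 else "normal"
    print(row, "->", status)
```

In production the same pattern scales up: the baseline is learned from logs, network flows, or user behavior, and the flagged outliers feed an alerting or investigation pipeline.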
8. Deepfake
Deepfake is a form of artificial intelligence (AI) used to create convincing hoax images, sounds, and videos. Deepfakes often stitch together hoaxed images and sounds using machine learning algorithms, creating people and/or events that do not exist. Source: Fortinet
Cyber Security Risks: Deepfake technology is accelerating the spread of disinformation, and business leaders should be worried. A recent Gartner survey revealed that 62% of organizations have experienced at least one deepfake attack in the last 12 months.
9. Foundation Model
Researchers coined the term foundation model to describe ML models trained on a broad spectrum of generalized and unlabeled data. These models can perform a wide variety of general tasks such as understanding language, generating text and images, and conversing in natural language. Foundation models are a form of generative AI, producing output from one or more inputs (prompts) given as human-language instructions. The models are based on complex neural networks, including generative adversarial networks (GANs), transformers, and variational autoencoders (VAEs). Source: AWS
Cyber Security Connection: Generative AI comes with security risks, including deepfake generation, automated phishing attacks, malicious code engineering, and data poisoning; the top 10 risks are listed here.
10. Large Language Model (LLM)
A large language model (LLM) is a type of artificial intelligence model that utilizes machine learning techniques to understand and generate human language. LLMs can be incredibly valuable for companies and organizations looking to automate and enhance various aspects of communication and data processing.
Cyber Security Connection: AI governance is crucial for the responsible development and oversight of LLMs, ensuring they align with organizational values and legal requirements. As AI regulations rapidly evolve, organizations must prioritize compliance with data privacy laws (like GDPR and HIPAA) and new AI-specific mandates, which often dictate strong risk management, data governance, human oversight, and robust cybersecurity for AI systems. Source: RedHat
11. Penetration Testing
Penetration testing (or pen testing) is a security exercise in which a cybersecurity expert attempts to find and exploit vulnerabilities in a computer system. Pen testing helps an organization discover vulnerabilities or flaws in its systems that it might not otherwise have found, helping stop attacks before they start. Source: Cloudflare
AI Connection: Given the complexity of threats in today’s digital landscape, combining AI-driven penetration testing with advanced security testing techniques is not only fiscally responsible but necessary for predicting threats in real time. Check out a top 10 list of tools here.
12. Social Engineering
Social engineering is a manipulation technique that exploits human psychology to gain access to confidential information or perform unauthorized actions. Social engineers use deceit to trick individuals into giving up sensitive information such as passwords or financial details.
Cyber Security Connection: The introduction of AI into social engineering marks a serious turning point. AI can create more human-like interactions to deceive victims effectively, but this technology also provides new opportunities for defense. The development of sophisticated detection algorithms, predictive analytics, and automated responses can be used to thwart potential threats. Source: Lakera
Final Thoughts
Whether your company is an early adopter of AI or just beginning the exploratory process, staying up to date on terms that are the building blocks of this technology is prudent.
Language is forever changing, perhaps not as fast as AI, but regardless, change is inevitable. Keep an eye on our blog as we do our best to keep you up to date on new and important AI terms impacting your business.
 
 
