
The OWASP top 10 AI vulnerabilities are a commonly used search phrase for guidance on security risks in LLM and generative AI systems. Originally launched as the OWASP Top 10 for Large Language Model Applications (LLMs), the initiative later expanded into the broader OWASP GenAI Security Project, which helps teams assess risks such as prompt injection, sensitive information disclosure, supply chain exposure, excessive agency, and improper output handling.
To reduce attack surface and better protect stakeholders and users from cyberattacks, security and DevSecOps teams should consider integrating OWASP’s AI security guidance into system architecture, secure development lifecycle (SDLC) controls, and operational governance. The earlier teams apply these practices during development, the better they will be able to manage AI-specific risks. This guide explores the OWASP AI Top 10 framework in detail, including its risk categories and how teams can map vulnerabilities to DevSecOps controls.
People often search for “OWASP top 10 AI vulnerabilities,” but the official OWASP resource is the OWASP Top 10 for LLMs and GenAI Apps, part of the OWASP GenAI Security Project. The project provides guidance for identifying and mitigating security risks in LLM-powered and generative AI applications, including the latest 2025 Top 10 risks and mitigation strategies.
Resources may still reference the framework’s older names, such as the OWASP Top 10 for LLMs v1.1 (2023) or 2023-24 LLM Top 10. OWASP maintains both archived versions and the most current guidance on its project pages, so teams can track how AI security risks and mitigation practices have evolved.
As generative AI ecosystems and attack surfaces expanded, OWASP’s guidance evolved to address broader risks.
Originally, the OWASP Top 10 for LLM Applications was a focused project on risks to LLM applications. As generative AI systems became more complex and integrated into broader software environments, the initiative expanded into the OWASP GenAI Security Project.
According to OWASP, the LLM Top 10 remains a core component of the broader GenAI Security Project, while the broader project provides additional guidance about generative AI security and safety across multiple initiatives. It also provides resources for LLM apps and agentic AI security.
OWASP is also developing related frameworks, such as the OWASP Top 10 for Agentic Applications (2026), which identifies the most critical security risks facing agentic AI and autonomous systems. While related, it is separate from the LLM and GenAI Top 10.
The OWASP Top 10 for LLM Applications framework identifies systemic weaknesses that can arise in AI-powered applications, rather than focusing on isolated model issues. The framework groups risk across multiple layers of the AI application stack, including:
Unlike traditional OWASP web vulnerability categories, which primarily focus on application code flaws, generative AI risks often span architecture, orchestration, data pipelines, and runtime behavior in addition to code flaws.
The OWASP Top 10 for LLMs (2025) identifies the most critical security risks affecting generative AI systems. These are:
Every new AI integration expands the attack surface, often in ways security teams are not yet fully equipped to monitor. Here are several ways AI components increase exposure.
Ultimately, teams should keep in mind that many security incidents stem from integration and workflow weaknesses, not only from model design.
When attackers exploit LLM and GenAI vulnerabilities, multiple systems may be affected beyond the model itself:
Many real-world incidents stem from integration weaknesses, trust boundary failures, or unsafe automation paths, rather than flaws in the underlying model alone. As generative AI systems become more deeply embedded in business workflows, these risks may increasingly resemble full-stack security issues.
To mitigate generative AI security risks, teams must bake safeguards throughout the software development lifecycle (SDLC). Here’s how they can accomplish that.
AI-enabled systems should be designed with clear trust boundaries between users, models, tools, and data sources. This includes:
Teams also need to validate both model inputs and outputs to prevent manipulation and unsafe execution paths. At a minimum, they should:
Besides validating both model inputs and outputs, teams should also implement strong access controls. This is because AI systems often interact with multiple services and data sources, which means a larger attack surface. Here’s how teams can get started doing this:
Generative AI systems frequently rely on external models, frameworks, and plugins, which introduce supply chain risks. As such, security teams should do the following to decrease the attack surface:
Besides performing security testing on the model behavior, security teams should also extend testing to the surrounding application code. Specifically, they should:
Finally, since AI systems behave dynamically, security teams must implement continuous monitoring and incident readiness to maintain a tough security posture. They should:
Despite having unique weaknesses that traditional software lack, AI applications are still software applications that rely on APIs, open-source libraries, CI/CD pipelines, and the authentication and authorization mechanisms.
Because of this, traditional application security practices such as static and composition analysis (SCA) remain critical for reducing AI-related exposure at the code and dependency levels. However, traditional AppSec tools alone are not sufficient to address all AI-specific threats, such as prompt injection or excessive model agency that may emerge from runtime behavior and AI orchestration logic. As a result, organizations must combine traditional AppSec controls with AI-specific security practices to fully reduce risk in generative AI environments.
Kiuwan supports AI-powered security testing through Static Application Security Testing (SAST) and Software Composition Analysis (SCA). With these tools, organizations can reduce risk in the application and software supply chain layers of AI-enabled systems, as well as gain structured visibility into code and dependency risks.
Static Application Security Testing (SAST) analyzes application code to identify code-level vulnerabilities and insecure integration patterns in application code surrounding AI functionality. Integrated with CI/CD workflows, it supports secure SDLC workflows and CI/CD integrations across 30-plus programming languages.
Software Composition Analysis (SCA) detects vulnerable open-source components and helps monitor supply chain exposure across dependencies. This visibility supports SBOM visibility and governance workflows.
Delivered through Sembi, Kiuwan SAST and SCA enable security-by-design across AI-powered applications. They help teams enforce policy-driven remediation workflows and align security practices with organizational standards and processes. Teams also receive actionable remediation guidance.
The OWASP Top 10 for Large Language Model (LLM) Applications focuses on risks specific to generative AI systems, such as prompt injection, model misuse, and insecure integrations with external tools and data sources. In contrast, the traditional OWASP Top 10 for web apps focuses on classic software vulnerabilities like injection flaws and security misconfigurations. Although some of these risks apply to AI-powered systems, the LLM Top 10 specifically addresses new attack surfaces created by AI orchestration layers, retrieval systems, model outputs, and autonomous actions.
No, “OWASP top 10 AI vulnerabilities” is a commonly used search phrase, but it’s not the official OWASP project name. The official framework is the OWASP Top 10 for Large Language Model Applications.
Earlier versions of the framework focused mostly on risks affecting LLM-powered applications, with initial releases appearing in 2023 and updates following shortly after. The 2025 version expands the project’s scope to better reflect the broader generative AI ecosystem. It incorporates lessons learned from real-world deployments and highlights risks that emerge across model orchestration, retrieval systems, agentic workflows, integrations, and runtime operations.
Static analysis can identify code-level weaknesses, but generally can’t detect prompt injection vulnerabilities. Instead, teams should use prompt isolation, input validation, output filtering, and runtime monitoring to detect and mitigate prompt manipulation attacks.
There is no single risk that applies to all LLM-powered applications. However, many experts believe that prompt injection and excessive model trust are probably the most significant threats. LLM applications often rely on model output to drive workflows, access data, or trigger actions, so if malicious inputs manipulate the model’s behaviour, they can bypass safeguards, expose sensitive data, and even cause unintended system actions.
DevSecOps teams should treat AI deployments as part of the software supply chain and application security lifestyle. This means they should incorporate frameworks like the OWASP LLM Top 10 into security reviews and development processes to help systematically manage AI-related risks.
Yes, AI introduces new compliance or audit obligations, especially when it processes sensitive data or affects regulated decisions. Security frameworks such as the OWASP LLM Top 10 can help organizations document risks and mitigation strategies for compliance audits.
No, the OWASP Top 10 for Agentic Applications is a separate but related framework. The LLM Top 10 focuses on risks affecting LLM-powered applications and generative AI systems, but the Agentic Applications Top 10 addresses security risks specific to autonomous AI agents.
As AI becomes more widespread, attack surfaces will continue to expand across prompts, outputs, models, retrieval layers, and integrations as organizations embed generative AI into critical applications, workflows, and data pipelines. To protect stakeholders and data, organizations must follow frameworks like the OWASP top 10 AI vulnerabilities framework, which provides structured risk guidance for LLM and GenAI application security.
To adopt this framework successfully, security teams should combine architecture controls, validation, access controls, governance, and continuous testing. They should adopt tools that combine static analysis, SCA, and SBOM visibility to reduce exposure before deployment, identify insecure integrations early, and improve ongoing supply chain governance across AI-enabled systems.
Try Kiuwan today to see how our tools can strengthen your DevSecOps teams. Our free 14-day trial includes guided integration into your CI/CD pipeline and DevOps environment, a compliance overview, vulnerability scanning, and support for over 30 programming languages.