OWASP Releases Expanded Guidance for Generative and Agentic AI Security
The Open Worldwide Application Security Project (OWASP) has published a significantly updated set of security recommendations targeting organizations that are adopting artificial intelligence systems. Arriving just four months after the previous edition, the new release reflects both the accelerating pace of AI adoption and the mounting security challenges that come with it. The expanded guidance splits its focus across two distinct tracks: one dedicated to securing generative AI (GenAI) and large language models (LLMs), and a second focused on agentic AI systems — a recognition that these two categories demand fundamentally different defensive approaches.
Alongside these two solution guides, OWASP also published its inaugural GenAI Data Security risk list, which catalogs 21 data security risks introduced by AI systems. These include sensitive data leakage, exposure of agent identities and credentials, and unsanctioned data flows stemming from shadow AI deployments.
A Fast-Moving Field Forces Rapid Updates
The sheer velocity of change in the AI landscape is reflected in the pace of OWASP's publishing schedule. According to Scott Clinton, co-lead of the OWASP GenAI Security Project, the number of solution providers covered in the matrix has grown from 50 to more than 170 in just a few months.
"When we first started, we were publishing it every quarter because things were moving so incredibly fast. The industry is kind of still moving quickly, solutions are still coming in, but it's not quite at the same pace."
Clinton noted that OWASP plans to shift to a six-month release cadence going forward, suggesting that while the ecosystem remains dynamic, the explosive early growth is beginning to stabilize somewhat.
From LLMs to Swarms: The Expanding Attack Surface
A range of real-world incidents has underscored just how difficult it is for enterprises to secure their AI environments. AI agents have been observed ignoring security boundaries in pursuit of task completion, and the emergence of so-called "swarms" — collections of AI agents working in concert — has introduced even greater security complexity. Infrastructure layers supporting these systems, such as Model Context Protocol (MCP) servers, remain highly insecure according to subject matter experts.
The scale of AI deployment is difficult to overstate. Sai Modalavalasa, chief architect at AI-security firm Straiker and a contributor to the OWASP GenAI Security Project, illustrated the scope of the problem with a striking comparison: a 10,000-employee company might once have operated between 30 and 100 applications, but now runs tens of thousands of AI applications once individual LLM calls that generate data-gathering scripts are counted.
"Without visibility and observability, literally, you're shooting in the dark. Unlike application security, in the world of AI, you cannot put a finger on when you say visibility because it's all over the map."
Modalavalasa emphasized that the first requirement for any organization is gaining the ability to observe what AI agents are actually doing within their networks and systems — a deceptively difficult task given the distributed and opaque nature of modern AI deployments.
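As a deliberately simplified illustration of that first step, the sketch below wraps agent-facing actions in an audit log so that every invocation leaves a visible trace for security teams. The decorator name, log format, and example function are assumptions made for illustration, not part of any OWASP specification.

```python
import functools
import time

# Minimal observability sketch: record what an agent actually does.
# AUDIT_LOG, audited(), and fetch_customer_record are illustrative names.
AUDIT_LOG = []

def audited(action):
    """Decorator that logs each call to an agent-facing action."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {"ts": time.time(), "action": action,
                     "args": repr(args), "kwargs": repr(kwargs)}
            try:
                result = fn(*args, **kwargs)
                entry["status"] = "ok"
                return result
            except Exception as exc:
                entry["status"] = f"error: {exc}"
                raise
            finally:
                # The entry is appended whether the call succeeded or failed,
                # so failures are just as visible as successes.
                AUDIT_LOG.append(entry)
        return wrapper
    return decorator

@audited("fetch_customer_record")
def fetch_customer_record(customer_id):
    # Stand-in for a real agent tool call.
    return {"id": customer_id, "name": "example"}

fetch_customer_record(42)
print(AUDIT_LOG[-1]["action"])  # prints "fetch_customer_record"
```

In practice the log entries would flow to a central telemetry pipeline rather than an in-memory list, but the principle — no agent action without an audit record — is the same.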
Why GenAI and Agentic AI Require Separate Treatment
When OWASP first began developing its top 10 list for LLM risks, protocols such as MCP and Agent2Agent (A2A) did not yet exist. The emergence of these new interaction protocols means that GenAI and agentic AI systems now operate under entirely different communication paradigms, necessitating different solution sets for each.
"When we first started doing the first top 10 list, MCP didn't exist, A2A didn't exist. We'll have more protocols coming up that are helping to build applications as we get more complex. The multi-agent architectures almost guarantee that we're going to continue to see some separation there between them."
This recognition is why OWASP has moved to a multipronged framework, treating GenAI and LLMs as one domain and agentic AI as another, each with its own guidance document and corresponding tools matrix. The growing roster of companies offering solutions tailored specifically to agentic AI systems reflects how seriously vendors are now taking this distinction.
Mapping Security to the AI Development Lifecycle
The two solution reports are designed to function as a roadmap, showing how the security of LLMs, GenAI, and agentic AI must evolve as an integrated part of DevOps and SecOps workflows throughout the software development and deployment cycle. Coverage spans both commercial and open source tools and addresses security challenges that are unique to AI-based ecosystems, including:
- Goal drift — AI systems gradually deviating from their intended objectives
- Prompt injection — adversarial inputs designed to manipulate model behavior
- Inter-agent collusion — multiple AI agents coordinating in unintended or harmful ways
- Unsafe tool execution — agents invoking external tools or APIs without adequate safeguards
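To make the last of these risks concrete, the following sketch shows one common mitigation for unsafe tool execution: an allowlist-based dispatcher that refuses to run any tool, or accept any argument, that has not been explicitly registered. All names here (ToolRegistry, run_tool, and the example tools) are illustrative assumptions, not drawn from the OWASP guidance.

```python
class UnsafeToolError(Exception):
    """Raised when an agent requests a tool or argument outside the allowlist."""

class ToolRegistry:
    """Only tools explicitly registered here may be executed by an agent."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn, allowed_args=None):
        # allowed_args: the only keyword arguments the agent may pass.
        self._tools[name] = (fn, set(allowed_args or []))

    def run_tool(self, name, **kwargs):
        if name not in self._tools:
            raise UnsafeToolError(f"tool {name!r} is not on the allowlist")
        fn, allowed = self._tools[name]
        disallowed = set(kwargs) - allowed
        if disallowed:
            raise UnsafeToolError(
                f"arguments {sorted(disallowed)} not permitted for {name!r}")
        return fn(**kwargs)

registry = ToolRegistry()
registry.register("lookup_weather",
                  lambda city: f"sunny in {city}",
                  allowed_args={"city"})

print(registry.run_tool("lookup_weather", city="Lisbon"))  # prints "sunny in Lisbon"
try:
    registry.run_tool("delete_files", path="/")  # never registered
except UnsafeToolError as exc:
    print("blocked:", exc)
```

The deny-by-default posture is the point: anything the agent asks for that was not explicitly granted fails loudly rather than executing silently.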
Clinton described the overarching goal as connecting emerging market solutions to an evolving definition of the AI-aware software development life cycle, and then mapping both of those to the documented risks in OWASP's catalogs.
21 Data Security Risks Organizations Must Manage
The third document in OWASP's latest release is the GenAI Data Security risk list, which enumerates 21 specific risks that organizations need to address as part of their data security posture. The framework covers activities ranging from discovering AI systems and associated activity, to classifying data and AI assets, establishing governance policy, and monitoring for both compliance and security violations.
Among the top risks identified are:
- DSGAI-01: Sensitive data leakage through prompts and model outputs
- DSGAI-04: Data poisoning through the manipulation of training data and embedded memory files
- DSGAI-06: Compromise through third-party tools and data
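As a minimal illustration of defending against the first of these — sensitive data leaking out through prompts and model outputs — the sketch below applies simple pattern-based redaction before text reaches a model. The pattern set and function names are illustrative assumptions; a production deployment would rely on a far richer detection stack than three regular expressions.

```python
import re

# Illustrative detectors for a few common sensitive-data shapes.
# Real DLP tooling covers many more categories with far better precision.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Return (clean_text, findings): mask matches before they reach a model."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

clean, found = redact("Contact jane@example.com, key AKIA1234567890ABCDEF")
print(found)   # both the email and the key-shaped string are flagged
print(clean)   # the redacted text is what would be sent to the model
```

The same filter can be pointed at model outputs as well as prompts, since DSGAI-01 covers leakage in both directions.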
The prioritization of defenses, however, is not one-size-fits-all. Modalavalasa cautioned that organizations need to evaluate their specific patterns of AI adoption to determine which risks are most consequential for them.
"I think the defenses are driven by both how you are adopting it — your business needs. If you are relying on AI a lot, trying to rely on its models for your whole automation and reasoning stack ... or depending too much on it, probably the defenses are not there yet because AI could 'go crazy' — it's very goal-driven and could lose the context."
What Comes Next
As the OWASP GenAI Security Project transitions to a six-month update cadence, the community's focus is shifting toward filling the remaining gaps — particularly around agentic AI observability and the security of emerging inter-agent protocols. With the solutions matrix now covering more than 170 providers and a formal data security risk catalogue now in place, organizations have more structured guidance than ever before. The challenge that remains is translating that guidance into operational defenses fast enough to keep pace with an AI ecosystem that continues to evolve at a remarkable speed.
Source: Dark Reading