# OWASP API Security Top 10
The OWASP API Security Top 10 is a security awareness document that identifies the most critical security risks to APIs. While traditionally focused on REST and GraphQL APIs, these vulnerabilities are increasingly relevant for LLM applications as they often function as intelligent API layers that interact with databases, external services, and internal systems.
LLM applications that use function calling, tool usage, or agent architectures are particularly susceptible to API security issues, as the LLM acts as a dynamic interface between users and backend systems.
The current OWASP API Security Top 10 (2023) includes:

1. **API1: Broken Object Level Authorization (BOLA)**
2. **API2: Broken Authentication**
3. **API3: Broken Object Property Level Authorization**
4. **API4: Unrestricted Resource Consumption**
5. **API5: Broken Function Level Authorization (BFLA)**
6. **API6: Unrestricted Access to Sensitive Business Flows**
7. **API7: Server Side Request Forgery (SSRF)**
8. **API8: Security Misconfiguration**
9. **API9: Improper Inventory Management**
10. **API10: Unsafe Consumption of APIs**
## Scanning for API risks with Promptfoo

LLM applications with API access create unique security challenges. Promptfoo helps identify API security vulnerabilities in these applications through red teaming:

```yaml
redteam:
  plugins:
    - owasp:api
  strategies:
    - jailbreak
    - prompt-injection
```
Or target specific API risks:

```yaml
redteam:
  plugins:
    - owasp:api:01 # Broken Object Level Authorization
    - owasp:api:05 # Broken Function Level Authorization
    - owasp:api:07 # Server Side Request Forgery
```
## API1: Broken Object Level Authorization

Broken Object Level Authorization (BOLA), also known as Insecure Direct Object Reference (IDOR), occurs when an application fails to properly verify that a user is authorized to access a specific object. It is the most common and impactful API vulnerability.

In LLM applications, BOLA vulnerabilities arise when the model's tools can fetch or modify objects (records, documents, accounts) without checking that the requesting user is actually authorized to access them.

Test for BOLA vulnerabilities with this example configuration:

```yaml
redteam:
  plugins:
    - bola
    - rbac
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:01
```
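On the application side, the standard BOLA mitigation is to check ownership of the requested object inside every tool handler, not just at login. A minimal sketch (the store, function, and field names here are hypothetical, not from promptfoo):

```python
# Hypothetical guard for an LLM tool that fetches records: verify the
# requesting user owns the object before returning it.
RECORDS = {
    "order-1": {"owner": "alice", "total": 42},
    "order-2": {"owner": "bob", "total": 17},
}

def get_order(user: str, order_id: str) -> dict:
    record = RECORDS.get(order_id)
    if record is None or record["owner"] != user:
        # Deny uniformly so callers cannot probe for other users' objects.
        raise PermissionError(f"{user} may not access {order_id}")
    return record
```

Because the check runs per object on every call, a prompt-injected request for someone else's order ID fails the same way a direct API call would.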
## API2: Broken Authentication

Broken Authentication vulnerabilities allow attackers to compromise authentication tokens or exploit implementation flaws to assume other users' identities.

LLM applications with authentication issues may accept identity claims made in conversation at face value, or execute tools without confirming that the session's credentials are valid and unexpired.

Test for authentication vulnerabilities with this example configuration:

```yaml
redteam:
  plugins:
    - bfla
    - rbac
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:02
```
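Whatever the user says in chat, the backend should derive identity only from a verifiable credential. A minimal sketch of signed, expiring tokens using Python's standard `hmac` module (the secret and token format are illustrative, not a promptfoo API):

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # illustrative only; load from a real key store

def sign(user: str, expires: int) -> str:
    """Issue a token binding a user to an expiry timestamp."""
    payload = f"{user}:{expires}"
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{mac}"

def verify(token: str, now: int) -> str:
    """Return the user if the token is authentic and unexpired."""
    user, expires, mac = token.rsplit(":", 2)
    payload = f"{user}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected) or now >= int(expires):
        raise ValueError("invalid or expired token")
    return user
```

`hmac.compare_digest` avoids timing side channels when comparing the signature.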
## API3: Broken Object Property Level Authorization

This vulnerability combines excessive data exposure and mass assignment: an API returns more data than necessary, or allows users to modify object properties they shouldn't be able to access.

In LLM applications, this manifests as models that relay entire API responses, including internal or sensitive fields, back to users, or that pass user-controlled values into object properties the user should not be able to set.

Test for property-level authorization issues with this example configuration:

```yaml
redteam:
  plugins:
    - excessive-agency
    - overreliance
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:03
```
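A common application-side defense is to allow-list the fields that may reach the model at all, so extra properties in an API response never enter the prompt. A minimal sketch (the field names are hypothetical):

```python
# Hypothetical allow-list: only these properties may be shown to the model.
USER_FIELDS = {"name", "plan"}

def redact(record: dict) -> dict:
    """Drop every property not explicitly approved for exposure."""
    return {k: v for k, v in record.items() if k in USER_FIELDS}
```

The same pattern works in reverse for mass assignment: allow-list which properties a user-driven update may write, and discard the rest.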
## API4: Unrestricted Resource Consumption

Formerly known as "Lack of Resources & Rate Limiting," this vulnerability occurs when APIs don't properly restrict resource consumption, leading to denial of service or excessive costs.

LLM applications are particularly vulnerable to resource exhaustion: every request consumes compute and token budget, and a model that chains tool calls can multiply backend load far beyond what a single user request suggests.

Test for resource consumption vulnerabilities with this example configuration:

```yaml
redteam:
  plugins:
    - harmful:privacy
    - pii:api-db
    - pii:session
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:04
```
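On the serving side, the usual mitigation is a per-user budget enforced before any model or tool call runs. A minimal sketch of a rolling-window request cap (class and method names are hypothetical):

```python
import time

class RequestBudget:
    """Cap how many requests a user may make within a rolling time window."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.calls = {}  # user -> list of request timestamps

    def allow(self, user, now=None):
        """Return True and record the call if the user is under budget."""
        now = time.monotonic() if now is None else now
        # Keep only timestamps that still fall inside the window.
        recent = [t for t in self.calls.get(user, []) if now - t < self.window]
        if len(recent) >= self.max_requests:
            self.calls[user] = recent
            return False
        recent.append(now)
        self.calls[user] = recent
        return True
```

Production systems would also cap tokens per request and total spend per account, but the window logic is the same.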
## API5: Broken Function Level Authorization

Broken Function Level Authorization (BFLA) occurs when an application doesn't properly enforce access controls at the function level, allowing users to perform administrative or privileged actions.

In LLM applications with tool calling or function execution, BFLA appears when any user can invoke privileged tools simply by asking for them, because authorization is enforced only at the conversation level rather than per function.

Test for function-level authorization issues with this example configuration:

```yaml
redteam:
  plugins:
    - bfla
    - bola
    - rbac
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:05
```
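The fix is to check the caller's role in the tool dispatcher itself, before any function executes, regardless of what the model proposed. A minimal sketch (role and tool names are hypothetical):

```python
# Hypothetical per-function role requirements, checked at dispatch time.
TOOL_ROLES = {
    "get_balance": {"user", "admin"},
    "delete_account": {"admin"},
}

def dispatch(role: str, tool: str) -> str:
    """Execute a tool only if the caller's role is authorized for it."""
    allowed = TOOL_ROLES.get(tool, set())  # unknown tools allow no one
    if role not in allowed:
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return f"executed {tool}"
```

Because the check lives in the dispatcher, a jailbroken model that requests `delete_account` on behalf of a regular user still cannot reach the function.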
## API6: Unrestricted Access to Sensitive Business Flows

This vulnerability occurs when APIs expose sensitive business workflows without proper controls, allowing attackers to abuse critical functionality.

LLM applications may expose sensitive business flows, such as purchases, refunds, or account changes, through conversational interfaces that lack the abuse controls applied to the underlying API.

Test for business flow vulnerabilities with this example configuration:

```yaml
redteam:
  plugins:
    - harmful:misinformation-disinformation
    - overreliance
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:06
```
## API7: Server Side Request Forgery (SSRF)

SSRF vulnerabilities occur when an API fetches a remote resource without validating the user-supplied URL, allowing attackers to access internal systems or perform unauthorized actions.

LLM applications are particularly vulnerable to SSRF when they fetch URLs that users supply or influence, for example through browsing tools, retrieval plugins, or webhook integrations.

Test for SSRF and injection vulnerabilities with this example configuration:

```yaml
redteam:
  plugins:
    - shell-injection
    - sql-injection
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:07
```
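Before any fetch tool runs, the target URL should be validated against a host allow-list, with raw IP addresses rejected outright since they can point at internal services or cloud metadata endpoints. A minimal sketch using only the standard library (the allow-list is hypothetical):

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical allow-list

def is_safe_url(url: str) -> bool:
    """Accept only HTTPS URLs whose hostname is explicitly allow-listed."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = parsed.hostname or ""
    # Reject literal IPs: they can reach internal ranges or metadata services.
    try:
        ipaddress.ip_address(host)
        return False
    except ValueError:
        pass
    return host in ALLOWED_HOSTS
```

A deny-list of private ranges is weaker than this allow-list approach, since DNS rebinding and redirects can route an "external" hostname to an internal address after the check.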
## API8: Security Misconfiguration

Security misconfiguration is a broad category covering improper security settings, default configurations, verbose error messages, and missing security patches.

LLM applications commonly ship with misconfigurations such as verbose error messages leaking internal details, permissive default settings, or debug endpoints left enabled in production.

Test for misconfiguration issues with this example configuration:

```yaml
redteam:
  plugins:
    - harmful:privacy
    - pii:api-db
    - pii:session
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:08
```
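Verbose errors deserve special attention in LLM apps, because anything returned to the model can end up in a user-visible response. One common pattern is to log full details internally and hand back only an opaque correlation ID; a minimal sketch (function name is hypothetical):

```python
import logging
import uuid

logger = logging.getLogger("api")

def safe_error_response(exc: Exception) -> dict:
    """Log the full exception internally; expose only a correlation ID."""
    error_id = str(uuid.uuid4())
    logger.error("error %s: %r", error_id, exc)
    # No stack trace, class name, or internal path reaches the user or model.
    return {"error": "internal error", "id": error_id}
```

Operators can look up the ID in logs, while the model never sees connection strings or file paths embedded in exception messages.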
## API9: Improper Inventory Management

This vulnerability occurs when organizations lack proper documentation and inventory of API endpoints, versions, and integrations, leading to unpatched or deprecated APIs remaining accessible.

LLM applications with poor inventory management may leave deprecated model endpoints, outdated tool versions, or forgotten integrations reachable long after they stop being maintained.

Test for inventory management issues with this example configuration:

```yaml
redteam:
  plugins:
    - harmful:specialized-advice
    - overreliance
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:09
```
## API10: Unsafe Consumption of APIs

This vulnerability occurs when applications trust data from third-party APIs without proper validation, leading to various attacks through compromised or malicious API responses.

LLM applications consuming external APIs face risks when third-party responses flow into prompts or downstream systems without validation, since a compromised upstream service can inject malicious content directly into the model's context.

Test for unsafe API consumption with this example configuration:

```yaml
redteam:
  plugins:
    - debug-access
    - harmful:privacy
```

Or use the OWASP API shorthand:

```yaml
redteam:
  plugins:
    - owasp:api:10
```
## Testing all ten risks

For complete OWASP API Security Top 10 coverage:

```yaml
redteam:
  plugins:
    - owasp:api
  strategies:
    - jailbreak
    - prompt-injection
```
This configuration tests your LLM application against all OWASP API Security Top 10 risks.
## Relationship to the OWASP LLM Top 10

The OWASP API Security Top 10 and OWASP LLM Top 10 are complementary frameworks:

| API Security Risk               | Related LLM Risk                         |
| ------------------------------- | ---------------------------------------- |
| API1: BOLA                      | LLM06: Excessive Agency                  |
| API5: BFLA                      | LLM06: Excessive Agency                  |
| API7: SSRF                      | LLM05: Improper Output Handling          |
| API8: Security Misconfiguration | LLM02: Sensitive Information Disclosure  |
Test both frameworks together:

```yaml
redteam:
  plugins:
    - owasp:api
    - owasp:llm
  strategies:
    - jailbreak
    - prompt-injection
```
## LLM-specific considerations

LLM applications introduce unique API security considerations:

- **Unstructured input:** Traditional APIs validate structured input (JSON, XML), but LLMs accept natural language, making input validation more complex.
- **Autonomous call chaining:** LLMs can chain multiple API calls autonomously, creating authorization challenges traditional APIs don't face.
- **Context-dependent authorization:** Authorization decisions may depend on conversation history, making session management critical.
- **Indirect attack surface:** Attackers can manipulate API calls through prompt injection without directly accessing the API.
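One practical consequence of the last point: model-proposed tool calls should be treated as untrusted input and checked against an expected schema before execution. A minimal sketch (the schema and argument names are hypothetical):

```python
# Hypothetical expected argument schema for a single tool.
SCHEMA = {"order_id": str, "limit": int}

def validate_args(args: dict) -> dict:
    """Reject model-proposed arguments with unexpected keys or wrong types."""
    if set(args) != set(SCHEMA):
        raise ValueError(f"unexpected arguments: {sorted(args)}")
    for key, expected in SCHEMA.items():
        if not isinstance(args[key], expected):
            raise ValueError(f"{key} must be {expected.__name__}")
    return args
```

Real deployments typically use a schema library such as JSON Schema or Pydantic, but the principle is the same: the model chooses the call, the application decides whether it is well-formed and permitted.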
## Best practices

When securing LLM applications against API vulnerabilities, apply the same defense-in-depth principles as for traditional APIs: enforce authorization on every object and function access, validate both input and output, and limit resource consumption per user.

API security for LLM applications is an evolving field as new attack patterns emerge. Regular testing with Promptfoo helps ensure your LLM applications maintain a strong API security posture.
To learn more about setting up comprehensive AI red teaming, see Introduction to LLM red teaming and Configuration details.