site/docs/code-scanning/index.md
Promptfoo Code Scanning uses AI agents to find LLM-related vulnerabilities in your codebase and helps you fix them before you merge. By focusing specifically on LLM-related vulnerabilities, it finds issues that more general security scanners might miss.
The scanner examines code changes for common LLM security risks including prompt injection, PII exposure, and excessive agency. Rather than just analyzing the surface-level diff, it traces data flows deep into your codebase to understand how user inputs reach LLM prompts, how outputs are used, and what capabilities your LLM has access to.
This agentic approach catches subtle security issues that span multiple files, while maintaining a high signal-to-noise ratio to avoid alert fatigue.
Automatically scan pull requests with findings posted as review comments. This is the recommended way to use the scanner if your code is on GitHub. Set up the GitHub Action →
Scan code directly in your editor with real-time feedback, inline diagnostics, and quick fixes. Available for enterprise customers. Learn more →
Run scans locally or in any CI environment. Use the CLI →
Findings are classified by severity to help you prioritize:
Configure minimum severity thresholds in your scan settings.
Tailor scans to your needs by providing custom guidance:
Example:
guidance: |
Ignore the /examples directory—it contains demo code only.
Treat potential PII exposure as critical.
For this app, sending proprietary code to OpenAI or Claude is not a vulnerability.
Use Zod schemas for validation when suggesting fixes.
Scans run on Promptfoo Cloud by default. For organizations that need to run scans on their own infrastructure, code scanning is available in Promptfoo Enterprise On-Prem.