site/docs/integrations/burp.md
This guide shows how to integrate Promptfoo's application-level jailbreak creation with Burp Suite's Intruder feature for security testing of LLM-powered applications.
The end result is a Burp Suite Intruder configuration that can be used to test for LLM jailbreak vulnerabilities.
(In the above example, we've jailbroken the OpenAI API directly to return an unhinged response.)
Burp Suite integration allows you to:
npm install -g promptfoo)If you've already run an evaluation with test cases, you can export them directly from the web UI:
This will generate a .burp file containing all unique test inputs from your evaluation, with proper JSON escaping and URL encoding.
First, generate adversarial test cases and export them in Burp format:
promptfoo redteam generate -o payloads.burp --burp-escape-json
:::tip
The --burp-escape-json flag is important when your payloads will be inserted into JSON requests. It ensures that special characters are properly escaped to maintain valid JSON syntax.
:::
payloads.burp fileHere's an example of generating targeted test cases. In promptfooconfig.yaml:
redteam:
plugins:
- harmful
strategies:
- jailbreak
- jailbreak:composite
- prompt-injection
Generate Burp-compatible payloads:
promptfoo redteam generate -o payloads.burp --burp-escape-json
This will create a file with payloads ready for use in Burp Intruder.