The Jailbreak Templates strategy tests LLM resistance to known jailbreak techniques using a curated library of static templates from 2022-2023 era attacks.
:::note
This strategy was previously named `prompt-injection`. It was renamed to better reflect what it does: apply static jailbreak templates. The old name `prompt-injection` still works but is deprecated.
:::
This strategy applies 67 static jailbreak templates to your test cases.

It does not cover modern prompt injection techniques such as special-token injection (`<|im_end|>`, `[INST]`, etc.).

For comprehensive prompt injection testing, consider using:

- `jailbreak` - AI-generated adaptive jailbreaks
- `jailbreak:composite` - multi-technique attacks
- The `indirect-prompt-injection` plugin
- Encoding strategies such as `base64`, `rot13`, and `leetspeak`

Add to your `promptfooconfig.yaml`:
```yaml
strategies:
  - jailbreak-templates
```
By default, one template is applied per test case. To test multiple templates:
```yaml
strategies:
  - id: jailbreak-templates
    config:
      sample: 10
```
This has a multiplicative effect on test count: total tests = test cases × sample count. For example, 20 test cases with `sample: 10` produce 200 tests.
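The multiplication above can be sketched as a quick calculation (the numbers and function name are illustrative, not part of the promptfoo API):

```python
# Total red-team tests grow multiplicatively with the `sample` setting.
# Hypothetical helper for illustration only; not a promptfoo API.
def total_tests(num_test_cases: int, sample: int) -> int:
    """Each test case is expanded once per sampled template."""
    return num_test_cases * sample

# 20 base test cases with `sample: 10` yield 200 generated tests.
print(total_tests(20, 10))  # 200
```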
To save time and cost, limit to harmful plugins only:
```yaml
strategies:
  - id: jailbreak-templates
    config:
      sample: 5
      harmfulOnly: true
```
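Putting it together, a fuller config might look like the sketch below. The plugin id here is an assumption for illustration; check the plugin reference for the exact names your promptfoo version supports.

```yaml
# Sketch only - the plugin id below is an assumption, verify against your version
plugins:
  - harmful # harmful-content plugin collection (assumed id)
strategies:
  - id: jailbreak-templates
    config:
      sample: 5
      harmfulOnly: true # only expand test cases from harmful plugins
```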
Use this strategy when you want a fast, deterministic baseline against well-known, publicly documented jailbreaks.

Consider other strategies, such as `jailbreak` or `jailbreak:composite`, when you need adaptive attacks that reflect more recent techniques.
The old strategy name `prompt-injection` still works but will show a deprecation warning:
```yaml
# Deprecated - still works but not recommended
strategies:
  - prompt-injection

# Recommended
strategies:
  - jailbreak-templates
```
For a comprehensive overview of LLM vulnerabilities, visit Types of LLM Vulnerabilities.