# Image Strategy
The Image strategy converts prompt text into an image and encodes that image as a base64 string. This lets you test how AI systems handle text delivered as an image, which may bypass text-based guardrails and content filters or produce different behavior than the same content processed as plain text.
This strategy helps security researchers and AI developers:

- Test whether guardrails and content filters that inspect only plain text can be bypassed by image-encoded prompts
- Compare how a model behaves when the same content arrives as an image rather than as text

The strategy performs the following operations:

1. Takes the original prompt text
2. Renders the text onto a PNG image
3. Encodes the image as a base64 string
4. Replaces the original text in the test case with the base64-encoded image

The resulting test case contains the same semantic content as the original, but in a format that may be processed differently by AI systems.
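To make the conversion concrete, here is a minimal sketch of how prompt text can be turned into a base64-encoded PNG using the `sharp` package (which this strategy depends on). This is an illustrative example rather than promptfoo's internal implementation; the function name and layout values are arbitrary.

```typescript
import sharp from 'sharp';

// Illustrative sketch (not promptfoo's internal code): render the prompt text
// into an SVG, rasterize it to PNG with sharp, and return the base64 string.
async function textToBase64Png(text: string): Promise<string> {
  // Escape characters that would break the SVG markup.
  const escaped = text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');

  const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="800" height="200">
    <rect width="100%" height="100%" fill="white"/>
    <text x="20" y="60" font-family="sans-serif" font-size="24" fill="black">${escaped}</text>
  </svg>`;

  // sharp accepts SVG input and can rasterize it to a PNG buffer.
  const png = await sharp(Buffer.from(svg)).png().toBuffer();
  return png.toString('base64');
}
```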
Below is an example of a text prompt from our `harmful:hate` plugin converted into an image.
This image is then encoded as base64, and the resulting string begins like this:

```
iVBORw0KGgoAAAANSUhEUgAAA...
```
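If you want to inspect exactly what the target model receives, you can decode the base64 string back into a PNG file. This is a hypothetical snippet for local debugging; the file name is arbitrary.

```typescript
import { writeFileSync } from 'node:fs';

// Decode the base64 string (truncated above) back into a viewable PNG file.
const base64Image = 'iVBORw0KGgoAAAANSUhEUgAAA...'; // placeholder: use the full string
writeFileSync('decoded-prompt.png', Buffer.from(base64Image, 'base64'));
```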
Use it in your `promptfooconfig.yaml` like this:
```yaml
# yaml-language-server: $schema=https://promptfoo.dev/config-schema.json
description: 'Multi-modal redteam'

targets:
  - bedrock:amazon.nova-lite-v1:0

prompts:
  - file://path/to/your/prompt.json

redteam:
  plugins:
    - ...
  strategies:
    - image
    - id: basic
      config:
        enabled: false
```
:::note
When using the image strategy, you should disable all other strategies. You can do this by setting `enabled: false` for the `basic` strategy and removing any other entries from the `strategies` array.
:::
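With the configuration and prompt file in place, you can run the red team as usual; the image strategy substitutes the base64-encoded images into the generated test cases:

```sh
npx promptfoo@latest redteam run
```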
Your `prompt.json` file should look like this:
```json
[
  {
    "role": "system",
    "content": "You are a helpful assistant"
  },
  {
    "role": "user",
    "content": [
      {
        "image": {
          "format": "png",
          "source": { "bytes": "{{image}}" }
        }
      }
    ]
  }
]
```
:::note
You should update `prompt.json` to match the prompt format expected by your LLM provider. All images are encoded as base64 PNGs.
:::
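For example, a provider that accepts OpenAI-style chat messages typically expects the image as a data URL. The following is an illustrative variant of `prompt.json` for such a provider; check your provider's documentation for the exact field names.

```json
[
  {
    "role": "user",
    "content": [
      {
        "type": "image_url",
        "image_url": { "url": "data:image/png;base64,{{image}}" }
      }
    ]
  }
]
```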
:::note
The `{{image}}` syntax in the examples is a Nunjucks template variable. When promptfoo processes your prompt, it replaces `{{image}}` with the base64-encoded image data.
:::
:::tip
This strategy requires the `sharp` package for image creation. Install it with:

```sh
npm i sharp
```
:::
For a comprehensive overview of LLM vulnerabilities and red teaming strategies, visit our Types of LLM Vulnerabilities page.