# DeepSeek
DeepSeek provides an OpenAI-compatible API for their language models, with specialized models for both general chat and advanced reasoning tasks. The DeepSeek provider is compatible with all the options provided by the OpenAI provider.
To use DeepSeek, set the `DEEPSEEK_API_KEY` environment variable or specify `apiKey` in your config.

Basic configuration example:
```yaml
providers:
  - id: deepseek:deepseek-chat
    config:
      temperature: 0.7
      max_tokens: 4000
      apiKey: YOUR_DEEPSEEK_API_KEY
  - id: deepseek:deepseek-reasoner # DeepSeek-R1 model
    config:
      max_tokens: 8000
```
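If you prefer the environment variable over `apiKey` in the config, export it before running an eval (placeholder value shown):

```shell
# Placeholder value - replace with your actual DeepSeek API key
export DEEPSEEK_API_KEY=your_api_key_here
```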
The provider supports the following configuration options:

- `temperature`
- `max_tokens`
- `cost`, `inputCost`, `outputCost` - Override promptfoo's pricing estimates (`inputCost` and `outputCost` take precedence over `cost`)
- `top_p`, `presence_penalty`, `frequency_penalty`
- `stream`
- `showThinking` - Control whether reasoning content is included in the output (default: `true`; applies to the `deepseek-reasoner` model)

:::note
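As an illustration, the cost overrides could look like this (a sketch; the per-token dollar figures are placeholders, not DeepSeek's actual prices):

```yaml
providers:
  - id: deepseek:deepseek-chat
    config:
      # Placeholder prices in $ per token; these take precedence over `cost`
      inputCost: 0.0000002
      outputCost: 0.0000008
```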
The API model names are aliases that automatically point to the latest versions: both deepseek-chat and deepseek-reasoner currently point to DeepSeek-V3.2, with the chat model using non-thinking mode and the reasoner model using thinking mode.
:::
You can control whether reasoning content appears in the output with the `showThinking` parameter (see below).

:::warning
The reasoning model does not support temperature, top_p, presence_penalty, frequency_penalty, logprobs, or top_logprobs parameters. Setting these parameters will not trigger an error but will have no effect.
:::
Here's an example comparing DeepSeek with OpenAI on reasoning tasks:
```yaml
providers:
  - id: deepseek:deepseek-reasoner
    config:
      max_tokens: 8000
      showThinking: true # Include reasoning content in output (default)
  - id: openai:o1
    config:
      temperature: 0.0

prompts:
  - 'Solve this step by step: {{math_problem}}'

tests:
  - vars:
      math_problem: 'What is the derivative of x^3 + 2x with respect to x?'
```
The DeepSeek-R1 model (deepseek-reasoner) includes detailed reasoning steps in its output. You can control whether this reasoning content is shown using the showThinking parameter:
```yaml
providers:
  - id: deepseek:deepseek-reasoner
    config:
      showThinking: false # Hide reasoning content from output
```
When showThinking is set to true (default), the output includes both reasoning and the final answer in a standardized format:
```
Thinking: <reasoning content>

<final answer>
```
When set to false, only the final answer is included in the output. This is useful when you want better reasoning quality but don't want to expose the reasoning process to end users or in your assertions.
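For example, with reasoning hidden, a standard promptfoo assertion matches only the final answer rather than the intermediate thinking (a sketch using the `contains` assertion type):

```yaml
providers:
  - id: deepseek:deepseek-reasoner
    config:
      showThinking: false
prompts:
  - 'What is 15 * 12? Answer with the number only.'
tests:
  - assert:
      - type: contains
        value: '180'
```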
See our complete example that benchmarks DeepSeek-R1 against OpenAI's o1 model on MMLU reasoning tasks.
The provider uses DeepSeek's OpenAI-compatible API endpoint at `https://api.deepseek.com/v1`.
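Because the endpoint is OpenAI-compatible, you could also point promptfoo's generic OpenAI provider at it directly; this sketch assumes the OpenAI provider's `apiBaseUrl` and `apiKeyEnvar` options:

```yaml
providers:
  - id: openai:chat:deepseek-chat
    config:
      apiBaseUrl: https://api.deepseek.com/v1
      apiKeyEnvar: DEEPSEEK_API_KEY
```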