# LM Studio Provider Example
You can run this example with:

```bash
npx promptfoo@latest init --example provider-lm-studio
cd provider-lm-studio
```
This example demonstrates how to use Promptfoo with LM Studio for prompt evaluation, sending requests to a locally hosted language model through LM Studio's OpenAI-compatible API.
## Setup

1. **Download the model**: In LM Studio, download the bartowski/gemma-2-9b-it-GGUF model.
2. **Start the LM Studio server**: Load the bartowski/gemma-2-9b-it-GGUF model and start the local server (it listens at http://localhost:1234 by default).
3. **Configure Promptfoo**: Create a promptfooconfig.yaml file with the following content:
```yaml
providers:
  - id: 'http://localhost:1234/v1/chat/completions'
    config:
      method: 'POST'
      headers:
        'Content-Type': 'application/json'
      body:
        messages: '{{ prompt }}'
        model: 'bartowski/gemma-2-9b-it-GGUF'
        temperature: 0.7
        max_tokens: -1
      transformResponse: 'json.choices[0].message.content'
```
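The provider block above only tells Promptfoo how to reach LM Studio; a runnable eval also needs prompts and test cases. A minimal sketch follows — the prompt text, variable, and assertion are illustrative, not part of this example's actual files:

```yaml
prompts:
  - 'Summarize in one sentence: {{topic}}'

tests:
  - vars:
      topic: 'Why local LLMs are useful for prompt testing'
    assert:
      - type: contains
        value: 'local'
```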
Note that you can view the specific configuration for each model within LM Studio's examples in the server tab.
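To make the config concrete, the sketch below builds the JSON body the HTTP provider would POST (assuming `{{ prompt }}` renders to a chat-style messages array) and then applies the `transformResponse` expression to a sample response. Both the rendered prompt and the response payload are illustrative, not real LM Studio output:

```python
import json

# Body the HTTP provider would POST, assuming '{{ prompt }}' renders to a
# chat-style messages array (the exact rendering depends on your prompt file).
body = {
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "model": "bartowski/gemma-2-9b-it-GGUF",
    "temperature": 0.7,
    "max_tokens": -1,  # -1 lets the model generate until it stops on its own
}
print(json.dumps(body, indent=2))

# Illustrative OpenAI-style response; real output varies by model and prompt.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello there!"}}
    ]
}

# Equivalent of transformResponse: 'json.choices[0].message.content'
content = response["choices"][0]["message"]["content"]
print(content)  # → Hello there!
```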
## Run the Evaluation

```bash
npx promptfoo eval
```

## View Results

```bash
npx promptfoo view
```
For more information, see the Promptfoo documentation on how to set up a custom HTTP provider.