# provider-llama-cpp (llama.cpp provider)
You can run this example with:

```bash
npx promptfoo@latest init --example provider-llama-cpp
cd provider-llama-cpp
```
To begin, install llama.cpp by following the instructions in the [llama.cpp repository](https://github.com/ggml-org/llama.cpp).
To start the server, use the following command:

```bash
./llama-server -m your_model.gguf --port 8080
```
You can check if it's running by visiting http://localhost:8080.
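From the command line, you can also hit the server's health endpoint (assuming a recent llama.cpp build, which exposes `/health` by default):

```bash
# Should return a small JSON status payload if the server is up
curl http://localhost:8080/health
```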
Edit the prompts in `promptfooconfig.yaml`.
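For orientation, here is a minimal sketch of a config with edited prompts. The prompt text, variables, and `providers` entry below are placeholders — keep the provider configuration that ships with the generated example and change only what you need:

```yaml
# promptfooconfig.yaml (sketch; the generated example contains the real provider entry)
prompts:
  - 'Summarize the following text in one sentence: {{text}}'
  - 'List three key points from the following text: {{text}}'

providers:
  - llama # placeholder -- keep the provider id from the generated config

tests:
  - vars:
      text: 'promptfoo is a tool for evaluating and testing LLM outputs.'
```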
Run the evaluation:

```bash
npx promptfoo@latest eval
```
View the results:

```bash
npx promptfoo@latest view
```
llama.cpp supports many models that can be converted to the GGUF format. We recommend downloading models from Hugging Face; gated or private models require authenticating with your Hugging Face account using their CLI.
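As an illustration, a download with the Hugging Face CLI might look like the following. The repository and file names are placeholders — substitute the GGUF model you actually want:

```bash
# Authenticate once (needed for gated or private models)
huggingface-cli login

# Download a single GGUF file into the current directory (placeholder repo/file names)
huggingface-cli download TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf --local-dir .
```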
We do not format prompts for compatibility with llama.cpp; they are passed to the server as-is. Refer to the documentation or model card for the model you are using and apply its expected prompt template yourself. We provide several formatting examples to illustrate different ways to format your prompts.
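For instance, a model fine-tuned on the Llama 2 chat template expects the instruction markers to appear in the prompt itself. A sketch of a prompt entry written against that template (check your model card for the exact format your model expects):

```yaml
# Prompt formatted with the Llama 2 chat template (placeholder -- match your model's template)
prompts:
  - |
    [INST] <<SYS>>
    You are a helpful assistant.
    <</SYS>>
    {{question}} [/INST]
```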
Because promptfoo does not know which model the llama.cpp server is running, it will not invalidate its cache when you swap models, so you may see stale results after changing models. Run `npx promptfoo@latest eval --no-cache` to evaluate without using the cache.