
TensorZero Recipe: Supervised Fine-Tuning with Axolotl

recipes/supervised_fine_tuning/axolotl/README.md


The axolotl.ipynb notebook provides a step-by-step recipe for supervised fine-tuning of models with Axolotl, using data collected by the TensorZero Gateway.

You will need to set a few environment variables in the shell where your notebook runs.

  • Set TENSORZERO_CLICKHOUSE_URL=http://chuser:chpassword@localhost:8123/tensorzero.
  • Set HF_TOKEN=<your-hf-token> to your Hugging Face token to use gated models like Llama and Gemma.
  • You'll also need to install the CLI tool firectl on your machine and sign in with firectl signin. You can test that this worked with firectl whoami. We use firectl to deploy to Fireworks in this example, but you can serve the model however you prefer.
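As a quick sanity check before launching the notebook, you can verify these variables from Python. This is a minimal sketch: the helpers `missing_vars` and `clickhouse_database` are illustrative names of ours, not part of TensorZero.

```python
import os
from urllib.parse import urlparse

# Variables the notebook expects, per the list above.
REQUIRED_VARS = ["TENSORZERO_CLICKHOUSE_URL", "HF_TOKEN"]

def missing_vars(env=os.environ):
    """Return the names of required variables that are unset."""
    return [name for name in REQUIRED_VARS if name not in env]

def clickhouse_database(url):
    """Extract the database name (the URL path) from a ClickHouse URL."""
    return urlparse(url).path.lstrip("/")
```

For example, `clickhouse_database("http://chuser:chpassword@localhost:8123/tensorzero")` returns `"tensorzero"`, the database the gateway writes to.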

Setup

We recommend using Python 3.11+ and uv.

```bash
export UV_TORCH_BACKEND=cu126
uv sync
```
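Since the recipe recommends Python 3.11+, a small guard near the top of the notebook can flag older interpreters early. This is a hedged sketch; `meets_recommendation` is an illustrative helper, not part of the recipe.

```python
import sys

def meets_recommendation(version_info=sys.version_info):
    """Return True if the interpreter meets the recommended Python 3.11+."""
    return tuple(version_info[:2]) >= (3, 11)

if not meets_recommendation():
    print("Warning: this recipe recommends Python 3.11 or newer.")
```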

Optional: Dev Container

We have provided a Dev Container config in .devcontainer to help users of VS Code who want to run the notebook on a remote server. To use our container, follow the VS Code Instructions, then proceed with the "Using uv" instructions below.