Code Example: How to fine-tune your LLM with Supervised Fine-Tuning (SFT)

examples/docs/guides/optimization/supervised-fine-tuning-sft/README.md


This folder contains the code for the Guides > Optimization > Supervised Fine-Tuning page in the documentation.

This example focuses on OpenAI, but TensorZero also integrates with other providers.

Prerequisites

  1. Set the OPENAI_API_KEY environment variable
  2. Set the TENSORZERO_POSTGRES_URL environment variable (e.g., postgres://postgres:postgres@localhost:5432/tensorzero)
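Assuming you use the local Postgres instance started by the bundled Docker Compose setup, the variables can be exported like this (the API key value is a placeholder — substitute your own):

```shell
# Placeholder — replace with your real OpenAI API key.
export OPENAI_API_KEY="sk-..."

# Connection string for the local Postgres (matches the example above).
export TENSORZERO_POSTGRES_URL="postgres://postgres:postgres@localhost:5432/tensorzero"
```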

Running the Example

  1. Start the required services:

     ```bash
     docker compose up -d
     ```

  2. Install dependencies:

     ```bash
     uv sync
     ```

  3. Run the example:

     ```bash
     uv run python main.py
     ```

The script will:

  1. Run inferences on the NER dataset
  2. Submit demonstration feedback with ground-truth labels
  3. Launch an SFT job with OpenAI
  4. Poll until the job completes
  5. Print the configuration needed to use the fine-tuned model
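Step 4 above boils down to a simple polling loop. Here is a minimal sketch of that pattern, using a stand-in `get_status` callable instead of the real OpenAI fine-tuning job API (the function name and statuses below are illustrative, not the script's actual code):

```python
import time

def poll_until_complete(get_status, interval_s=1.0, max_attempts=10):
    """Call get_status() repeatedly until the job reaches a terminal state."""
    for _ in range(max_attempts):
        status = get_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("job did not finish within the polling budget")

# Simulated job that completes on the third poll.
statuses = iter(["pending", "running", "completed"])
print(poll_until_complete(lambda: next(statuses), interval_s=0.0))  # → completed
```

In the real script, `get_status` would fetch the SFT job's state from the provider, and on success the fine-tuned model identifier is used to print the new TensorZero configuration.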