fal.ai

The fal provider supports the fal.ai inference API using the fal-js client, providing a native experience for using fal.ai models in your evaluations.

Setup

  1. Install the fal client:

     ```bash
     npm install --save @fal-ai/client
     ```

  2. Create an API key in the fal dashboard.

  3. Set the environment variable:

     ```bash
     export FAL_KEY=your_api_key_here
     ```

Provider Format

To run a model, specify the model type and model name: fal:<model_type>:<model_name>.

  • fal:image:fal-ai/flux-pro/v1.1-ultra - Professional-grade image generation with up to 2K resolution
  • fal:image:fal-ai/flux/schnell - Fast, high-quality image generation in 1-4 steps
  • fal:image:fal-ai/fast-sdxl - High-speed SDXL with LoRA support
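Putting the format together, a minimal `promptfooconfig.yaml` using one of these models might look like the following sketch (the prompt and test variable are illustrative, not from the fal docs):

```yaml
# Minimal eval sketch using the fal provider format (illustrative values)
prompts:
  - 'A watercolor painting of {{subject}}'

providers:
  - fal:image:fal-ai/flux/schnell

tests:
  - vars:
      subject: a lighthouse at dusk
```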

:::info

Browse the complete model gallery for the latest models and detailed specifications. Model availability and capabilities are frequently updated.

:::

Recommended models by use case:

  • For speed: fal:image:fal-ai/flux/schnell - Ultra-fast generation in 1-4 steps
  • For quality: fal:image:fal-ai/flux/dev - High-quality 12B parameter model
  • For highest quality: fal:image:fal-ai/imagen4/preview - Google's highest quality model
  • For text/logos: fal:image:fal-ai/ideogram/v3 - Exceptional typography handling
  • For professional work: fal:image:fal-ai/flux-pro/v1.1-ultra - Up to 2K resolution
  • For vector art: fal:image:fal-ai/recraft/v3/text-to-image - SOTA with vector art and typography
  • For 4K images: fal:image:fal-ai/sana - 4K generation in under a second
  • For multimodal: fal:image:fal-ai/bagel - 7B parameter text and image model

Browse all models at fal.ai/models.

Environment Variables

| Variable  | Description                              |
| --------- | ---------------------------------------- |
| `FAL_KEY` | Your API key for authentication with fal |

Client Options

Provider config values are sent to the fal model as input, except for apiKey and the optional client block. Use client for options that should be passed to the underlying @fal-ai/client SDK instead of the model endpoint.

For example, when `proxyUrl` is passed as a plain string, @fal-ai/client treats it as a browser-only option. To route promptfoo's Node.js CLI requests through a proxy, use the object form and set `when: always`:

```yaml
providers:
  - id: fal:image:fal-ai/flux/schnell
    config:
      client:
        proxyUrl:
          url: http://localhost:8787/api/fal/proxy
          when: always
```

Configuration

Configure the fal provider in your promptfoo configuration file. Here's an example using fal-ai/flux/schnell:

:::info

Configuration parameters vary by model. For example, fast-sdxl supports additional parameters like scheduler and guidance_scale. Always check the model-specific documentation for supported parameters.

:::
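As an illustration of model-specific parameters, a `fast-sdxl` provider entry might look like the following sketch. The `scheduler` value and the numbers shown are assumptions for illustration only; check the fast-sdxl endpoint documentation for the actual accepted values:

```yaml
providers:
  - id: fal:image:fal-ai/fast-sdxl
    config:
      # fast-sdxl-specific parameters (illustrative values; verify against the model docs)
      scheduler: 'DPM++ 2M'
      guidance_scale: 7.5
      num_inference_steps: 25
```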

Basic Setup

```yaml
providers:
  - id: fal:image:fal-ai/flux/schnell
    config:
      apiKey: your_api_key_here # Alternative to FAL_KEY environment variable
      image_size:
        width: 1024
        height: 1024
      num_inference_steps: 8
      seed: 6252023
```

Advanced Options

```yaml
providers:
  - id: fal:image:fal-ai/flux/dev
    config:
      num_inference_steps: 28
      guidance_scale: 7.5
      seed: 42
      image_size:
        width: 1024
        height: 1024
```

Configuration Options

| Parameter             | Type             | Description                              | Example             |
| --------------------- | ---------------- | ---------------------------------------- | ------------------- |
| `apiKey`              | string           | The API key for authentication with fal  | `your_api_key_here` |
| `client.proxyUrl`     | string or object | fal SDK proxy URL configuration          | `{ url, when }`     |
| `image_size.width`    | number           | The width of the generated image         | `1024`              |
| `image_size.height`   | number           | The height of the generated image        | `1024`              |
| `num_inference_steps` | number           | The number of inference steps to run     | `4` to `50`         |
| `seed`                | number           | Sets a seed for reproducible results     | `42`                |
| `guidance_scale`      | number           | Prompt adherence (model-dependent)       | `3.5` to `15`       |

See Also