# fal
The fal provider supports the fal.ai inference API using the fal-js client, providing a native experience for using fal.ai models in your evaluations.
To use the fal provider:

1. Install the fal client:

   ```sh
   npm install --save @fal-ai/client
   ```

2. Create an API key in the fal dashboard.

3. Set the environment variable:

   ```sh
   export FAL_KEY=your_api_key_here
   ```
To run a model, specify the model type and model name: `fal:<model_type>:<model_name>`. For example:

- `fal:image:fal-ai/flux-pro/v1.1-ultra` - Professional-grade image generation with up to 2K resolution
- `fal:image:fal-ai/flux/schnell` - Fast, high-quality image generation in 1-4 steps
- `fal:image:fal-ai/fast-sdxl` - High-speed SDXL with LoRA support

:::info
Browse the complete model gallery for the latest models and detailed specifications. Model availability and capabilities are frequently updated.
:::
- **For speed:** `fal:image:fal-ai/flux/schnell` - Ultra-fast generation in 1-4 steps
- **For quality:** `fal:image:fal-ai/flux/dev` - High-quality 12B parameter model
- **For highest quality:** `fal:image:fal-ai/imagen4/preview` - Google's highest quality model
- **For text/logos:** `fal:image:fal-ai/ideogram/v3` - Exceptional typography handling
- **For professional work:** `fal:image:fal-ai/flux-pro/v1.1-ultra` - Up to 2K resolution
- **For vector art:** `fal:image:fal-ai/recraft/v3/text-to-image` - SOTA vector art and typography
- **For 4K images:** `fal:image:fal-ai/sana` - 4K generation in under a second
- **For multimodal:** `fal:image:fal-ai/bagel` - 7B parameter text and image model
Browse all models at [fal.ai/models](https://fal.ai/models).
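Because each provider is just a model ID, you can compare several fal models against the same prompts in a single eval. A minimal sketch using model IDs from the list above:

```yaml
providers:
  - fal:image:fal-ai/flux/schnell
  - fal:image:fal-ai/flux/dev
```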
| Variable  | Description                              |
| --------- | ---------------------------------------- |
| `FAL_KEY` | Your API key for authentication with fal |
Provider config values are sent to the fal model as input, except for `apiKey` and the optional `client` block. Use `client` for options that should be passed to the underlying `@fal-ai/client` SDK instead of the model endpoint.

For example, `@fal-ai/client` applies string proxy URLs only in the browser. To route promptfoo's Node.js CLI requests through a proxy, use the object form and set `when: always`:
```yaml
providers:
  - id: fal:image:fal-ai/flux/schnell
    config:
      client:
        proxyUrl:
          url: http://localhost:8787/api/fal/proxy
          when: always
```
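The split between SDK options and model input can be sketched in TypeScript. This is an illustrative model only, not promptfoo's actual implementation; `FalProviderConfig` and `toModelInput` are hypothetical names:

```typescript
// Illustrative sketch (hypothetical names): everything except `apiKey` and
// `client` is forwarded to the fal model endpoint as its input payload.
type FalProviderConfig = {
  apiKey?: string;
  client?: { proxyUrl?: string | { url: string; when?: 'always' } };
  [key: string]: unknown;
};

function toModelInput(config: FalProviderConfig): Record<string, unknown> {
  // Strip the SDK-level keys; everything else is model input.
  const { apiKey, client, ...input } = config;
  return input;
}

const input = toModelInput({
  apiKey: 'your_api_key_here',
  client: { proxyUrl: 'http://localhost:8787/api/fal/proxy' },
  image_size: { width: 1024, height: 1024 },
  num_inference_steps: 8,
});
// `input` now contains only image_size and num_inference_steps
```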
Configure the fal provider in your promptfoo configuration file. Here's an example using `fal-ai/flux/schnell`:
:::info
Configuration parameters vary by model. For example, `fast-sdxl` supports additional parameters like `scheduler` and `guidance_scale`. Always check the model-specific documentation for supported parameters.
:::
```yaml
providers:
  - id: fal:image:fal-ai/flux/schnell
    config:
      apiKey: your_api_key_here # Alternative to FAL_KEY environment variable
      image_size:
        width: 1024
        height: 1024
      num_inference_steps: 8
      seed: 6252023
```
Here's an example using `fal-ai/flux/dev` with additional parameters:

```yaml
providers:
  - id: fal:image:fal-ai/flux/dev
    config:
      num_inference_steps: 28
      guidance_scale: 7.5
      seed: 42
      image_size:
        width: 1024
        height: 1024
```
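Putting it together, a complete `promptfooconfig.yaml` might look like the sketch below. The prompt, variable, and test case are illustrative placeholders, not part of any fal or promptfoo default:

```yaml
prompts:
  - 'A serene landscape featuring {{subject}}'

providers:
  - id: fal:image:fal-ai/flux/schnell
    config:
      image_size:
        width: 1024
        height: 1024
      num_inference_steps: 4

tests:
  - vars:
      subject: mountains at sunrise
```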
| Parameter             | Type             | Description                              | Example              |
| --------------------- | ---------------- | ---------------------------------------- | -------------------- |
| `apiKey`              | string           | The API key for authentication with fal  | `your_api_key_here`  |
| `client.proxyUrl`     | string or object | fal SDK proxy URL configuration          | `{ url, when }`      |
| `image_size.width`    | number           | The width of the generated image         | `1024`               |
| `image_size.height`   | number           | The height of the generated image        | `1024`               |
| `num_inference_steps` | number           | The number of inference steps to run     | 4 to 50              |
| `seed`                | number           | Sets a seed for reproducible results     | `42`                 |
| `guidance_scale`      | number           | Prompt adherence (model-dependent)       | 3.5 to 15            |