Fal

OpenClaw ships a bundled fal provider for hosted image and video generation.

| Property | Value |
| --- | --- |
| Provider | `fal` |
| Auth | `FAL_KEY` (canonical; `FAL_API_KEY` also works as a fallback) |
| API | fal model endpoints |

Getting started

<Steps>
<Step title="Set the API key">

```bash
openclaw onboard --auth-choice fal-api-key
```

</Step>
<Step title="Set a default image model">

```json5
{
  agents: {
    defaults: {
      imageGenerationModel: {
        primary: "fal/fal-ai/flux/dev",
      },
    },
  },
}
```

</Step>
</Steps>
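If you prefer environment variables over onboarding, the auth table above says `FAL_KEY` is canonical and `FAL_API_KEY` is the fallback. A minimal sketch (the key value is a placeholder):

```shell
# FAL_KEY is read first; FAL_API_KEY is only a fallback.
export FAL_KEY="fal-xxxxxxxx"   # placeholder, not a real key

# Show which variable this shell session would expose to OpenClaw:
echo "${FAL_KEY:-${FAL_API_KEY:-unset}}"   # prints fal-xxxxxxxx
```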

Image generation

The bundled fal image-generation provider defaults to fal/fal-ai/flux/dev.

| Capability | Value |
| --- | --- |
| Max images | 4 per request |
| Edit mode | Enabled, 1 reference image |
| Size overrides | Supported |
| Aspect ratio | Supported |
| Resolution | Supported |
| Output format | `png` or `jpeg` |
<Warning> The fal image edit endpoint does **not** support `aspectRatio` overrides. </Warning>

Use `outputFormat: "png"` when you want PNG output. fal does not declare an explicit transparent-background control in OpenClaw, so `background: "transparent"` is reported as an ignored override for fal models.
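As an illustration, a request that pins PNG output might look like the sketch below. Only `outputFormat` and `background` come from this page; the prompt value is made up, and the exact request shape is defined by the shared image tool, not here:

```json5
{
  prompt: "a watercolor fox",     // illustrative prompt
  outputFormat: "png",            // honored by fal models
  // background: "transparent",   // would be reported as an ignored override
}
```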

To use fal as the default image provider:

```json5
{
  agents: {
    defaults: {
      imageGenerationModel: {
        primary: "fal/fal-ai/flux/dev",
      },
    },
  },
}
```

Video generation

The bundled fal video-generation provider defaults to fal/fal-ai/minimax/video-01-live.

| Capability | Value |
| --- | --- |
| Modes | Text-to-video, single-image reference, Seedance reference-to-video |
| Runtime | Queue-backed submit/status/result flow for long-running jobs |
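The queue-backed runtime can be pictured as a submit/status/result polling loop. The sketch below is purely illustrative: `submit`, `status`, and `result` are hypothetical stand-ins for the provider's queue endpoints, not the actual fal client API, and the in-memory fake exists only to make the sketch runnable:

```typescript
// Minimal sketch of a queue-backed submit/status/result flow.
type Status = "queued" | "running" | "done";

interface Queue {
  submit(prompt: string): string; // returns a request id
  status(id: string): Status;
  result(id: string): string | undefined;
}

async function generateVideo(queue: Queue, prompt: string): Promise<string> {
  const id = queue.submit(prompt);
  // Long-running jobs: poll status until the job completes.
  while (queue.status(id) !== "done") {
    await new Promise((resolve) => setTimeout(resolve, 10));
  }
  return queue.result(id)!;
}

// In-memory fake queue so the sketch runs without network access.
function fakeQueue(): Queue {
  const jobs = new Map<string, { ticks: number; prompt: string }>();
  let n = 0;
  return {
    submit(prompt) {
      const id = `job-${++n}`;
      jobs.set(id, { ticks: 0, prompt });
      return id;
    },
    status(id) {
      const job = jobs.get(id)!;
      job.ticks += 1;
      return job.ticks < 3 ? "running" : "done";
    },
    result(id) {
      return `video for: ${jobs.get(id)!.prompt}`;
    },
  };
}
```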
<AccordionGroup>
<Accordion title="Available video models">

**HeyGen video-agent:**
- `fal/fal-ai/heygen/v2/video-agent`

**Seedance 2.0:**

- `fal/bytedance/seedance-2.0/fast/text-to-video`
- `fal/bytedance/seedance-2.0/fast/image-to-video`
- `fal/bytedance/seedance-2.0/fast/reference-to-video`
- `fal/bytedance/seedance-2.0/text-to-video`
- `fal/bytedance/seedance-2.0/image-to-video`
- `fal/bytedance/seedance-2.0/reference-to-video`
</Accordion>
<Accordion title="Seedance 2.0 config example">

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "fal/bytedance/seedance-2.0/fast/text-to-video",
      },
    },
  },
}
```

</Accordion>
<Accordion title="Seedance 2.0 reference-to-video config example">

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "fal/bytedance/seedance-2.0/fast/reference-to-video",
      },
    },
  },
}
```
Reference-to-video accepts up to 9 images, 3 videos, and 3 audio references
through the shared `video_generate` `images`, `videos`, and `audioRefs`
parameters, with at most 12 total reference files.
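These limits (9 images, 3 videos, 3 audio refs, 12 files total) can be checked up front. A small sketch, with an illustrative function name not taken from OpenClaw:

```typescript
// Validate Seedance reference-to-video attachment counts before submitting:
// per-type caps of 9 images / 3 videos / 3 audio refs, and 12 files overall.
function validateReferences(images: number, videos: number, audioRefs: number): boolean {
  return (
    images <= 9 &&
    videos <= 3 &&
    audioRefs <= 3 &&
    images + videos + audioRefs <= 12
  );
}
```

Note that the per-type maxima sum to 15, so the 12-file total cap can reject a request even when every per-type limit is respected.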
</Accordion>
<Accordion title="HeyGen video-agent config example">

```json5
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "fal/fal-ai/heygen/v2/video-agent",
      },
    },
  },
}
```

</Accordion>
</AccordionGroup>

<Tip>
Use `openclaw models list --provider fal` to see the full list of available fal models, including any recently added entries.
</Tip>

<CardGroup cols={2}>
<Card title="Image generation" href="/tools/image-generation" icon="image">
Shared image tool parameters and provider selection.
</Card>
<Card title="Video generation" href="/tools/video-generation" icon="video">
Shared video tool parameters and provider selection.
</Card>
<Card title="Configuration reference" href="/gateway/config-agents#agent-defaults" icon="gear">
Agent defaults including image and video model selection.
</Card>
</CardGroup>