SKILL.md
This skill covers running gpt4free as a local LLM server with an OpenAI-compatible REST API, custom model routing (config.yaml), and integration with bots like Clawbot or OpenClaw.
Usage:

- Start the API server: `python -m g4f --port 8080` (or use `g4f api --debug --port 8080` for debug output)
- Send OpenAI-compatible requests to the `/v1` endpoint (e.g., POST to `http://localhost:8080/v1/chat/completions`)
- Define models in `config.yaml` to aggregate and fall back across providers
- Place `config.yaml` in your cookies directory (e.g., `~/.g4f/cookies/config.yaml`)
- Integrate with OpenClaw via `scripts/patch-openclaw.py`
- Test with `g4f client "Hello" --model openclaw` or the Python client

Setup:

1. Install dependencies (`pip install -r requirements.txt`)
2. Start the server: `python -m g4f --port 8080`
3. Create a `config.yaml` for custom model routing:
```yaml
models:
  - name: "openclaw"
    providers:
      - provider: "GeminiCLI"
        model: "gemini-3-flash-preview"
        condition: "quota.models.gemini-3-flash-preview.remainingFraction > 0 and error_count < 3"
      - provider: "Antigravity"
        model: "gemini-3-flash"
      - provider: "PollinationsAI"
        model: "openai"
```
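With the server running, the custom `openclaw` model can be exercised directly against the `/v1/chat/completions` endpoint. A minimal sketch using only the standard library; the payload shape follows the OpenAI chat completions API, and the helper names here are illustrative, not part of g4f:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    # OpenAI-compatible chat completions payload; g4f routes the
    # "openclaw" model name through the providers listed in config.yaml.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send_chat_request(base_url: str, payload: dict) -> dict:
    # POST the JSON payload to <base_url>/chat/completions and parse the reply.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the server up, `send_chat_request("http://localhost:8080/v1", build_chat_request("openclaw", "Hello"))` returns the completion object, and the reply text is at `["choices"][0]["message"]["content"]`.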
To integrate a bot, configure it to use `http://localhost:8080/v1` as the base URL (see `scripts/patch-openclaw.py`).