deploy-k8s/README.md
# Deploy ZeroClaw on OpenShift

Deploy a minimal ZeroClaw agent on OpenShift with an external LLM provider (Anthropic, OpenAI, or any OpenAI-compatible API).
## Prerequisites

- `oc` CLI authenticated to your OpenShift cluster

## Setup

1. Copy the sample manifests to create your real ones:

   ```sh
   for f in deploy-k8s/*-sample.yaml; do cp "$f" "${f/-sample/}"; done
   ```
2. Edit `secret.yaml` and replace `REPLACE_WITH_YOUR_API_KEY` with your actual API key (see the sketch after this list).
3. Update the `image` field in `deployment.yaml` to point to your registry (e.g., `ghcr.io/youruser/zeroclaw:latest`).
4. Update the namespace in all files if you want a different name.
5. Optionally, edit `configmap.yaml` to change the provider or model (see the Configuration section below).
6. Apply all manifests:

   ```sh
   oc apply -f deploy-k8s/
   ```
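For reference, a minimal sketch of what the edited `secret.yaml` from step 2 might look like — the Secret name and the key name `ANTHROPIC_API_KEY` are assumptions for illustration; keep whatever names your copied sample actually uses:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: zeroclaw-secrets   # assumed name; match your sample
  namespace: zeroclaw
type: Opaque
stringData:
  # stringData accepts the key as plain text;
  # the API server base64-encodes it on write.
  ANTHROPIC_API_KEY: REPLACE_WITH_YOUR_API_KEY
```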
The real `.yaml` files are gitignored so your secrets and customizations stay local.
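A sketch of how that ignore rule might be written (the exact patterns in this repo's `.gitignore` may differ):

```gitignore
# Ignore the real manifests, keep the samples tracked
deploy-k8s/*.yaml
!deploy-k8s/*-sample.yaml
```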
## Verify

Check that the pod is running and the route is accessible:

```sh
oc -n zeroclaw get pods
oc -n zeroclaw get route zeroclaw
```
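To block until the deployment reports ready instead of polling by eye (optional; standard `oc wait` usage):

```sh
oc -n zeroclaw wait --for=condition=Available deployment/zeroclaw --timeout=120s
```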
Test the health endpoint:

```sh
ROUTE=$(oc -n zeroclaw get route zeroclaw -o jsonpath='{.spec.host}')
curl -sf "https://${ROUTE}/health"
```
Send a test message:

```sh
curl -X POST "https://${ROUTE}/webhook" \
  -H "Content-Type: application/json" \
  -d '{"message": "hello, what model are you?"}'
```
## Configuration

Edit `configmap.yaml` to change runtime settings:

| Setting | Field | Default |
|---|---|---|
| LLM provider | `default_provider` | `anthropic` |
| Model | `default_model` | `claude-sonnet-4-20250514` |
| Temperature | `default_temperature` | `0.7` |
| Autonomy level | `autonomy.level` | `supervised` |
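A sketch of how those fields might sit in `configmap.yaml` — the ConfigMap name and the assumption that it embeds a YAML config file under a `config.yaml` key are illustrative, not taken from the repo; mirror the layout of your copied sample instead:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: zeroclaw-config   # assumed name; match your sample
  namespace: zeroclaw
data:
  config.yaml: |
    default_provider: anthropic
    default_model: claude-sonnet-4-20250514
    default_temperature: 0.7
    autonomy:
      level: supervised
```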
After editing, re-apply the ConfigMap and restart the pod:

```sh
oc apply -f deploy-k8s/configmap.yaml
oc -n zeroclaw rollout restart deployment zeroclaw
```
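To wait for the restarted pod to become ready before testing again (optional):

```sh
oc -n zeroclaw rollout status deployment zeroclaw
```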
## Notes

The state and workspace volumes use `emptyDir`, so agent memory and session history do not persist across pod restarts. For production, replace these with PersistentVolumeClaims, as in the sketch below.
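A minimal sketch of a PVC and the corresponding volume change, assuming the deployment names the volume `state` and your cluster has a default StorageClass; adjust names and sizes to your environment:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zeroclaw-state
  namespace: zeroclaw
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# In deployment.yaml, swap the emptyDir volume for the claim:
# volumes:
#   - name: state
#     persistentVolumeClaim:
#       claimName: zeroclaw-state
```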
The Route object is OpenShift-specific. On vanilla Kubernetes, replace `route-sample.yaml` with a Kubernetes Ingress targeting port 42617, as sketched below.
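A sketch of such an Ingress, assuming the Service is named `zeroclaw` and an ingress controller is installed; the host and ingress class are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zeroclaw
  namespace: zeroclaw
spec:
  ingressClassName: nginx        # placeholder; use your controller's class
  rules:
    - host: zeroclaw.example.com # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: zeroclaw   # assumed Service name
                port:
                  number: 42617
```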
## Teardown

```sh
oc delete namespace zeroclaw
```