# OpenSandbox Helm Chart
This Helm chart deploys the OpenSandbox Kubernetes Controller, which manages sandbox environments through custom resources.
## Required Images

OpenSandbox requires three separate images:

### Controller

The main controller, which manages BatchSandbox and Pool resources.
```bash
# Build controller image
make docker-build IMG=your-registry/opensandbox-controller:v1.0.0
docker push your-registry/opensandbox-controller:v1.0.0
```
### Server

The FastAPI control plane that exposes a REST API for SDK usage. This is the entry point for SDK clients.
```bash
# Build server image (from the server directory)
cd ../../../server
TAG=v1.0.0 ./build.sh

# Or manually:
docker build -t your-registry/opensandbox-server:v1.0.0 .
docker push your-registry/opensandbox-server:v1.0.0
```
Note: The server is required for SDK usage. If you only manage CRDs directly with kubectl, you can disable it by setting `server.enabled=false`.
### Task Executor

A sidecar container injected into Pool pods for task execution. It is not deployed as a separate Deployment; it is configured in Pool resources.
```bash
# Build task-executor image
make docker-build-task-executor TASK_EXECUTOR_IMG=your-registry/opensandbox-task-executor:v1.0.0
docker push your-registry/opensandbox-task-executor:v1.0.0
```
Note: The task-executor image is only needed if you want to use task execution features. For basic sandbox management without tasks, only the controller and server images are required.
## Installation

```bash
# Add the chart repository (if published)
helm repo add opensandbox https://charts.opensandbox.io
helm repo update
```

```bash
# Install the chart with all images
helm install opensandbox-controller opensandbox/opensandbox-controller \
  --set controllerManager.image.repository=your-registry/opensandbox-controller \
  --set controllerManager.image.tag=v1.0.0 \
  --set server.image.repository=your-registry/opensandbox-server \
  --set server.image.tag=v1.0.0 \
  --set taskExecutor.image.repository=your-registry/opensandbox-task-executor \
  --set taskExecutor.image.tag=v1.0.0
```

```bash
# Or install from a local directory
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.repository=your-registry/opensandbox-controller \
  --set controllerManager.image.tag=v1.0.0 \
  --set server.image.repository=your-registry/opensandbox-server \
  --set server.image.tag=v1.0.0 \
  --set taskExecutor.image.repository=your-registry/opensandbox-task-executor \
  --set taskExecutor.image.tag=v1.0.0
```

```bash
# Install with custom values
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.repository=your-registry/sandbox-controller \
  --set controllerManager.image.tag=v1.0.0 \
  --namespace opensandbox \
  --create-namespace
```

```bash
# Install with a values file
helm install opensandbox-controller ./opensandbox-controller \
  -f custom-values.yaml
```
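Typing every `--set` flag by hand is error-prone when several images are overridden at once. As a minimal sketch, the overrides can be assembled programmatically (the `helm_set_flags` helper below is illustrative, not part of the chart's tooling):

```python
# Build the `--set` arguments for `helm install` from a dict of overrides.
# `helm_set_flags` is an illustrative helper, not part of the chart.
def helm_set_flags(overrides: dict) -> list[str]:
    args = []
    for key, value in sorted(overrides.items()):
        args += ["--set", f"{key}={value}"]
    return args

overrides = {
    "controllerManager.image.repository": "your-registry/opensandbox-controller",
    "controllerManager.image.tag": "v1.0.0",
    "server.image.repository": "your-registry/opensandbox-server",
    "server.image.tag": "v1.0.0",
}
cmd = ["helm", "install", "opensandbox-controller",
       "./opensandbox-controller"] + helm_set_flags(overrides)
print(" ".join(cmd))
```

The same dict can be serialized to a values file instead and passed with `-f`, which is easier to review and version-control.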
## Configuration

The following tables list the configurable parameters of the chart and their default values.
### Controller Manager

| Parameter | Description | Default |
|---|---|---|
| `controllerManager.image.repository` | Controller image repository | `opensandbox/controller` |
| `controllerManager.image.tag` | Controller image tag | `dev` |
| `controllerManager.image.pullPolicy` | Image pull policy | `Never` |
| `controllerManager.replicas` | Number of controller replicas | `1` |
| `controllerManager.resources.limits.cpu` | CPU limit | `500m` |
| `controllerManager.resources.limits.memory` | Memory limit | `128Mi` |
| `controllerManager.resources.requests.cpu` | CPU request | `10m` |
| `controllerManager.resources.requests.memory` | Memory request | `64Mi` |
| `controllerManager.leaderElect` | Enable leader election | `true` |
| `controllerManager.logLevel` | Log verbosity level | `3` |
### Task Executor

Important: The task-executor is not deployed as a separate service; it runs as a sidecar container in Pool resources. These settings provide default image and resource values for reference when creating Pools.
| Parameter | Description | Default |
|---|---|---|
| `taskExecutor.image.repository` | Task executor image repository | `opensandbox/task-executor` |
| `taskExecutor.image.tag` | Task executor image tag | `dev` |
| `taskExecutor.image.pullPolicy` | Image pull policy | `Never` |
| `taskExecutor.resources.limits.cpu` | Recommended CPU limit for the sidecar | `500m` |
| `taskExecutor.resources.limits.memory` | Recommended memory limit for the sidecar | `256Mi` |
| `taskExecutor.resources.requests.cpu` | Recommended CPU request for the sidecar | `100m` |
| `taskExecutor.resources.requests.memory` | Recommended memory request for the sidecar | `128Mi` |
### Server

Important: The server is a FastAPI control plane that exposes a REST API for SDK usage. It is required for SDK integration but can be disabled if you only use kubectl to manage CRDs.
| Parameter | Description | Default |
|---|---|---|
| `server.enabled` | Enable server deployment | `true` |
| `server.image.repository` | Server image repository | `opensandbox/server` |
| `server.image.tag` | Server image tag | `v0.1.0` |
| `server.image.pullPolicy` | Image pull policy | `Never` |
| `server.replicas` | Number of server replicas | `1` |
| `server.resources.limits.cpu` | CPU limit | `1` |
| `server.resources.limits.memory` | Memory limit | `512Mi` |
| `server.resources.requests.cpu` | CPU request | `100m` |
| `server.resources.requests.memory` | Memory request | `256Mi` |
| `server.config.server.host` | Server listen host | `0.0.0.0` |
| `server.config.server.port` | Server listen port | `8080` |
| `server.config.server.logLevel` | Log level (INFO/DEBUG/WARNING/ERROR) | `INFO` |
| `server.config.server.apiKey` | Optional API key for authentication | `""` |
| `server.config.runtime.type` | Runtime type (kubernetes/docker) | `kubernetes` |
| `server.config.runtime.execdImage` | execd image for non-pool mode | `opensandbox/execd:v1.0.5` |
| `server.config.kubernetes.workloadProvider` | Workload provider type | `batchsandbox` |
| `server.service.type` | Service type (ClusterIP/NodePort/LoadBalancer) | `ClusterIP` |
| `server.service.port` | Service port | `8080` |
| `server.service.nodePort` | NodePort (when type=NodePort) | `""` |
| `server.ingress.enabled` | Enable Ingress | `false` |
| `server.ingress.className` | Ingress class name | `""` |
| `server.ingress.hosts` | Ingress host configuration | `[]` |
### Namespace

| Parameter | Description | Default |
|---|---|---|
| `namespaceOverride` | Override the default namespace name | `"opensandbox"` |
Note: The controller, the server, and user resources (Pool, BatchSandbox) all use the same namespace for simplicity.
The server automatically uses in-cluster Kubernetes configuration and reads the namespace from the Helm chart configuration.
### Accessing the Server

```bash
# Forward local port to server
kubectl port-forward -n opensandbox svc/opensandbox-controller-server 8080:8080

# Test connection
curl http://localhost:8080/health
```

```bash
# Install with NodePort
helm install opensandbox-controller ./opensandbox-controller \
  --set server.service.type=NodePort \
  --set server.service.nodePort=30080

# Access via node IP
curl http://<node-ip>:30080/health
```

```bash
# Install with Ingress
helm install opensandbox-controller ./opensandbox-controller \
  --set server.ingress.enabled=true \
  --set server.ingress.className=nginx \
  --set server.ingress.hosts[0].host=opensandbox.example.com \
  --set server.ingress.hosts[0].paths[0].path=/ \
  --set server.ingress.hosts[0].paths[0].pathType=Prefix

# Access via domain
curl https://opensandbox.example.com/health
```
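The three access paths above differ only in the scheme, host, and port the client dials. A minimal sketch of that choice (the `server_url` helper is illustrative, not part of OpenSandbox; it assumes the Ingress terminates TLS):

```python
# Derive the server base URL for each exposure mode.
# `server_url` is an illustrative helper, not part of OpenSandbox.
def server_url(mode: str, host: str = "localhost", port: int = 8080) -> str:
    if mode == "port-forward":  # kubectl port-forward maps to localhost
        return f"http://localhost:{port}"
    if mode == "nodeport":      # NodePort is reachable on any node IP
        return f"http://{host}:{port}"
    if mode == "ingress":       # Ingress host, assumed TLS-terminated
        return f"https://{host}"
    raise ValueError(f"unknown mode: {mode}")

print(server_url("port-forward"))                        # http://localhost:8080
print(server_url("nodeport", "10.0.0.5", 30080))         # http://10.0.0.5:30080
print(server_url("ingress", "opensandbox.example.com"))  # https://opensandbox.example.com
```

Whichever mode you pick, the resulting base URL is what you pass to the SDK's `ConnectionConfig(domain=...)` later in this README.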
### RBAC

| Parameter | Description | Default |
|---|---|---|
| `rbac.create` | Create RBAC resources | `true` |
| `rbac.serviceAccount.create` | Create ServiceAccount | `true` |
| `rbac.serviceAccount.name` | ServiceAccount name (if not created) | `""` |
### Metrics

| Parameter | Description | Default |
|---|---|---|
| `metrics.enabled` | Enable metrics service | `true` |
| `metrics.service.type` | Metrics service type | `ClusterIP` |
| `metrics.service.port` | Metrics service port | `8443` |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor (Prometheus Operator) | `false` |
| `metrics.serviceMonitor.interval` | Scrape interval | `30s` |
### CRDs

| Parameter | Description | Default |
|---|---|---|
| `crds.install` | Install CRDs | `true` |
### Extra Roles

| Parameter | Description | Default |
|---|---|---|
| `extraRoles.batchsandboxEditor.enabled` | Create BatchSandbox editor role | `true` |
| `extraRoles.batchsandboxViewer.enabled` | Create BatchSandbox viewer role | `true` |
| `extraRoles.poolEditor.enabled` | Create Pool editor role | `true` |
| `extraRoles.poolViewer.enabled` | Create Pool viewer role | `true` |
```bash
# Use a custom image registry
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.repository=myregistry.com/sandbox-controller \
  --set controllerManager.image.tag=latest

# Scale the controller and raise its resource requests
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.replicas=3 \
  --set controllerManager.resources.requests.cpu=100m \
  --set controllerManager.resources.requests.memory=256Mi

# Enable the Prometheus ServiceMonitor
helm install opensandbox-controller ./opensandbox-controller \
  --set metrics.serviceMonitor.enabled=true

# Upgrade without managing CRDs
helm upgrade opensandbox-controller ./opensandbox-controller \
  --set crds.install=false
```
## Usage

After installation, you can create OpenSandbox resources:
```yaml
apiVersion: sandbox.opensandbox.io/v1alpha1
kind: Pool
metadata:
  name: example-pool
spec:
  minBufferSize: 2
  maxBufferSize: 5
  capacity: 10
  sandboxTemplate:
    spec:
      image: ubuntu:latest
      command: ["sleep", "infinity"]
```
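The three sizing fields in the Pool spec are related: the warm buffer is kept between `minBufferSize` and `maxBufferSize`, and `capacity` caps the total. A quick sanity check of a spec, assuming `minBufferSize <= maxBufferSize <= capacity` is the intended invariant (the `validate_pool_sizing` helper is hypothetical, not part of the chart or controller):

```python
# Sanity-check the sizing fields of a Pool spec before applying it.
# Assumes minBufferSize <= maxBufferSize <= capacity is the intended
# invariant; `validate_pool_sizing` is illustrative, not OpenSandbox code.
def validate_pool_sizing(spec: dict) -> list[str]:
    errors = []
    lo, hi, cap = spec["minBufferSize"], spec["maxBufferSize"], spec["capacity"]
    if lo < 0:
        errors.append("minBufferSize must be >= 0")
    if lo > hi:
        errors.append("minBufferSize must not exceed maxBufferSize")
    if hi > cap:
        errors.append("maxBufferSize must not exceed capacity")
    return errors

# The example-pool spec above passes:
print(validate_pool_sizing({"minBufferSize": 2, "maxBufferSize": 5, "capacity": 10}))
```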
```yaml
apiVersion: sandbox.opensandbox.io/v1alpha1
kind: BatchSandbox
metadata:
  name: example-batchsandbox
spec:
  replicas: 3
  ttlSecondsAfterFinished: 3600
  sandboxTemplate:
    spec:
      image: ubuntu:latest
      command: ["sleep", "infinity"]
```
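`ttlSecondsAfterFinished` determines when a finished BatchSandbox becomes eligible for cleanup: the finish time plus the TTL. A small sketch of that arithmetic (the `cleanup_time` helper is illustrative, not part of the controller):

```python
from datetime import datetime, timedelta, timezone

# Compute when a finished BatchSandbox becomes eligible for cleanup.
# `cleanup_time` is an illustrative helper, not part of OpenSandbox.
def cleanup_time(finished_at: datetime, ttl_seconds: int) -> datetime:
    return finished_at + timedelta(seconds=ttl_seconds)

finished = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
print(cleanup_time(finished, 3600))  # 2024-01-01 13:00:00+00:00
```

With the example spec above (`ttlSecondsAfterFinished: 3600`), a batch that finishes at noon is collected an hour later.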
## SDK Integration

The OpenSandbox Python SDK connects to the server to manage sandboxes. The server must be reachable from wherever you run the SDK.
```bash
# Forward local port to server
kubectl port-forward -n opensandbox svc/opensandbox-controller-server 8080:8080
```

Then point the SDK at `localhost:8080`:
```python
from opensandbox import Sandbox
from opensandbox.config import ConnectionConfig

sandbox = await Sandbox.create(
    "ubuntu:latest",
    entrypoint=["sleep", "infinity"],
    connection_config=ConnectionConfig(domain="localhost:8080"),
    extensions={"poolRef": "agent-pool"},
)
```
If the SDK runs inside the same Kubernetes cluster, use the service's cluster DNS name:

```python
sandbox = await Sandbox.create(
    "ubuntu:latest",
    entrypoint=["sleep", "infinity"],
    connection_config=ConnectionConfig(
        domain="opensandbox-controller-server.opensandbox.svc.cluster.local:8080"
    ),
    extensions={"poolRef": "agent-pool"},
)
```
For external access, configure the service type accordingly and use the appropriate domain.
The OpenSandbox Python SDK supports two creation modes.

**Pool mode**: fast creation from a pre-warmed Pool. The image must match the Pool's configuration:
```python
from opensandbox import Sandbox
from opensandbox.config import ConnectionConfig

sandbox = await Sandbox.create(
    "ubuntu:latest",  # Must match the Pool's image
    entrypoint=["sleep", "infinity"],
    connection_config=ConnectionConfig(domain="localhost:8080"),  # Server address
    extensions={"poolRef": "agent-pool"},  # Reference to the Pool name
)
```
Important: When using `poolRef`, the SDK's `image` parameter is ignored; the Pool's pre-configured image is used instead. Only `entrypoint` and `env` can be customized.
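That precedence rule can be made concrete: with a `poolRef`, only `entrypoint` and `env` pass through, while `image` and `resource` come from the Pool. A sketch of the selection logic (the `effective_create_params` helper is hypothetical, not SDK code):

```python
# Mirror the precedence described above: with a poolRef, the Pool's image
# wins and only entrypoint/env pass through. `effective_create_params`
# is an illustrative helper, not part of the OpenSandbox SDK.
def effective_create_params(image, entrypoint=None, env=None,
                            resource=None, pool_ref=None) -> dict:
    if pool_ref is not None:
        # Pool mode: image and resources come from the Pool's template.
        return {"poolRef": pool_ref, "entrypoint": entrypoint, "env": env}
    # On-demand mode: the caller's image and resources are used directly.
    return {"image": image, "entrypoint": entrypoint, "env": env,
            "resource": resource}

print(effective_create_params("ubuntu:latest",
                              entrypoint=["sleep", "infinity"],
                              pool_ref="agent-pool"))
```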
**On-demand mode**: direct creation with a custom image and resources:

```python
sandbox = await Sandbox.create(
    "python:3.11",  # Any image
    resource={"cpu": "1", "memory": "500Mi"},
    connection_config=ConnectionConfig(domain="localhost:8080"),
    # No poolRef specified
)
```
Listing sandboxes and reconnecting to an existing one:

```python
from opensandbox import Sandbox, SandboxManager, SandboxFilter
from opensandbox.config import ConnectionConfig

# List all sandboxes
manager = SandboxManager(connection_config=ConnectionConfig(domain="localhost:8080"))
sandboxes = await manager.list_sandbox_infos(SandboxFilter())

# Connect to an existing sandbox
sandbox = await Sandbox.connect(
    sandbox_id="<sandbox-id>",
    connection_config=ConnectionConfig(domain="localhost:8080"),
)
```
For a detailed SDK integration guide, including troubleshooting, see `examples/README.md`.
## Upgrading

```bash
# Upgrade to a new version
helm upgrade opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.tag=v1.1.0

# Upgrade with new values
helm upgrade opensandbox-controller ./opensandbox-controller \
  -f new-values.yaml
```

## Uninstalling

```bash
# Uninstall the release
helm uninstall opensandbox-controller

# Note: CRDs are not automatically deleted. To remove them:
kubectl delete crd batchsandboxes.sandbox.opensandbox.io
kubectl delete crd pools.sandbox.opensandbox.io
```
## Troubleshooting

```bash
# Check deployment
kubectl get deployment -n opensandbox

# Check pods
kubectl get pods -n opensandbox

# Check logs
kubectl logs -n opensandbox -l control-plane=controller-manager

# List CRDs
kubectl get crds | grep sandbox.opensandbox.io

# Describe CRD
kubectl describe crd batchsandboxes.sandbox.opensandbox.io

# Check ServiceAccount
kubectl get sa -n opensandbox

# Check ClusterRoles
kubectl get clusterrole | grep sandbox-k8s

# Check ClusterRoleBindings
kubectl get clusterrolebinding | grep sandbox-k8s
```
The chart includes utility scripts in the `scripts/` directory:

- `scripts/install.sh` - Interactive installation wizard
- `scripts/uninstall.sh` - Safe uninstallation with cleanup
- `scripts/e2e-test.sh` - End-to-end validation

See `scripts/README.md` for detailed documentation.
## Development

```bash
# Lint the chart
helm lint ./opensandbox-controller

# Dry run
helm install opensandbox-controller ./opensandbox-controller --dry-run --debug

# Template rendering
helm template opensandbox-controller ./opensandbox-controller

# Package the chart
helm package ./opensandbox-controller
```
## Contributing

Please refer to the main OpenSandbox repository for contribution guidelines.

## License

Apache License 2.0