deploy/helm/opensandbox/README.md
# OpenSandbox Controller Helm Chart

This Helm chart deploys the OpenSandbox Kubernetes Controller, which manages sandbox environments through custom resources.

## Prerequisites

- Kubernetes 1.19+
- Helm 3.0+
- A container runtime (Docker, containerd, etc.)
- Three container images:
  1. Controller image: the main controller manager
  2. Server image: FastAPI control plane for SDK usage
  3. Task Executor image: sidecar container for task execution (optional; required only for task features)

## Important: Image Requirements

OpenSandbox requires three separate images:

### 1. Controller Image

The main controller that manages BatchSandbox and Pool resources.

```bash
# Build controller image
make docker-build IMG=your-registry/opensandbox-controller:v1.0.0
docker push your-registry/opensandbox-controller:v1.0.0
```

### 2. Server Image

The FastAPI control plane that exposes a REST API for SDK usage. This is the entry point for SDK clients.

```bash
# Build server image (from the server directory)
cd ../../../server
TAG=v1.0.0 ./build.sh
# Or manually:
docker build -t your-registry/opensandbox-server:v1.0.0 .
docker push your-registry/opensandbox-server:v1.0.0
```

Note: The server is required for SDK usage. If you only use kubectl to manage CRDs directly, you can disable it by setting `server.enabled=false`.
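For kubectl-only workflows, that override is a one-line values change (sketch; the key comes from the Server Configuration table in this README):

```yaml
# Skip the FastAPI control plane when managing CRDs directly with kubectl
server:
  enabled: false
```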

### 3. Task Executor Image

A sidecar container injected into Pool pods for task execution. It is not deployed as a separate Deployment but is configured in Pool resources.

```bash
# Build task-executor image
make docker-build-task-executor TASK_EXECUTOR_IMG=your-registry/opensandbox-task-executor:v1.0.0
docker push your-registry/opensandbox-task-executor:v1.0.0
```

Note: The task-executor image is only needed if you want to use task execution features. For basic sandbox management without tasks, only the controller and server images are required.

## Features

- **SDK Control Plane**: FastAPI server for Python SDK integration
- **Batch Sandbox Management**: Create and manage multiple identical sandbox environments
- **Resource Pooling**: Maintain pre-warmed resource pools for rapid provisioning
- **Task Orchestration**: Optional integrated task execution engine
- **High Availability**: Leader election support for multiple replicas
- **Metrics & Monitoring**: Prometheus metrics endpoint with optional ServiceMonitor
- **Flexible Access**: ClusterIP, NodePort, or Ingress support for server access

## Installation

### Quick Start

```bash
# Add the chart repository (if published)
helm repo add opensandbox https://charts.opensandbox.io
helm repo update

# Install the chart with all images
helm install opensandbox-controller opensandbox/opensandbox-controller \
  --set controllerManager.image.repository=your-registry/opensandbox-controller \
  --set controllerManager.image.tag=v1.0.0 \
  --set server.image.repository=your-registry/opensandbox-server \
  --set server.image.tag=v1.0.0 \
  --set taskExecutor.image.repository=your-registry/opensandbox-task-executor \
  --set taskExecutor.image.tag=v1.0.0

# Or install from local directory
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.repository=your-registry/opensandbox-controller \
  --set controllerManager.image.tag=v1.0.0 \
  --set server.image.repository=your-registry/opensandbox-server \
  --set server.image.tag=v1.0.0 \
  --set taskExecutor.image.repository=your-registry/opensandbox-task-executor \
  --set taskExecutor.image.tag=v1.0.0
```

### Custom Installation

```bash
# Install with custom values
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.repository=your-registry/sandbox-controller \
  --set controllerManager.image.tag=v1.0.0 \
  --namespace opensandbox \
  --create-namespace

# Install with values file
helm install opensandbox-controller ./opensandbox-controller \
  -f custom-values.yaml
```
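A `custom-values.yaml` for the command above might look like this (illustrative only; every key appears in the Configuration tables in this README):

```yaml
# custom-values.yaml — illustrative override file
controllerManager:
  image:
    repository: your-registry/opensandbox-controller
    tag: v1.0.0
    pullPolicy: IfNotPresent
  replicas: 2
server:
  image:
    repository: your-registry/opensandbox-server
    tag: v1.0.0
  service:
    type: NodePort
    nodePort: 30080
```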

## Configuration

The following tables list the configurable parameters of the chart and their default values.

### Controller Manager Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `controllerManager.image.repository` | Controller image repository | `opensandbox/controller` |
| `controllerManager.image.tag` | Controller image tag | `dev` |
| `controllerManager.image.pullPolicy` | Image pull policy | `Never` |
| `controllerManager.replicas` | Number of controller replicas | `1` |
| `controllerManager.resources.limits.cpu` | CPU limit | `500m` |
| `controllerManager.resources.limits.memory` | Memory limit | `128Mi` |
| `controllerManager.resources.requests.cpu` | CPU request | `10m` |
| `controllerManager.resources.requests.memory` | Memory request | `64Mi` |
| `controllerManager.leaderElect` | Enable leader election | `true` |
| `controllerManager.logLevel` | Log verbosity level | `3` |

### Task Executor Configuration

Important: The task-executor is not deployed as a separate service. It is configured as a sidecar container in Pool resources. These settings provide the default image and resource configurations for reference when creating Pools.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `taskExecutor.image.repository` | Task Executor image repository | `opensandbox/task-executor` |
| `taskExecutor.image.tag` | Task Executor image tag | `dev` |
| `taskExecutor.image.pullPolicy` | Image pull policy | `Never` |
| `taskExecutor.resources.limits.cpu` | Recommended CPU limit for sidecar | `500m` |
| `taskExecutor.resources.limits.memory` | Recommended memory limit for sidecar | `256Mi` |
| `taskExecutor.resources.requests.cpu` | Recommended CPU request for sidecar | `100m` |
| `taskExecutor.resources.requests.memory` | Recommended memory request for sidecar | `128Mi` |

### Server Configuration

Important: The server is a FastAPI control plane that exposes a REST API for SDK usage. It is required for SDK integration but can be disabled if you only use kubectl to manage CRDs.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `server.enabled` | Enable server deployment | `true` |
| `server.image.repository` | Server image repository | `opensandbox/server` |
| `server.image.tag` | Server image tag | `v0.1.0` |
| `server.image.pullPolicy` | Image pull policy | `Never` |
| `server.replicas` | Number of server replicas | `1` |
| `server.resources.limits.cpu` | CPU limit | `1` |
| `server.resources.limits.memory` | Memory limit | `512Mi` |
| `server.resources.requests.cpu` | CPU request | `100m` |
| `server.resources.requests.memory` | Memory request | `256Mi` |
| `server.config.server.host` | Server listen host | `0.0.0.0` |
| `server.config.server.port` | Server listen port | `8080` |
| `server.config.server.logLevel` | Log level (INFO/DEBUG/WARNING/ERROR) | `INFO` |
| `server.config.server.apiKey` | Optional API key for authentication | `""` |
| `server.config.runtime.type` | Runtime type (kubernetes/docker) | `kubernetes` |
| `server.config.runtime.execdImage` | execd image for non-pool mode | `opensandbox/execd:v1.0.5` |
| `server.config.kubernetes.workloadProvider` | Workload provider type | `batchsandbox` |
| `server.service.type` | Service type (ClusterIP/NodePort/LoadBalancer) | `ClusterIP` |
| `server.service.port` | Service port | `8080` |
| `server.service.nodePort` | NodePort (when type=NodePort) | `""` |
| `server.ingress.enabled` | Enable Ingress | `false` |
| `server.ingress.className` | Ingress class name | `""` |
| `server.ingress.hosts` | Ingress host configuration | `[]` |

### Namespace Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `namespaceOverride` | Override the default namespace name | `"opensandbox"` |

Note: The controller, the server, and user resources (Pool, BatchSandbox) all use the same namespace for simplicity.

The server automatically uses in-cluster Kubernetes configuration and reads the namespace from the Helm chart configuration.
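In-cluster namespace lookup typically follows the standard Kubernetes convention of reading the service-account mount, with a fallback default. A minimal sketch of that convention (this mirrors common client behavior; the OpenSandbox server's actual implementation may differ):

```python
# Standard path where Kubernetes mounts the pod's namespace for its service account
SA_NAMESPACE_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"

def detect_namespace(path: str = SA_NAMESPACE_PATH, default: str = "opensandbox") -> str:
    """Return the pod's namespace from the service-account mount, else `default`."""
    try:
        with open(path) as f:
            # The file contains the bare namespace name; fall back if it is empty
            return f.read().strip() or default
    except OSError:
        # Not running in-cluster (or the mount is missing)
        return default
```

Outside a cluster the mount does not exist, so the helper falls back to the chart's default namespace.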

## Accessing the Server

### Option 1: Port Forward (Development)

```bash
# Forward local port to server
kubectl port-forward -n opensandbox svc/opensandbox-controller-server 8080:8080

# Test connection
curl http://localhost:8080/health
```

### Option 2: NodePort (Local Development)

```bash
# Install with NodePort
helm install opensandbox-controller ./opensandbox-controller \
  --set server.service.type=NodePort \
  --set server.service.nodePort=30080

# Access via node IP
curl http://<node-ip>:30080/health
```

### Option 3: Ingress (Production)

```bash
# Install with Ingress
helm install opensandbox-controller ./opensandbox-controller \
  --set server.ingress.enabled=true \
  --set server.ingress.className=nginx \
  --set server.ingress.hosts[0].host=opensandbox.example.com \
  --set server.ingress.hosts[0].paths[0].path=/ \
  --set server.ingress.hosts[0].paths[0].pathType=Prefix

# Access via domain
curl https://opensandbox.example.com/health
```

### RBAC Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `rbac.create` | Create RBAC resources | `true` |
| `rbac.serviceAccount.create` | Create ServiceAccount | `true` |
| `rbac.serviceAccount.name` | ServiceAccount name (if not created) | `""` |

### Metrics Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `metrics.enabled` | Enable metrics service | `true` |
| `metrics.service.type` | Metrics service type | `ClusterIP` |
| `metrics.service.port` | Metrics service port | `8443` |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor (Prometheus Operator) | `false` |
| `metrics.serviceMonitor.interval` | Scrape interval | `30s` |
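Enabling Prometheus Operator scraping is a small values change (sketch; it assumes the ServiceMonitor CRD from the Prometheus Operator is already installed in the cluster):

```yaml
metrics:
  enabled: true          # metrics service (the default)
  serviceMonitor:
    enabled: true        # create a ServiceMonitor for the Prometheus Operator
    interval: 30s        # scrape interval (the default)
```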

### CRD Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `crds.install` | Install CRDs | `true` |

### Extra Roles Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `extraRoles.batchsandboxEditor.enabled` | Create BatchSandbox editor role | `true` |
| `extraRoles.batchsandboxViewer.enabled` | Create BatchSandbox viewer role | `true` |
| `extraRoles.poolEditor.enabled` | Create Pool editor role | `true` |
| `extraRoles.poolViewer.enabled` | Create Pool viewer role | `true` |

## Usage Examples

### Example 1: Install with Custom Image

```bash
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.repository=myregistry.com/sandbox-controller \
  --set controllerManager.image.tag=latest
```

### Example 2: Install with High Availability

```bash
helm install opensandbox-controller ./opensandbox-controller \
  --set controllerManager.replicas=3 \
  --set controllerManager.resources.requests.cpu=100m \
  --set controllerManager.resources.requests.memory=256Mi
```

### Example 3: Install with Prometheus Monitoring

```bash
helm install opensandbox-controller ./opensandbox-controller \
  --set metrics.serviceMonitor.enabled=true
```

### Example 4: Install without CRDs (for upgrades)

```bash
helm upgrade opensandbox-controller ./opensandbox-controller \
  --set crds.install=false
```

## Creating Resources

After installation, you can create OpenSandbox resources:

### Create a Pool

```yaml
apiVersion: sandbox.opensandbox.io/v1alpha1
kind: Pool
metadata:
  name: example-pool
spec:
  minBufferSize: 2
  maxBufferSize: 5
  capacity: 10
  sandboxTemplate:
    spec:
      image: ubuntu:latest
      command: ["sleep", "infinity"]
```

### Create a BatchSandbox

```yaml
apiVersion: sandbox.opensandbox.io/v1alpha1
kind: BatchSandbox
metadata:
  name: example-batchsandbox
spec:
  replicas: 3
  ttlSecondsAfterFinished: 3600
  sandboxTemplate:
    spec:
      image: ubuntu:latest
      command: ["sleep", "infinity"]
```

## Using with SDK

The OpenSandbox Python SDK connects to the server to manage sandboxes. The server must be reachable from wherever you run the SDK.

### Access Methods

#### 1. Port Forward (Recommended for Development)

```bash
# Forward local port to server
kubectl port-forward -n opensandbox svc/opensandbox-controller-server 8080:8080
```

Then use the SDK with `localhost:8080`:

```python
from opensandbox import Sandbox
from opensandbox.config import ConnectionConfig

sandbox = await Sandbox.create(
    "ubuntu:latest",
    entrypoint=["sleep", "infinity"],
    connection_config=ConnectionConfig(domain="localhost:8080"),
    extensions={"poolRef": "agent-pool"}
)
```

#### 2. In-Cluster Access

If the SDK runs inside the same Kubernetes cluster:

```python
sandbox = await Sandbox.create(
    "ubuntu:latest",
    entrypoint=["sleep", "infinity"],
    connection_config=ConnectionConfig(
        domain="opensandbox-controller-server.opensandbox.svc.cluster.local:8080"
    ),
    extensions={"poolRef": "agent-pool"}
)
```

#### 3. NodePort / LoadBalancer / Ingress

For external access, configure the service type accordingly and use the appropriate domain.
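Whichever access method you choose, only the `domain` string passed to `ConnectionConfig` changes. A small illustrative helper (hypothetical, not part of the SDK) that builds the right domain for each method:

```python
def server_domain(
    access: str,
    host: str = "localhost",
    port: int = 8080,
    namespace: str = "opensandbox",
    service: str = "opensandbox-controller-server",
) -> str:
    """Build the ConnectionConfig `domain` string for a given access method.

    Illustrative helper only; the service and namespace defaults match the
    examples in this README.
    """
    if access == "port-forward":
        # kubectl port-forward always exposes the server on localhost
        return f"localhost:{port}"
    if access == "in-cluster":
        # Standard in-cluster Service DNS name
        return f"{service}.{namespace}.svc.cluster.local:{port}"
    if access in ("nodeport", "loadbalancer", "ingress"):
        # Externally reachable host plus the exposed port
        return f"{host}:{port}"
    raise ValueError(f"unknown access method: {access}")
```

For example, `server_domain("nodeport", host="10.0.0.5", port=30080)` yields `"10.0.0.5:30080"`, matching the NodePort example above.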

### SDK Usage Examples

The OpenSandbox Python SDK supports two creation modes.

#### Pooled Mode

Fast creation using pre-warmed pools. The image must match the Pool's configuration:

```python
from opensandbox import Sandbox
from opensandbox.config import ConnectionConfig

sandbox = await Sandbox.create(
    "ubuntu:latest",  # Must match Pool's image
    entrypoint=["sleep", "infinity"],
    connection_config=ConnectionConfig(domain="localhost:8080"),  # Server address
    extensions={"poolRef": "agent-pool"}  # Reference to Pool name
)
```

Important: When using `poolRef`, the SDK's `image` parameter is ignored; the Pool's pre-configured image is used instead. Only `entrypoint` and `env` can be customized.

#### Non-pooled Mode

Direct creation with a custom image and resources:

```python
sandbox = await Sandbox.create(
    "python:3.11",  # Any image
    resource={"cpu": "1", "memory": "500Mi"},
    connection_config=ConnectionConfig(domain="localhost:8080")
    # No poolRef specified
)
```

#### Connect to Existing Sandbox

```python
from opensandbox import Sandbox, SandboxManager
from opensandbox.config import ConnectionConfig

# List all sandboxes (the SandboxFilter import is not shown in the original;
# import it from wherever your SDK version exposes it)
manager = SandboxManager(connection_config=ConnectionConfig(domain="localhost:8080"))
sandboxes = await manager.list_sandbox_infos(SandboxFilter())

# Connect to an existing sandbox
sandbox = await Sandbox.connect(
    sandbox_id="<sandbox-id>",
    connection_config=ConnectionConfig(domain="localhost:8080")
)
```

For a detailed SDK integration guide, including troubleshooting, see `examples/README.md`.

## Upgrading

```bash
# Upgrade to a new version
helm upgrade opensandbox-controller ./opensandbox-controller \
  --set controllerManager.image.tag=v1.1.0

# Upgrade with new values
helm upgrade opensandbox-controller ./opensandbox-controller \
  -f new-values.yaml
```

## Uninstalling

```bash
# Uninstall the release
helm uninstall opensandbox-controller

# Note: CRDs are not automatically deleted. To remove them:
kubectl delete crd batchsandboxes.sandbox.opensandbox.io
kubectl delete crd pools.sandbox.opensandbox.io
```

## Troubleshooting

### Check Controller Status

```bash
# Check deployment
kubectl get deployment -n opensandbox

# Check pods
kubectl get pods -n opensandbox

# Check logs
kubectl logs -n opensandbox -l control-plane=controller-manager
```

### Verify CRDs

```bash
# List CRDs
kubectl get crds | grep sandbox.opensandbox.io

# Describe CRD
kubectl describe crd batchsandboxes.sandbox.opensandbox.io
```

### Check RBAC

```bash
# Check ServiceAccount
kubectl get sa -n opensandbox

# Check ClusterRoles
kubectl get clusterrole | grep sandbox-k8s

# Check ClusterRoleBindings
kubectl get clusterrolebinding | grep sandbox-k8s
```

## Development

### Quick Start Scripts

The chart includes utility scripts in the `scripts/` directory:

- `scripts/install.sh` - Interactive installation wizard
- `scripts/uninstall.sh` - Safe uninstallation with cleanup
- `scripts/e2e-test.sh` - End-to-end validation

See `scripts/README.md` for detailed documentation.

### Linting the Chart

```bash
helm lint ./opensandbox-controller
```

### Testing the Chart

```bash
# Dry run
helm install opensandbox-controller ./opensandbox-controller --dry-run --debug

# Template rendering
helm template opensandbox-controller ./opensandbox-controller
```

### Package the Chart

```bash
helm package ./opensandbox-controller
```

## Contributing

Please refer to the main OpenSandbox repository for contribution guidelines.

## License

Apache License 2.0

## Support