docker/README_K8S.md
# LangBot Kubernetes Deployment Guide

This guide provides complete steps for deploying LangBot in a Kubernetes cluster. The Kubernetes deployment configuration is based on docker-compose.yaml and is suitable for production containerized deployments.

## Prerequisites

- kubectl command-line tool configured with access to your cluster

## Components

The Kubernetes deployment includes the following components:
- langbot: main application service
- langbot-plugin-runtime: plugin runtime service

Persistent storage (PVCs):

- langbot-data: LangBot main data
- langbot-plugins: plugin files
- langbot-plugin-runtime-data: plugin runtime data

## Installation

```bash
# Clone the repository
git clone https://github.com/langbot-app/LangBot
cd LangBot/docker

# Or download kubernetes.yaml directly
wget https://raw.githubusercontent.com/langbot-app/LangBot/main/docker/kubernetes.yaml
```

```bash
# Apply all configurations
kubectl apply -f kubernetes.yaml

# Check deployment status
kubectl get all -n langbot

# View Pod logs
kubectl logs -n langbot -l app=langbot -f
```
## Accessing LangBot

By default, the LangBot Service uses the ClusterIP type and is accessible only within the cluster. Choose one of the following access methods:

### Option A: Port forwarding (recommended for testing)

```bash
kubectl port-forward -n langbot svc/langbot 5300:5300
```

Then visit http://localhost:5300.

### Option B: NodePort (suitable for development)

Edit kubernetes.yaml, uncomment the NodePort Service section, then:

```bash
kubectl apply -f kubernetes.yaml

# Get a node IP
kubectl get nodes -o wide

# Visit http://<NODE_IP>:30300
```
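For reference, the NodePort Service being uncommented would look roughly like the sketch below. The selector label and port numbers are taken from the commands above; the exact fields in your kubernetes.yaml may differ.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: langbot-nodeport
  namespace: langbot
spec:
  type: NodePort
  selector:
    app: langbot          # matches the label used by the log commands above
  ports:
    - port: 5300          # Service port inside the cluster
      targetPort: 5300    # container port
      nodePort: 30300     # port exposed on every node
```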
### Option C: LoadBalancer (suitable for cloud environments)

Edit kubernetes.yaml, uncomment the LoadBalancer Service section, then:

```bash
kubectl apply -f kubernetes.yaml

# Get the external IP
kubectl get svc -n langbot langbot-loadbalancer

# Visit http://<EXTERNAL_IP>
```

### Option D: Ingress (recommended for production)

Ensure an Ingress Controller (e.g., nginx-ingress) is installed in the cluster, then edit the Ingress configuration in kubernetes.yaml and apply it:

```bash
kubectl apply -f kubernetes.yaml

# Visit http://langbot.yourdomain.com
```
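If you prefer to keep the Ingress in a separate file, a minimal sketch looks like this; the hostname and ingress class name are assumptions and should be adjusted to your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: langbot
  namespace: langbot
spec:
  ingressClassName: nginx           # assumes nginx-ingress is installed
  rules:
    - host: langbot.yourdomain.com  # replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: langbot
                port:
                  number: 5300
```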
## Configuration

### Environment variables

Configure environment variables in the ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: langbot-config
  namespace: langbot
data:
  TZ: "Asia/Shanghai"  # Change to your timezone
```
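The TZ value uses standard IANA timezone database names. You can preview how a given value will be interpreted by processes on any Linux host:

```shell
# Print the timezone abbreviation that processes see for this TZ value
TZ=Asia/Shanghai date +%Z
# prints CST
```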
### Persistent storage

Dynamic storage provisioning is used by default. If you have a specific StorageClass, specify it in the PVC:

```yaml
spec:
  storageClassName: your-storage-class-name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
### Resource limits

Adjust resource requests and limits based on your needs:

```yaml
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
    cpu: "2000m"
```
## Operations

```bash
# View LangBot main service logs
kubectl logs -n langbot -l app=langbot -f

# View plugin runtime logs
kubectl logs -n langbot -l app=langbot-plugin-runtime -f
```

```bash
# Restart LangBot
kubectl rollout restart deployment/langbot -n langbot

# Restart the plugin runtime
kubectl rollout restart deployment/langbot-plugin-runtime -n langbot
```

```bash
# Update to the latest version
kubectl set image deployment/langbot -n langbot langbot=rockchin/langbot:latest
kubectl set image deployment/langbot-plugin-runtime -n langbot langbot-plugin-runtime=rockchin/langbot:latest

# Check rollout status
kubectl rollout status deployment/langbot -n langbot
```
## Scaling

Note: LangBot uses ReadWriteOnce persistent storage, so multi-replica scaling is not supported. For high availability, use ReadWriteMany storage or an alternative architecture.
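For illustration, a PVC that would allow multiple replicas to share data might look like the sketch below. The storage class name is an assumption; your cluster must provide a ReadWriteMany-capable provisioner (e.g., NFS or CephFS).

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: langbot-data
  namespace: langbot
spec:
  storageClassName: nfs-client  # assumption: an RWX-capable StorageClass
  accessModes:
    - ReadWriteMany             # allows mounting from multiple Pods
  resources:
    requests:
      storage: 10Gi
```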
## Backup

```bash
# Back up PVC data
kubectl exec -n langbot -it <langbot-pod-name> -- tar czf /tmp/backup.tar.gz /app/data
kubectl cp langbot/<langbot-pod-name>:/tmp/backup.tar.gz ./backup.tar.gz
```
## Cleanup

```bash
# Delete all resources (keep PVCs)
kubectl delete deployment,service,configmap -n langbot --all

# Delete PVCs (this deletes the data)
kubectl delete pvc -n langbot --all

# Delete the namespace
kubectl delete namespace langbot
```
## Troubleshooting

```bash
# Check Pod status
kubectl get pods -n langbot

# View detailed information
kubectl describe pod -n langbot <pod-name>

# View events
kubectl get events -n langbot --sort-by='.lastTimestamp'
```

```bash
# Check PVC status
kubectl get pvc -n langbot

# Check PVs
kubectl get pv

# Check Services
kubectl get svc -n langbot

# Test port forwarding
kubectl port-forward -n langbot svc/langbot 5300:5300
```
## Production Notes

Avoid the latest tag in production; pin a specific version such as rockchin/langbot:v1.0.0.

### Secrets

If you need to configure sensitive information such as API keys:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: langbot-secrets
  namespace: langbot
type: Opaque
data:
  api_key: <base64-encoded-value>
```
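Values under data: must be base64-encoded. For example (the key shown is a placeholder):

```shell
# Encode the secret value; -n avoids including a trailing newline
echo -n 'sk-test' | base64
# → c2stdGVzdA==
```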
Then reference it in the Deployment:

```yaml
env:
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: langbot-secrets
        key: api_key
```
### Horizontal Pod Autoscaling

Note: autoscaling requires ReadWriteMany storage.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: langbot-hpa
  namespace: langbot
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: langbot
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```