⚠️ Repo Archive Notice

As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.

Contour

Contour is an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Unlike other Ingress controllers, Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile.

DEPRECATION NOTICE

This chart is deprecated and no longer supported.

Installing the Chart

To install the chart with the release name my-release:

```bash
$ helm install --name my-release stable/contour
```
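Chart parameters (listed under Configuration below) can be overridden at install time with `--set`; a minimal sketch, with an illustrative replica count:

```bash
# Install with an overridden value; the replica count here is illustrative,
# and the parameter name comes from the chart's Configuration table.
$ helm install --name my-release stable/contour \
    --set contour.replicas=3
```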

Uninstalling the Chart

To uninstall/delete the my-release deployment:

```bash
$ helm delete my-release --purge
```

Upgrading the Chart

To upgrade the my-release deployment:

```bash
$ helm upgrade --install my-release stable/contour
```
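If the release was previously configured with `--set` or a values file, `--reuse-values` carries those settings across the upgrade; a sketch (the image tag shown is simply the chart's default):

```bash
# Upgrade, reusing values from the prior release and pinning the image tag
$ helm upgrade --install my-release stable/contour \
    --reuse-values \
    --set contour.image.tag=v0.15.0
```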

Configuration

The default configuration values for this chart are listed in values.yaml.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `contour.image.registry` | Registry for the contour container image | `gcr.io/heptio-images/contour` |
| `contour.image.tag` | Contour image tag | `v0.15.0` |
| `contour.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `contour.replicas` | Replica count for the contour deployment | `2` |
| `contour.resources` | Resource definitions for the contour pods | `{}` |
| `customResourceDefinitions.create` | Whether the release should install CRDs. Regardless of this value, Helm v3+ will install the CRDs if they are not already present. Use `--skip-crds` with `helm install` if you want to skip CRD creation | `true` |
| `customResourceDefinitions.cleanup` | Whether to remove installed CRD definitions and CRDs | `false` |
| `envoy.image.registry` | Registry for the envoy container image | `docker.io/envoyproxy/envoy-alpine` |
| `envoy.image.tag` | Envoy image tag | `v1.11.1` |
| `envoy.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `envoy.resources` | Resource definitions for the envoy pods | `{}` |
| `hpa.create` | Create an HPA for contour | `false` |
| `hpa.minReplicas` | Autoscaling minimum replica count | `2` |
| `hpa.maxReplicas` | Autoscaling maximum replica count | `15` |
| `hpa.targetCPUUtilizationPercentage` | Threshold CPU usage | `70` |
| `init.image.registry` | Registry for the contour init container image | `gcr.io/heptio-images/contour` |
| `init.image.tag` | Init image tag | `v0.15.0` |
| `init.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `init.resources` | Resource definitions for the init pods | `{}` |
| `rbac.create` | Whether the release should create RBAC objects | `true` |
| `serviceType` | The type of Service Contour will use | `LoadBalancer` |
| `service.nodePorts.http` | Desired nodePort for a Service of type NodePort used for http requests | `nil` (`""` will assign a dynamic node port) |
| `service.nodePorts.https` | Desired nodePort for a Service of type NodePort used for https requests | `nil` (`""` will assign a dynamic node port) |
| `serviceAccounts.create` | Whether the release should create ServiceAccount objects | `true` |
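Parameters can also be supplied in a YAML file passed with `-f`; a minimal sketch (`my-values.yaml` is a hypothetical filename, and the keys mirror the table above):

```bash
# Write a hypothetical values file; keys come from the Configuration table
cat > my-values.yaml <<EOF
contour:
  replicas: 2
hpa:
  create: true
  minReplicas: 2
  maxReplicas: 15
EOF

# Install the chart using the values file
helm install --name my-release stable/contour -f my-values.yaml
```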

Project Contour CRDs

The CRDs are provisioned using crd-install hooks, rather than relying on a separate chart installation. If you already have these CRDs provisioned and don't want to remove them, you can disable the CRD creation by these hooks by passing customResourceDefinitions.create=false (not required if using Helm v3).
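For example, assuming the Contour CRDs are already present in the cluster:

```bash
# Helm v2: disable the crd-install hooks provided by this chart
helm install --name my-release stable/contour \
    --set customResourceDefinitions.create=false

# Helm v3: crd-install hooks are ignored; use --skip-crds to skip CRD creation
helm install my-release stable/contour --skip-crds
```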

Example workload

Start a cluster using Kind by running the command below:

```bash
kind create cluster --name=kind
```

Ensure kubectl configuration is set to the newly created Kind cluster:

```bash
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
```

Ensure tiller has permission to install (not recommended for production):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
```

Install tiller

```bash
helm init --service-account tiller
```

Install or upgrade contour

```bash
helm upgrade --install contour --namespace contour --set serviceType=ClusterIP .
```

If you don't have an application ready to run with Contour, you can explore with kuard.

```bash
kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

This example specifies a default backend for all hosts, so that you can test your Contour install. It's recommended for exploration and testing only, however, because it responds to all requests regardless of the incoming DNS that is mapped. You probably want to run with specific Ingress rules for specific hostnames.
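For host-specific routing, an Ingress with an explicit hostname can be used instead of the catch-all default backend; a minimal sketch (`kuard.example.com` is a placeholder hostname, and the backend service name and port assume the kuard example manifest above):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1   # Ingress API version of the Contour v0.15 era
kind: Ingress
metadata:
  name: kuard
spec:
  rules:
    - host: kuard.example.com    # placeholder hostname
      http:
        paths:
          - backend:
              serviceName: kuard # assumes the Service from the kuard example
              servicePort: 80
EOF
```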

Get the Contour ClusterIP

```bash
$ CLUSTER_IP=$(kubectl -n contour get svc | grep contour | awk '{print $3}')
```

Access Kuard application via internal Contour Service ClusterIP

```bash
docker exec -ti kind-control-plane curl $CLUSTER_IP
```