The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of type: LoadBalancer.
If you don't have a cluster with that capability, see the Running without a Kubernetes LoadBalancer section.
Contour requires a secret containing TLS certificates that are used to secure the gRPC communication between Contour and Envoy.
This secret can be auto-generated by the Contour certgen job or provided by an administrator.
Traffic must be forwarded to Envoy, typically via a Service of type: LoadBalancer.
All other requirements, such as RBAC permissions and configuration details, are provided or have good defaults for most installations.
It is recommended that resource requests and limits be set on all Contour and Envoy containers. The example YAML manifests used in the [Getting Started][8] guide do not include these, because the appropriate values can vary widely from user to user. The table below summarizes the Contour and Envoy containers, and provides some reasonable resource requests to start with (note that these should be adjusted based on observed usage and expected load):
| Workload | Container | Request (mem) | Request (cpu) |
|---|---|---|---|
| deployment/contour | contour | 128Mi | 250m |
| daemonset/envoy | envoy | 256Mi | 500m |
| daemonset/envoy | shutdown-manager | 50Mi | 25m |
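As an illustration, a minimal sketch of applying the starting values from the table to the contour container might look like the following (most Deployment fields are elided; tune the values for your environment):

```yaml
# Sketch only: resource requests for the contour container, using the starting
# values from the table above. Most Deployment fields are elided for brevity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contour
  namespace: projectcontour
spec:
  template:
    spec:
      containers:
      - name: contour
        resources:
          requests:
            cpu: 250m
            memory: 128Mi
```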
The recommended installation is for Contour to run as a Deployment and Envoy to run as a DaemonSet.
The example DaemonSet places a single instance of Envoy on each node in the cluster and attaches to hostPorts on each node.
This model allows for simple scaling of Envoy instances as well as ensuring even distribution of instances across the cluster.
The [example daemonset manifest][2] or [Contour Gateway Provisioner][12] will create an installation based on these recommendations.
Note: If the cluster is scaled down, connections can be lost, since Kubernetes DaemonSets do not follow proper preStop hooks.
An alternative Envoy deployment model uses a Kubernetes Deployment with podAntiAffinity configured to mirror the DaemonSet deployment model.
A benefit of this model over the DaemonSet version is that when a node is removed from the cluster, the proper shutdown events are available, so connections can be cleanly drained from Envoy before it terminates.
The [example deployment manifest][14] will create an installation based on these recommendations.
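For reference, the anti-affinity portion of such a Deployment looks roughly like the sketch below (the replica count and labels are illustrative, and the containers section is elided; see the example deployment manifest for the full spec):

```yaml
# Sketch only: spreading Envoy pods across nodes with preferred pod anti-affinity.
# Replica count and labels are illustrative; containers and volumes are elided.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy
  namespace: projectcontour
spec:
  replicas: 3
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: envoy
              topologyKey: kubernetes.io/hostname
```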
To retrieve the IP address or DNS name assigned to your Contour deployment, run:
```bash
$ kubectl get -n projectcontour service envoy -o wide
```
On AWS, for example, the response looks like:
```
NAME      CLUSTER-IP     EXTERNAL-IP                                                                     PORT(S)        AGE   SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com    80:30274/TCP   3h    app=contour
```
Depending on your cloud provider, the EXTERNAL-IP value is an IP address, or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.
Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections. See the [instructions for enabling the PROXY protocol][4].
On Minikube, to get the IP address of the Contour service run:
```bash
$ minikube service -n projectcontour envoy --url
```
The response is always an IP address, for example http://192.168.99.100:30588. This is used as CONTOUR_IP in the rest of the documentation.
When creating the cluster on Kind, pass a custom configuration to allow Kind to expose port 80/443 to your local host:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```
Then run the create cluster command passing the config file as a parameter.
This file is in the examples/kind directory:
```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```
Then, your CONTOUR_IP (as used below) will just be localhost:80.
Note: We've created a public DNS record (local.projectcontour.io) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster.
The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy kuard with the following command:
```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```
Then monitor the progress of the deployment with:
```bash
$ kubectl get po,svc,ing -l app=kuard
```
You should see something like:
```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```
... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (*).
In your browser, navigate to the IP or DNS address of the Contour Service to interact with the demo application.
To test your Contour deployment with [HTTPProxy][9], run the following command:
```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```
Then monitor the progress of the deployment with:
```bash
$ kubectl get po,svc,httpproxy -l app=kuard
```
You should see something like:
```
NAME                        READY   STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1     Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1     Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1     Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                FQDN          TLS SECRET                  STATUS   STATUS DESCRIPTION
httpproxy.projectcontour.io/kuard   kuard.local   <SECRET NAME IF TLS USED>   valid    valid HTTPProxy
```
... showing that there are three Pods, one Service, and one HTTPProxy.
In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:
```bash
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```
If you can't or don't want to use a Service of type: LoadBalancer, there are other ways to run Contour.
If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [02-service-envoy.yaml][7] file and set type to NodePort.
This will have every node in your cluster listen on the resultant port and forward traffic to Contour.
That port can be discovered by taking the second number listed in the PORT(S) column when listing the service, for example 30274 in 80:30274/TCP.
Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.
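A sketch of the modified Service is shown below. The nodePort values are only illustrative (omit them to let Kubernetes choose), and the example manifest remains the authoritative source for the port definitions:

```yaml
# Sketch only: switching the envoy Service from LoadBalancer to NodePort.
# nodePort values are illustrative; omit them to let Kubernetes pick ports.
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: projectcontour
spec:
  type: NodePort
  selector:
    app: envoy
  ports:
  - name: http
    port: 80
    targetPort: 8080
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 8443
    nodePort: 30443
```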
You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour's examples utilize this model in the /examples directory.
To configure this, set hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet on your Envoy pod definition.
Next, pass --envoy-service-http-port=80 --envoy-service-https-port=443 to the contour serve command, which instructs Envoy to listen directly on ports 80/443 on each host that it runs on.
This is best paired with a DaemonSet (perhaps paired with Node affinity) to ensure that a single instance of Contour runs on each Node.
See the [AWS NLB tutorial][10] as an example.
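Putting those pieces together, a rough sketch of the host-networking fields on the Envoy DaemonSet looks like this (containers, volumes, and the remaining contour serve flags are elided; see the example manifests for the full spec):

```yaml
# Sketch only: host-networking fields on the Envoy DaemonSet pod spec.
# Containers and volumes are elided for brevity.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy
  namespace: projectcontour
spec:
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      hostNetwork: true                     # bind Envoy directly to the node's network
      dnsPolicy: ClusterFirstWithHostNet    # keep resolving cluster DNS names while using host networking
```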
You can run Contour with certain features disabled by passing the --disable-feature flag to the contour serve command.
The flag is used to disable the informer for a custom resource, effectively making the corresponding CRD optional in the cluster.
You can provide the flag multiple times.
For example, to disable the ExtensionService CRD, use the flag as follows: --disable-feature=extensionservices.
See the [configuration section entry][19] for all options.
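For example, in the contour Deployment the flag is added alongside the other serve arguments, roughly like the fragment below (most of the example Deployment's arguments are elided):

```yaml
# Sketch only: the contour container's arguments with the ExtensionService
# informer disabled. The example Deployment passes additional arguments.
containers:
- name: contour
  command: ["contour"]
  args:
  - serve
  - --incluster
  - --disable-feature=extensionservices
```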
At times you may need to upgrade Contour, the version of Envoy, or both.
The included shutdown-manager watches Envoy for open connections while draining and signals to Kubernetes when it is safe to delete Envoy pods during this process.
See the [redeploy envoy][11] docs for more information about how to avoid dropping active connections to Envoy. Also see the [upgrade guides][15] for steps to roll out a new version of Contour.
It's possible to run multiple instances of Contour within a single Kubernetes cluster.
This can be useful for separating external vs. internal ingress, for having separate ingress controllers for different ingress classes, and more.
Each Contour instance can also be configured via the --watch-namespaces flag to handle only its own namespaces. This allows the Kubernetes RBAC objects to be restricted further.
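For example, an instance limited to two hypothetical namespaces, team-a and team-b, could pass the flag roughly like this (other arguments elided):

```yaml
# Sketch only: restricting one Contour instance to the hypothetical
# namespaces team-a and team-b via --watch-namespaces.
containers:
- name: contour
  command: ["contour"]
  args:
  - serve
  - --incluster
  - --watch-namespaces=team-a,team-b
```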
The recommended way to deploy multiple Contour instances is to put each instance in its own namespace. This avoids most naming conflicts that would otherwise occur, and provides better logical separation between the instances. However, it is also possible to deploy multiple instances in a single namespace if needed; this approach requires more modifications to the example manifests to function properly. Each approach is described in detail below, using the [examples/contour][17] directory's manifests for reference.
To run each instance in its own namespace, you generally need to update the namespace of all resources, as well as give unique names to cluster-scoped resources to avoid conflicts. Working through the example manifests:

- `00-common.yaml`:
  - update the name of the `Namespace`
  - update the namespace of the `ServiceAccounts`
- `01-contour-config.yaml`:
  - update the namespace of the `ConfigMap`
  - if the configuration references any secrets (e.g. `fallback-certificate`, `envoy-client-certificate`), ensure those point to the correct namespace as well.
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the namespace of the `ServiceAccount` subject within the `RoleBinding`
- `02-role-contour.yaml`:
  - update the name of the `ClusterRole` to be unique
  - update the namespace of the `Role`
- `02-rbac.yaml`:
  - update the name of the `ClusterRoleBinding` to be unique
  - update the namespace of the `RoleBinding`
  - update the namespace of the `ServiceAccount` subject within both resources
  - update the `roleRef`s to match the names used in `02-role-contour.yaml`
- `02-service-contour.yaml`:
  - update the namespace of the `Service`
- `02-service-envoy.yaml`:
  - update the namespace of the `Service`
- `03-contour.yaml`:
  - update the namespace of the `Deployment`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class.
- `03-envoy.yaml`:
  - update the namespace of the `DaemonSet`
  - remove the `hostPort` definitions from the container (otherwise, these would conflict between the two instances)

To run both instances in a single namespace, you instead need to give unique names to all resources to avoid conflicts, and update all resource references to use the correct names. Again working through the example manifests:
- `00-common.yaml`:
  - update the names of the `ServiceAccounts` to be unique
- `01-contour-config.yaml`:
  - update the name of the `ConfigMap` to be unique
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the name of the `Role` within the `RoleBinding`'s `roleRef` to match the unique name used for the `Role`
  - update the name of the `ServiceAccount` within the `RoleBinding`'s `subjects` to match the unique name used for the `ServiceAccount`
  - add an argument to the `Job`, `--secrets-name-suffix=<unique suffix>`, so the generated TLS secrets have unique names
  - update the name of the `Job` to be unique
- `02-role-contour.yaml`:
  - update the names of the `ClusterRole` and `Role` to be unique
- `02-rbac.yaml`:
  - update the names of the `ClusterRoleBinding` and `RoleBinding` to be unique
  - update the `roleRef`s to reference the unique `Role` and `ClusterRole` names used in `02-role-contour.yaml`
  - update the `subjects` to reference the unique `ServiceAccount` name used in `00-common.yaml`
- `02-service-contour.yaml`:
  - update the name of the `Service` to be unique (this name is referenced in `03-contour.yaml`, below)
- `02-service-envoy.yaml`:
  - update the name of the `Service` to be unique (this name is referenced in `03-envoy.yaml`, below)
- `03-contour.yaml`:
  - update the name of the `Deployment` to be unique
  - update the pod labels and label selectors to line up with the unique `Service` defined in `02-service-contour.yaml`
  - update the `serviceAccountName` to reference the unique `ServiceAccount` name used in `00-common.yaml`
  - update the `contourcert` volume to reference the unique Secret name generated from `02-job-certgen.yaml` (e.g. `contourcert<unique-suffix>`)
  - update the `contour-config` volume to reference the unique `ConfigMap` name used in `01-contour-config.yaml`
  - add an argument to the container, `--leader-election-resource-name=<unique lease name>`, so this Contour instance uses a separate leader election Lease
  - add an argument to the container, `--envoy-service-name=<unique envoy service name>`, referencing the unique name used in `02-service-envoy.yaml`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class.
- `03-envoy.yaml`:
  - update the name of the `DaemonSet` to be unique
  - update the pod labels and label selectors to line up with the unique `Service` defined in `02-service-envoy.yaml`
  - update the `--xds-address` argument to the initContainer to use the unique name of the contour `Service` from `02-service-contour.yaml`
  - update the `serviceAccountName` to reference the unique `ServiceAccount` name used in `00-common.yaml`
  - update the `envoycert` volume to reference the unique Secret name generated from `02-job-certgen.yaml` (e.g. `envoycert<unique-suffix>`)
  - remove the `hostPort` definitions from the container (otherwise, these would conflict between the two instances)

The Contour Gateway Provisioner also supports deploying multiple instances of Contour, either in the same namespace or in different namespaces.
See [Getting Started with the Gateway provisioner][16] for more information on getting started with the Gateway provisioner.
To deploy multiple Contour instances, you create multiple Gateways, either in the same namespace or in different namespaces.
Note that although the provisioning request itself is made via a Gateway API resource (Gateway), this method of installation still allows you to use any of the supported APIs for defining virtual hosts and routes: Ingress, HTTPProxy, or Gateway API's HTTPRoute and TLSRoute.
If you are using Ingress or HTTPProxy, you will likely want to assign each Contour instance a different ingress class, so they each handle different subsets of Ingress/HTTPProxy resources.
To do this, [create two separate GatewayClasses][18], each with a different ContourDeployment parametersRef.
The ContourDeployment specs should look like:
```yaml
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-1
spec:
  runtimeSettings:
    ingress:
      classNames:
      - ingress-class-1
---
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-2
spec:
  runtimeSettings:
    ingress:
      classNames:
      - ingress-class-2
```
Then create each Gateway with the appropriate spec.gatewayClassName.
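For illustration, a pair of such Gateways might look like the sketch below; the Gateway names, GatewayClass names, and listener details are assumptions, so adjust them to match your GatewayClasses:

```yaml
# Sketch only: two Gateways, each selecting a different GatewayClass so the
# provisioner creates two independent Contour instances. Names and listeners
# are illustrative.
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  namespace: projectcontour
  name: contour-1
spec:
  gatewayClassName: contour-class-1   # GatewayClass whose parametersRef is ingress-class-1
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  namespace: projectcontour
  name: contour-2
spec:
  gatewayClassName: contour-class-2   # GatewayClass whose parametersRef is ingress-class-2
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
```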
If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation kubernetes.io/ingress.class: "contour" on all ingresses that you would like Contour to claim.
You can customize the class name with the --ingress-class-name flag at runtime. (A comma-separated list of class names is allowed.)
If the kubernetes.io/ingress.class annotation is present with a value other than "contour", Contour will ignore that ingress.
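As an example, an Ingress that Contour should claim under the default class name might be annotated like this (the hostname and the backend Service my-service are hypothetical):

```yaml
# Sketch only: an Ingress annotated so Contour (with the default class name)
# claims it. The hostname and backend Service are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "contour"
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```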
To remove Contour or the Contour Gateway Provisioner from your cluster, delete the namespace:
```bash
$ kubectl delete ns projectcontour
```
Note: Your namespace may differ from above.
[2]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour.yaml
[3]: #host-networking
[4]: guides/proxy-proto.md
[5]: https://github.com/kubernetes-up-and-running/kuard
[7]: {{< param github_url>}}/tree/{{< param branch >}}/examples/contour/02-service-envoy.yaml
[8]: /getting-started
[9]: config/fundamentals.md
[10]: guides/deploy-aws-nlb.md
[11]: redeploy-envoy.md
[12]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour-gateway-provisioner.yaml
[13]: https://projectcontour.io/resources/deprecation-policy/
[14]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour-deployment.yaml
[15]: /resources/upgrading/
[16]: https://projectcontour.io/getting-started/#option-3-contour-gateway-provisioner-alpha
[17]: {{< param github_url>}}/tree/{{< param branch >}}/examples/contour
[18]: guides/gateway-api/#next-steps
[19]: configuration.md