
stable/coredns/README.md


⚠️ Repo Archive Notice

As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.

CoreDNS

CoreDNS is a DNS server that chains plugins and provides DNS Services

DEPRECATION NOTICE

This chart is deprecated and no longer supported.

TL;DR;

```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```

Introduction

This chart bootstraps a CoreDNS deployment on a Kubernetes cluster using the Helm package manager. The chart provides DNS services and can be deployed in multiple configurations to support the scenarios listed below:

  • CoreDNS as a cluster DNS service and a drop-in replacement for Kube/SkyDNS. This is the default mode: CoreDNS is deployed as a cluster-service in the kube-system namespace. This mode is chosen by setting isClusterService to true.
  • CoreDNS as an external DNS service. In this mode CoreDNS is deployed like any Kubernetes app, in a user-specified namespace. The CoreDNS service can be exposed outside the cluster using either the NodePort or LoadBalancer Service type. This mode is chosen by setting isClusterService to false.
  • CoreDNS as an external DNS provider for Kubernetes federation. This is a sub-case of the 'external dns service' mode which uses the etcd plugin as the CoreDNS backend. This deployment mode has a dependency on the etcd-operator chart, which needs to be pre-installed.
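For example, the external-service mode above can be selected with a small values override (a sketch using the chart's isClusterService and serviceType parameters; the filename is illustrative):

```yaml
# values-external.yaml -- run CoreDNS as a regular app instead of the cluster DNS
isClusterService: false   # deploy as a normal Kubernetes app, not a kube-system cluster-service
serviceType: NodePort     # expose the DNS service outside the cluster (or use LoadBalancer)
```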

Prerequisites

  • Kubernetes 1.10 or later

Installing the Chart

The chart can be installed as follows:

```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```

The command deploys CoreDNS on the Kubernetes cluster in the default configuration. The configuration section lists various ways to override default configuration during deployment.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the coredns deployment:

```console
$ helm delete coredns
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `image.repository` | The image repository to pull from | `coredns/coredns` |
| `image.tag` | The image tag to pull from | `v1.7.1` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `replicaCount` | Number of replicas | `1` |
| `resources.limits.cpu` | Container maximum CPU | `100m` |
| `resources.limits.memory` | Container maximum memory | `128Mi` |
| `resources.requests.cpu` | Container requested CPU | `100m` |
| `resources.requests.memory` | Container requested memory | `128Mi` |
| `serviceType` | Kubernetes Service type | `ClusterIP` |
| `prometheus.service.enabled` | Set this to `true` to create a Service for Prometheus metrics | `false` |
| `prometheus.service.annotations` | Annotations to add to the metrics Service | `{prometheus.io/scrape: "true", prometheus.io/port: "9153"}` |
| `prometheus.monitor.enabled` | Set this to `true` to create a ServiceMonitor for Prometheus operator | `false` |
| `prometheus.monitor.additionalLabels` | Additional labels that can be used so the ServiceMonitor will be discovered by Prometheus | `{}` |
| `prometheus.monitor.namespace` | Selector to select which namespaces the Endpoints objects are discovered from | `""` |
| `service.clusterIP` | IP address to assign to the service | `""` |
| `service.loadBalancerIP` | IP address to assign to the load balancer (if supported) | `""` |
| `service.externalIPs` | External IP addresses | `[]` |
| `service.externalTrafficPolicy` | Enable client source IP preservation | `[]` |
| `service.annotations` | Annotations to add to the service | `{}` |
| `serviceAccount.create` | If `true`, create & use a serviceAccount | `false` |
| `serviceAccount.name` | If not set & `create` is `true`, use template fullname | |
| `rbac.create` | If `true`, create & use RBAC resources | `true` |
| `rbac.pspEnable` | Specifies whether a PodSecurityPolicy should be created | `false` |
| `isClusterService` | Specifies whether the chart should be deployed as a cluster-service or a normal k8s app | `true` |
| `priorityClassName` | Name of the Priority Class to assign to pods | `""` |
| `servers` | Configuration for CoreDNS and plugins | See `values.yaml` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Tolerations for pod assignment | `[]` |
| `zoneFiles` | Configure custom Zone files | `[]` |
| `extraVolumes` | Optional array of volumes to create | `[]` |
| `extraVolumeMounts` | Optional array of volumes to mount inside the CoreDNS container | `[]` |
| `extraSecrets` | Optional array of secrets to mount inside the CoreDNS container | `[]` |
| `customLabels` | Optional labels for Deployment(s), Pod, Service, ServiceMonitor objects | `{}` |
| `rollingUpdate.maxUnavailable` | Maximum number of unavailable replicas during a rolling update | `1` |
| `rollingUpdate.maxSurge` | Maximum number of pods created above the desired number of pods | `25%` |
| `podDisruptionBudget` | Optional PodDisruptionBudget | `{}` |
| `podAnnotations` | Optional Pod-only annotations | `{}` |
| `terminationGracePeriodSeconds` | Optional duration in seconds the pod needs to terminate gracefully | `30` |
| `preStopSleep` | Definition of the Kubernetes preStop hook executed before Pod termination | `{}` |
| `hpa.enabled` | Enable the HPA autoscaler instead of the proportional one | `false` |
| `hpa.minReplicas` | HPA minimum number of CoreDNS replicas | `1` |
| `hpa.maxReplicas` | HPA maximum number of CoreDNS replicas | `2` |
| `hpa.metrics` | Metrics definitions used by the HPA to scale up and down | `{}` |
| `autoscaler.enabled` | Optionally enable a cluster-proportional-autoscaler for CoreDNS | `false` |
| `autoscaler.coresPerReplica` | Number of cores in the cluster per CoreDNS replica | `256` |
| `autoscaler.nodesPerReplica` | Number of nodes in the cluster per CoreDNS replica | `16` |
| `autoscaler.min` | Min size of replicaCount | `0` |
| `autoscaler.max` | Max size of replicaCount | `0` (aka no max) |
| `autoscaler.includeUnschedulableNodes` | Should the replicas scale based on the total number of nodes or only schedulable ones | `false` |
| `autoscaler.preventSinglePointFailure` | If `true`, does not allow single points of failure to form | `true` |
| `autoscaler.image.repository` | The image repository to pull the autoscaler from | `k8s.gcr.io/cluster-proportional-autoscaler-amd64` |
| `autoscaler.image.tag` | The image tag to pull the autoscaler from | `1.7.1` |
| `autoscaler.image.pullPolicy` | Image pull policy for the autoscaler | `IfNotPresent` |
| `autoscaler.priorityClassName` | Optional priority class for the autoscaler pod. `priorityClassName` is used if not set | `""` |
| `autoscaler.affinity` | Affinity settings for pod assignment for the autoscaler | `{}` |
| `autoscaler.nodeSelector` | Node labels for pod assignment for the autoscaler | `{}` |
| `autoscaler.tolerations` | Tolerations for pod assignment for the autoscaler | `[]` |
| `autoscaler.resources.limits.cpu` | Container maximum CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.limits.memory` | Container maximum memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.resources.requests.cpu` | Container requested CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.requests.memory` | Container requested memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.configmap.annotations` | Annotations to add to the autoscaler config map. For example, to stop CI renaming them | `{}` |
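The servers parameter is the most substantive of these: it drives the Corefile the chart renders. A hedged sketch of its shape (plugin names and parameters here are illustrative; see the chart's values.yaml for the authoritative structure and defaults):

```yaml
servers:
- zones:
  - zone: .               # serve all zones
  port: 53
  plugins:
  - name: errors          # log errors to stdout
  - name: health          # liveness endpoint
  - name: kubernetes      # serve cluster records from the Kubernetes API
    parameters: cluster.local in-addr.arpa ip6.arpa
    configBlock: |-
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
  - name: forward         # forward everything else to upstream resolvers
    parameters: . /etc/resolv.conf
```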

See values.yaml for configuration notes. Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

```console
$ helm install --name coredns \
  --set rbac.create=false \
    stable/coredns
```

The above command disables automatic creation of RBAC rules.

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

```console
$ helm install --name coredns -f values.yaml stable/coredns
```

Tip: You can use the default values.yaml
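As an illustration, a values file overriding a few of the parameters from the table above might look like this (the numbers are examples, not the chart's defaults):

```yaml
replicaCount: 2             # run two CoreDNS replicas
resources:
  limits:
    cpu: 200m               # raise the default 100m CPU limit
    memory: 256Mi
prometheus:
  service:
    enabled: true           # create the metrics Service on port 9153
```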

Caveats

The chart automatically determines which protocols to listen on based on the protocols you define in your zones, so a single port could potentially serve both TCP and UDP. Some cloud environments, such as GCE or Azure Container Service, cannot create external load balancers that mix TCP and UDP protocols. When deploying CoreDNS with serviceType="LoadBalancer" on such cloud environments, make sure you do not attempt to use both protocols at the same time.
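A minimal override for exposing CoreDNS behind a cloud load balancer, under that constraint (a sketch; whether TCP or UDP is requested still follows from the protocols your zone definitions imply):

```yaml
isClusterService: false     # deploy as a normal app rather than the cluster DNS
serviceType: LoadBalancer   # cloud LB; keep the zones on a single protocol (UDP *or* TCP)
```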

Autoscaling

By setting autoscaler.enabled = true a cluster-proportional-autoscaler will be deployed. This defaults to one CoreDNS replica for every 256 cores, or every 16 nodes, in the cluster. These ratios can be changed with autoscaler.coresPerReplica and autoscaler.nodesPerReplica. When the cluster uses large nodes (with more cores), coresPerReplica should dominate; with small nodes, nodesPerReplica should dominate.

This also creates a ServiceAccount, ClusterRole, and ClusterRoleBinding for the autoscaler deployment.

replicaCount is ignored if this is enabled.
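The proportional-autoscaler settings described above can be combined like so (illustrative numbers, not the defaults):

```yaml
autoscaler:
  enabled: true
  coresPerReplica: 128   # one replica per 128 cluster cores...
  nodesPerReplica: 8     # ...or per 8 nodes, whichever yields more replicas
  min: 2                 # never scale below two replicas
```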

By setting hpa.enabled = true a Horizontal Pod Autoscaler is enabled for the CoreDNS deployment. This can scale the number of replicas based on metrics such as CPU utilization, memory utilization, or custom metrics.
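Assuming hpa.metrics takes standard Kubernetes HorizontalPodAutoscaler metric specs (the autoscaling/v2beta2-style shape), a CPU-based configuration might look like this sketch:

```yaml
hpa:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale when average CPU exceeds 70% of requests
```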