
# DEPRECATED - nginx-ingress

This chart is deprecated, as development has moved to the upstream ingress-nginx repo. The chart source can be found here: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx

nginx-ingress is an Ingress controller that uses a ConfigMap to store the nginx configuration.

To use it, add the `kubernetes.io/ingress.class: nginx` annotation to your Ingress resources.
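For example, a minimal Ingress routed through this controller might look like the following sketch (the resource names and host are illustrative):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress                      # illustrative name
  annotations:
    kubernetes.io/ingress.class: nginx       # route through this controller
spec:
  rules:
    - host: app.example.com                  # illustrative host
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service   # illustrative backend Service
              servicePort: 80
```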

## TL;DR;

```console
$ helm install stable/nginx-ingress
```

## Introduction

This chart bootstraps an nginx-ingress deployment on a Kubernetes cluster using the Helm package manager.

## Prerequisites

- Kubernetes 1.6+

## Installing the Chart

To install the chart with the release name `my-release`:

```console
$ helm install --name my-release stable/nginx-ingress
```

The command deploys nginx-ingress on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```console
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the nginx-ingress chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| `controller.name` | name of the controller component | `controller` |
| `controller.image.registry` | controller container image registry | `us.gcr.io` |
| `controller.image.repository` | controller container image repository | `k8s-artifacts-prod/ingress-nginx/controller` |
| `controller.image.tag` | controller container image tag | `0.32.0` |
| `controller.image.digest` | controller container image digest | `""` |
| `controller.image.pullPolicy` | controller container image pull policy | `IfNotPresent` |
| `controller.image.runAsUser` | User ID of the controller process. Value depends on the Linux distribution used inside of the container image. | `101` |
| `controller.useComponentLabel` | Whether to add the component label so the HPA can work separately for controller and defaultBackend. Note: don't change this if you have an already running deployment, as it will need the recreation of the controller deployment | `false` |
| `controller.componentLabelKeyOverride` | Allows override of the component label key | `""` |
| `controller.containerPort.http` | The port that the controller container listens on for http connections. | `80` |
| `controller.containerPort.https` | The port that the controller container listens on for https connections. | `443` |
| `controller.config` | nginx ConfigMap entries | none |
| `controller.hostNetwork` | If the nginx deployment / daemonset should run on the host's network namespace. Do not set this when `controller.service.externalIPs` is set and kube-proxy is used, as there will be a port conflict for port 80 | `false` |
| `controller.defaultBackendService` | default 404 backend service; needed only if `defaultBackend.enabled = false` and version < 0.21.0 | `""` |
| `controller.dnsPolicy` | If using `hostNetwork=true`, change to `ClusterFirstWithHostNet`. See pod's dns policy for details | `ClusterFirst` |
| `controller.dnsConfig` | custom pod dnsConfig. See pod's dns config for details | `{}` |
| `controller.reportNodeInternalIp` | If using `hostNetwork=true`, setting `reportNodeInternalIp=true` will pass the flag `report-node-internal-ip-address` to nginx-ingress. This sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. | |
| `controller.electionID` | election ID to use for the status update | `ingress-controller-leader` |
| `controller.extraEnvs` | any additional environment variables to set in the pods | `{}` |
| `controller.extraContainers` | Sidecar containers to add to the controller pod. See LemonLDAP::NG controller as example | `{}` |
| `controller.extraVolumeMounts` | Additional volumeMounts to the controller main container | `{}` |
| `controller.extraVolumes` | Additional volumes to the controller pod | `{}` |
| `controller.extraInitContainers` | Containers, which are run before the app containers are started | `[]` |
| `controller.ingressClass` | name of the ingress class to route through this controller | `nginx` |
| `controller.maxmindLicenseKey` | Maxmind license key to download GeoLite2 Databases. See Accessing and using GeoLite2 database | `""` |
| `controller.scope.enabled` | limit the scope of the ingress controller | `false` (watch all namespaces) |
| `controller.scope.namespace` | namespace to watch for ingress | `""` (use the release namespace) |
| `controller.extraArgs` | Additional controller container arguments | `{}` |
| `controller.kind` | install as Deployment, DaemonSet or Both | `Deployment` |
| `controller.deploymentAnnotations` | annotations to be added to deployment | `{}` |
| `controller.autoscaling.enabled` | If true, creates a Horizontal Pod Autoscaler | `false` |
| `controller.autoscaling.minReplicas` | If autoscaling is enabled, this field sets the minimum replica count | `2` |
| `controller.autoscaling.maxReplicas` | If autoscaling is enabled, this field sets the maximum replica count | `11` |
| `controller.autoscaling.targetCPUUtilizationPercentage` | Target CPU utilization percentage to scale | `"50"` |
| `controller.autoscaling.targetMemoryUtilizationPercentage` | Target memory utilization percentage to scale | `"50"` |
| `controller.daemonset.useHostPort` | If `controller.kind` is `DaemonSet`, this will enable `hostPort` for TCP/80 and TCP/443 | `false` |
| `controller.daemonset.hostPorts.http` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the hostPort | `"80"` |
| `controller.daemonset.hostPorts.https` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the hostPort | `"443"` |
| `controller.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]` |
| `controller.affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}` |
| `controller.terminationGracePeriodSeconds` | how many seconds to wait before terminating a pod | `60` |
| `controller.minReadySeconds` | how many seconds a pod needs to be ready before killing the next, during update | `0` |
| `controller.nodeSelector` | node labels for pod assignment | `{}` |
| `controller.podAnnotations` | annotations to be added to pods | `{}` |
| `controller.podAnnotationConfigChecksum` | add annotation with checksum/config | `false` |
| `controller.deploymentLabels` | labels to add to the deployment metadata | `{}` |
| `controller.podLabels` | labels to add to the pod container metadata | `{}` |
| `controller.podSecurityContext` | Security context policies to add to the controller pod | `{}` |
| `controller.replicaCount` | desired number of controller pods | `1` |
| `controller.minAvailable` | minimum number of available controller pods for PodDisruptionBudget | `1` |
| `controller.resources` | controller pod resource requests & limits | `{}` |
| `controller.priorityClassName` | controller priorityClassName | `nil` |
| `controller.lifecycle` | controller pod lifecycle hooks | `{}` |
| `controller.service.annotations` | annotations for controller service | `{}` |
| `controller.service.labels` | labels for controller service | `{}` |
| `controller.publishService.enabled` | if true, the controller will set the endpoint records on the ingress objects to reflect those on the service | `false` |
| `controller.publishService.pathOverride` | override of the default publish-service name | `""` |
| `controller.service.enabled` | if disabled, no service will be created. This is especially useful when `controller.kind` is set to `DaemonSet` and `controller.daemonset.useHostPorts` is `true` | `true` |
| `controller.service.clusterIP` | internal controller cluster service IP (set to `"-"` to pass an empty value) | `nil` |
| `controller.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the controller service | `false` |
| `controller.service.externalIPs` | controller service external IP addresses. Do not set this when `controller.hostNetwork` is set to `true` and kube-proxy is used, as there will be a port conflict for port 80 | `[]` |
| `controller.service.externalTrafficPolicy` | If `controller.service.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation | `"Cluster"` |
| `controller.service.sessionAffinity` | Enables client IP based session affinity. Must be `ClientIP` or `None` if set. | `""` |
| `controller.service.healthCheckNodePort` | If `controller.service.type` is `NodePort` or `LoadBalancer` and `controller.service.externalTrafficPolicy` is set to `Local`, set this to the managed health-check port the kube-proxy will expose. If blank, a random port in the NodePort range will be assigned | `""` |
| `controller.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `controller.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `controller.service.enableHttp` | if port 80 should be opened for service | `true` |
| `controller.service.enableHttps` | if port 443 should be opened for service | `true` |
| `controller.service.targetPorts.http` | Sets the targetPort that maps to the Ingress' port 80 | `80` |
| `controller.service.targetPorts.https` | Sets the targetPort that maps to the Ingress' port 443 | `443` |
| `controller.service.ports.http` | Sets service http port | `80` |
| `controller.service.ports.https` | Sets service https port | `443` |
| `controller.service.type` | type of controller service to create | `LoadBalancer` |
| `controller.service.nodePorts.http` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 80 | `""` |
| `controller.service.nodePorts.https` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 443 | `""` |
| `controller.service.nodePorts.tcp` | Sets the nodePort for an entry referenced by its key from `tcp` | `{}` |
| `controller.service.nodePorts.udp` | Sets the nodePort for an entry referenced by its key from `udp` | `{}` |
| `controller.service.internal.enabled` | Enables an (additional) internal load balancer | `false` |
| `controller.service.internal.annotations` | Annotations for configuring the additional internal load balancer | `{}` |
| `controller.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `10` |
| `controller.livenessProbe.periodSeconds` | How often to perform the probe | `10` |
| `controller.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `controller.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `controller.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `3` |
| `controller.livenessProbe.port` | The port number that the liveness probe will listen on. | `10254` |
| `controller.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `10` |
| `controller.readinessProbe.periodSeconds` | How often to perform the probe | `10` |
| `controller.readinessProbe.timeoutSeconds` | When the probe times out | `1` |
| `controller.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `controller.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `3` |
| `controller.readinessProbe.port` | The port number that the readiness probe will listen on. | `10254` |
| `controller.metrics.enabled` | if `true`, enable Prometheus metrics | `false` |
| `controller.metrics.service.annotations` | annotations for Prometheus metrics service | `{}` |
| `controller.metrics.service.clusterIP` | cluster IP address to assign to service (set to `"-"` to pass an empty value) | `nil` |
| `controller.metrics.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the metrics service | `false` |
| `controller.metrics.service.externalIPs` | Prometheus metrics service external IP addresses | `[]` |
| `controller.metrics.service.labels` | labels for metrics service | `{}` |
| `controller.metrics.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `controller.metrics.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `controller.metrics.service.servicePort` | Prometheus metrics service port | `9913` |
| `controller.metrics.service.type` | type of Prometheus metrics service to create | `ClusterIP` |
| `controller.metrics.serviceMonitor.enabled` | Set this to `true` to create a ServiceMonitor for the Prometheus operator | `false` |
| `controller.metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so the ServiceMonitor will be discovered by Prometheus | `{}` |
| `controller.metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels. | `false` |
| `controller.metrics.serviceMonitor.namespace` | namespace where the ServiceMonitor resource should be created | the same namespace as nginx ingress |
| `controller.metrics.serviceMonitor.namespaceSelector` | namespaceSelector to configure what namespaces to scrape | will scrape the helm release namespace only |
| `controller.metrics.serviceMonitor.scrapeInterval` | interval between Prometheus scrapes | `30s` |
| `controller.metrics.prometheusRule.enabled` | Set this to `true` to create PrometheusRules for the Prometheus operator | `false` |
| `controller.metrics.prometheusRule.additionalLabels` | Additional labels that can be used so the PrometheusRules will be discovered by Prometheus | `{}` |
| `controller.metrics.prometheusRule.namespace` | namespace where the PrometheusRules resource should be created | the same namespace as nginx ingress |
| `controller.metrics.prometheusRule.rules` | Prometheus rules in YAML format; check values for an example. | `[]` |
| `controller.admissionWebhooks.enabled` | Create Ingress admission webhooks. The validating webhook will check the ingress syntax. | `false` |
| `controller.admissionWebhooks.failurePolicy` | Failure policy for admission webhooks | `Fail` |
| `controller.admissionWebhooks.port` | Admission webhook port | `8080` |
| `controller.admissionWebhooks.service.annotations` | Annotations for admission webhook service | `{}` |
| `controller.admissionWebhooks.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the admission webhook service | `false` |
| `controller.admissionWebhooks.service.clusterIP` | cluster IP address to assign to admission webhook service (set to `"-"` to pass an empty value) | `nil` |
| `controller.admissionWebhooks.service.externalIPs` | Admission webhook service external IP addresses | `[]` |
| `controller.admissionWebhooks.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `controller.admissionWebhooks.service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `controller.admissionWebhooks.service.servicePort` | Admission webhook service port | `443` |
| `controller.admissionWebhooks.service.type` | Type of admission webhook service to create | `ClusterIP` |
| `controller.admissionWebhooks.patch.enabled` | If `true`, will use pre and post install hooks to generate a CA and certificate for the validating webhook endpoint, and patch the created webhooks with the CA. | `true` |
| `controller.admissionWebhooks.patch.image.repository` | Repository to use for the webhook integration jobs | `jettech/kube-webhook-certgen` |
| `controller.admissionWebhooks.patch.image.tag` | Tag to use for the webhook integration jobs | `v1.0.0` |
| `controller.admissionWebhooks.patch.image.digest` | Digest to use for the webhook integration jobs | `""` |
| `controller.admissionWebhooks.patch.image.pullPolicy` | Image pull policy for the webhook integration jobs | `IfNotPresent` |
| `controller.admissionWebhooks.patch.priorityClassName` | Priority class for the webhook integration jobs | `""` |
| `controller.admissionWebhooks.patch.podAnnotations` | Annotations for the webhook job pods | `{}` |
| `controller.admissionWebhooks.patch.nodeSelector` | Node selector for running admission hook patch jobs | `{}` |
| `controller.admissionWebhooks.patch.resources` | Admission webhook pod resource requests & limits | `{}` |
| `controller.customTemplate.configMapName` | configMap containing a custom nginx template | `""` |
| `controller.customTemplate.configMapKey` | configMap key containing the nginx template | `""` |
| `controller.addHeaders` | configMap key:value pairs containing custom headers added before sending the response to the client | `{}` |
| `controller.proxySetHeaders` | configMap key:value pairs containing custom headers added before sending the request to the backends | `{}` |
| `controller.headers` | DEPRECATED, use `controller.proxySetHeaders` instead. | `{}` |
| `controller.updateStrategy` | allows setting of the RollingUpdate strategy | `{}` |
| `controller.configMapNamespace` | The nginx-configmap namespace name | `""` |
| `controller.tcp.configMapNamespace` | The tcp-services-configmap namespace name | `""` |
| `controller.udp.configMapNamespace` | The udp-services-configmap namespace name | `""` |
| `defaultBackend.enabled` | Use default backend component | `true` |
| `defaultBackend.name` | name of the default backend component | `default-backend` |
| `defaultBackend.image.repository` | default backend container image repository | `k8s.gcr.io/defaultbackend-amd64` |
| `defaultBackend.image.tag` | default backend container image tag | `1.5` |
| `defaultBackend.image.digest` | default backend container image digest | `""` |
| `defaultBackend.image.pullPolicy` | default backend container image pull policy | `IfNotPresent` |
| `defaultBackend.image.runAsUser` | User ID of the default backend process. Value depends on the Linux distribution used inside of the container image. By default uses the nobody user. | `65534` |
| `defaultBackend.useComponentLabel` | Whether to add the component label so the HPA can work separately for controller and defaultBackend. Note: don't change this if you have an already running deployment, as it will need the recreation of the defaultBackend deployment | `false` |
| `defaultBackend.componentLabelKeyOverride` | Allows override of the component label key | `""` |
| `defaultBackend.extraArgs` | Additional default backend container arguments | `{}` |
| `defaultBackend.extraEnvs` | any additional environment variables to set in the defaultBackend pods | `[]` |
| `defaultBackend.port` | Http port number | `8080` |
| `defaultBackend.autoscaling.enabled` | If true, creates a Horizontal Pod Autoscaler | `false` |
| `defaultBackend.autoscaling.minReplicas` | If autoscaling is enabled, this field sets the minimum replica count | `1` |
| `defaultBackend.autoscaling.maxReplicas` | If autoscaling is enabled, this field sets the maximum replica count | `2` |
| `defaultBackend.autoscaling.targetCPUUtilizationPercentage` | Target CPU utilization percentage to scale | `"50"` |
| `defaultBackend.autoscaling.targetMemoryUtilizationPercentage` | Target memory utilization percentage to scale | `"50"` |
| `defaultBackend.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `defaultBackend.livenessProbe.periodSeconds` | How often to perform the probe | `10` |
| `defaultBackend.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `defaultBackend.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `defaultBackend.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `3` |
| `defaultBackend.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `0` |
| `defaultBackend.readinessProbe.periodSeconds` | How often to perform the probe | `5` |
| `defaultBackend.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `defaultBackend.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `defaultBackend.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `6` |
| `defaultBackend.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]` |
| `defaultBackend.affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}` |
| `defaultBackend.nodeSelector` | node labels for pod assignment | `{}` |
| `defaultBackend.podAnnotations` | annotations to be added to pods | `{}` |
| `defaultBackend.deploymentLabels` | labels to add to the deployment metadata | `{}` |
| `defaultBackend.podLabels` | labels to add to the pod container metadata | `{}` |
| `defaultBackend.replicaCount` | desired number of default backend pods | `1` |
| `defaultBackend.minAvailable` | minimum number of available default backend pods for PodDisruptionBudget | `1` |
| `defaultBackend.resources` | default backend pod resource requests & limits | `{}` |
| `defaultBackend.priorityClassName` | default backend priorityClassName | `nil` |
| `defaultBackend.podSecurityContext` | Security context policies to add to the default backend | `{}` |
| `defaultBackend.service.annotations` | annotations for default backend service | `{}` |
| `defaultBackend.service.clusterIP` | internal default backend cluster service IP (set to `"-"` to pass an empty value) | `nil` |
| `defaultBackend.service.omitClusterIP` | (Deprecated) To omit the `clusterIP` from the default backend service | `false` |
| `defaultBackend.service.externalIPs` | default backend service external IP addresses | `[]` |
| `defaultBackend.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `defaultBackend.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `defaultBackend.service.type` | type of default backend service to create | `ClusterIP` |
| `defaultBackend.serviceAccount.create` | if `true`, create a backend service account. Only useful if you need a pod security policy to run the backend. | `true` |
| `defaultBackend.serviceAccount.name` | The name of the backend service account to use. If not set and `create` is `true`, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. | `""` |
| `imagePullSecrets` | name of Secret resource containing private registry credentials | `nil` |
| `rbac.create` | if `true`, create & use RBAC resources | `true` |
| `rbac.scope` | if `true`, do not create & use clusterrole and -binding. Set to `true` in combination with `controller.scope.enabled=true` to disable load-balancer status updates and scope the ingress entirely. | `false` |
| `podSecurityPolicy.enabled` | if `true`, create & use Pod Security Policy resources | `false` |
| `serviceAccount.create` | if `true`, create a service account for the controller | `true` |
| `serviceAccount.name` | The name of the controller service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | `""` |
| `serviceAccount.annotations` | Annotations for the service account. Only used if `create` is `true`. | `""` |
| `revisionHistoryLimit` | The number of old history revisions to retain to allow rollback. | `10` |
| `tcp` | TCP service key:value pairs. The value is evaluated as a template. | `{}` |
| `udp` | UDP service key:value pairs. The value is evaluated as a template. | `{}` |
| `releaseLabelOverride` | If provided, the value will be used as the release label instead of `.Release.Name` | `""` |
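For example, the `tcp` and `udp` maps at the end of the table expose arbitrary TCP/UDP services through the controller; each entry maps an external port to a `namespace/service:port` target (a sketch; the Services named here are illustrative):

```yaml
tcp:
  8080: "default/example-tcp-service:9000"   # illustrative Service
udp:
  53: "kube-system/kube-dns:53"              # illustrative Service
```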

These parameters can be passed via Helm's `--set` option:

```console
$ helm install stable/nginx-ingress --name my-release \
    --set controller.metrics.enabled=true
```

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:

```console
$ helm install stable/nginx-ingress --name my-release -f values.yaml
```
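As a sketch, such a `values.yaml` might override a handful of the parameters from the table above (the values shown are illustrative):

```yaml
controller:
  replicaCount: 2          # run two controller pods
  service:
    type: LoadBalancer     # expose via a cloud load balancer (the default)
  metrics:
    enabled: true          # expose Prometheus metrics
defaultBackend:
  enabled: true            # keep the default 404 backend
```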

A useful trick for debugging ingress issues is to increase the nginx log level as described here:

```console
$ helm install stable/nginx-ingress --set controller.extraArgs.v=2
```

> **Tip**: You can use the default `values.yaml`

## PodDisruptionBudget

Note that the PodDisruptionBudget resource will only be defined if `replicaCount` is greater than one; otherwise it would make it impossible to evacuate a node. See gh issue #7127 for more info.
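So a minimal configuration that actually yields a PodDisruptionBudget is, for example:

```yaml
controller:
  replicaCount: 2   # the PodDisruptionBudget is only rendered when this is > 1
  minAvailable: 1   # at most one controller pod may be evicted at a time
```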

## Prometheus Metrics

The Nginx ingress controller can export Prometheus metrics.

```console
$ helm install stable/nginx-ingress --name my-release \
    --set controller.metrics.enabled=true
```

You can add Prometheus annotations to the metrics service using `controller.metrics.service.annotations`. Alternatively, if you use the Prometheus Operator, you can enable ServiceMonitor creation using `controller.metrics.serviceMonitor.enabled`.
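For example, for a plain Prometheus that discovers scrape targets via the conventional `prometheus.io/*` annotations, something like the following sketch should work (this assumes your Prometheus is configured to honor that convention):

```yaml
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"   # opt the metrics service into scraping
        prometheus.io/port: "9913"     # the chart's default metrics servicePort
```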

## nginx-ingress nginx_status page/stats server

Previous versions of this chart had a `controller.stats.*` configuration block, which is now obsolete due to the following changes in the nginx ingress controller:

- in 0.16.1, the vts (virtual host traffic status) dashboard was removed
- in 0.23.0, the status page at port 18080 is now a unix socket webserver only available at localhost. You can use `curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status` inside the controller container to access it locally (see the example below), or use the snippet from the nginx-ingress changelog to re-enable the http server
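For example, assuming a controller pod name like the one below (illustrative), the status output can be fetched with:

```console
$ kubectl exec my-release-nginx-ingress-controller-xxxxx -- \
    curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status
```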

## ExternalDNS Service configuration

Add an ExternalDNS annotation to the LoadBalancer service:

```yaml
controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.
```

## AWS L7 ELB with SSL Termination

Annotate the controller as shown in the nginx-ingress l7 patch:

```yaml
controller:
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:XX-XXXX-X:XXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
```

## AWS L4 NLB with SSL Redirection

The `ssl-redirect` and `force-ssl-redirect` flags do not work with an AWS Network Load Balancer. You need to turn them off and add an additional port with a `server-snippet` to make redirection work.

NLB port 80 will be mapped to nginx container port 80, and NLB port 443 will be mapped to nginx container port 8000 (`special`). We then use `$server_port` to manage the redirection on port 80:

```yaml
controller:
  config:
    ssl-redirect: "false" # we use the `special` port to control ssl redirection
    server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }
  containerPort:
    http: 80
    https: 443
    special: 8000
  service:
    targetPorts:
      http: http
      https: special
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your-arn"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```

## AWS route53-mapper

To configure the LoadBalancer service with the route53-mapper addon, add the `domainName` annotation and `dns` label:

```yaml
controller:
  service:
    labels:
      dns: "route53"
    annotations:
      domainName: "kubernetes-example.com"
```

## Additional internal load balancer

This setup is useful when you need both external and internal load balancers but don't want to have multiple ingress controllers and multiple ingress objects per application.

By default, the ingress object will point to the external load balancer address, but if correctly configured, you can make use of the internal one if the URL you are looking up resolves to the internal load balancer's URL.

You'll need to set both of the following values:

- `controller.service.internal.enabled`
- `controller.service.internal.annotations`

If either of them is missing, the internal load balancer will not be deployed. For example, you may have `controller.service.internal.enabled=true` but no annotations set; in this case, no action will be taken.

`controller.service.internal.annotations` varies with the cloud service you're using.

Example for AWS

```yaml
controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal ELB
        service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
        # Any other annotation can be declared here.
```

Example for GCE

```yaml
controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB
        cloud.google.com/load-balancer-type: "Internal"
        # Any other annotation can be declared here.
```

A use case for this scenario is having a split-view DNS setup where the public zone CNAME records point to the external balancer URL while the private zone CNAME records point to the internal balancer URL. This way, you only need one Ingress kubernetes object.

## Ingress Admission Webhooks

With nginx-ingress-controller version 0.25+, the nginx ingress controller pod exposes an endpoint that integrates with the `ValidatingWebhookConfiguration` Kubernetes feature to prevent bad ingress objects from being added to the cluster.

Note that nginx-ingress-controller 0.25.* only works with Kubernetes 1.14+; 0.26 fixes this issue.
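A minimal sketch for enabling the validating webhook through this chart:

```yaml
controller:
  admissionWebhooks:
    enabled: true
    failurePolicy: Fail   # reject invalid Ingress objects outright (the chart default)
```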

## Helm error when upgrading: spec.clusterIP: Invalid value: ""

If you are upgrading this chart from a version between 0.31.0 and 1.2.2 then you may get an error like this:

```console
Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```

Detail of how and why is covered in this issue, but to resolve it you can set `xxxx.service.omitClusterIP` to `true`, where `xxxx` is the service referenced in the error.
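For example, if the error refers to the controller service (the release name here is illustrative):

```console
$ helm upgrade my-release stable/nginx-ingress \
    --set controller.service.omitClusterIP=true
```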

As of version 1.26.0 of this chart, simply not providing any `clusterIP` value avoids this: `clusterIP: ""` is no longer rendered, so the `spec.clusterIP: Invalid value: "": field is immutable` error will not occur.

## Using custom default backend

The default backend can be used to serve custom error pages when service endpoints are not available. This requires a custom webserver image built with a configuration similar to the one below.

```nginx
# Status / health server block.
# Note: nginx evaluates allow/deny rules in order, so the `allow all;`
# below makes these locations reachable from anywhere, and the trailing
# `deny all;` is never reached; drop `allow all;` to restrict access to
# localhost only.
server {
  listen 80 default_server;

  location /nginx_status {
    stub_status on;
    access_log  off;
    allow 127.0.0.1;
    allow all;
    deny all;
  }

  location /healthz {
    stub_status on;
    access_log  off;
    allow 127.0.0.1;
    allow all;
    deny all;
  }
}

###
# DefaultBackend application handler block: serves a static maintenance
# response for the application's hostnames, falling back to 502.
server {
  listen 80;
  server_name *.example-app.com example-app.com;

  access_log  /var/log/nginx/access.log main;
  root /usr/share/nginx/html;

  location / {
    add_header Content-Type application/json;
    add_header Cache-Control "no-cache, no-store" always;
    try_files /maintenance.json  =502;
  }
}
```
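Once such an image is built and pushed, the chart can be pointed at it, for example (the repository and tag are illustrative):

```yaml
defaultBackend:
  enabled: true
  image:
    repository: registry.example.com/custom-default-backend   # illustrative image
    tag: "0.1.0"
  port: 80   # the nginx config above listens on 80, not the chart default 8080
```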