⚠️ Repo Archive Notice

As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.

Sentry

Sentry is a cross-platform crash reporting and aggregation platform.

This Helm chart is neither official nor maintained by Sentry itself.


Deprecation Warning

As part of the deprecation timeline, another repository has taken over development of this chart.

Note: this repository supports Sentry 10.

Please make PRs / Issues here from now on.


TL;DR

```console
$ helm install --wait stable/sentry
```

Introduction

This chart bootstraps a Sentry deployment on a Kubernetes cluster using the Helm package manager.

It also optionally packages PostgreSQL and Redis, which are required by Sentry.

Prerequisites

  • Kubernetes 1.4+ with Beta APIs enabled
  • helm >= v2.3.0, to run "weighted" hooks in the right order
  • PV provisioner support in the underlying infrastructure (if persistence is enabled)

Installing the Chart

To install the chart with the release name my-release:

```console
$ helm install --name my-release --wait stable/sentry
```

Note: We have to use the --wait flag for the initial creation because database initialization takes longer than the default timeout of 300 seconds.

The command deploys Sentry on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Warning: This chart does not support helm upgrade; an upgrade will currently reset your installation.

Uninstalling the Chart

To uninstall/delete the my-release deployment:

```console
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Warning: Jobs are not deleted automatically. They need to be deleted manually:

```console
$ kubectl delete job/sentry-db-init job/sentry-user-create
```

Configuration

The following table lists the configurable parameters of the Sentry chart and their default values.

Dependent charts can also have values overwritten. Preface values with postgresql.* or redis.*

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `image.repository` | Sentry image | `library/sentry` |
| `image.tag` | Sentry image tag | `9.1.2` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.imagePullSecrets` | Specify image pull secrets | `[]` |
| `sentrySecret` | Specify SENTRY_SECRET_KEY; if not specified, it is generated automatically | `nil` |
| `web.podAnnotations` | Web pod annotations | `{}` |
| `web.podLabels` | Web pod extra labels | `{}` |
| `web.replicacount` | Number of web pods to run | `1` |
| `web.resources.limits` | Web resource limits | `{cpu: 500m, memory: 500Mi}` |
| `web.resources.requests` | Web resource requests | `{cpu: 300m, memory: 300Mi}` |
| `web.env` | Additional web environment variables | `[{name: GITHUB_APP_ID}, {name: GITHUB_API_SECRET}]` |
| `web.nodeSelector` | Node labels for web pod assignment | `{}` |
| `web.affinity` | Affinity settings for web pod assignment | `{}` |
| `web.schedulerName` | Name of an alternate scheduler for the web pod | `nil` |
| `web.tolerations` | Toleration labels for web pod assignment | `[]` |
| `web.livenessProbe.failureThreshold` | Liveness probe failure threshold | `5` |
| `web.livenessProbe.initialDelaySeconds` | Liveness probe initial delay seconds | `50` |
| `web.livenessProbe.periodSeconds` | Liveness probe period seconds | `10` |
| `web.livenessProbe.successThreshold` | Liveness probe success threshold | `1` |
| `web.livenessProbe.timeoutSeconds` | Liveness probe timeout seconds | `2` |
| `web.readinessProbe.failureThreshold` | Readiness probe failure threshold | `10` |
| `web.readinessProbe.initialDelaySeconds` | Readiness probe initial delay seconds | `50` |
| `web.readinessProbe.periodSeconds` | Readiness probe period seconds | `10` |
| `web.readinessProbe.successThreshold` | Readiness probe success threshold | `1` |
| `web.readinessProbe.timeoutSeconds` | Readiness probe timeout seconds | `2` |
| `web.priorityClassName` | priorityClassName for the web deployment | `nil` |
| `web.hpa.enabled` | Create a HorizontalPodAutoscaler for the web deployment | `false` |
| `web.hpa.cputhreshold` | CPU threshold percent for the web HorizontalPodAutoscaler | `60` |
| `web.hpa.minpods` | Min pods for the web HorizontalPodAutoscaler | `1` |
| `web.hpa.maxpods` | Max pods for the web HorizontalPodAutoscaler | `10` |
| `cron.podAnnotations` | Cron pod annotations | `{}` |
| `cron.podLabels` | Cron pod extra labels | `{}` |
| `cron.replicacount` | Number of cron pods to run | `1` |
| `cron.resources.limits` | Cron resource limits | `{cpu: 200m, memory: 200Mi}` |
| `cron.resources.requests` | Cron resource requests | `{cpu: 100m, memory: 100Mi}` |
| `cron.nodeSelector` | Node labels for cron pod assignment | `{}` |
| `cron.affinity` | Affinity settings for cron pod assignment | `{}` |
| `cron.schedulerName` | Name of an alternate scheduler for the cron pod | `nil` |
| `cron.tolerations` | Toleration labels for cron pod assignment | `[]` |
| `cron.priorityClassName` | priorityClassName for the cron deployment | `nil` |
| `worker.podAnnotations` | Worker pod annotations | `{}` |
| `worker.podLabels` | Worker pod extra labels | `{}` |
| `worker.replicacount` | Number of worker pods to run | `2` |
| `worker.resources.limits` | Worker resource limits | `{cpu: 300m, memory: 500Mi}` |
| `worker.resources.requests` | Worker resource requests | `{cpu: 100m, memory: 100Mi}` |
| `worker.nodeSelector` | Node labels for worker pod assignment | `{}` |
| `worker.schedulerName` | Name of an alternate scheduler for workers | `nil` |
| `worker.affinity` | Affinity settings for worker pod assignment | `{}` |
| `worker.tolerations` | Toleration labels for worker pod assignment | `[]` |
| `worker.concurrency` | Celery worker concurrency | `nil` |
| `worker.priorityClassName` | priorityClassName for the worker deployment | `nil` |
| `worker.hpa.enabled` | Create a HorizontalPodAutoscaler for the worker deployment | `false` |
| `worker.hpa.cputhreshold` | CPU threshold percent for the worker HorizontalPodAutoscaler | `60` |
| `worker.hpa.minpods` | Min pods for the worker HorizontalPodAutoscaler | `1` |
| `worker.hpa.maxpods` | Max pods for the worker HorizontalPodAutoscaler | `10` |
| `user.create` | Create the default admin | `true` |
| `user.email` | Username for the default admin | `[email protected]` |
| `user.password` | Password for the default admin | Randomly generated |
| `email.from_address` | Address email notifications are sent from | `smtp` |
| `email.host` | SMTP host for sending email | `smtp` |
| `email.port` | SMTP port | `25` |
| `email.user` | SMTP user | `nil` |
| `email.password` | SMTP password | `nil` |
| `email.use_tls` | Use SMTP TLS for security | `false` |
| `email.enable_replies` | Allow email replies | `false` |
| `email.existingSecret` | SMTP password from an existing secret | `nil` |
| `email.existingSecretKey` | Key to read from the `email.existingSecret` secret | `smtp-password` |
| `service.type` | Kubernetes service type | `LoadBalancer` |
| `service.name` | Kubernetes service name | `sentry` |
| `service.externalPort` | Kubernetes external service port | `9000` |
| `service.internalPort` | Kubernetes internal service port | `9000` |
| `service.annotations` | Service annotations | `{}` |
| `service.nodePort` | Kubernetes service NodePort port | Randomly chosen by Kubernetes |
| `service.loadBalancerSourceRanges` | Allow list for the load balancer | `nil` |
| `ingress.enabled` | Enable ingress controller resource | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Ingress labels | `{}` |
| `ingress.hostname` | URL to address your Sentry installation | `sentry.local` |
| `ingress.path` | Path to address your Sentry installation | `/` |
| `ingress.extraPaths` | Ingress extra paths to prepend to every host configuration | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `postgresql.enabled` | Deploy a PostgreSQL server (see below) | `true` |
| `postgresql.postgresqlDatabase` | PostgreSQL database name | `sentry` |
| `postgresql.postgresqlUsername` | PostgreSQL username | `postgres` |
| `postgresql.postgresqlHost` | External PostgreSQL host | `nil` |
| `postgresql.postgresqlPassword` | External/internal PostgreSQL password | `nil` |
| `postgresql.postgresqlPort` | External PostgreSQL port | `5432` |
| `postgresql.existingSecret` | Name of an existing secret to use for the PostgreSQL password | `nil` |
| `postgresql.existingSecretKey` | Key to read from the `postgresql.existingSecret` secret | `postgresql-password` |
| `redis.enabled` | Deploy a Redis server (see below) | `true` |
| `redis.host` | External Redis host | `nil` |
| `redis.password` | External Redis password | `nil` |
| `redis.port` | External Redis port | `6379` |
| `redis.existingSecret` | Name of an existing secret to use for the Redis password | `nil` |
| `redis.existingSecretKey` | Key to read from the `redis.existingSecret` secret | `redis-password` |
| `filestore.backend` | Backend for the Sentry filestore | `filesystem` |
| `filestore.filesystem.path` | Location to store files for Sentry | `/var/lib/sentry/files` |
| `filestore.filesystem.persistence.enabled` | Enable Sentry file persistence using a PVC | `true` |
| `filestore.filesystem.persistence.existingClaim` | Provide an existing PersistentVolumeClaim | `nil` |
| `filestore.filesystem.persistence.storageClass` | PVC storage class | `nil` (uses alpha storage class annotation) |
| `filestore.filesystem.persistence.accessMode` | PVC access mode | `ReadWriteOnce` |
| `filestore.filesystem.persistence.size` | PVC storage request | `10Gi` |
| `filestore.filesystem.persistence.persistentWorkers` | Mount the PVC in Sentry workers, enabling features such as private source maps | `false` |
| `filestore.gcs.credentialsFile` | Filename of the service account in the secret | `credentials.json` |
| `filestore.gcs.secretName` | Name of the secret for GCS access | `nil` |
| `filestore.gcs.bucketName` | Name of the GCS bucket | `nil` |
| `filestore.s3.accessKey` | S3 access key | `nil` |
| `filestore.s3.secretKey` | S3 secret key | `nil` |
| `filestore.s3.existingSecret` | Name of an existing secret to use for the S3 keys | `nil` |
| `filestore.s3.bucketName` | Name of the S3 bucket | `nil` |
| `filestore.s3.endpointUrl` | Endpoint URL of the S3 service (e.g. for a MinIO S3 backend) | `nil` |
| `filestore.s3.signature_version` | S3 signature version (optional) | `nil` |
| `filestore.s3.region_name` | S3 region name (optional) | `nil` |
| `filestore.s3.default_acl` | S3 default ACL (optional) | `nil` |
| `config.configYml` | Sentry `config.yml` file | `""` |
| `config.sentryConfPy` | Sentry `sentry.conf.py` file | `""` |
| `metrics.enabled` | Start an exporter for Sentry metrics | `false` |
| `metrics.nodeSelector` | Node labels for metrics pod assignment | `{}` |
| `metrics.tolerations` | Toleration labels for metrics pod assignment | `[]` |
| `metrics.affinity` | Affinity settings for the metrics pod | `{}` |
| `metrics.schedulerName` | Name of an alternate scheduler for the metrics pod | `nil` |
| `metrics.podLabels` | Labels for the metrics pod | `nil` |
| `metrics.resources` | Metrics resource requests/limits | `{}` |
| `metrics.service.type` | Kubernetes service type for the metrics service | `ClusterIP` |
| `metrics.service.labels` | Additional labels for the metrics service | `{}` |
| `metrics.image.repository` | Metrics exporter image repository | `prom/statsd-exporter` |
| `metrics.image.tag` | Metrics exporter image tag | `v0.10.5` |
| `metrics.image.PullPolicy` | Metrics exporter image pull policy | `IfNotPresent` |
| `metrics.serviceMonitor.enabled` | If true, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be true) | `false` |
| `metrics.serviceMonitor.namespace` | Optional namespace in which Prometheus is running | `nil` |
| `metrics.serviceMonitor.interval` | How frequently to scrape metrics (falls back to Prometheus' default if unset) | `nil` |
| `metrics.serviceMonitor.selector` | Defaults to a kube-prometheus install (CoreOS recommended), but should be set according to your Prometheus install | `{ prometheus: kube-prometheus }` |
| `hooks.affinity` | Affinity settings for hook pods | `{}` |
| `hooks.tolerations` | Toleration labels for hook pod assignment | `[]` |
| `hooks.dbInit.enabled` | Enable the dbInit job via a hook | `true` |
| `hooks.dbInit.resources.limits` | Hook job resource limits | `{memory: 3200Mi}` |
| `hooks.dbInit.resources.requests` | Hook job resource requests | `{memory: 3000Mi}` |
| `serviceAccount.name` | Name of the ServiceAccount to be used by access-controlled resources | autogenerated |
| `serviceAccount.create` | Whether a ServiceAccount with this name should be created | `true` |
| `serviceAccount.annotations` | Annotations for the ServiceAccount | `{}` |
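As noted above, values of the bundled PostgreSQL and Redis charts can be overridden by prefixing them with postgresql. or redis.; for example (the particular overrides shown are illustrative only):

```console
# Override dependent-chart values alongside the chart's own values
$ helm install --name my-release --wait \
  --set postgresql.postgresqlDatabase=sentry,redis.port=6379 \
  stable/sentry
```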

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

```console
$ helm install --name my-release \
  --set filestore.filesystem.persistence.enabled=false,email.host=email \
  stable/sentry
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

```console
$ helm install --name my-release -f values.yaml stable/sentry
```

Tip: You can use the chart's default values.yaml as a starting point.

PostgreSQL

By default, PostgreSQL is installed as part of the chart. To use an external PostgreSQL server, set postgresql.enabled to false and then set postgresql.postgresqlHost and postgresql.postgresqlPassword. The other options (postgresql.postgresqlDatabase, postgresql.postgresqlUsername and postgresql.postgresqlPort) may also need to be changed from their default values.

To avoid issues when upgrading this chart, provide postgresql.postgresqlPassword for subsequent upgrades. This is due to an issue in the PostgreSQL chart where the password would otherwise be overwritten with a randomly generated one. See https://github.com/helm/charts/tree/master/stable/postgresql#upgrade for more detail.
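Putting this together, an external-PostgreSQL install might look as follows. The host and password are placeholders, not real values; in practice you may prefer postgresql.existingSecret over a literal password on the command line:

```console
# Disable the bundled PostgreSQL and point the chart at an external server
$ helm install --name my-release --wait \
  --set postgresql.enabled=false \
  --set postgresql.postgresqlHost=my-postgres.example.internal \
  --set postgresql.postgresqlPassword=changeme \
  stable/sentry
```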

Redis

By default, Redis is installed as part of the chart. To use an external Redis server or cluster, set redis.enabled to false and then set redis.host. If your Redis requires a password, define it with redis.password; otherwise omit it. Check the table above for more configuration options.
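A sketch of pointing the chart at an external Redis; the host and password shown are placeholders:

```console
# Disable the bundled Redis and use an external instance
$ helm install --name my-release --wait \
  --set redis.enabled=false \
  --set redis.host=my-redis.example.internal \
  --set redis.password=changeme \
  stable/sentry
```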

Persistence

The Sentry image stores the Sentry data at the /var/lib/sentry/files path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Configuration section to configure the PVC or to disable persistence.

Ingress

This chart provides support for the Ingress resource. If you have an Ingress controller available, such as Nginx or Traefik, you may want to set ingress.enabled to true and choose an ingress.hostname for the URL. You should then be able to access the installation at that address.
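For example, enabling ingress at install time might look like this (sentry.example.com is a placeholder hostname, to be replaced with a name your Ingress controller can serve):

```console
# Expose Sentry via an Ingress instead of a LoadBalancer service
$ helm install --name my-release --wait \
  --set ingress.enabled=true \
  --set ingress.hostname=sentry.example.com \
  stable/sentry
```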

Persistent Workers

This chart is capable of mounting the sentry-data PV in the Sentry worker and cron pods. This feature is disabled by default, but is needed for some advanced features such as private sourcemaps.

You may enable mounting of the sentry-data PV across worker and cron pods by changing filestore.filesystem.persistence.persistentWorkers to true. If you plan on deploying Sentry containers across multiple nodes, you may need to change your PVC's access mode to ReadWriteMany and check that your PV supports mounting across multiple nodes.
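For example, enabling the shared mount at install time might look like the following (this assumes your storage class supports the ReadWriteMany access mode):

```console
# Mount the sentry-data PV in worker and cron pods as well
$ helm install --name my-release --wait \
  --set filestore.filesystem.persistence.persistentWorkers=true \
  --set filestore.filesystem.persistence.accessMode=ReadWriteMany \
  stable/sentry
```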

Upgrading from pre-2.0.0

The persistence keys changed in chart version 2.0.0. The following table maps pre-2.0.0 keys to their current form:

| Previous Key | New Key |
| ------------ | ------- |
| `persistence.enabled` | `filestore.filesystem.persistence.enabled` |
| `persistence.existingClaim` | `filestore.filesystem.persistence.existingClaim` |
| `persistence.storageClass` | `filestore.filesystem.persistence.storageClass` |
| `persistence.accessMode` | `filestore.filesystem.persistence.accessMode` |
| `persistence.size` | `filestore.filesystem.persistence.size` |
| `persistence.filestore_dir` | `filestore.filesystem.path` |
| `persistence.persistentWorkers` | `filestore.filesystem.persistence.persistentWorkers` |
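For example, the same persistence flag before and after the rename (release name illustrative):

```console
# pre-2.0.0
$ helm install --name my-release --set persistence.enabled=false stable/sentry

# 2.0.0 and newer
$ helm install --name my-release --set filestore.filesystem.persistence.enabled=false stable/sentry
```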