JFrog Distribution Helm Chart - DEPRECATED

This chart is deprecated! The new chart is available in the official JFrog Helm repository, which you can add with:

bash
helm repo add jfrog https://charts.jfrog.io

Prerequisites Details

  • Kubernetes 1.8+

Chart Details

This chart will do the following:

  • Deploy a MongoDB database.
  • Deploy a Redis instance.
  • Deploy a distributor service.
  • Deploy a distribution service.

Requirements

  • A running Kubernetes cluster
  • Dynamic storage provisioning enabled
  • A default StorageClass set, so that services can use it for persistent storage
  • A running Artifactory Enterprise Plus instance
  • kubectl installed and set up to use the cluster
  • Helm installed and set up to use the cluster (helm init)
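The requirements above can be sanity-checked from your workstation before installing; a minimal sketch, assuming kubectl and the Helm 2 client are on your PATH:

```shell
# Verify the cluster, storage provisioning, and Helm are ready for the chart
kubectl cluster-info                 # the cluster is reachable
kubectl get storageclass             # confirm a class marked "(default)" exists
helm version                         # client and server (tiller) both respond
```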

Installing the Chart

To install the chart with the release name distribution:

helm install --name distribution stable/distribution

Accessing Distribution

NOTE: It might take a few minutes for Distribution's public IP to become available and for the nodes to complete their initial setup. Follow the instructions output by the install command to get the Distribution IP and the URL to access it.
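The external IP can also be queried directly with kubectl; a sketch assuming the default namespace and a hypothetical service name of distribution-distribution (the actual name depends on your release name — check with kubectl get svc):

```shell
# Watch until the LoadBalancer is assigned an external IP
kubectl get svc distribution-distribution --namespace default -w

# Capture the IP once assigned (service name is an assumption)
export SERVICE_IP=$(kubectl get svc distribution-distribution --namespace default \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://${SERVICE_IP}/
```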

Updating Distribution

Once a new chart version is available, you can update your deployment with:

helm upgrade distribution stable/distribution
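If the chart was installed from the stable repository, it may help to refresh your local chart index first so the new version is picked up (Helm 2 syntax):

```shell
# Refresh the local chart index and check the latest available version
helm repo update
helm search stable/distribution

# Upgrade the existing release in place
helm upgrade distribution stable/distribution
```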

Create a unique Master Key

JFrog Distribution requires a unique master key to be used by all micro-services in the same cluster. By default the chart has one set in values.yaml (distribution.masterKey).

This key is for demo purposes only and must not be used in a production environment!

You should generate a unique one and pass it to the template at install/upgrade time.

bash
# Create a key
$ export MASTER_KEY=$(openssl rand -hex 32)
$ echo ${MASTER_KEY}

# Pass the created master key to helm
$ helm install --set distribution.masterKey=${MASTER_KEY} -n distribution stable/distribution

NOTE: Make sure to pass the same master key with --set distribution.masterKey=${MASTER_KEY} on all future calls to helm install and helm upgrade!
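One way to satisfy this is to persist the generated key locally and read it back on every upgrade; a sketch (master-key.txt is a hypothetical location — use whatever secret store you trust):

```shell
# Generate the key once and keep it somewhere safe
export MASTER_KEY=$(openssl rand -hex 32)
echo "${MASTER_KEY}" > master-key.txt    # hypothetical location; restrict access
chmod 600 master-key.txt

# On every later upgrade, feed the same key back in:
#   helm upgrade distribution --set distribution.masterKey=$(cat master-key.txt) stable/distribution
```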

External Databases

There is an option to use external database services (MongoDB or PostgreSQL) for your Distribution.

MongoDB

To use an external MongoDB, you need to disable the bundled MongoDB and set the Distribution MongoDB connection URLs.

For this, pass the parameters: mongodb.enabled=false,global.mongoUrl=${DISTRIBUTION_MONGODB_CONN_URL},global.mongoAuditUrl=${DISTRIBUTION_MONGODB_AUDIT_URL}.

IMPORTANT: Make sure the database is already created before deploying the Distribution services.
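The database and user can be created up front with the mongo shell; a sketch assuming root access to the hypothetical custom-mongodb.local host, with credentials matching the example values used in this section:

```shell
# Create the Distribution database user before installing the chart
# (host, credentials, and database name are example values; adjust to yours)
mongo --host custom-mongodb.local --port 27017 \
  -u root -p "${MONGODB_ROOT_PASSWORD}" --authenticationDatabase admin \
  --eval 'db.getSiblingDB("bintray").createUser({
            user: "distribution",
            pwd: "password1_X",
            roles: [{ role: "dbOwner", db: "bintray" }]
          })'
```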

bash
# Passing a custom MongoDB to Distribution

# Example
# MongoDB host: custom-mongodb.local
# MongoDB port: 27017
# MongoDB user: distribution
# MongoDB password: password1_X

$ export MONGODB_USER=distribution
$ export MONGODB_PASSWORD=password1_X
$ export MONGODB_DATABASE=bintray
$ export DISTRIBUTION_MONGODB_CONN_URL="mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@custom-mongodb.local:27017/${MONGODB_DATABASE}"
$ export DISTRIBUTION_MONGODB_AUDIT_URL="mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@custom-mongodb.local:27017/audit?maxpoolsize=500"
$ helm install -n distribution --set mongodb.enabled=false,global.mongoUrl=${DISTRIBUTION_MONGODB_CONN_URL},global.mongoAuditUrl=${DISTRIBUTION_MONGODB_AUDIT_URL} stable/distribution

External Redis

To use an external Redis, you need to disable the use of the bundled Redis and set a custom Redis connection URL.

For this, pass the parameters: redis.enabled=false and global.redisUrl=${DISTRIBUTION_REDIS_CONN_URL}.

IMPORTANT: Make sure the Redis instance is already running and reachable before deploying the Distribution services.
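A quick reachability check can save a failed install; a sketch assuming redis-cli is available locally and using the example host and password from this section:

```shell
# PING answers PONG when Redis is up and the password is correct
redis-cli -h custom-redis.local -p 6379 -a password2_X ping
```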

bash
# Passing a custom Redis to Distribution

# Example
# Redis host: custom-redis.local
# Redis port: 6379
# Redis password: password2_X

$ export REDIS_PASSWORD=password2_X
$ export DISTRIBUTION_REDIS_CONN_URL="redis://:${REDIS_PASSWORD}@custom-redis.local:6379"
$ helm install -n distribution --set redis.enabled=false,global.redisUrl=${DISTRIBUTION_REDIS_CONN_URL} stable/distribution

Configuration

The following table lists the configurable parameters of the distribution chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| `imagePullSecrets` | Docker registry pull secret | |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the fullname template |
| `rbac.create` | Specifies whether RBAC resources should be created | `true` |
| `rbac.role.rules` | Rules to create | `[]` |
| `ingress.enabled` | If true, a Distribution Ingress will be created | `false` |
| `ingress.annotations` | Distribution Ingress annotations | `{}` |
| `ingress.hosts` | Distribution Ingress hostnames | `[]` |
| `ingress.tls` | Distribution Ingress TLS configuration (YAML) | `[]` |
| `mongodb.enabled` | Enable MongoDB | `true` |
| `mongodb.image.tag` | MongoDB docker image tag | `3.6.3` |
| `mongodb.image.pullPolicy` | MongoDB container pull policy | `IfNotPresent` |
| `mongodb.persistence.enabled` | MongoDB persistence volume enabled | `true` |
| `mongodb.persistence.existingClaim` | Use an existing PVC to persist data | `nil` |
| `mongodb.persistence.storageClass` | Storage class of backing PVC | `generic` |
| `mongodb.persistence.size` | MongoDB persistence volume size | `10Gi` |
| `mongodb.livenessProbe.initialDelaySeconds` | MongoDB delay before liveness probe is initiated | `40` |
| `mongodb.readinessProbe.initialDelaySeconds` | MongoDB delay before readiness probe is initiated | `30` |
| `mongodb.mongodbExtraFlags` | MongoDB additional command line flags | `["--wiredTigerCacheSizeGB=1"]` |
| `mongodb.usePassword` | Enable password authentication | `false` |
| `mongodb.mongodbDatabase` | MongoDB database for Distribution | `bintray` |
| `mongodb.mongodbRootPassword` | MongoDB database password for the root user | |
| `mongodb.mongodbUsername` | MongoDB database user for Distribution | `distribution` |
| `mongodb.mongodbPassword` | MongoDB database password for the Distribution user | |
| `redis.enabled` | Enable Redis | `true` |
| `redis.redisPassword` | Redis password | |
| `redis.master.port` | Redis port | `6379` |
| `redis.persistence.enabled` | Use a PVC to persist data | `true` |
| `redis.persistence.existingClaim` | Use an existing PVC to persist data | `nil` |
| `redis.persistence.storageClass` | Storage class of backing PVC | `generic` |
| `redis.persistence.size` | Size of data volume | `10Gi` |
| `distribution.name` | Distribution name | `distribution` |
| `distribution.image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `distribution.image.repository` | Container image | `docker.jfrog.io/jf-distribution` |
| `distribution.image.version` | Container image tag | `1.1.0` |
| `distribution.service.type` | Distribution service type | `LoadBalancer` |
| `distribution.externalPort` | Distribution service external port | `80` |
| `distribution.internalPort` | Distribution service internal port | `8080` |
| `distribution.env.artifactoryUrl` | Distribution environment Artifactory URL | |
| `distribution.persistence.mountPath` | Distribution persistence volume mount path | `"/jf-distribution"` |
| `distribution.persistence.enabled` | Distribution persistence volume enabled | `true` |
| `distribution.persistence.storageClass` | Storage class of backing PVC | `nil` |
| `distribution.persistence.existingClaim` | Provide an existing PersistentVolumeClaim | `nil` |
| `distribution.persistence.accessMode` | Distribution persistence volume access mode | `ReadWriteOnce` |
| `distribution.persistence.size` | Distribution persistence volume size | `50Gi` |
| `distributor.name` | Distributor name | `distribution` |
| `distributor.image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `distributor.image.repository` | Container image | `docker.jfrog.io/jf-distribution` |
| `distributor.image.version` | Container image tag | `1.1.0` |
| `distributor.token` | Distributor token | |
| `distributor.persistence.mountPath` | Distributor persistence volume mount path | `"/bt-distributor"` |
| `distributor.persistence.existingClaim` | Provide an existing PersistentVolumeClaim | `nil` |
| `distributor.persistence.storageClass` | Storage class of backing PVC | `nil` (uses alpha storage class annotation) |
| `distributor.persistence.enabled` | Distributor persistence volume enabled | `true` |
| `distributor.persistence.accessMode` | Distributor persistence volume access mode | `ReadWriteOnce` |
| `distributor.persistence.size` | Distributor persistence volume size | `50Gi` |

Specify each parameter using the --set key=value[,key=value] argument to helm install.

Ingress and TLS

To have Helm create an Ingress object with a hostname, add the following settings to your Helm command:

helm install --name distribution \
  --set ingress.enabled=true \
  --set ingress.hosts[0]="distribution.company.com" \
  --set distribution.service.type=NodePort \
  stable/distribution

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. cert-manager), please refer to the documentation for that mechanism.

To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:

console
kubectl create secret tls distribution-tls --cert=path/to/tls.cert --key=path/to/tls.key
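If you only need a certificate for testing, a self-signed pair can be generated with openssl and then fed to the kubectl command above (the CN below is a hypothetical hostname — substitute your own):

```shell
# Generate a throwaway self-signed key/certificate pair (testing only)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.cert \
  -subj "/CN=distribution.domain.com"
```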

Include the secret's name, along with the desired hostnames, in the Distribution Ingress TLS section of your custom values.yaml file:

  ingress:
    ## If true, Distribution Ingress will be created
    ##
    enabled: true

    ## Distribution Ingress hostnames
    ## Must be provided if Ingress is enabled
    ##
    hosts:
      - distribution.domain.com
    annotations:
      kubernetes.io/tls-acme: "true"
    ## Distribution Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
      - secretName: distribution-tls
        hosts:
          - distribution.domain.com
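A values file like the one above can then be applied at install time with the -f flag (values-ingress.yaml is a hypothetical filename):

```shell
# Install the chart with the custom Ingress/TLS values (Helm 2 syntax)
helm install --name distribution -f values-ingress.yaml stable/distribution
```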