JFrog Xray HA on Kubernetes Helm Chart - DEPRECATED

This chart is deprecated! You can find the new chart in:

```bash
helm repo add jfrog https://charts.jfrog.io
```

Prerequisites Details

  • Kubernetes 1.8+

Chart Details

This chart will do the following:

  • Optionally deploy PostgreSQL and MongoDB
  • Deploy RabbitMQ (optionally as an HA cluster)
  • Deploy JFrog Xray micro-services

Requirements

  • A running Kubernetes cluster
    • Dynamic storage provisioning enabled
    • A default StorageClass set, so services that request persistent storage without naming a StorageClass are provisioned from it
  • A running Artifactory
  • Kubectl installed and set up to use the cluster
  • Helm installed and set up to use the cluster (helm init)
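As a quick sanity check, the prerequisites above can be verified from the command line. This is a sketch, not part of the chart; the cluster-side checks are left as comments because they need a configured cluster:

```shell
# Check that the required CLI tools are on PATH
for tool in kubectl helm; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done

# Cluster-side checks (require a reachable, configured cluster):
# kubectl get storageclass      # one entry should be marked "(default)"
# helm version                  # both client and Tiller should respond after `helm init`
```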

Deploy JFrog Xray

```bash
# cd to the directory that contains the untarred helm charts
helm install -n xray --set replicaCount=2,rabbitmq-ha.replicaCount=2,common.masterKey=${MASTER_KEY} stable/xray

# Scale an existing release up to 3 replicas, passing the same master key
helm upgrade xray --set replicaCount=3,rabbitmq-ha.replicaCount=3,common.masterKey=${MASTER_KEY} stable/xray
```

Deploy Xray

Deploy the Xray tools and services

```bash
# Get required dependency charts
$ helm dependency update stable/xray

# Deploy Xray
$ helm install -n xray stable/xray
```

Status

See the status of your deployed helm releases

```bash
$ helm status xray
```

Upgrade

To upgrade an existing Xray release, use helm upgrade

```bash
# Update the existing deployed release to version 2.1.2
$ helm upgrade xray --set common.xrayVersion=2.1.2 stable/xray
```

Remove

Removing a helm release is done with

```bash
# Remove the Xray services and data tools
$ helm delete --purge xray

# Remove the data disks
$ kubectl delete pvc -l release=xray
```

Create a unique Master Key

JFrog Xray requires a unique master key, shared by all micro-services in the same cluster. By default, the chart has one set in values.yaml (common.masterKey).

This key is for demo purposes only and must not be used in a production environment!

Generate a unique key and pass it to the template at install/upgrade time.

```bash
# Create a key
$ export MASTER_KEY=$(openssl rand -hex 32)
$ echo ${MASTER_KEY}

# Pass the created master key to helm
$ helm install --set common.masterKey=${MASTER_KEY} -n xray stable/xray
```

NOTE: Make sure to pass the same master key with --set common.masterKey=${MASTER_KEY} on all future calls to helm install and helm upgrade!
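One way to honor this note is to keep the generated key in a local file and read it back before every later install or upgrade. This is a sketch; the file name .xray-master-key is an arbitrary choice, not something the chart knows about:

```shell
# Generate the key once and persist it locally
export MASTER_KEY=$(openssl rand -hex 32)
echo "${MASTER_KEY}" > .xray-master-key

# On every later helm install/upgrade, read the same key back
export MASTER_KEY=$(cat .xray-master-key)
# helm upgrade xray --set common.masterKey=${MASTER_KEY} stable/xray
```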

Special deployments

This is a list of special use cases for non-standard deployments.

High Availability

For high availability of Xray, set the replica count per service to 2 or higher. 3 is recommended.

It is highly recommended to also set RabbitMQ to run as an HA cluster.

```bash
# Start Xray with 3 replicas per service and 3 replicas for RabbitMQ
$ helm install -n xray --set replicaCount=3,rabbitmq-ha.replicaCount=3 stable/xray
```

External Databases

There is an option to use external database services (MongoDB or PostgreSQL) for your Xray.

MongoDB

To use an external MongoDB, you need to set the Xray MongoDB connection URL.

For this, pass the parameter global.mongoUrl=${XRAY_MONGODB_CONN_URL}.

IMPORTANT: Make sure the DB is already created before deploying Xray services

```bash
# Passing a custom MongoDB to Xray

# Example
# MongoDB host: custom-mongodb.local
# MongoDB port: 27017
# MongoDB user: xray
# MongoDB password: password1_X

# Note the double quotes, so the shell expands MONGODB_USER, MONGODB_PASSWORD and MONGODB_DATABASE
$ export XRAY_MONGODB_CONN_URL="mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@custom-mongodb.local:27017/?authSource=${MONGODB_DATABASE}&authMechanism=SCRAM-SHA-1"
$ helm install -n xray --set global.mongoUrl=${XRAY_MONGODB_CONN_URL} stable/xray
```

PostgreSQL

To use an external PostgreSQL, you need to disable the use of the bundled PostgreSQL and set a custom PostgreSQL connection URL.

For this, pass the parameters: postgresql.enabled=false and global.postgresqlUrl=${XRAY_POSTGRESQL_CONN_URL}.

IMPORTANT: Make sure the DB is already created before deploying Xray services

```bash
# Passing a custom PostgreSQL to Xray

# Example
# PostgreSQL host: custom-postgresql.local
# PostgreSQL port: 5432
# PostgreSQL user: xray
# PostgreSQL password: password2_X

# Note the double quotes, so the shell expands POSTGRESQL_USER, POSTGRESQL_PASSWORD and POSTGRESQL_DATABASE
$ export XRAY_POSTGRESQL_CONN_URL="postgres://${POSTGRESQL_USER}:${POSTGRESQL_PASSWORD}@custom-postgresql.local:5432/${POSTGRESQL_DATABASE}?sslmode=disable"
$ helm install -n xray --set postgresql.enabled=false,global.postgresqlUrl=${XRAY_POSTGRESQL_CONN_URL} stable/xray
```

Configuration

The following table lists the configurable parameters of the xray chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| imagePullSecrets | Docker registry pull secret | |
| imagePullPolicy | Container pull policy | IfNotPresent |
| initContainerImage | Init container image | alpine:3.6 |
| serviceAccount.create | Specifies whether a ServiceAccount should be created | true |
| serviceAccount.name | The name of the ServiceAccount to create | Generated using the fullname template |
| rbac.create | Specifies whether RBAC resources should be created | true |
| rbac.role.rules | Rules to create | [] |
| ingress.enabled | If true, Xray Ingress will be created | false |
| ingress.annotations | Xray Ingress annotations | {} |
| ingress.hosts | Xray Ingress hostnames | [] |
| ingress.tls | Xray Ingress TLS configuration (YAML) | [] |
| replicaCount | Replica count for Xray services | 1 |
| postgresql.enabled | Use enclosed PostgreSQL as database | true |
| postgresql.postgresDatabase | PostgreSQL database name | xraydb |
| postgresql.postgresUser | PostgreSQL database user | xray |
| postgresql.postgresPassword | PostgreSQL database password | |
| postgresql.persistence.enabled | PostgreSQL use persistent storage | true |
| postgresql.persistence.size | PostgreSQL persistent storage size | 50Gi |
| postgresql.persistence.existingClaim | PostgreSQL use existing persistent storage | |
| postgresql.service.port | PostgreSQL database port | 5432 |
| postgresql.resources.requests.memory | PostgreSQL initial memory request | |
| postgresql.resources.requests.cpu | PostgreSQL initial cpu request | |
| postgresql.resources.limits.memory | PostgreSQL memory limit | |
| postgresql.resources.limits.cpu | PostgreSQL cpu limit | |
| mongodb.enabled | Enable MongoDB | true |
| mongodb.image.tag | MongoDB docker image tag | 3.6.3 |
| mongodb.image.pullPolicy | MongoDB container pull policy | IfNotPresent |
| mongodb.persistence.enabled | MongoDB persistence volume enabled | true |
| mongodb.persistence.existingClaim | Use an existing PVC to persist data | nil |
| mongodb.persistence.storageClass | Storage class of backing PVC | generic |
| mongodb.persistence.size | MongoDB persistence volume size | 50Gi |
| mongodb.livenessProbe.initialDelaySeconds | MongoDB delay before liveness probe is initiated | |
| mongodb.readinessProbe.initialDelaySeconds | MongoDB delay before readiness probe is initiated | |
| mongodb.mongodbExtraFlags | MongoDB additional command line flags | ["--wiredTigerCacheSizeGB=1"] |
| mongodb.mongodbDatabase | MongoDB database for Xray | xray |
| mongodb.mongodbRootPassword | MongoDB password for root user | |
| mongodb.mongodbUsername | MongoDB Xray user | admin |
| mongodb.mongodbPassword | MongoDB password for Xray user | |
| rabbitmq-ha.replicaCount | Number of RabbitMQ replicas | 1 |
| rabbitmq-ha.rabbitmqUsername | RabbitMQ application username | guest |
| rabbitmq-ha.rabbitmqPassword | RabbitMQ application password | |
| rabbitmq-ha.customConfigMap | RabbitMQ use a custom ConfigMap | true |
| rabbitmq-ha.rabbitmqErlangCookie | RabbitMQ Erlang cookie | XRAYRABBITMQCLUSTER |
| rabbitmq-ha.rabbitmqMemoryHighWatermark | RabbitMQ memory high watermark | 500MB |
| rabbitmq-ha.persistentVolume.enabled | If true, persistent volume claims are created | true |
| rabbitmq-ha.persistentVolume.size | RabbitMQ persistent volume size | 20Gi |
| rabbitmq-ha.rbac.create | If true, create & use RBAC resources | true |
| common.xrayVersion | Xray image tag | 2.3.0 |
| common.xrayConfigPath | Xray config path | /var/opt/jfrog/xray/data |
| common.xrayUserId | Xray user id | 1035 |
| common.xrayGroupId | Xray group id | 1035 |
| common.stdOutEnabled | Xray enable standard output | true |
| common.masterKey | Xray master key (can be generated with openssl rand -hex 32) | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF |
| global.mongoUrl | Xray external MongoDB URL | |
| global.postgresqlUrl | Xray external PostgreSQL URL | |
| analysis.name | Xray Analysis name | xray-analysis |
| analysis.image | Xray Analysis container image | docker.bintray.io/jfrog/xray-analysis |
| analysis.internalPort | Xray Analysis internal port | 7000 |
| analysis.externalPort | Xray Analysis external port | 7000 |
| analysis.service.type | Xray Analysis service type | ClusterIP |
| analysis.storage.sizeLimit | Xray Analysis storage size limit | 10Gi |
| analysis.resources | Xray Analysis resources | {} |
| indexer.name | Xray Indexer name | xray-indexer |
| indexer.image | Xray Indexer container image | docker.bintray.io/jfrog/xray-indexer |
| indexer.internalPort | Xray Indexer internal port | 7002 |
| indexer.externalPort | Xray Indexer external port | 7002 |
| indexer.service.type | Xray Indexer service type | ClusterIP |
| indexer.storage.sizeLimit | Xray Indexer storage size limit | 10Gi |
| indexer.resources | Xray Indexer resources | {} |
| persist.name | Xray Persist name | xray-persist |
| persist.image | Xray Persist container image | docker.bintray.io/jfrog/xray-persist |
| persist.internalPort | Xray Persist internal port | 7003 |
| persist.externalPort | Xray Persist external port | 7003 |
| persist.service.type | Xray Persist service type | ClusterIP |
| persist.storage.sizeLimit | Xray Persist storage size limit | 10Gi |
| persist.resources | Xray Persist resources | {} |
| server.name | Xray server name | xray-server |
| server.image | Xray server container image | docker.bintray.io/jfrog/xray-server |
| server.internalPort | Xray server internal port | 8000 |
| server.externalPort | Xray server external port | 80 |
| server.service.name | Xray server service name | xray |
| server.service.type | Xray server service type | LoadBalancer |
| server.storage.sizeLimit | Xray server storage size limit | 10Gi |
| server.resources | Xray server resources | {} |

Specify each parameter using the --set key=value[,key=value] argument to helm install.
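As an alternative to long --set lists, the same parameters can be collected in a YAML file and passed with -f. A sketch; the file name xray-values.yaml and the values inside it are illustrative only:

```shell
# Collect the overrides in a file instead of repeating --set flags
cat > xray-values.yaml <<'EOF'
replicaCount: 3
rabbitmq-ha:
  replicaCount: 3
common:
  masterKey: "<your-generated-master-key>"
EOF

# helm install -n xray -f xray-values.yaml stable/xray
```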

Ingress and TLS

To get Helm to create an Ingress object with a hostname, pass these parameters to your Helm command:

```bash
helm install --name xray \
  --set ingress.enabled=true \
  --set ingress.hosts[0]="xray.company.com" \
  --set server.service.type=NodePort \
  stable/xray
```

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. kube-lego), please refer to the documentation for that mechanism.

To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:

```console
kubectl create secret tls xray-tls --cert=path/to/tls.cert --key=path/to/tls.key
```

Include the secret's name, along with the desired hostnames, in the Xray Ingress TLS section of your custom values.yaml file:

```yaml
ingress:
  ## If true, Xray Ingress will be created
  ##
  enabled: true

  ## Xray Ingress hostnames
  ## Must be provided if Ingress is enabled
  ##
  hosts:
    - xray.domain.com
  annotations:
    kubernetes.io/tls-acme: "true"
  ## Xray Ingress TLS configuration
  ## Secrets must be manually created in the namespace
  ##
  tls:
    - secretName: xray-tls
      hosts:
        - xray.domain.com
```
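The values above can be saved to their own file and passed at install time together with the secret created earlier. A sketch; values-ingress.yaml is an arbitrary file name:

```shell
# Save the ingress/TLS values to a file
cat > values-ingress.yaml <<'EOF'
ingress:
  enabled: true
  hosts:
    - xray.domain.com
  annotations:
    kubernetes.io/tls-acme: "true"
  tls:
    - secretName: xray-tls
      hosts:
        - xray.domain.com
EOF

# helm install --name xray -f values-ingress.yaml stable/xray
```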