
# ⚠️ Repo Archive Notice

As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.

# Hyperledger Fabric Orderer

Hyperledger Fabric Orderer is the node type responsible for "consensus" for the Hyperledger Fabric permissioned blockchain framework.

## DEPRECATION NOTICE

This chart is deprecated and no longer supported.

## TL;DR;

```bash
$ helm install stable/hlf-ord
```

## Introduction

The Hyperledger Fabric Orderer can be installed either as a solo orderer (for development) or as a Kafka orderer (for crash fault tolerant consensus).

The Orderer receives endorsed transactions and packages them into blocks, which are then distributed to the nodes of the Hyperledger Fabric network.

Learn more about deploying a production-ready consensus mechanism based on Apache Kafka. Minimally, you will need to set these options:

```yaml
  "default.replication.factor": 4          # given a 4-node Kafka cluster
  "unclean.leader.election.enable": false
  "min.insync.replicas": 3                 # to permit one Kafka replica to go offline
  "message.max.bytes": "103809024"         # 99 * 1024 * 1024 B
  "replica.fetch.max.bytes": "103809024"   # 99 * 1024 * 1024 B
  "log.retention.ms": -1                   # since we need to keep logs indefinitely for the HL Fabric Orderer
```
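If you deploy Kafka with the incubator/kafka Helm chart, these broker settings can be supplied through that chart's `configurationOverrides` value. A sketch, assuming that chart's interface (the keys come straight from the list above; the file name is a placeholder):

```yaml
# kafka-values.yaml -- illustrative values for the incubator/kafka chart
replicas: 4  # a 4-node Kafka cluster, matching default.replication.factor below
configurationOverrides:
  "default.replication.factor": 4
  "unclean.leader.election.enable": false
  "min.insync.replicas": 3
  "message.max.bytes": "103809024"         # 99 * 1024 * 1024 B
  "replica.fetch.max.bytes": "103809024"
  "log.retention.ms": -1                   # keep logs indefinitely for the Orderer
```

You would then install Kafka with `helm install incubator/kafka --name kfk -f kafka-values.yaml` (Helm 2 syntax, as used elsewhere in this README).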

## Prerequisites

- Kubernetes 1.9+
- PV provisioner support in the underlying infrastructure
- K8S secrets containing:
  - the crypto-material (e.g. signcert, key, cacert, and optionally intermediatecert and CA credentials)
  - the genesis block for the Orderer
  - the certificate of the Orderer Organisation Admin
- A running Kafka chart, if you are using the `kafka` consensus mechanism
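As an illustration, these secrets can be created from files obtained from your CA (or from Cryptogen). The secret and file names below are hypothetical placeholders; the keys (`cert.pem`, `key.pem`, `cacert.pem`) match the `secrets.ord.*` entries in the Configuration table:

```shell
# Hypothetical secret and file names -- substitute your own crypto-material.
$ kubectl create secret generic ord1-idcert --from-file=cert.pem=./signcert.pem
$ kubectl create secret generic ord1-idkey --from-file=key.pem=./key.pem
$ kubectl create secret generic ord1-cacert --from-file=cacert.pem=./cacert.pem
$ kubectl create secret generic ord1-genesis --from-file=genesis.block=./genesis.block
$ kubectl create secret generic ord1-admincert --from-file=cert.pem=./admincert.pem
```

The resulting secret names are what you pass to the chart under the `secrets` values described below.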

## Installing the Chart

To install the chart with the release name ord1:

```bash
$ helm install stable/hlf-ord --name ord1
```

The command deploys the Hyperledger Fabric Orderer on the Kubernetes cluster in the default configuration. The Configuration section lists the parameters that can be configured during installation.

### Custom parameters

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example:

```bash
$ helm install stable/hlf-ord --name ord1 --set ord.mspID=MyMSP
```

Alternatively, a YAML file can be provided while installing the chart; values in this file override those in the default `values.yaml`. For example:

```bash
$ helm install stable/hlf-ord --name ord1 -f my-values.yaml
```
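A minimal `my-values.yaml` might look like this (illustrative values only; the keys are taken from the Configuration table below, and the secret names are hypothetical):

```yaml
# my-values.yaml -- illustrative overrides
ord:
  type: kafka
  mspID: MyMSP
persistence:
  size: 10Gi
secrets:
  ord:
    cert: ord1-idcert    # hypothetical secret names
    key: ord1-idkey
    caCert: ord1-cacert
  genesis: ord1-genesis
  adminCert: ord1-admincert
```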

## Updating the chart

To update the chart, run:

```bash
$ helm upgrade ord1 stable/hlf-ord -f my-values.yaml
```

## Uninstalling the Chart

To uninstall/delete the ord1 deployment:

```bash
$ helm delete ord1
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the Hyperledger Fabric Orderer chart and their default values.

| Parameter                          | Description                                              | Default                      |
| ---------------------------------- | -------------------------------------------------------- | ---------------------------- |
| `image.repository`                 | `hlf-ord` image repository                               | `hyperledger/fabric-orderer` |
| `image.tag`                        | `hlf-ord` image tag                                      | `x86_64-1.1.0`               |
| `image.pullPolicy`                 | Image pull policy                                        | `IfNotPresent`               |
| `service.port`                     | TCP port                                                 | `7050`                       |
| `service.type`                     | K8S service type exposing ports, e.g. `ClusterIP`        | `ClusterIP`                  |
| `service.portMetrics`              | TCP port for the metrics service                         | `9443`                       |
| `ingress.enabled`                  | If true, Ingress will be created                         | `false`                      |
| `ingress.annotations`              | Ingress annotations                                      | `{}`                         |
| `ingress.path`                     | Ingress path                                             | `/`                          |
| `ingress.hosts`                    | Ingress hostnames                                        | `[]`                         |
| `ingress.tls`                      | Ingress TLS configuration                                | `[]`                         |
| `persistence.accessMode`           | Use volume as ReadOnly or ReadWrite                      | `ReadWriteOnce`              |
| `persistence.annotations`          | Persistent Volume annotations                            | `{}`                         |
| `persistence.size`                 | Size of data volume (adjust for production!)             | `1Gi`                        |
| `persistence.storageClass`         | Storage class of backing PVC                             | `default`                    |
| `ord.type`                         | Type of Orderer (`solo` or `kafka`)                      | `solo`                       |
| `ord.mspID`                        | ID of MSP the Orderer belongs to                         | `OrdererMSP`                 |
| `ord.tls.server.enabled`           | Do we enable server-side TLS?                            | `false`                      |
| `ord.tls.client.enabled`           | Do we enable client-side TLS?                            | `false`                      |
| `ord.metrics.provider`             | Metrics provider: `statsd`, `prometheus` or `disabled`   | `disabled`                   |
| `ord.metrics.statsd.network`       | Network type: `udp` or `tcp`                             | `udp`                        |
| `ord.metrics.statsd.address`       | Address of the StatsD server                             | `127.0.0.1:8125`             |
| `ord.metrics.statsd.writeInterval` | Interval at which counters and gauges are pushed         | `30s`                        |
| `ord.metrics.statsd.prefix`        | Prefix prepended to all the exported metrics             | ``                           |
| `secrets.ord.cred`                 | Credentials: `CA_USERNAME` and `CA_PASSWORD`             | ``                           |
| `secrets.ord.cert`                 | Certificate: as `cert.pem`                               | ``                           |
| `secrets.ord.key`                  | Private key: as `key.pem`                                | ``                           |
| `secrets.ord.caCert`               | CA certificate: as `cacert.pem`                          | ``                           |
| `secrets.ord.intCaCert`            | Intermediate CA certificate: as `intermediatecacert.pem` | ``                           |
| `secrets.ord.tls`                  | TLS secret: as `tls.crt` and `tls.key`                   | ``                           |
| `secrets.ord.tlsRootCert`          | TLS root CA certificate: as `cert.pem`                   | ``                           |
| `secrets.ord.tlsClientRootCert`    | TLS client root CA certificate: as `cert.pem`            | ``                           |
| `secrets.genesis`                  | Secret containing the genesis block for the Orderer      | ``                           |
| `secrets.adminCert`                | Secret containing the Orderer Org admin certificate      | ``                           |
| `resources`                        | CPU/Memory resource requests/limits                      | `{}`                         |
| `nodeSelector`                     | Node labels for pod assignment                           | `{}`                         |
| `tolerations`                      | Toleration labels for pod assignment                     | `[]`                         |
| `affinity`                         | Affinity settings for pod assignment                     | `{}`                         |
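For instance, server-side TLS and Prometheus metrics can be enabled together with a values fragment like the following (a sketch using only the parameters above; the secret names are placeholders for secrets you create yourself):

```yaml
ord:
  tls:
    server:
      enabled: "true"
  metrics:
    provider: prometheus
secrets:
  ord:
    tls: ord1-tls                  # hypothetical secret holding 'tls.crt' and 'tls.key'
    tlsRootCert: ord1-tlsrootcert  # hypothetical secret holding 'cert.pem'
```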

## Persistence

The volume stores the Fabric Orderer data and configurations at the `/var/hyperledger` path of the container.

The chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning through a PersistentVolumeClaim managed by the chart.

## Upgrading from version 1.1.x

Previous versions of this chart performed enrollment with the Fabric CA directly from the pod. This prevented the possibility of using development cryptographic material (certificates and keys) from Cryptogen or the usage of other CA mechanisms.

Instead, crypto-material and CA credentials are stored separately as secrets.

If you used the earlier version of this chart, you will need to obtain the relevant credentials and crypto-material from the running pod, save them externally as a set of secrets, and pass those secret names to the chart under the `secrets.ord` section.

An example upgrade procedure is described in `UPGRADE_1-1-x.md`.

## Feedback and feature requests

This is a work in progress and we are happy to accept feature requests. We are even happier to accept pull requests implementing improvements :-)