{{< note >}}This minor release replaces the 1.6.7 release.{{< /note >}}
RDI's mission is to help Redis customers sync Redis Enterprise with live data from their slow disk-based databases in order to:

- Meet the required speed and scale of read queries and provide an excellent and predictable user experience.
- Save resources and time when building pipelines and coordinating data engineers.
- Reduce the total cost of ownership by saving money on unnecessary database read replicas.
RDI keeps the Redis cache up to date with changes in the primary database, using a Change Data Capture (CDC) mechanism. It also lets you transform the data from relational tables into convenient and fast data structures that match your app's requirements. You specify the transformations using a configuration system, so no coding is required.
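As an illustration of the configuration-based approach, a minimal transformation job might look like the sketch below. The table and field names (`customers`, `first_name`, and so on) are hypothetical; the `source`/`transform`/`output` structure follows RDI's declarative job format:

```yaml
# Hypothetical RDI transformation job (sketch): maps rows from a relational
# "customers" table into Redis hashes keyed by customer id.
source:
  table: customers
transform:
  - uses: add_field
    with:
      field: full_name
      language: jmespath
      expression: concat([first_name, ' ', last_name])
output:
  - uses: redis.write
    with:
      data_type: hash
      key:
        expression: concat(['customer:', id])
        language: jmespath
```

A job like this is deployed as part of the pipeline, and RDI applies it to every captured change, so the Redis data structures stay in sync with the source table without any application code.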
- Updated the Debezium Server (to version 3.0.8.Final) to address known vulnerabilities.
- Added support for `expire` expressions.
- The following Helm chart values have changed:
  - `collector`, `collectorSourceMetricsExporter`, and `processor` have been moved to `operator.dataPlane.collector` and `operator.dataPlane.processor`.
  - `global.collectorApiEnabled` has been moved to `operator.dataPlane.collectorApi.enabled`, and is now a boolean value (`true` or `false`), not `"0"` or `"1"`.
  - `api.authEnabled` is also now a boolean value, not `"0"` or `"1"`.
  - `rdiMetricsExporter.service.protocol`, `rdiMetricsExporter.service.port`, `rdiMetricsExporter.serviceMonitor.path`, and `api.service.name` have also changed.

The RDI operator has been significantly enhanced in the following areas:
- Pipelines that have been stopped with `stop` will remain stopped after `deploy` or `reset`, until explicitly started again.
- external.
- `Pipeline` and `PipelineRelease` custom K8s resources.
- Added support for the `expire` expression for target output in transformation jobs.
- Fixed `values.yaml` formatting.
- Fixed the `rdi-secret.sh` script (in the Helm zip file).
- Upgraded the `requests` and `urllib3` dependencies.
- Fixed the `status` command.
- Fixed the `primary_key` and `unique_constraint` attributes in Oracle metadata.
- Added `capture.mode` to MongoDB scaffolding.

RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity, as RDI is not synchronous with the source database commits.
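The `expire` expression for target output mentioned above could be used along the lines of the sketch below. The field name `ttl_seconds` and the exact attribute placement are assumptions, not the authoritative syntax:

```yaml
# Sketch: write each record as a Redis hash and set its TTL from a
# (hypothetical) source column instead of using a fixed expiry.
output:
  - uses: redis.write
    with:
      data_type: hash
      expire:
        expression: ttl_seconds
        language: jmespath
```

Computing the expiry per record, rather than hard-coding it, lets the cache lifetime follow data that already exists in the source table.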
When upgrading from RDI < 1.8.0 to RDI >= 1.8.0 in an HA setup, both RDI instances may incorrectly consider themselves active after the upgrade. This occurs because the upgrade process doesn't update the `rdi:ha:lock` value from the legacy `cluster-1` identifier, causing both clusters to assume they are the active cluster.
Symptoms:
Workaround:
After upgrading, manually set a unique cluster ID for one of the installations by editing the configmap:
```bash
kubectl edit cm -n rdi rdi-sys-config
```
Then add the following line to distinguish between the clusters:
```yaml
RDI_CLUSTER_ID: cluster-2
```
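After the edit, the relevant part of the `rdi-sys-config` ConfigMap would look something like this (a sketch; any other keys in the `data` section are omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rdi-sys-config
  namespace: rdi
data:
  RDI_CLUSTER_ID: cluster-2
```

Giving one installation a cluster ID other than the legacy `cluster-1` value ensures the two HA instances no longer contend for the same `rdi:ha:lock` identity.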