
Releases: redis-field-engineering/redis-connect-dist

Release v0.10.0-redis-connect

23 Mar 01:55

πŸš€ Changelog

🧰 Enhancements

  • Added GEMFIRE connector to support both initial load and stream jobs.
  • Added FILES connector to support initial load jobs from CSV, TSV, PSV, JSON, and XML file types.
  • Added SPLUNK connector to support HTTP Event Collector (HEC) stream jobs as a source and a HEC_FORWARDER_SINK to forward logs to Splunk Enterprise. Initial load jobs are not supported.
  • Added REDIS_STREAMS_MESSAGE_BROKER source type to support Redis Streams brokered deployments in which the source and sink connectors can scale independently and communicate via a structured schema.
  • Added REDIS_STREAMS_MESSAGE_BROKER sink type to support Redis Streams brokered deployments in which the source and sink connectors can scale independently and communicate via a structured schema.
  • Added REDIS_STRING_SINK to support String blobs (often JSON) from sources such as Gemfire, Splunk, etc.
  • Enhanced delivery guarantees by adding checkpoint rollback for failed Redis transactions.
  • Enhanced delivery guarantees by adding checkpoint rollback for failed replication to backup Redis shards using the WAIT command (an illustrative sketch follows this list).
  • Added the checkpoint key as an attribute of the checkpoint hash value. In the event of an exception, the checkpoint value is logged and its key makes it clear which hash slot should be updated to move past the corrupted source offset.
  • Enhanced save checkpoint process to no longer assume that the latest checkpoint key should be used during the administrative save checkpoint process.
  • Added Time-Sequence Strategies including INITIAL_LOAD, LRU, NO, PASS_THROUGH and SEQUENTIAL and standardized all usage of sourceTxTime and sourceTxSequence around them.
  • Added Domain Model Strategies including DICTIONARY, KEY_ONLY, STRING, and MESSAGE_BROKER.
  • Added Redis Streams Eviction Strategies including THRESHOLD and SCHEDULER. They are used in conjunction with a MESSAGE_BROKER deployment to clean up Redis Streams partitions in coordination with committed checkpoints so data is not lost.
  • Added a thread-safe in-memory ChangeEventQueue which transitions change data events between the source and the JobProducer's polling loop for non-Debezium connectors, with write-through persistence to Redis Streams.
  • Added a Redis Streams distributed ChangeEventQueue which transitions change data events between the source and the JobProducer's polling loop for non-Debezium connectors.
  • Added a new LOGGER called "redis-connect-pipeline" which captures only DEBUG-level logs dedicated to the critical path, at the change-event level, across every layer of the replication pipeline. This is helpful for debugging in production since it avoids all other noise.
  • Added JobPipelineStageDTO configuration called "checkpointDatabase" to be used for offset management when the target database is not Redis. This would be typical in a MESSAGE_BROKER deployment.
  • Added a JobSourceTableColumnDTO configuration called "passThroughEnabled"; when disabled, column-level values are not passed beyond the transformation layer. This is useful if they are only used for enrichment.
  • Enhanced the JobReaper service with a distributed two-phase commit to avoid split-brain / zombie jobs (running but not claimed) in the event of network-partition scenarios and unsynchronized job manager interval threshold configurations.
  • Added job-level configuration and validations for max partitions per cluster member. Each JobClaimer service will now check to see if its cluster member has reached a threshold for max claimed job partitions before attempting to claim another. This is supported for both the stream and initial load jobs.
  • Removed a validation so that a Custom Stage can access custom configs without requiring database credential validation.
  • Removed optional secrets encryption logic since we no longer store credentials within Redis Connect.
  • Enhanced error messages across all REST, CLI, and SWAGGER endpoints by consistently including the first line of the exception stack trace. This specifically helps identify which field is missing/incorrect within the job config file in the event of save failure.
  • Enhanced validation of source connections by handling stream and initial load jobs separately.
  • Added validation for event handler (pipeline stages) configuration which will be enforced as part of the start job process.
  • Added validation for source configuration which will be enforced as part of the start job process.
  • Added validation to the start and migrate job processes which calculates available capacity with respect to the job's max partitions per cluster member configuration.
  • Added various minor validations to avoid corner case misconfigurations.
  • Added granularity for trustStore and keyStore configuration across source, target, and checkpoint databases.
  • Disabled Debezium's tombstone.on.delete configuration by default since it emits events meant for Kafka compaction which are not relevant to Redis Connect. Existing workaround code was left in place for backward compatibility.
  • Enhanced metrics to report at the jobId level instead of jobName for granularity at the job partition level.
  • Refactored Producer, Transformation, Connection, Event Handler (pipeline stages), and Utils layers for easier future extension and standardization.
  • (Deprecated - Not backward compatible) JobSourceDTO.Table configuration called "schemaAndTableName". It was redundant with the table name. Extra care should be taken to remove this configuration during upgrades as it will cause the save job config process to fail.
  • (Not backward compatible for custom stages) Refactored ChangeEventDTO to account for dependent events. An example of a dependent event is a primary key change within a relational database which creates both a delete and insert within the target.
  • (Not backward compatible for custom stages) Changed package for BaseCustomStageHandler from com.redis.connect.pipeline.event.handler to com.redis.connect.pipeline.event.handler.impl.
  • (Not backward compatible for custom stages) Broke up ConnectConstants into 4 separate Constants classes and moved them to a dedicated package.
  • Added feature to receive Email alert notifications for STOPPED jobs due to exceptions.
  • Added feature to build the Redis Connect Docker image based on the user's choice of base image.
  • Added Multi-Architecture build with Docker.
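
The two checkpoint-rollback items above (the Redis transaction and WAIT bullets) boil down to a simple pattern: apply the change and advance the checkpoint in one Redis transaction, confirm replication to the backup shard with WAIT, and fall back to the previous checkpoint if either step fails so the event is re-delivered rather than skipped. The Java (Lettuce) sketch below illustrates that pattern only; the helper name, key names, and rollback mechanics are assumptions, not Redis Connect's actual implementation, and a standalone (non-clustered) connection is assumed.

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.TransactionResult;
    import io.lettuce.core.api.sync.RedisCommands;

    public class CheckpointRollbackSketch {

        // Hypothetical helper: apply one change event and advance the checkpoint atomically,
        // then require acknowledgement from at least one replica (backup shard) via WAIT.
        static boolean applyChangeWithCheckpoint(RedisCommands<String, String> redis,
                                                 String dataKey, String dataJson,
                                                 String checkpointKey, String offset) {
            String previousOffset = redis.hget(checkpointKey, "offset");

            redis.multi();
            redis.set(dataKey, dataJson);
            redis.hset(checkpointKey, "offset", offset);
            // Storing the checkpoint key inside its own hash mirrors the v0.10.0 note above:
            // on an exception, the logged value identifies which key (and hash slot) to fix.
            redis.hset(checkpointKey, "checkpointKey", checkpointKey);
            TransactionResult result = redis.exec();

            // WAIT for 1 replica, up to 1000 ms, before treating the write as delivered.
            Long replicasAcked = redis.waitForReplication(1, 1000);

            if (result.wasDiscarded() || replicasAcked == null || replicasAcked < 1) {
                // Roll the checkpoint back so the source offset is not advanced past an
                // unreplicated write; the job can then re-deliver from the previous offset.
                if (previousOffset != null) {
                    redis.hset(checkpointKey, "offset", previousOffset);
                }
                return false;
            }
            return true;
        }

        public static void main(String[] args) {
            RedisClient client = RedisClient.create("redis://localhost:6379");
            RedisCommands<String, String> redis = client.connect().sync();
            applyChangeWithCheckpoint(redis, "emp:1", "{\"ID\":1}",
                    "redis-connect:checkpoint:job-1", "mysql-bin.000003:154");
            client.shutdown();
        }
    }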

πŸ› Bug Fixes

  • Handled an edge case in which a Redis client could be successfully created and cached, but its connection would then fail without the client being removed from the cache.

Tested Versions

Java 11+
Redis Connect 0.10.x
DB2 (Initial Loader): Database 11.5.x; JDBC Driver 11.5.6.0
Files (Initial Loader): CSV, TSV, PSV, JSON, XML
Gemfire (CDC and Initial Loader): Database 1.12.9; Driver 1.12.9
MongoDB (CDC and Initial Loader): Database 4.4+; Driver 4.7.1
MySQL (CDC and Initial Loader): Database 8.0.x; JDBC Driver 8.0.29
Oracle (CDC and Initial Loader): Database 11g, 12c, 19c, 21c; JDBC Driver 12.2.0.1, 19.8.0.0, 21.1.0.0, 21.3.0.0, 21.4.0.0, 21.5.0.0, 21.6.0.0; Adapter logminer
PostgreSQL (CDC and Initial Loader): Database 10, 11, 12, 13, 14, 15; JDBC Driver 42.5.1; Plug-in pgoutput
Splunk (CDC): 8.1.2
SQL Server (CDC and Initial Loader): Database 2017, 2019; JDBC Driver 10.2.1.jre8
Vertica (Initial Loader): Database 11.1.0-0; JDBC Driver 11.1.0-0

Release v0.9.4-redis-connect-core

16 Jul 18:50
48ed69e
Pre-release

πŸš€ Redis Connect Core

redis-connect-core Maven dependency to build Custom Stages

Custom Stages in Redis Connect are used when custom code is needed for user-specific transformations, de-tokenization, or any other custom task you want to perform before the source data is passed along to the final WRITE stage and persisted in the Redis Enterprise database. A small, self-contained sketch of the kind of logic a custom stage runs follows the dependency snippet below.

        <dependency>
            <!-- This jar can be found in the Redis Connect lib folder and installed using maven
            install-file goal on the command line, http://maven.apache.org/general.html#importing-jars
            or imported directly into this project -->
            <groupId>com.redis.connect</groupId>
            <artifactId>redis-connect-core</artifactId>
            <version>0.9.4</version>
        </dependency>
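
The sketch below shows, in plain Java, the kind of per-record logic a custom stage typically performs (here, de-tokenizing one column before the WRITE stage persists the row). It deliberately avoids the redis-connect-core handler API so it compiles on its own; the vault, column, and method names are illustrative, and wiring such logic into a custom stage should follow the examples bundled with the Redis Connect distribution.

    import java.util.HashMap;
    import java.util.Map;

    public class DeTokenizeSketch {

        // Hypothetical token vault; a real stage would call a secrets/tokenization service.
        private static final Map<String, String> VAULT =
                Map.of("tok_8821", "4111-1111-1111-1111");

        // The per-row transformation a custom stage applies before the final WRITE stage.
        static Map<String, Object> deTokenize(Map<String, Object> row, String column) {
            Map<String, Object> out = new HashMap<>(row);
            Object token = out.get(column);
            if (token != null && VAULT.containsKey(token.toString())) {
                out.put(column, VAULT.get(token.toString()));
            }
            return out;
        }

        public static void main(String[] args) {
            Map<String, Object> row = Map.of("ID", 1, "CARD_NUMBER", "tok_8821");
            // Prints the row with CARD_NUMBER replaced by the de-tokenized value.
            System.out.println(deTokenize(row, "CARD_NUMBER"));
        }
    }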

Release v0.9.4-redis-connect

04 Oct 23:55
48ed69e
Pre-release

πŸš€ Changelog

🧰 Enhancements

  • Enhanced REDIS_STREAMS_SINK to support partitioning by publishing to separate Redis Stream keys. This includes partitioning at the Partitioned Job level (scale-out the source) and/or at the Target Sink level (scale-out the target).
  • Enhanced REDIS_STREAMS_SINK to pass through the timestamp which denotes when the source committed the transaction to its change log/table. This includes maintaining a unique sequence for transactions that occurred within the same timestamp. With this capability, users can maintain exact ordering as captured at the source, across partitioned Redis Stream keys, even if they arrive out of order, simply by reordering them within the target database without concern about managing conflicts.
  • Enhanced REDIS_STREAMS_SINK with an optional maxLength configuration so the target Redis database is protected from potentially running out of memory if the Redis Stream's consumer stops managing the stream's length (a capped-write sketch follows this list).
  • Enhanced Initial Load to support credentials rotation without having to restart tasks. This is particularly useful for very long-running initial load and periodic ETL processes which might overlap with credential rotation schedules.
  • Enhanced Initial Load tasks to support the stop process with full feature parity to partitioned stream jobs. This includes handling cascading failure scenarios and graceful failure of all partitions in the event of a single partition failure. It is particularly useful for testing in development environments.
  • Enhanced Initial Load tasks to support the use of RowIndex as the primary key. This is particularly useful for ETL processes that replicate data from aggregated reporting tables which do not have a primary key.
  • Enhanced Initial Load tasks to support a customWhereClause configuration that is seamlessly added to the underlying select statement used for initial load. This is compatible with each variation of initial load configuration (primary key, RowIndex, and pass through).
  • Enhanced Initial Load tasks with circuit breaker protection and connection retry logic for parity with stream jobs.
  • Enhanced Initial Load tasks to quiesce each stage for all events published to its pipeline before notification of its completion, release of resources, and status update in the Job Manager database. This is particularly useful for long-running custom stages and pipelines with large buffer sizes which might require prolonged durations to fully quiesce.
  • Enhanced Initial Load tasks with new transition types so there is distinction between tasks that were COMPLETED, manually STOPPED, or abruptly FAILED.
  • Enhanced Initial Load tasks to share a data source across partitions significantly reducing connection overhead. This is only supported for JDBC-based tasks.
  • Added new REST (including SWAGGER) and CLI endpoints to access Job and Task (Initial Load) transition logs without having to access the Redis CLI or RedisInsight directly.
  • Enhanced stop/remove processes with quiesce capability so all events published to their pipeline (per partition) fully process each stage before shutdown (due to graceful stop or failure event). This includes bypass logic for certain failure cases in which the root cause would prevent writing to the target which avoids waiting to timeout each event.
  • Improved orchestration for stop process while backpressure protection is occurring (e.g. there is more load than the system was configured to handle) so all events already published to their pipeline (per partition) fully process each stage before shutdown.
  • Improved orchestration for graceful failures in the transformation layer in order to avoid race conditions between producer and stage(s) threads.
  • Enhanced credentials rotation by adding stop process to handle failed connections with new credentials. This avoids harder to troubleshoot downstream connection exceptions once the former credentials expire.
  • Enhanced stop/restart/migrate processes with parallelization so all partitions, owned by a single Redis Connect Instance, can begin their quiesce process at the same time instead of serial.
  • Added support for RAW and CLOB column types.
  • Added BaseCustomStageHandler to standardize logging and exception handling. Users now only need to extend this new handler and implement a single method when creating a custom stage.
  • Bumped the Debezium release version to v1.9.6.Final, which includes our requested fix for RDB sources to parse JSON data without the constraint on CLOB columns.
  • Added various new validations to avoid corner case misconfigurations.
  • Improved various exception handling and logging for easier root-cause analysis.
  • Changed default pipeline buffer size to 4096 to avoid unnecessarily prolonged quiesce cycles.
  • (Not backward compatible) Renamed REDIS_STREAM_SINK to REDIS_STREAMS_SINK.
  • (Not backward compatible) Changed the default value of snapshot.mode for all Debezium-supported sources from "initial" to either "never" or "schema_only". This avoids Debezium's initial load snapshot process, which is slow and does not scale efficiently. For development environments, users can manually set snapshot.mode back to "initial" since they test on small tables. For production environments, Redis Connect's initial load process should be used so it can scale independently from the stream process.
  • (Deprecated) Initial Load selectQuery and countQuery configurations. In their place, a new framework removes the need for users to create complex nested queries; instead, each query is customized to user preferences via Boolean fields and customWhereClause, specific to the unique semantics of each source's SQL support.
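
For the maxLength item above, the capped write it refers to corresponds to Redis Streams' XADD with a MAXLEN bound. The Lettuce sketch below shows that shape only; the stream key, field names, and cap value are illustrative and not Redis Connect's actual key layout or configuration.

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.XAddArgs;
    import io.lettuce.core.api.sync.RedisCommands;

    import java.util.Map;

    public class CappedStreamWriteSketch {
        public static void main(String[] args) {
            RedisClient client = RedisClient.create("redis://localhost:6379");
            RedisCommands<String, String> redis = client.connect().sync();

            // XADD <key> MAXLEN 1000000 ...: the cap keeps the stream from growing without
            // bound if the downstream consumer stops trimming or consuming, which is the
            // failure mode an optional maxLength configuration guards against.
            String entryId = redis.xadd(
                    "redis-connect:emp:partition:0",         // illustrative partitioned stream key
                    XAddArgs.Builder.maxlen(1_000_000),
                    Map.of("sourceTxTime", "1665000000000",  // source commit timestamp (pass-through)
                           "sourceTxSequence", "42",         // unique sequence within the same timestamp
                           "payload", "{\"ID\":1}"));
            System.out.println("Appended entry " + entryId);

            client.shutdown();
        }
    }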

πŸ› Bug Fixes

  • Fixed checkpoint transactionality to work on clustered Redis databases.
  • Replaced scanning every file within the user-provided credentials directory during the credentials rotation process with a direct read of only the file for the job whose credentials are being rotated. This avoids impacting every job (noisy neighbor) during each individual rotation cycle.

Tested Versions

Java 11+
Redis Connect 0.9.x
DB2 (Initial Loader): Database 11.5.x; JDBC Driver 11.5.6.0
Files (Initial Loader): CSV
MongoDB (CDC and Initial Loader): Database 4.4+; Driver 4.3.3
MySQL (CDC and Initial Loader): Database 5.7, 8.0.x; JDBC Driver 8.0.28
Oracle (CDC and Initial Loader): Database 11g, 12c, 19c, 21c; JDBC Driver 12.2.0.1, 19.8.0.0, 21.1.0.0; Adapter logminer
PostgreSQL (CDC and Initial Loader): Database 10, 11, 12, 13, 14; JDBC Driver 42.3.5; Plug-in pgoutput
SQL Server (CDC and Initial Loader): Database 2017, 2019; JDBC Driver 9.4.1.jre8
Vertica (Initial Loader): Database 11.1.0-0; JDBC Driver 11.1.0-0, 12.0.1-0

Release v0.9.3-redis-connect [DEPRECATED]

07 Jul 03:55

πŸš€ Changelog

πŸ› Bug Fixes

  • Migration from Pub/Sub to Streams created a scenario, during migration transitions, in which the same instance that previously owned the job would always win the race to claim it; even though it was blacklisted, it would recover the job from the PEL and renew its claim ownership. A new validation was added to prevent a blacklisted instance from claiming the job via the PEL flow.

🧰 Enhancements

  • MongoDB SSL/TLS certificate support via configuration
  • Load/filter query support for MongoDB's initial load job using MongoDB's query and projection operators (a driver-level sketch follows this list)
  • Support for limiting query results at the partition level for MongoDB Initial Load
  • Latency improvements for multi-stage pipelines
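
For readers unfamiliar with MongoDB's query and projection operators mentioned above, the sketch below shows what such a filter, projection, and result limit look like with the MongoDB Java driver. The database, collection, and field names are illustrative, and how these operators are supplied to a Redis Connect initial load job configuration is not shown here.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import com.mongodb.client.model.Projections;
    import org.bson.Document;
    import org.bson.conversions.Bson;

    public class MongoInitialLoadQuerySketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> employees =
                        client.getDatabase("hr").getCollection("employees");

                // Query operators: only load active employees hired in or after 2020.
                Bson filter = Filters.and(
                        Filters.eq("status", "ACTIVE"),
                        Filters.gte("hireYear", 2020));

                // Projection operators limit the loaded fields; limit() caps the number of
                // results, in the spirit of the partition-level limit mentioned above.
                employees.find(filter)
                         .projection(Projections.include("empId", "firstName", "lastName"))
                         .limit(1000)
                         .forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }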

Tested Versions

Java 11+
Redis Connect 0.9.x
DB2 (Initial Loader): Database 11.5.x; JDBC Driver 11.5.6.0
Files (Initial Loader): CSV
MongoDB (CDC and Initial Loader): Database 4.4+; Driver 4.3.3
MySQL (CDC and Initial Loader): Database 5.7, 8.0.x; JDBC Driver 8.0.28
Oracle (CDC and Initial Loader): Database 11g, 12c, 19c, 21c; JDBC Driver 12.2.0.1, 19.8.0.0, 21.1.0.0; Adapter logminer
PostgreSQL (CDC and Initial Loader): Database 10+; JDBC Driver 42.3.3; Plug-in pgoutput
SQL Server (CDC and Initial Loader): Database 2017, 2019; JDBC Driver 9.4.1.jre8
Vertica (Initial Loader): Database 11.1.0-0; JDBC Driver 11.1.0-0

Release v0.9.3-redis-connect-core [DEPRECATED]

29 Jun 02:11

πŸš€ Redis Connect Core

redis-connect-core Maven dependency to build Custom Stages

Custom Stages in Redis Connect are used when custom code is needed for user-specific transformations, de-tokenization, or any other custom task you want to perform before the source data is passed along to the final WRITE stage and persisted in the Redis Enterprise database.

        <dependency>
            <!-- This jar can be found in the Redis Connect lib folder and installed using maven
            install-file goal on the command line, http://maven.apache.org/general.html#importing-jars
            or imported directly into this project -->
            <groupId>com.redis.connect</groupId>
            <artifactId>redis-connect-core</artifactId>
            <version>0.9.3</version>
        </dependency>

Release v0.9.2-redis-connect [DEPRECATED]

22 Jun 02:42

πŸš€ Changelog

⭐ New Features

  • MongoDB connector with load (initial/batch loader) and stream (CDC) jobs
  • Added support for Oracle 11g with load (initial/batch loader) and stream (CDC) jobs

πŸ› Bug Fixes

  • Between different start transitions, a job remained in both the staged and stopped sets, which prevented the job from being stopped again. The same jobId should never be in both the staged and stopped sets at the same time.

🧰 Enhancements

  • Additional resiliency for network split events and graceful failure for initial load edge cases during startup

Tested Versions

Java 11+
Redis Connect 0.9.x
DB2 (Initial Loader): Database 11.5.x; JDBC Driver 11.5.6.0
Files (Initial Loader): CSV
MongoDB (CDC and Initial Loader): Database 4.4+; Driver 4.3.3
MySQL (CDC and Initial Loader): Database 5.7, 8.0.x; JDBC Driver 8.0.28
Oracle (CDC and Initial Loader): Database 11g, 12c, 19c, 21c; JDBC Driver 12.2.0.1, 19.8.0.0, 21.1.0.0; Adapter logminer
PostgreSQL (CDC and Initial Loader): Database 10+; JDBC Driver 42.3.3; Plug-in pgoutput
SQL Server (CDC and Initial Loader): Database 2017, 2019; JDBC Driver 9.4.1.jre8
Vertica (Initial Loader): Database 11.1.0-0; JDBC Driver 11.1.0-0

Release v0.9.2-redis-connect-core [DEPRECATED]

22 Jun 01:15

πŸš€ Redis Connect Core

redis-connect-core Maven dependency to build Custom Stages

Custom Stages in Redis Connect are used when custom code is needed for user-specific transformations, de-tokenization, or any other custom task you want to perform before the source data is passed along to the final WRITE stage and persisted in the Redis Enterprise database.

        <dependency>
            <!-- This jar can be found in the Redis Connect lib folder and installed using maven
            install-file goal on the command line, http://maven.apache.org/general.html#importing-jars
            or imported directly into this project -->
            <groupId>com.redis.connect</groupId>
            <artifactId>redis-connect-core</artifactId>
            <version>0.9.2</version>
        </dependency>

Release v0.9.1-redis-connect [DEPRECATED]

02 Jun 14:54

πŸš€ Changelog

⭐ New Features


  • Support for Vertica database with Initial Loader
  • Support for remote debugging with Custom Stage transformations
  • Windows OS artifacts

Tested Versions

Java 11+
Redis Connect 0.9.x
DB2 (Initial Loader): Database 11.5.x; JDBC Driver 11.5.6.0
Files (Initial Loader): CSV
MySQL (CDC and Initial Loader): Database 5.7, 8.0.x; JDBC Driver 8.0.28
Oracle (CDC and Initial Loader): Database 12c, 19c, 21c; JDBC Driver 12.2.0.1, 19.8.0.0, 21.1.0.0; Adapter logminer
PostgreSQL (CDC and Initial Loader): Database 10+; JDBC Driver 42.3.3; Plug-in pgoutput
SQL Server (CDC and Initial Loader): Database 2017, 2019; JDBC Driver 9.4.1.jre8
Vertica (Initial Loader): Database 11.1.0-0; JDBC Driver 11.1.0-0

Release v0.9.1-redis-connect-core [DEPRECATED]

24 May 19:23

πŸš€ Redis Connect Core

redis-connect-core Maven dependency to build Custom Stages

Custom Stages in Redis Connect are used when custom code is needed for user-specific transformations, de-tokenization, or any other custom task you want to perform before the source data is passed along to the final WRITE stage and persisted in the Redis Enterprise database.

        <dependency>
            <!-- This jar can be found in the Redis Connect lib folder and installed using maven
            install-file goal on the command line, http://maven.apache.org/general.html#importing-jars
            or imported directly into this project -->
            <groupId>com.redis.connect</groupId>
            <artifactId>redis-connect-core</artifactId>
            <version>0.9.1</version>
        </dependency>

Release v0.9.0-redis-connect [DEPRECATED]

14 May 01:39

πŸš€ Redis Connect

⭐ New Features


  • New configuration framework
    Please note this is not backward compatible
  • Multi-Tenancy (supports collocation of heterogeneous source CDC jobs)
  • Checkpoint Transactionality
  • Support for partitioning jobs
  • Expansion of REST and CLI capabilities with full parity
  • Support for credentials externalization
  • Support for event driven credentials rotation
  • Additional HA coverage and internal enhancements

Tested Versions

Java 11+
Redis Connect 0.9.x
DB2 (Initial Loader): Database 11.5.x; JDBC Driver 11.5.6.0
Files (Initial Loader): CSV
MySQL (CDC and Initial Loader): Database 5.7, 8.0.x; JDBC Driver 8.0.28
Oracle (CDC and Initial Loader): Database 12c, 19c; JDBC Driver 12.2.0.1, 19.8.0.0, 21.1.0.0; Adapter logminer
PostgreSQL (CDC and Initial Loader): Database 10+; JDBC Driver 42.3.3; Plug-in pgoutput
SQL Server (CDC and Initial Loader): Database 2017, 2019; JDBC Driver 9.4.1.jre8