diff --git a/docs/asciidoc/modules/ROOT/pages/database-integration/kafka/index.adoc b/docs/asciidoc/modules/ROOT/pages/database-integration/kafka/index.adoc
index c90b16858f..5da3133c55 100644
--- a/docs/asciidoc/modules/ROOT/pages/database-integration/kafka/index.adoc
+++ b/docs/asciidoc/modules/ROOT/pages/database-integration/kafka/index.adoc
@@ -12,8 +12,7 @@ endif::env-docs[]
 [[apoc_neo4j_plugin_quickstart]]
 == APOC Kafka Plugin
 
-Any configuration option that starts with `apoc.kafka.` controls how the plugin itself behaves. For a full
-list of options available, see the documentation subsections on the xref:database-integration/kafka/producer.adoc[source] and xref:database-integration/kafka/consumer.adoc#apoc_kafka_sink[sink].
+Any configuration option that starts with `apoc.kafka.` controls how the plugin itself behaves.
 
 === Install the Plugin
 
@@ -30,11 +29,7 @@ Configuration settings which are valid for those connectors will also work for A
 For example, in the Kafka documentation linked below, the configuration setting named `batch.size` should be stated as
 `apoc.kafka.batch.size` in APOC Kafka.
 
-The following are common configuration settings you may wish to use. _This is not a complete
-list_. The full list of configuration options and reference material is available from Confluent's
-site for link:{url-confluent-install}/configuration/consumer-configs.html[sink configurations] and
-link:{url-confluent-install}/configuration/producer-configs.html[source configurations].
-
+The following are common configuration settings you may wish to use.
 .Most Common Needed Configuration Settings
 |===
 |Setting Name |Description |Default Value
@@ -75,94 +70,8 @@ apoc.kafka.bootstrap.servers=localhost:9092
 
 If you are using Confluent Cloud (managed Kafka), you can connect to Kafka as described in the
 xref:database-integration/kafka/cloud.adoc#confluent_cloud[Confluent Cloud] section
 
-=== Decide: Sink, Source, or Both
-
-Configuring APOC Neo4j plugin comes in three different parts, depending on your need:
-
-. *Required*: Configuring a connection to Kafka
-
-.neo4j.conf
-[source,ini]
-----
-apoc.kafka.bootstrap.servers=localhost:9092
-----
-
-. _Optional_: Configuring Neo4j to produce records to Kafka (xref:database-integration/kafka/producer.adoc[Source])
-. _Optional_: Configuring Neo4j to ingest from Kafka (xref:database-integration/kafka/consumer.adoc#apoc_kafka_sink[Sink])
-
-Follow one or both subsections according to your use case and need:
-
-==== Sink
-
-Take data from Kafka and store it in Neo4j (Neo4j as a data consumer) by adding configuration such as:
-
-.neo4j.conf
-[source,ini]
-----
-apoc.kafka.sink.enabled=true
-apoc.kafka.sink.topic.cypher.my-ingest-topic=MERGE (n:Label {id: event.id}) ON CREATE SET n += event.properties
-----
-This will process every message that comes in on `my-ingest-topic` with the given cypher statement. When
-that cypher statement executes, the `event` variable that is referenced will be set to the message received,
-so this sample cypher will create a `(:Label)` node in the graph with the given ID, copying all of the
-properties in the source message.
-
-For full details on what you can do here, see the xref:database-integration/kafka/consumer.adoc#apoc_kafka_sink[Sink] section of the documentation.
-
-==== Source
-
-Produce data from Neo4j and send it to a Kafka topic (Neo4j as a data producer) by adding configuration such as:
-
-.neo4j.conf
-[source,ini]
-----
-apoc.kafka.source.topic.nodes.my-nodes-topic=Person{*}
-apoc.kafka.source.topic.relationships.my-rels-topic=BELONGS-TO{*}
-apoc.kafka.source.enabled=true
-apoc.kafka.source.schema.polling.interval=10000
-----
-
-This will produce all graph nodes labeled `(:Person)` on to the topic `my-nodes-topic` and all
-relationships of type `-[:BELONGS-TO]->` to the topic named `my-rels-topic`. Further, schema changes will
-be polled every 10,000 ms, which affects how quickly the database picks up new indexes/schema changes.
-Please note that if not specified a value for `apoc.kafka.source.schema.polling.interval` property then Streams plugin will use
-300,000 ms as default.
-
-The expressions `Person{\*}` and `BELONGS-TO{*}` are _patterns_. You can find documentation on how to change
-these in the xref:database-integration/kafka/producer.adoc#source-patterns[Patterns] section.
-
-For full details on what you can do here, see the xref:database-integration/kafka/producer.adoc[Source] section of the documentation.
 
 ==== Restart Neo4j
 
 Once the plugin is installed and configured, restarting the database will make it active. If you have configured
 Neo4j to consume from kafka, it will begin immediately processing messages.
-
-[NOTE]
-
-====
-When installing the latest version of the APOC Kafka plugin into Neo4j 4.x, watching to logs you could find something
-similar to the following:
-
-[source,logs]
-----
-2020-03-25 20:13:50.606+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.max.partition.fetch.bytes
-2020-03-25 20:13:50.608+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.errors.log.include.messages
-2020-03-25 20:13:50.608+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.auto.offset.reset
-2020-03-25 20:13:50.608+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.bootstrap.servers
-2020-03-25 20:13:50.608+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.max.poll.records
-2020-03-25 20:13:50.609+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.errors.log.enable
-2020-03-25 20:13:50.609+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.source.enabled
-2020-03-25 20:13:50.609+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.topic.cypher.boa.to.kafkaTest
-2020-03-25 20:13:50.609+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.errors.tolerance
-2020-03-25 20:13:50.609+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.group.id
-2020-03-25 20:13:50.609+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.errors.deadletterqueue.context.headers.enable
-2020-03-25 20:13:50.609+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.errors.deadletterqueue.context.header.prefix
-2020-03-25 20:13:50.610+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.errors.deadletterqueue.topic.name
-2020-03-25 20:13:50.610+0000 WARN Unrecognized setting. No declared setting with name: apoc.kafka.sink.enabled.to.kafkaTest
-----
-
-*These are not errors*. They comes from the new Neo4j 4 Configuration System, which warns that it doesn't recognize those
-properties. Despite these warnings the plugin will work properly.
-==== \ No newline at end of file diff --git a/extended/src/main/java/apoc/ExtendedApocConfig.java b/extended/src/main/java/apoc/ExtendedApocConfig.java index e89cc12241..5cc43e5bcc 100644 --- a/extended/src/main/java/apoc/ExtendedApocConfig.java +++ b/extended/src/main/java/apoc/ExtendedApocConfig.java @@ -74,25 +74,6 @@ public enum UuidFormatType { hex, base64 } public static final String CONFIG_DIR = "config-dir="; - private static final String CONF_DIR_ARG = "config-dir="; - private static final String SOURCE_ENABLED = "apoc.kafka.source.enabled"; - private static final boolean SOURCE_ENABLED_VALUE = true; - private static final String PROCEDURES_ENABLED = "apoc.kafka.procedures.enabled"; - private static final boolean PROCEDURES_ENABLED_VALUE = true; - private static final String SINK_ENABLED = "apoc.kafka.sink.enabled"; - private static final boolean SINK_ENABLED_VALUE = false; - private static final String CHECK_APOC_TIMEOUT = "apoc.kafka.check.apoc.timeout"; - private static final String CHECK_APOC_INTERVAL = "apoc.kafka.check.apoc.interval"; - private static final String CLUSTER_ONLY = "apoc.kafka.cluster.only"; - private static final String CHECK_WRITEABLE_INSTANCE_INTERVAL = "apoc.kafka.check.writeable.instance.interval"; - private static final String SYSTEM_DB_WAIT_TIMEOUT = "apoc.kafka.systemdb.wait.timeout"; - private static final long SYSTEM_DB_WAIT_TIMEOUT_VALUE = 10000L; - private static final String POLL_INTERVAL = "apoc.kafka.sink.poll.interval"; - private static final String INSTANCE_WAIT_TIMEOUT = "apoc.kafka.wait.timeout"; - private static final long INSTANCE_WAIT_TIMEOUT_VALUE = 120000L; - private static final int DEFAULT_TRIGGER_PERIOD = 10000; - private static final String DEFAULT_PATH = "."; - public ExtendedApocConfig(LogService log, GlobalProcedures globalProceduresRegistry, String defaultConfigPath) { this.log = log.getInternalLog(ApocConfig.class); this.defaultConfigPath = defaultConfigPath; diff --git a/extended/src/main/kotlin/apoc/kafka/KafkaHandler.kt b/extended/src/main/kotlin/apoc/kafka/KafkaHandler.kt index b508410a66..3882cf6a66 100644 --- a/extended/src/main/kotlin/apoc/kafka/KafkaHandler.kt +++ b/extended/src/main/kotlin/apoc/kafka/KafkaHandler.kt @@ -3,7 +3,6 @@ package apoc.kafka import apoc.ApocConfig import apoc.ExtendedApocConfig.APOC_KAFKA_ENABLED import apoc.kafka.config.StreamsConfig -import apoc.kafka.consumer.StreamsSinkConfigurationListener import apoc.kafka.producer.StreamsRouterConfigurationListener import org.neo4j.kernel.internal.GraphDatabaseAPI import org.neo4j.kernel.lifecycle.LifecycleAdapter @@ -28,13 +27,6 @@ class KafkaHandler(): LifecycleAdapter() { } catch (e: Exception) { log.error("Exception in StreamsRouterConfigurationListener {}", e.message) } - - try { - StreamsSinkConfigurationListener(db, log) - .start(StreamsConfig.getConfiguration()) - } catch (e: Exception) { - log.error("Exception in StreamsSinkConfigurationListener {}", e.message) - } } } @@ -42,7 +34,6 @@ class KafkaHandler(): LifecycleAdapter() { if(ApocConfig.apocConfig().getBoolean(APOC_KAFKA_ENABLED)) { StreamsRouterConfigurationListener(db, log).shutdown() - StreamsSinkConfigurationListener(db, log).shutdown() } } } \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/PublishProcedures.kt b/extended/src/main/kotlin/apoc/kafka/PublishProcedures.kt index fcc77aca3b..f9285e8942 100644 --- a/extended/src/main/kotlin/apoc/kafka/PublishProcedures.kt +++ b/extended/src/main/kotlin/apoc/kafka/PublishProcedures.kt @@ -1,8 +1,6 @@ package 
apoc.kafka -//import apoc.kafka.producer.StreamsEventRouter -//import apoc.kafka.producer.StreamsTransactionEventHandler -//import apoc.kafka.producer.StreamsTransactionEventHandler + import apoc.kafka.producer.events.StreamsEventBuilder import apoc.kafka.producer.kafka.KafkaEventRouter import apoc.kafka.utils.KafkaUtil @@ -20,9 +18,8 @@ import java.util.stream.Stream data class StreamPublishResult(@JvmField val value: Map) -data class StreamsEventSinkStoreEntry(val eventRouter: KafkaEventRouter, -// val txHandler: StreamsTransactionEventHandler -) +data class StreamsEventSinkStoreEntry(val eventRouter: KafkaEventRouter) + class PublishProcedures { @JvmField @Context @@ -101,9 +98,8 @@ class PublishProcedures { fun register( db: GraphDatabaseAPI, evtRouter: KafkaEventRouter, -// txHandler: StreamsTransactionEventHandler ) { - streamsEventRouterStore[KafkaUtil.getName(db)] = StreamsEventSinkStoreEntry(evtRouter/*, txHandler*/) + streamsEventRouterStore[KafkaUtil.getName(db)] = StreamsEventSinkStoreEntry(evtRouter) } fun unregister(db: GraphDatabaseAPI) { diff --git a/extended/src/main/kotlin/apoc/kafka/config/StreamsConfig.kt b/extended/src/main/kotlin/apoc/kafka/config/StreamsConfig.kt index 67a9b07e7d..250fe42ac6 100644 --- a/extended/src/main/kotlin/apoc/kafka/config/StreamsConfig.kt +++ b/extended/src/main/kotlin/apoc/kafka/config/StreamsConfig.kt @@ -21,15 +21,6 @@ class StreamsConfig { const val SOURCE_ENABLED_VALUE = true const val PROCEDURES_ENABLED = "apoc.kafka.procedures.enabled" const val PROCEDURES_ENABLED_VALUE = true - const val SINK_ENABLED = "apoc.kafka.sink.enabled" - const val SINK_ENABLED_VALUE = false - const val CHECK_APOC_TIMEOUT = "apoc.kafka.check.apoc.timeout" - const val CHECK_APOC_INTERVAL = "apoc.kafka.check.apoc.interval" - const val CLUSTER_ONLY = "apoc.kafka.cluster.only" - const val CHECK_WRITEABLE_INSTANCE_INTERVAL = "apoc.kafka.check.writeable.instance.interval" - const val POLL_INTERVAL = "apoc.kafka.sink.poll.interval" - const val INSTANCE_WAIT_TIMEOUT = "apoc.kafka.wait.timeout" - const val INSTANCE_WAIT_TIMEOUT_VALUE = 120000L fun isSourceGloballyEnabled(config: Map) = config.getOrDefault(SOURCE_ENABLED, SOURCE_ENABLED_VALUE).toString().toBoolean() @@ -39,12 +30,6 @@ class StreamsConfig { fun hasProceduresEnabled(config: Map, dbName: String) = config.getOrDefault("${PROCEDURES_ENABLED}.$dbName", hasProceduresGloballyEnabled(config)).toString().toBoolean() - fun isSinkGloballyEnabled(config: Map) = config.getOrDefault(SINK_ENABLED, SINK_ENABLED_VALUE).toString().toBoolean() - - fun isSinkEnabled(config: Map, dbName: String) = config.getOrDefault("${SINK_ENABLED}.to.$dbName", isSinkGloballyEnabled(config)).toString().toBoolean() - - fun getInstanceWaitTimeout(config: Map) = config.getOrDefault(INSTANCE_WAIT_TIMEOUT, INSTANCE_WAIT_TIMEOUT_VALUE).toString().toLong() - fun convert(props: Map, config: Map): Map { val mutProps = props.toMutableMap() val mappingKeys = mapOf( diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventConsumer.kt b/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventConsumer.kt deleted file mode 100644 index cdce9d7da2..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventConsumer.kt +++ /dev/null @@ -1,24 +0,0 @@ -package apoc.kafka.consumer - -import org.neo4j.logging.Log -import apoc.kafka.service.StreamsSinkEntity - - -abstract class StreamsEventConsumer(log: Log, topics: Set) { - - abstract fun stop() - - abstract fun start() - - abstract fun read(topicConfig: Map = emptyMap(), action: 
(String, List) -> Unit) - - abstract fun read(action: (String, List) -> Unit) - - fun invalidTopics(): List = emptyList() - -} - - -abstract class StreamsEventConsumerFactory { - abstract fun createStreamsEventConsumer(config: Map, log: Log, topics: Set): StreamsEventConsumer -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventSink.kt b/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventSink.kt deleted file mode 100644 index 1b9c99eaac..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventSink.kt +++ /dev/null @@ -1,22 +0,0 @@ -package apoc.kafka.consumer - -import apoc.kafka.consumer.kafka.KafkaEventSink -import org.neo4j.kernel.internal.GraphDatabaseAPI -import org.neo4j.logging.Log -import apoc.kafka.events.StreamsPluginStatus - -object StreamsEventSinkFactory { - fun getStreamsEventSink(config: Map, //streamsQueryExecution: StreamsEventSinkQueryExecution, - /* streamsTopicService: StreamsTopicService, */log: Log, db: GraphDatabaseAPI): KafkaEventSink { - return KafkaEventSink(/*config, streamsQueryExecution, streamsTopicService, log, */db) - } -} - -open class StreamsEventSinkConfigMapper(private val streamsConfigMap: Map, private val mappingKeys: Map) { - open fun convert(config: Map): Map { - val props = streamsConfigMap - .toMutableMap() - props += config.mapKeys { mappingKeys.getOrDefault(it.key, it.key) } - return props - } -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventSinkQueryExecution.kt b/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventSinkQueryExecution.kt deleted file mode 100644 index 0220ae54c6..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsEventSinkQueryExecution.kt +++ /dev/null @@ -1,32 +0,0 @@ -package apoc.kafka.consumer - -import org.neo4j.kernel.internal.GraphDatabaseAPI -import org.neo4j.logging.Log -import apoc.kafka.extensions.execute -import apoc.kafka.service.StreamsSinkService -import apoc.kafka.service.StreamsStrategyStorage -import apoc.kafka.consumer.utils.ConsumerUtils - -class NotInWriteableInstanceException(message: String): RuntimeException(message) - -class StreamsEventSinkQueryExecution(private val db: GraphDatabaseAPI, - private val log: Log, - streamsStrategyStorage: StreamsStrategyStorage): - StreamsSinkService(streamsStrategyStorage) { - - override fun write(query: String, params: Collection) { - if (params.isEmpty()) return - if (ConsumerUtils.isWriteableInstance(db)) { - db.execute(query, mapOf("events" to params)) { - if (log.isDebugEnabled) { - log.debug("Query statistics:\n${it.queryStatistics}") - } - } - } else { - if (log.isDebugEnabled) { - log.debug("Not writeable instance") - } - NotInWriteableInstanceException("Not writeable instance") - } - } -} diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsSinkConfigurationListener.kt b/extended/src/main/kotlin/apoc/kafka/consumer/StreamsSinkConfigurationListener.kt deleted file mode 100644 index 26a15395ff..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsSinkConfigurationListener.kt +++ /dev/null @@ -1,58 +0,0 @@ -package apoc.kafka.consumer - -import apoc.kafka.config.StreamsConfig -import apoc.kafka.consumer.kafka.KafkaEventSink -import apoc.kafka.consumer.kafka.KafkaSinkConfiguration -import apoc.kafka.consumer.procedures.StreamsSinkProcedures -import apoc.kafka.consumer.utils.ConsumerUtils -import apoc.kafka.extensions.isDefaultDb -import apoc.kafka.utils.KafkaUtil -import 
apoc.kafka.utils.KafkaUtil.getProducerProperties -import kotlinx.coroutines.sync.Mutex -import org.neo4j.kernel.internal.GraphDatabaseAPI -import org.neo4j.logging.Log - -class StreamsSinkConfigurationListener(private val db: GraphDatabaseAPI, - private val log: Log) { - -// private val mutex = Mutex() -// - var eventSink: KafkaEventSink? = null -// -// private val streamsTopicService = StreamsTopicService() -// -// private var lastConfig: KafkaSinkConfiguration? = null -// -// private val producerConfig = getProducerProperties() -// -// private fun KafkaSinkConfiguration.excludeSourceProps() = this.asProperties() -// ?.filterNot { producerConfig.contains(it.key) || it.key.toString().startsWith("apoc.kafka.source") } - - - fun shutdown() { -// val isShuttingDown = eventSink != null -// if (isShuttingDown) { -// log.info("[Sink] Shutting down the Streams Sink Module") -// } -// eventSink?.stop() -// eventSink = null - StreamsSinkProcedures.unregisterStreamsEventSink(db) -// if (isShuttingDown) { -// log.info("[Sink] Shutdown of the Streams Sink Module completed") -// } - } - - fun start(configMap: Map) { - - eventSink = StreamsEventSinkFactory - .getStreamsEventSink(configMap, - // streamsQueryExecution, - // streamsTopicService, - log, - db) - -// log.info("[Sink] Registering the Streams Sink procedures") - StreamsSinkProcedures.registerStreamsEventSink(db, eventSink!!) - } - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsTopicService.kt b/extended/src/main/kotlin/apoc/kafka/consumer/StreamsTopicService.kt deleted file mode 100644 index f6fae70065..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/StreamsTopicService.kt +++ /dev/null @@ -1,95 +0,0 @@ -package apoc.kafka.consumer - -import kotlinx.coroutines.runBlocking -import kotlinx.coroutines.sync.Mutex -import kotlinx.coroutines.sync.withLock -import apoc.kafka.service.TopicType -import apoc.kafka.service.Topics -import java.util.Collections -import java.util.concurrent.ConcurrentHashMap - -class StreamsTopicService { - - private val storage = ConcurrentHashMap() - - private val mutex = Mutex() - - fun clearAll() { - storage.clear() - } - - private fun throwRuntimeException(data: Any, topicType: TopicType): Unit = - throw RuntimeException("Unsupported data $data for topic type $topicType") - - fun set(topicType: TopicType, data: Any) = runBlocking { - mutex.withLock { - var oldData = storage[topicType] - oldData = oldData ?: when (data) { - is Map<*, *> -> emptyMap() - is Collection<*> -> emptyList() - else -> throwRuntimeException(data, topicType) - } - val newData = when (oldData) { - is Map<*, *> -> oldData + (data as Map) - is Collection<*> -> oldData + (data as Collection) - else -> throwRuntimeException(data, topicType) - } - storage[topicType] = newData - } - } - - fun remove(topicType: TopicType, topic: String) = runBlocking { - mutex.withLock { - val topicData = storage[topicType] ?: return@runBlocking - - val runtimeException = RuntimeException("Unsupported data $topicData for topic type $topicType") - val filteredData = when (topicData) { - is Map<*, *> -> topicData.filterKeys { it.toString() != topic } - is Collection<*> -> topicData.filter { it.toString() != topic } - else -> throw runtimeException - } - - storage[topicType] = filteredData - } - } - - fun getTopicType(topic: String) = runBlocking { - TopicType.values() - .find { - mutex.withLock { - when (val topicData = storage[it]) { - is Map<*, *> -> topicData.containsKey(topic) - is Collection<*> -> 
topicData.contains(topic) - else -> false - } - } - } - } - - fun getTopics() = runBlocking { - TopicType.values() - .flatMap { - mutex.withLock { - when (val data = storage[it]) { - is Map<*, *> -> data.keys - is Collection<*> -> data.toSet() - else -> emptySet() - } - } - }.toSet() as Set - } - - fun setAll(topics: Topics) { - topics.asMap().forEach { (topicType, data) -> - set(topicType, data) - } - } - - fun getCypherTemplate(topic: String) = (storage.getOrDefault(TopicType.CYPHER, emptyMap()) as Map) - .let { it[topic] } - - fun getAll(): Map = Collections.unmodifiableMap(storage) - - fun getByTopicType(topicType: TopicType): Any? = storage[topicType] - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaAutoCommitEventConsumer.kt b/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaAutoCommitEventConsumer.kt deleted file mode 100644 index 37ea6aef17..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaAutoCommitEventConsumer.kt +++ /dev/null @@ -1,145 +0,0 @@ -package apoc.kafka.consumer.kafka - -//import io.confluent.kafka.serializers.KafkaAvroDeserializer -import org.apache.avro.generic.GenericRecord -import org.apache.kafka.clients.consumer.ConsumerRecord -import org.apache.kafka.clients.consumer.KafkaConsumer -import org.apache.kafka.clients.consumer.OffsetAndMetadata -import org.apache.kafka.common.TopicPartition -import org.apache.kafka.common.serialization.ByteArrayDeserializer -import org.neo4j.logging.Log -import apoc.kafka.consumer.StreamsEventConsumer -import apoc.kafka.extensions.offsetAndMetadata -import apoc.kafka.extensions.toStreamsSinkEntity -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.service.errors.* -import java.time.Duration -import java.util.concurrent.atomic.AtomicBoolean - -data class KafkaTopicConfig(val commit: Boolean, val topicPartitionsMap: Map) { - companion object { - private fun toTopicPartitionMap(topicConfig: Map>>): Map = topicConfig - .flatMap { topicConfigEntry -> - topicConfigEntry.value.map { - val partition = it.getValue("partition").toString().toInt() - val offset = it.getValue("offset").toString().toLong() - TopicPartition(topicConfigEntry.key, partition) to offset - } - } - .toMap() - - fun fromMap(map: Map): KafkaTopicConfig { - val commit = map.getOrDefault("commit", true).toString().toBoolean() - val topicPartitionsMap = toTopicPartitionMap(map - .getOrDefault("partitions", emptyMap>>()) as Map>>) - return KafkaTopicConfig(commit = commit, topicPartitionsMap = topicPartitionsMap) - } - } -} - -abstract class KafkaEventConsumer(config: KafkaSinkConfiguration, - log: Log, - topics: Set): StreamsEventConsumer(log, topics) { - abstract fun wakeup() -} - -open class KafkaAutoCommitEventConsumer(private val config: KafkaSinkConfiguration, - private val log: Log, - val topics: Set, - private val dbName: String): KafkaEventConsumer(config, log, topics) { - - private val errorService: ErrorService = KafkaErrorService(config.asProperties(), - ErrorService.ErrorConfig.from(emptyMap()), - { s, e -> log.error(s,e as Throwable) }) - - // override fun invalidTopics(): List = config.sinkConfiguration.topics.invalid - - private val isSeekSet = AtomicBoolean() - - val consumer: KafkaConsumer<*, *> = when { - config.keyDeserializer == ByteArrayDeserializer::class.java.name && config.valueDeserializer == ByteArrayDeserializer::class.java.name -> KafkaConsumer(config.asProperties()) -// config.keyDeserializer == ByteArrayDeserializer::class.java.name && 
config.valueDeserializer == KafkaAvroDeserializer::class.java.name -> KafkaConsumer(config.asProperties()) -// config.keyDeserializer == KafkaAvroDeserializer::class.java.name && config.valueDeserializer == KafkaAvroDeserializer::class.java.name -> KafkaConsumer(config.asProperties()) -// config.keyDeserializer == KafkaAvroDeserializer::class.java.name && config.valueDeserializer == ByteArrayDeserializer::class.java.name -> KafkaConsumer(config.asProperties()) - else -> throw RuntimeException("Invalid config") - } - - override fun start() { - if (topics.isEmpty()) { - log.info("No topics specified Kafka Consumer will not started") - return - } - this.consumer.subscribe(topics) - } - - override fun stop() { - consumer.close() - errorService.close() - } - - private fun readSimple(action: (String, List) -> Unit) { - val records = consumer.poll(Duration.ZERO) - if (records.isEmpty) return - this.topics.forEach { topic -> - val topicRecords = records.records(topic) - executeAction(action, topic, topicRecords) - } - } - - fun executeAction(action: (String, List) -> Unit, topic: String, topicRecords: Iterable>) { - try { - action(topic, topicRecords.map { it.toStreamsSinkEntity() }) - } catch (e: Exception) { - errorService.report(topicRecords.map { ErrorData.from(it, e, this::class.java, dbName) }) - } - } - - fun readFromPartition(kafkaTopicConfig: KafkaTopicConfig, - action: (String, List) -> Unit): Map { - setSeek(kafkaTopicConfig.topicPartitionsMap) - val records = consumer.poll(Duration.ZERO) - return when (records.isEmpty) { - true -> emptyMap() - else -> kafkaTopicConfig.topicPartitionsMap - .mapValues { records.records(it.key) } - .filterValues { it.isNotEmpty() } - .mapValues { (topic, topicRecords) -> - executeAction(action, topic.topic(), topicRecords) - topicRecords.last().offsetAndMetadata() - } - } - } - - override fun read(action: (String, List) -> Unit) { - readSimple(action) - } - - override fun read(topicConfig: Map, action: (String, List) -> Unit) { - val kafkaTopicConfig = KafkaTopicConfig.fromMap(topicConfig) - if (kafkaTopicConfig.topicPartitionsMap.isEmpty()) { - readSimple(action) - } else { - readFromPartition(kafkaTopicConfig, action) - } - } - - private fun setSeek(topicPartitionsMap: Map) { - if (!isSeekSet.compareAndSet(false, true)) { - return - } - consumer.poll(0) // dummy call see: https://stackoverflow.com/questions/41008610/kafkaconsumer-0-10-java-api-error-message-no-current-assignment-for-partition - topicPartitionsMap.forEach { - when (it.value) { - -1L -> consumer.seekToBeginning(listOf(it.key)) - -2L -> consumer.seekToEnd(listOf(it.key)) - else -> consumer.seek(it.key, it.value) - } - } - } - - override fun wakeup() { - consumer.wakeup() - } -} - diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaEventSink.kt b/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaEventSink.kt deleted file mode 100644 index 0cfc4f2dc2..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaEventSink.kt +++ /dev/null @@ -1,72 +0,0 @@ -package apoc.kafka.consumer.kafka - -import apoc.kafka.config.StreamsConfig -import apoc.kafka.consumer.StreamsEventConsumer -import apoc.kafka.consumer.StreamsEventConsumerFactory -import apoc.kafka.consumer.StreamsEventSinkQueryExecution -//import apoc.kafka.consumer.StreamsSinkConfiguration -import apoc.kafka.consumer.StreamsTopicService -import apoc.kafka.consumer.utils.ConsumerUtils -import apoc.kafka.events.StreamsPluginStatus -import apoc.kafka.extensions.isDefaultDb -import 
apoc.kafka.utils.KafkaUtil -import apoc.kafka.utils.KafkaUtil.getInvalidTopicsError -import kotlinx.coroutines.CancellationException -import kotlinx.coroutines.Dispatchers -import kotlinx.coroutines.GlobalScope -import kotlinx.coroutines.Job -import kotlinx.coroutines.cancelAndJoin -import kotlinx.coroutines.delay -import kotlinx.coroutines.isActive -import kotlinx.coroutines.launch -import kotlinx.coroutines.runBlocking -import kotlinx.coroutines.sync.Mutex -import kotlinx.coroutines.sync.withLock -import org.apache.kafka.common.errors.WakeupException -import org.neo4j.kernel.internal.GraphDatabaseAPI -import org.neo4j.logging.Log - -class KafkaEventSink(//private val config: Map, - //private val queryExecution: StreamsEventSinkQueryExecution, - // private val streamsTopicService: StreamsTopicService, - // private val log: Log, - private val db: GraphDatabaseAPI) { - - private val mutex = Mutex() - - private lateinit var eventConsumer: KafkaEventConsumer - private var job: Job? = null - -// val streamsSinkConfiguration: StreamsSinkConfiguration = StreamsSinkConfiguration.from(configMap = config, -// dbName = db.databaseName(), isDefaultDb = db.isDefaultDb()) -// -// private val streamsConfig: StreamsSinkConfiguration = StreamsSinkConfiguration.from(configMap = config, -// dbName = db.databaseName(), isDefaultDb = db.isDefaultDb()) - - fun getEventConsumerFactory(): StreamsEventConsumerFactory { - return object: StreamsEventConsumerFactory() { - override fun createStreamsEventConsumer(config: Map, log: Log, topics: Set): StreamsEventConsumer { - val dbName = db.databaseName() - val kafkaConfig = KafkaSinkConfiguration.from(config, dbName, db.isDefaultDb()) - val topics1 = topics as Set - return if (kafkaConfig.enableAutoCommit) { - KafkaAutoCommitEventConsumer(kafkaConfig, log, topics1, dbName) - } else { - KafkaManualCommitEventConsumer(kafkaConfig, log, topics1, dbName) - } - } - } - } - - fun status(): StreamsPluginStatus = runBlocking { - mutex.withLock(job) { - status(job) - } - } - - private fun status(job: Job?): StreamsPluginStatus = when (job?.isActive) { - true -> StreamsPluginStatus.RUNNING - else -> StreamsPluginStatus.STOPPED - } - -} diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaManualCommitEventConsumer.kt b/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaManualCommitEventConsumer.kt deleted file mode 100644 index 6871f036f8..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaManualCommitEventConsumer.kt +++ /dev/null @@ -1,118 +0,0 @@ -package apoc.kafka.consumer.kafka - -import org.apache.kafka.clients.consumer.CommitFailedException -import org.apache.kafka.clients.consumer.ConsumerRebalanceListener -import org.apache.kafka.clients.consumer.OffsetAndMetadata -import org.apache.kafka.common.TopicPartition -import org.apache.kafka.common.errors.WakeupException -import org.neo4j.logging.Log -import apoc.kafka.extensions.offsetAndMetadata -import apoc.kafka.extensions.topicPartition -import apoc.kafka.service.StreamsSinkEntity -import java.time.Duration - -class KafkaManualCommitEventConsumer(config: KafkaSinkConfiguration, - private val log: Log, - topics: Set, - dbName: String): KafkaAutoCommitEventConsumer(config, log, topics, dbName) { - - private val asyncCommit = config.asyncCommit - - override fun stop() { - if (asyncCommit) { - doCommitSync() - } - super.stop() - } - - private fun doCommitSync() { - try { - /* - * While everything is fine, we use commitAsync. 
- * It is faster, and if one commit fails, the next commit will serve as a retry. - * But if we are closing, there is no "next commit". We call commitSync(), - * because it will retry until it succeeds or suffers unrecoverable failure. - */ - consumer.commitSync() - } catch (e: WakeupException) { - // we're shutting down, but finish the commit first and then - // rethrow the exception so that the main loop can exit - doCommitSync() - throw e - } catch (e: CommitFailedException) { - // the commit failed with an unrecoverable error. if there is any - // internal state which depended on the commit, you can clean it - // up here. otherwise it's reasonable to ignore the error and go on - log.warn("Commit failed", e) - } - } - - override fun start() { - if (asyncCommit) { - if (topics.isEmpty()) { - log.info("No topics specified Kafka Consumer will not started") - return - } - this.consumer.subscribe(topics, object : ConsumerRebalanceListener { - override fun onPartitionsRevoked(partitions: Collection) = doCommitSync() - - override fun onPartitionsAssigned(partitions: Collection) {} - }) - } else { - super.start() - } - } - - private fun commitData(commit: Boolean, topicMap: Map) { - if (commit && topicMap.isNotEmpty()) { - if (asyncCommit) { - if (log.isDebugEnabled) { - log.debug("Committing data in async") - } - consumer.commitAsync(topicMap) { offsets: MutableMap, exception: Exception? -> - if (exception != null) { - log.warn(""" - |These offsets `$offsets` - |cannot be committed because of the following exception: - """.trimMargin(), exception) - } - } - } else { - if (log.isDebugEnabled) { - log.debug("Committing data in sync") - } - consumer.commitSync(topicMap) - } - } - } - - override fun read(action: (String, List) -> Unit) { - val topicMap = readSimple(action) - commitData(true, topicMap) - } - - override fun read(topicConfig: Map, action: (String, List) -> Unit) { - val kafkaTopicConfig = KafkaTopicConfig.fromMap(topicConfig) - val topicMap = if (kafkaTopicConfig.topicPartitionsMap.isEmpty()) { - readSimple(action) - } else { - readFromPartition(kafkaTopicConfig, action) - } - commitData(kafkaTopicConfig.commit, topicMap) - } - - private fun readSimple(action: (String, List) -> Unit): Map { - val records = consumer.poll(Duration.ZERO) - return when (records.isEmpty) { - true -> emptyMap() - else -> records.partitions() - .map { topicPartition -> - val topicRecords = records.records(topicPartition) - executeAction(action, topicPartition.topic(), topicRecords) - val last = topicRecords.last() - last.topicPartition() to last.offsetAndMetadata() - } - .toMap() - } - } -} diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaSinkConfiguration.kt b/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaSinkConfiguration.kt deleted file mode 100644 index 79a5252d1e..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/kafka/KafkaSinkConfiguration.kt +++ /dev/null @@ -1,100 +0,0 @@ -package apoc.kafka.consumer.kafka - -//import io.confluent.kafka.serializers.KafkaAvroDeserializer -import org.apache.kafka.clients.CommonClientConfigs -import org.apache.kafka.clients.consumer.ConsumerConfig -import org.apache.kafka.common.serialization.ByteArrayDeserializer -//import apoc.kafka.consumer.StreamsSinkConfiguration -import apoc.kafka.extensions.toPointCase -import apoc.kafka.utils.JSONUtils -import apoc.kafka.utils.KafkaUtil.getInvalidTopics -import apoc.kafka.utils.KafkaUtil.validateConnection -import java.util.Properties - - -private const val kafkaConfigPrefix = 
"apoc.kafka." - -//private val SUPPORTED_DESERIALIZER = listOf(ByteArrayDeserializer::class.java.name, KafkaAvroDeserializer::class.java.name) - -private fun validateDeserializers(config: KafkaSinkConfiguration) { -// val key = if (!SUPPORTED_DESERIALIZER.contains(config.keyDeserializer)) { -// ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -// } else if (!SUPPORTED_DESERIALIZER.contains(config.valueDeserializer)) { -// ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -// } else { -// "" -// } -// if (key.isNotBlank()) { -// throw RuntimeException("The property `kafka.$key` contains an invalid deserializer. Supported deserializers are $SUPPORTED_DESERIALIZER") -// } -} - -data class KafkaSinkConfiguration(val bootstrapServers: String = "localhost:9092", - val keyDeserializer: String = "org.apache.kafka.common.serialization.ByteArrayDeserializer", - val valueDeserializer: String = "org.apache.kafka.common.serialization.ByteArrayDeserializer", - val groupId: String = "neo4j", - val autoOffsetReset: String = "earliest", -// val sinkConfiguration: StreamsSinkConfiguration = StreamsSinkConfiguration(), - val enableAutoCommit: Boolean = true, - val asyncCommit: Boolean = false, - val extraProperties: Map = emptyMap()) { - - companion object { - - fun from(cfg: Map, dbName: String, isDefaultDb: Boolean): KafkaSinkConfiguration { - val kafkaCfg = create(cfg, dbName, isDefaultDb) - validate(kafkaCfg) -// val invalidTopics = getInvalidTopics(kafkaCfg.asProperties(), kafkaCfg.sinkConfiguration.topics.allTopics()) -// return if (invalidTopics.isNotEmpty()) { -// kafkaCfg.copy(sinkConfiguration = StreamsSinkConfiguration.from(cfg, dbName, invalidTopics, isDefaultDb)) -// } else { - return kafkaCfg -// } - } - - // Visible for testing - fun create(cfg: Map, dbName: String, isDefaultDb: Boolean): KafkaSinkConfiguration { - val config = cfg - .filterKeys { it.startsWith(kafkaConfigPrefix) && !it.startsWith("${kafkaConfigPrefix}sink") } - .mapKeys { it.key.substring(kafkaConfigPrefix.length) } - val default = KafkaSinkConfiguration() - - val keys = JSONUtils.asMap(default).keys.map { it.toPointCase() } - val extraProperties = config.filterKeys { !keys.contains(it) } - -// val streamsSinkConfiguration = StreamsSinkConfiguration.from(configMap = cfg, dbName = dbName, isDefaultDb = isDefaultDb) - - - return default.copy(keyDeserializer = config.getOrDefault(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, default.keyDeserializer), - valueDeserializer = config.getOrDefault(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, default.valueDeserializer), - bootstrapServers = config.getOrDefault(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, default.bootstrapServers), - autoOffsetReset = config.getOrDefault(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, default.autoOffsetReset), - groupId = config.getOrDefault(ConsumerConfig.GROUP_ID_CONFIG, default.groupId) + (if (isDefaultDb) "" else "-$dbName"), - enableAutoCommit = config.getOrDefault(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, default.enableAutoCommit).toString().toBoolean(), - asyncCommit = config.getOrDefault("async.commit", default.asyncCommit).toString().toBoolean(), -// sinkConfiguration = streamsSinkConfiguration, - extraProperties = extraProperties // for what we don't provide a default configuration - ) - } - - private fun validate(config: KafkaSinkConfiguration) { - validateConnection(config.bootstrapServers, CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, false) - val schemaRegistryUrlKey = "schema.registry.url" - if 
(config.extraProperties.containsKey(schemaRegistryUrlKey)) { - val schemaRegistryUrl = config.extraProperties.getOrDefault(schemaRegistryUrlKey, "") - validateConnection(schemaRegistryUrl, schemaRegistryUrlKey, false) - } - validateDeserializers(config) - } - } - - fun asProperties(): Properties { - val props = Properties() - val map = JSONUtils.asMap(this) - .filterKeys { it != "extraProperties" && it != "sinkConfiguration" } - .mapKeys { it.key.toPointCase() } - props.putAll(map) - props.putAll(extraProperties) - return props - } -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/procedures/QueueBasedSpliterator.kt b/extended/src/main/kotlin/apoc/kafka/consumer/procedures/QueueBasedSpliterator.kt deleted file mode 100644 index 315ae49201..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/procedures/QueueBasedSpliterator.kt +++ /dev/null @@ -1,67 +0,0 @@ -package apoc.kafka.consumer.procedures - -import org.neo4j.graphdb.NotInTransactionException -import org.neo4j.graphdb.TransactionTerminatedException -import org.neo4j.procedure.TerminationGuard -import java.util.Spliterator -import java.util.concurrent.BlockingQueue -import java.util.concurrent.TimeUnit -import java.util.function.Consumer - -/** - * @author mh - * @since 08.05.16 in APOC - */ -class QueueBasedSpliterator constructor(private val queue: BlockingQueue, - private val tombstone: T, - private val terminationGuard: TerminationGuard, - private val timeout: Long = 10) : Spliterator { - private var entry: T? - - init { - entry = poll() - } - - override fun tryAdvance(action: Consumer): Boolean { - if (transactionIsTerminated(terminationGuard)) return false - if (isEnd) return false - action.accept(entry) - entry = poll() - return !isEnd - } - - private fun transactionIsTerminated(terminationGuard: TerminationGuard): Boolean { - return try { - terminationGuard.check() - false - } catch (e: Exception) { - when (e) { - is TransactionTerminatedException, is NotInTransactionException -> true - else -> throw e - } - } - } - - private val isEnd: Boolean - private get() = entry == null || entry === tombstone - - private fun poll(): T? { - return try { - queue.poll(timeout, TimeUnit.SECONDS) - } catch (e: InterruptedException) { - null - } - } - - override fun trySplit(): Spliterator? 
{ - return null - } - - override fun estimateSize(): Long { - return Long.MAX_VALUE - } - - override fun characteristics(): Int { - return Spliterator.NONNULL - } -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/procedures/StreamsSinkProcedures.kt b/extended/src/main/kotlin/apoc/kafka/consumer/procedures/StreamsSinkProcedures.kt deleted file mode 100644 index eb10845379..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/procedures/StreamsSinkProcedures.kt +++ /dev/null @@ -1,125 +0,0 @@ -package apoc.kafka.consumer.procedures - -import apoc.kafka.config.StreamsConfig -import apoc.kafka.consumer.StreamsEventConsumer -//import apoc.kafka.consumer.StreamsSinkConfiguration -import apoc.kafka.consumer.kafka.KafkaEventSink -import apoc.kafka.events.StreamsPluginStatus -import apoc.kafka.extensions.isDefaultDb -import apoc.kafka.utils.KafkaUtil -import apoc.kafka.utils.KafkaUtil.checkEnabled -import kotlinx.coroutines.Dispatchers -import kotlinx.coroutines.GlobalScope -import kotlinx.coroutines.launch -import kotlinx.coroutines.runBlocking -import org.apache.commons.lang3.exception.ExceptionUtils -import org.neo4j.graphdb.GraphDatabaseService -import org.neo4j.kernel.internal.GraphDatabaseAPI -import org.neo4j.logging.Log -import org.neo4j.procedure.Context -import org.neo4j.procedure.Description -import org.neo4j.procedure.Mode -import org.neo4j.procedure.Name -import org.neo4j.procedure.Procedure -import org.neo4j.procedure.TerminationGuard -import java.util.concurrent.ArrayBlockingQueue -import java.util.concurrent.ConcurrentHashMap -import java.util.stream.Collectors -import java.util.stream.Stream -import java.util.stream.StreamSupport - -class StreamResult(@JvmField val event: Map) -class KeyValueResult(@JvmField val name: String, @JvmField val value: Any?) - -class StreamsSinkProcedures { - - - @JvmField @Context - var log: Log? = null - - @JvmField @Context - var db: GraphDatabaseAPI? = null - - @JvmField @Context - var terminationGuard: TerminationGuard? 
= null - - @Procedure(mode = Mode.READ, name = "apoc.kafka.consume") - @Description("apoc.kafka.consume(topic, {timeout: , from: , groupId: , commit: , partitions:[{partition: , offset: }]}) " + - "YIELD event - Allows to consume custom topics") - fun consume(@Name("topic") topic: String?, - @Name(value = "config", defaultValue = "{}") config: Map?): Stream = runBlocking { - checkEnabled() - if (topic.isNullOrEmpty()) { - log?.info("Topic empty, no message sent") - Stream.empty() - } else { - val properties = config?.mapValues { it.value.toString() } ?: emptyMap() - - val configuration = StreamsConfig.getConfiguration(properties) - readData(topic, config ?: emptyMap(), configuration) - } - } - - private fun checkLeader(lambda: () -> Stream): Stream = if (KafkaUtil.isWriteableInstance(db as GraphDatabaseAPI)) { - lambda() - } else { - Stream.of(KeyValueResult("error", "You can use this procedure only in the LEADER or in a single instance configuration.")) - } - - private fun readData(topic: String, procedureConfig: Map, consumerConfig: Map): Stream { - val cfg = procedureConfig.mapValues { if (it.key != "partitions") it.value else mapOf(topic to it.value) } - val timeout = cfg.getOrDefault("timeout", 1000).toString().toLong() - val data = ArrayBlockingQueue(1000) - val tombstone = StreamResult(emptyMap()) - GlobalScope.launch(Dispatchers.IO) { - val consumer = createConsumer(consumerConfig, topic) - consumer.start() - try { - val start = System.currentTimeMillis() - while ((System.currentTimeMillis() - start) < timeout) { - consumer.read(cfg) { _, topicData -> - data.addAll(topicData.mapNotNull { it.value }.map { StreamResult(mapOf("data" to it)) }) - } - } - data.add(tombstone) - } catch (e: Exception) { - if (log?.isDebugEnabled!!) { - log?.error("Error while consuming data", e) - } - } finally { - consumer.stop() - } - } - if (log?.isDebugEnabled!!) 
{ - log?.debug("Data retrieved from topic $topic after $timeout milliseconds: $data") - } - - return StreamSupport.stream(QueueBasedSpliterator(data, tombstone, terminationGuard!!, timeout), false) - } - - private fun createConsumer(consumerConfig: Map, topic: String): StreamsEventConsumer = runBlocking { - val copy = StreamsConfig.getConfiguration() - .filter { it.value is String } - .mapValues { it.value } - .toMutableMap() - copy.putAll(consumerConfig) - getStreamsEventSink(db!!)!!.getEventConsumerFactory() - .createStreamsEventConsumer(copy, log!!, setOf(topic)) - } - - companion object { - private val streamsEventSinkStore = ConcurrentHashMap() - - private fun getStreamsEventSink(db: GraphDatabaseService) = streamsEventSinkStore[KafkaUtil.getName(db)] - - fun registerStreamsEventSink(db: GraphDatabaseAPI, streamsEventSink: KafkaEventSink) { - streamsEventSinkStore[KafkaUtil.getName(db)] = streamsEventSink - } - - fun unregisterStreamsEventSink(db: GraphDatabaseAPI) = streamsEventSinkStore.remove(KafkaUtil.getName(db)) - - fun hasStatus(db: GraphDatabaseAPI, status: StreamsPluginStatus) = getStreamsEventSink(db)?.status() == status - - fun isRegistered(db: GraphDatabaseAPI) = getStreamsEventSink(db) != null - } -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/consumer/utils/ConsumerUtils.kt b/extended/src/main/kotlin/apoc/kafka/consumer/utils/ConsumerUtils.kt deleted file mode 100644 index d67bddcfb4..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/consumer/utils/ConsumerUtils.kt +++ /dev/null @@ -1,13 +0,0 @@ -package apoc.kafka.consumer.utils - -import org.neo4j.kernel.internal.GraphDatabaseAPI -import apoc.kafka.utils.KafkaUtil - -object ConsumerUtils { - - fun isWriteableInstance(db: GraphDatabaseAPI): Boolean = KafkaUtil.isWriteableInstance(db) - - fun executeInWriteableInstance(db: GraphDatabaseAPI, - action: () -> T?): T? 
= KafkaUtil.executeInWriteableInstance(db, action) - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/extensions/CommonExtensions.kt b/extended/src/main/kotlin/apoc/kafka/extensions/CommonExtensions.kt index 4c975816da..bb639ab092 100644 --- a/extended/src/main/kotlin/apoc/kafka/extensions/CommonExtensions.kt +++ b/extended/src/main/kotlin/apoc/kafka/extensions/CommonExtensions.kt @@ -3,28 +3,14 @@ package apoc.kafka.extensions import org.apache.avro.Schema import org.apache.avro.generic.GenericEnumSymbol import org.apache.avro.generic.GenericFixed -import org.apache.avro.generic.GenericRecord import org.apache.avro.generic.IndexedRecord -import org.apache.kafka.clients.consumer.ConsumerRecord -import org.apache.kafka.clients.consumer.OffsetAndMetadata -import org.apache.kafka.common.TopicPartition import org.neo4j.graphdb.Node import apoc.kafka.utils.JSONUtils -import apoc.kafka.service.StreamsSinkEntity import java.nio.ByteBuffer -import java.util.* import javax.lang.model.SourceVersion fun Map.getInt(name:String, defaultValue: Int) = this.get(name)?.toInt() ?: defaultValue -fun Map<*, *>.asProperties() = this.let { - val properties = Properties() - properties.putAll(it) - properties -} -fun Node.labelNames() : List { - return this.labels.map { it.name() } -} fun String.toPointCase(): String { return this.split("(?<=[a-z])(?=[A-Z])".toRegex()).joinToString(separator = ".").toLowerCase() @@ -45,8 +31,6 @@ fun Map.flatten(map: Map = this, prefix: String = "" }.toMap() } -fun ConsumerRecord<*, *>.topicPartition() = TopicPartition(this.topic(), this.partition()) -fun ConsumerRecord<*, *>.offsetAndMetadata(metadata: String = "") = OffsetAndMetadata(this.offset() + 1, metadata) private fun convertAvroData(rawValue: Any?): Any? = when (rawValue) { is IndexedRecord -> rawValue.toMap() @@ -66,16 +50,3 @@ fun IndexedRecord.toMap() = this.schema.fields fun Schema.toMap() = JSONUtils.asMap(this.toString()) -private fun convertData(data: Any?, stringWhenFailure: Boolean = false): Any? 
{ - return when (data) { - null -> null - is ByteArray -> JSONUtils.readValue(data, Any::class.java) - is GenericRecord -> data.toMap() - else -> if (stringWhenFailure) data.toString() else throw RuntimeException("Unsupported type ${data::class.java.name}") - } -} -fun ConsumerRecord<*, *>.toStreamsSinkEntity(): StreamsSinkEntity { - val key = convertData(this.key(), true) - val value = convertData(this.value()) - return StreamsSinkEntity(key, value) -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/extensions/DatabaseManagementServiceExtensions.kt b/extended/src/main/kotlin/apoc/kafka/extensions/DatabaseManagementServiceExtensions.kt deleted file mode 100644 index 08d7ed2688..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/extensions/DatabaseManagementServiceExtensions.kt +++ /dev/null @@ -1,28 +0,0 @@ -package apoc.kafka.extensions - -import apoc.kafka.utils.KafkaUtil -import org.neo4j.dbms.api.DatabaseManagementService -import org.neo4j.kernel.internal.GraphDatabaseAPI - -fun DatabaseManagementService.getSystemDb() = this.database(KafkaUtil.SYSTEM_DATABASE_NAME) as GraphDatabaseAPI - -fun DatabaseManagementService.getDefaultDbName() = getSystemDb().let { - try { - it.beginTx().use { - val col = it.execute("SHOW DEFAULT DATABASE").columnAs("name") - if (col.hasNext()) { - col.next() - } else { - null - } - } - } catch (e: Exception) { - null - } -} - -fun DatabaseManagementService.getDefaultDb() = getDefaultDbName()?.let { this.database(it) as GraphDatabaseAPI } - -fun DatabaseManagementService.isAvailable(timeout: Long) = this.listDatabases() - .all { this.database(it).isAvailable(timeout) } - diff --git a/extended/src/main/kotlin/apoc/kafka/extensions/GraphDatabaseServerExtensions.kt b/extended/src/main/kotlin/apoc/kafka/extensions/GraphDatabaseServerExtensions.kt index 62aec6c725..ff36472837 100644 --- a/extended/src/main/kotlin/apoc/kafka/extensions/GraphDatabaseServerExtensions.kt +++ b/extended/src/main/kotlin/apoc/kafka/extensions/GraphDatabaseServerExtensions.kt @@ -15,18 +15,3 @@ fun GraphDatabaseService.execute(cypher: String, lambda: ((Result) -> T)) = fun GraphDatabaseService.execute(cypher: String, params: Map, lambda: ((Result) -> T)) = this.executeTransactionally(cypher, params, lambda) - -fun GraphDatabaseService.isSystemDb() = this.databaseName() == KafkaUtil.SYSTEM_DATABASE_NAME - -fun GraphDatabaseService.databaseManagementService() = (this as GraphDatabaseAPI).dependencyResolver - .resolveDependency(DatabaseManagementService::class.java, DependencyResolver.SelectionStrategy.SINGLE) - -fun GraphDatabaseService.isDefaultDb() = databaseManagementService().getDefaultDbName() == databaseName() - -fun GraphDatabaseService.registerTransactionEventListener(txHandler: TransactionEventListener<*>) { - databaseManagementService().registerTransactionEventListener(this.databaseName(), txHandler) -} - -fun GraphDatabaseService.unregisterTransactionEventListener(txHandler: TransactionEventListener<*>) { - databaseManagementService().unregisterTransactionEventListener(this.databaseName(), txHandler) -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaAdminService.kt b/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaAdminService.kt index 63e0188e25..7e278a091f 100644 --- a/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaAdminService.kt +++ b/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaAdminService.kt @@ -16,7 +16,7 @@ import apoc.kafka.utils.KafkaUtil import 
java.util.Collections import java.util.concurrent.ConcurrentHashMap -class KafkaAdminService(private val props: KafkaConfiguration, /*private val allTopics: List, */private val log: Log) { +class KafkaAdminService(private val props: KafkaConfiguration, private val log: Log) { private val client = AdminClient.create(props.asProperties()) private val kafkaTopics: MutableSet = Collections.newSetFromMap(ConcurrentHashMap()) private val isAutoCreateTopicsEnabled = isAutoCreateTopicsEnabled(client) diff --git a/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaEventRouter.kt b/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaEventRouter.kt index c54234d53f..3b677b9245 100644 --- a/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaEventRouter.kt +++ b/extended/src/main/kotlin/apoc/kafka/producer/kafka/KafkaEventRouter.kt @@ -30,7 +30,7 @@ class KafkaEventRouter(private val config: Map, private val db: GraphDatabaseService, private val log: Log) { - /*override*/ val eventRouterConfiguration: StreamsEventRouterConfiguration = StreamsEventRouterConfiguration + val eventRouterConfiguration: StreamsEventRouterConfiguration = StreamsEventRouterConfiguration .from(config, db.databaseName(), db.isDefaultDb(), log) @@ -45,7 +45,7 @@ class KafkaEventRouter(private val config: Map, else -> StreamsPluginStatus.STOPPED } - /*override*/ fun start() = runBlocking { + fun start() = runBlocking { mutex.withLock(producer) { if (status(producer) == StreamsPluginStatus.RUNNING) { return@runBlocking @@ -59,7 +59,7 @@ class KafkaEventRouter(private val config: Map, } } - /*override*/ fun stop() = runBlocking { + fun stop() = runBlocking { mutex.withLock(producer) { if (status(producer) == StreamsPluginStatus.STOPPED) { return@runBlocking diff --git a/extended/src/main/kotlin/apoc/kafka/service/StreamsSinkService.kt b/extended/src/main/kotlin/apoc/kafka/service/StreamsSinkService.kt deleted file mode 100644 index 80343b6f74..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/StreamsSinkService.kt +++ /dev/null @@ -1,42 +0,0 @@ -package apoc.kafka.service - -import apoc.kafka.service.sink.strategy.IngestionStrategy - - -const val STREAMS_TOPIC_KEY: String = "apoc.kafka.sink.topic" -const val STREAMS_TOPIC_CDC_KEY: String = "apoc.kafka.sink.topic.cdc" - -enum class TopicTypeGroup { CYPHER, CDC, PATTERN, CUD } -enum class TopicType(val group: TopicTypeGroup, val key: String) { - CDC_SOURCE_ID(group = TopicTypeGroup.CDC, key = "$STREAMS_TOPIC_CDC_KEY.sourceId"), - CYPHER(group = TopicTypeGroup.CYPHER, key = "$STREAMS_TOPIC_KEY.cypher"), - PATTERN_NODE(group = TopicTypeGroup.PATTERN, key = "$STREAMS_TOPIC_KEY.pattern.node"), - PATTERN_RELATIONSHIP(group = TopicTypeGroup.PATTERN, key = "$STREAMS_TOPIC_KEY.pattern.relationship"), - CDC_SCHEMA(group = TopicTypeGroup.CDC, key = "$STREAMS_TOPIC_CDC_KEY.schema"), - CUD(group = TopicTypeGroup.CUD, key = "$STREAMS_TOPIC_KEY.cud") -} - -data class StreamsSinkEntity(val key: Any?, val value: Any?) - -abstract class StreamsStrategyStorage { - abstract fun getTopicType(topic: String): TopicType? 
- - abstract fun getStrategy(topic: String): IngestionStrategy -} - -abstract class StreamsSinkService(private val streamsStrategyStorage: StreamsStrategyStorage) { - - abstract fun write(query: String, events: Collection) - - private fun writeWithStrategy(data: Collection, strategy: IngestionStrategy) { - strategy.mergeNodeEvents(data).forEach { write(it.query, it.events) } - strategy.deleteNodeEvents(data).forEach { write(it.query, it.events) } - - strategy.mergeRelationshipEvents(data).forEach { write(it.query, it.events) } - strategy.deleteRelationshipEvents(data).forEach { write(it.query, it.events) } - } - - fun writeForTopic(topic: String, params: Collection) { - writeWithStrategy(params, streamsStrategyStorage.getStrategy(topic)) - } -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/Topics.kt b/extended/src/main/kotlin/apoc/kafka/service/Topics.kt deleted file mode 100644 index 30009730db..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/Topics.kt +++ /dev/null @@ -1,127 +0,0 @@ -package apoc.kafka.service - -import apoc.kafka.service.sink.strategy.* -import kotlin.reflect.jvm.javaType - -class TopicValidationException(message: String): RuntimeException(message) - -private fun TopicType.replaceKeyBy(replacePrefix: Pair) = if (replacePrefix.first.isNullOrBlank()) - this.key - else - this.key.replace(replacePrefix.first, replacePrefix.second) - -data class Topics(val cypherTopics: Map = emptyMap(), - val cdcSourceIdTopics: Set = emptySet(), - val cdcSchemaTopics: Set = emptySet(), - val cudTopics: Set = emptySet(), - val nodePatternTopics: Map = emptyMap(), - val relPatternTopics: Map = emptyMap(), - val invalid: List = emptyList()) { - - operator fun plus(other: Topics): Topics { - return Topics(cypherTopics = this.cypherTopics + other.cypherTopics, - cdcSourceIdTopics = this.cdcSourceIdTopics + other.cdcSourceIdTopics, - cdcSchemaTopics = this.cdcSchemaTopics + other.cdcSchemaTopics, - cudTopics = this.cudTopics + other.cudTopics, - nodePatternTopics = this.nodePatternTopics + other.nodePatternTopics, - relPatternTopics = this.relPatternTopics + other.relPatternTopics, - invalid = this.invalid + other.invalid) - } - - fun allTopics(): List = this.asMap() - .map { - if (it.key.group == TopicTypeGroup.CDC || it.key.group == TopicTypeGroup.CUD) { - (it.value as Set).toList() - } else { - (it.value as Map).keys.toList() - } - } - .flatten() - - fun asMap(): Map = mapOf(TopicType.CYPHER to cypherTopics, TopicType.CUD to cudTopics, - TopicType.CDC_SCHEMA to cdcSchemaTopics, TopicType.CDC_SOURCE_ID to cdcSourceIdTopics, - TopicType.PATTERN_NODE to nodePatternTopics, TopicType.PATTERN_RELATIONSHIP to relPatternTopics) - - companion object { - fun from(map: Map, replacePrefix: Pair = ("" to ""), dbName: String = "", invalidTopics: List = emptyList()): Topics { - val config = map - .filterKeys { if (dbName.isNotBlank()) it.toLowerCase().endsWith(".to.$dbName") else !it.contains(".to.") } - .mapKeys { if (dbName.isNotBlank()) it.key.replace(".to.$dbName", "", true) else it.key } - val cypherTopicPrefix = TopicType.CYPHER.replaceKeyBy(replacePrefix) - val sourceIdKey = TopicType.CDC_SOURCE_ID.replaceKeyBy(replacePrefix) - val schemaKey = TopicType.CDC_SCHEMA.replaceKeyBy(replacePrefix) - val cudKey = TopicType.CUD.replaceKeyBy(replacePrefix) - val nodePatterKey = TopicType.PATTERN_NODE.replaceKeyBy(replacePrefix) - val relPatterKey = TopicType.PATTERN_RELATIONSHIP.replaceKeyBy(replacePrefix) - val cypherTopics = 
TopicUtils.filterByPrefix(config, cypherTopicPrefix) - val nodePatternTopics = TopicUtils - .filterByPrefix(config, nodePatterKey, invalidTopics) - .mapValues { NodePatternConfiguration.parse(it.value) } - val relPatternTopics = TopicUtils - .filterByPrefix(config, relPatterKey, invalidTopics) - .mapValues { RelationshipPatternConfiguration.parse(it.value) } - val cdcSourceIdTopics = TopicUtils.splitTopics(config[sourceIdKey] as? String, invalidTopics) - val cdcSchemaTopics = TopicUtils.splitTopics(config[schemaKey] as? String, invalidTopics) - val cudTopics = TopicUtils.splitTopics(config[cudKey] as? String, invalidTopics) - return Topics(cypherTopics, cdcSourceIdTopics, cdcSchemaTopics, cudTopics, nodePatternTopics, relPatternTopics) - } - } -} - -object TopicUtils { - - @JvmStatic val TOPIC_SEPARATOR = ";" - - fun filterByPrefix(config: Map<*, *>, prefix: String, invalidTopics: List = emptyList()): Map { - val fullPrefix = "$prefix." - return config - .filterKeys { it.toString().startsWith(fullPrefix) } - .mapKeys { it.key.toString().replace(fullPrefix, "") } - .filterKeys { !invalidTopics.contains(it) } - .mapValues { it.value.toString() } - } - - fun splitTopics(cdcMergeTopicsString: String?, invalidTopics: List = emptyList()): Set { - return if (cdcMergeTopicsString.isNullOrBlank()) { - emptySet() - } else { - cdcMergeTopicsString.split(TOPIC_SEPARATOR) - .filter { !invalidTopics.contains(it) } - .toSet() - } - } - - inline fun validate(topics: Topics) { - val exceptionStringConstructor = T::class.constructors - .first { it.parameters.size == 1 && it.parameters[0].type.javaType == String::class.java }!! - val crossDefinedTopics = topics.allTopics() - .groupBy({ it }, { 1 }) - .filterValues { it.sum() > 1 } - .keys - if (crossDefinedTopics.isNotEmpty()) { - throw exceptionStringConstructor - .call("The following topics are cross defined: $crossDefinedTopics") - } - } - - fun toStrategyMap(topics: Topics, sourceIdStrategyConfig: SourceIdIngestionStrategyConfig): Map { - return topics.asMap() - .filterKeys { it != TopicType.CYPHER } - .mapValues { (type, config) -> - when (type) { - TopicType.CDC_SOURCE_ID -> SourceIdIngestionStrategy(sourceIdStrategyConfig) - TopicType.CDC_SCHEMA -> SchemaIngestionStrategy() - TopicType.CUD -> CUDIngestionStrategy() - TopicType.PATTERN_NODE -> { - val map = config as Map - map.mapValues { NodePatternIngestionStrategy(it.value) } - } - TopicType.PATTERN_RELATIONSHIP -> { - val map = config as Map - map.mapValues { RelationshipPatternIngestionStrategy(it.value) } - } - else -> throw RuntimeException("Unsupported topic type $type") - } - } - } -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/CUDIngestionStrategy.kt b/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/CUDIngestionStrategy.kt deleted file mode 100644 index 9e3d294620..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/CUDIngestionStrategy.kt +++ /dev/null @@ -1,282 +0,0 @@ -package apoc.kafka.service.sink.strategy - -import apoc.kafka.events.EntityType -import apoc.kafka.extensions.quote -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.service.sink.strategy.CUDIngestionStrategy.Companion.FROM_KEY -import apoc.kafka.service.sink.strategy.CUDIngestionStrategy.Companion.TO_KEY -import apoc.kafka.utils.JSONUtils -import apoc.kafka.utils.KafkaUtil.getLabelsAsString -import apoc.kafka.utils.KafkaUtil.getNodeKeysAsString -import apoc.kafka.utils.KafkaUtil - - -enum class CUDOperations { create, 
merge, update, delete, match } - -abstract class CUD { - abstract val op: CUDOperations - abstract val type: EntityType - abstract val properties: Map -} - -data class CUDNode(override val op: CUDOperations, - override val properties: Map = emptyMap(), - val ids: Map = emptyMap(), - val detach: Boolean = true, - val labels: List = emptyList()): CUD() { - override val type = EntityType.node - - fun toMap(): Map { - return when (op) { - CUDOperations.delete -> mapOf("ids" to ids) - else -> mapOf("ids" to ids, "properties" to properties) - } - } -} - -data class CUDNodeRel(val ids: Map = emptyMap(), - val labels: List, - val op: CUDOperations = CUDOperations.match) - -data class CUDRelationship(override val op: CUDOperations, - override val properties: Map = emptyMap(), - val rel_type: String, - val from: CUDNodeRel, - val to: CUDNodeRel): CUD() { - override val type = EntityType.relationship - - fun toMap(): Map { - val from = mapOf("ids" to from.ids) - val to = mapOf("ids" to to.ids) - return when (op) { - CUDOperations.delete -> mapOf(FROM_KEY to from, - TO_KEY to to) - else -> mapOf(FROM_KEY to from, - TO_KEY to to, - "properties" to properties) - } - } -} - - -class CUDIngestionStrategy: IngestionStrategy { - - companion object { - @JvmStatic val ID_KEY = "ids" - @JvmStatic val PHYSICAL_ID_KEY = "_id" - @JvmStatic val FROM_KEY = "from" - @JvmStatic val TO_KEY = "to" - - private val LIST_VALID_CUD_NODE_REL = listOf(CUDOperations.merge, CUDOperations.create, CUDOperations.match) - private val LIST_VALID_CUD_REL = listOf(CUDOperations.create, CUDOperations.merge, CUDOperations.update) - } - - data class NodeRelMetadata(val labels: List, val ids: Set, val op: CUDOperations = CUDOperations.match) - - private fun CUDRelationship.isValidOperation(): Boolean = from.op in LIST_VALID_CUD_NODE_REL && to.op in LIST_VALID_CUD_NODE_REL && op in LIST_VALID_CUD_REL - - private fun NodeRelMetadata.getOperation() = op.toString().toUpperCase() - - private fun buildNodeLookupByIds(keyword: String = "MATCH", ids: Set, labels: List, identifier: String = "n", field: String = ""): String { - val fullField = if (field.isNotBlank()) "$field." 
else field - val quotedIdentifier = identifier.quote() - return when (ids.contains(PHYSICAL_ID_KEY)) { - true -> "MATCH ($quotedIdentifier) WHERE id($quotedIdentifier) = event.$fullField$ID_KEY._id" - else -> "$keyword ($quotedIdentifier${getLabelsAsString(labels)} {${getNodeKeysAsString(keys = ids, prefix = "$fullField$ID_KEY")}})" - } - } - - private fun buildNodeCreateStatement(labels: List): String = """ - |${KafkaUtil.UNWIND} - |CREATE (n${getLabelsAsString(labels)}) - |SET n = event.properties - """.trimMargin() - - private fun buildRelCreateStatement(from: NodeRelMetadata, to: NodeRelMetadata, - rel_type: String): String = """ - |${KafkaUtil.UNWIND} - |${buildNodeLookupByIds(keyword = from.getOperation(), ids = from.ids, labels = from.labels, identifier = FROM_KEY, field = FROM_KEY)} - |${KafkaUtil.WITH_EVENT_FROM} - |${buildNodeLookupByIds(keyword = to.getOperation(), ids = to.ids, labels = to.labels, identifier = TO_KEY, field = TO_KEY)} - |CREATE ($FROM_KEY)-[r:${rel_type.quote()}]->($TO_KEY) - |SET r = event.properties - """.trimMargin() - - private fun buildNodeMergeStatement(labels: List, ids: Set): String = """ - |${KafkaUtil.UNWIND} - |${buildNodeLookupByIds(keyword = "MERGE", ids = ids, labels = labels)} - |SET n += event.properties - """.trimMargin() - - private fun buildRelMergeStatement(from: NodeRelMetadata, to: NodeRelMetadata, - rel_type: String): String = """ - |${KafkaUtil.UNWIND} - |${buildNodeLookupByIds(keyword = from.getOperation(), ids = from.ids, labels = from.labels, identifier = FROM_KEY, field = FROM_KEY)} - |${KafkaUtil.WITH_EVENT_FROM} - |${buildNodeLookupByIds(keyword = to.getOperation(), ids = to.ids, labels = to.labels, identifier = TO_KEY, field = TO_KEY)} - |MERGE ($FROM_KEY)-[r:${rel_type.quote()}]->($TO_KEY) - |SET r += event.properties - """.trimMargin() - - private fun buildNodeUpdateStatement(labels: List, ids: Set): String = """ - |${KafkaUtil.UNWIND} - |${buildNodeLookupByIds(ids = ids, labels = labels)} - |SET n += event.properties - """.trimMargin() - - private fun buildRelUpdateStatement(from: NodeRelMetadata, to: NodeRelMetadata, - rel_type: String): String = """ - |${KafkaUtil.UNWIND} - |${buildNodeLookupByIds(ids = from.ids, labels = from.labels, identifier = FROM_KEY, field = FROM_KEY)} - |${buildNodeLookupByIds(ids = to.ids, labels = to.labels, identifier = TO_KEY, field = TO_KEY)} - |MATCH ($FROM_KEY)-[r:${rel_type.quote()}]->($TO_KEY) - |SET r += event.properties - """.trimMargin() - - private fun buildDeleteStatement(labels: List, ids: Set, detach: Boolean): String = """ - |${KafkaUtil.UNWIND} - |${buildNodeLookupByIds(ids = ids, labels = labels)} - |${if (detach) "DETACH " else ""}DELETE n - """.trimMargin() - - private fun buildRelDeleteStatement(from: NodeRelMetadata, to: NodeRelMetadata, - rel_type: String): String = """ - |${KafkaUtil.UNWIND} - |${buildNodeLookupByIds(ids = from.ids, labels = from.labels, identifier = FROM_KEY, field = FROM_KEY)} - |${buildNodeLookupByIds(ids = to.ids, labels = to.labels, identifier = TO_KEY, field = TO_KEY)} - |MATCH ($FROM_KEY)-[r:${rel_type.quote()}]->($TO_KEY) - |DELETE r - """.trimMargin() - - private inline fun toCUDEntity(it: Any): T? 
{ - return when (it) { - is T -> it - is Map<*, *> -> { - val type = it["type"]?.toString() - val entityType = if (type == null) null else EntityType.valueOf(type) - when { - entityType == null -> throw RuntimeException("No `type` field found") - entityType != null && EntityType.node == entityType && T::class.java != CUDNode::class.java -> null - entityType != null && EntityType.relationship == entityType && T::class.java != CUDRelationship::class.java -> null - else -> JSONUtils.convertValue(it) - } - } - else -> null - } - } - - private fun getLabels(relNode: CUDNodeRel) = if (relNode.ids.containsKey(PHYSICAL_ID_KEY)) emptyList() else relNode.labels - private fun getLabels(node: CUDNode) = if (node.ids.containsKey(PHYSICAL_ID_KEY)) emptyList() else node.labels - - override fun mergeNodeEvents(events: Collection): List { - val data = events - .mapNotNull { - it.value?.let { - try { - val data = toCUDEntity(it) - when (data?.op) { - CUDOperations.merge -> if (data.ids.isNotEmpty() && data.properties.isNotEmpty()) data else null // TODO send to the DLQ the null - CUDOperations.update, CUDOperations.create -> if (data.properties.isNotEmpty()) data else null // TODO send to the DLQ the null - else -> null - } - } catch (e: Exception) { - null - } - } - } - .groupBy({ it.op }, { it }) - - val create = data[CUDOperations.create] - .orEmpty() - .groupBy { getLabels(it) } - .map { QueryEvents(buildNodeCreateStatement(it.key), it.value.map { it.toMap() }) } - val merge = data[CUDOperations.merge] - .orEmpty() - .groupBy { getLabels(it) to it.ids.keys } - .map { QueryEvents(buildNodeMergeStatement(it.key.first, it.key.second), it.value.map { it.toMap() }) } - val update = data[CUDOperations.update] - .orEmpty() - .groupBy { getLabels(it) to it.ids.keys } - .map { QueryEvents(buildNodeUpdateStatement(it.key.first, it.key.second), it.value.map { it.toMap() }) } - return (create + merge + update) // we'll group the data because of in case of `_id` key is present the generated queries are the same for update/merge - .map { it.query to it.events } - .groupBy({ it.first }, { it.second }) - .map { QueryEvents(it.key, it.value.flatten()) } - } - - override fun deleteNodeEvents(events: Collection): List { - return events - .mapNotNull { - it.value?.let { - try { - val data = toCUDEntity(it) - when (data?.op) { - CUDOperations.delete -> if (data.ids.isNotEmpty() && data.properties.isEmpty()) data else null // TODO send to the DLQ the null - else -> null // TODO send to the DLQ the null - } - } catch (e: Exception) { - null - } - } - } - .groupBy { Triple(it.labels, it.ids.keys, it.detach) } - .map { - val (labels, keys, detach) = it.key - QueryEvents(buildDeleteStatement(labels, keys, detach), it.value.map { it.toMap() }) - } - } - - override fun mergeRelationshipEvents(events: Collection): List { - val data = events - .mapNotNull { - it.value?.let { - try { - val data = toCUDEntity(it) - when { - data!!.isValidOperation() -> if (data.from.ids.isNotEmpty() && data.to.ids.isNotEmpty()) data else null // TODO send to the DLQ the null - else -> null // TODO send to the DLQ the null - } - } catch (e: Exception) { - null - } - } - } - .groupBy({ it.op }, { it }) - - return data.flatMap { (op, list) -> - list.groupBy { Triple(NodeRelMetadata(getLabels(it.from), it.from.ids.keys, it.from.op), NodeRelMetadata(getLabels(it.to), it.to.ids.keys, it.to.op), it.rel_type) } - .map { - val (from, to, rel_type) = it.key - val query = when (op) { - CUDOperations.create -> buildRelCreateStatement(from, to, rel_type) - 
CUDOperations.merge -> buildRelMergeStatement(from, to, rel_type) - else -> buildRelUpdateStatement(from, to, rel_type) - } - QueryEvents(query, it.value.map { it.toMap() }) - } - } - } - - override fun deleteRelationshipEvents(events: Collection): List { - return events - .mapNotNull { - it.value?.let { - try { - val data = toCUDEntity(it) - when (data?.op) { - CUDOperations.delete -> if (data.from.ids.isNotEmpty() && data.to.ids.isNotEmpty()) data else null // TODO send to the DLQ the null - else -> null // TODO send to the DLQ the null - } - } catch (e: Exception) { - null - } - } - } - .groupBy { Triple(NodeRelMetadata(getLabels(it.from), it.from.ids.keys), NodeRelMetadata(getLabels(it.to), it.to.ids.keys), it.rel_type) } - .map { - val (from, to, rel_type) = it.key - QueryEvents(buildRelDeleteStatement(from, to, rel_type), it.value.map { it.toMap() }) - } - } - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/IngestionStrategy.kt b/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/IngestionStrategy.kt deleted file mode 100644 index 714406baf6..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/IngestionStrategy.kt +++ /dev/null @@ -1,37 +0,0 @@ -package apoc.kafka.service.sink.strategy - -import apoc.kafka.events.Constraint -import apoc.kafka.events.RelationshipPayload -import apoc.kafka.service.StreamsSinkEntity - - -data class QueryEvents(val query: String, val events: List>) - -interface IngestionStrategy { - fun mergeNodeEvents(events: Collection): List - fun deleteNodeEvents(events: Collection): List - fun mergeRelationshipEvents(events: Collection): List - fun deleteRelationshipEvents(events: Collection): List -} - -data class RelationshipSchemaMetadata(val label: String, - val startLabels: List, - val endLabels: List, - val startKeys: Set, - val endKeys: Set) { - constructor(payload: RelationshipPayload) : this(label = payload.label, - startLabels = payload.start.labels.orEmpty(), - endLabels = payload.end.labels.orEmpty(), - startKeys = payload.start.ids.keys, - endKeys = payload.end.ids.keys) -} - -data class NodeSchemaMetadata(val constraints: List, - val labelsToAdd: List, - val labelsToDelete: List, - val keys: Set) - - - -data class NodeMergeMetadata(val labelsToAdd: Set, - val labelsToDelete: Set) \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/NodePatternIngestionStrategy.kt b/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/NodePatternIngestionStrategy.kt deleted file mode 100644 index b22bdf8080..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/NodePatternIngestionStrategy.kt +++ /dev/null @@ -1,91 +0,0 @@ -package apoc.kafka.service.sink.strategy - -import apoc.kafka.extensions.flatten -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.utils.JSONUtils -import apoc.kafka.utils.KafkaUtil.containsProp -import apoc.kafka.utils.KafkaUtil.getLabelsAsString -import apoc.kafka.utils.KafkaUtil.getNodeMergeKeys -import apoc.kafka.utils.KafkaUtil - -class NodePatternIngestionStrategy(private val nodePatternConfiguration: NodePatternConfiguration): IngestionStrategy { - - private val mergeNodeTemplate: String = """ - |${KafkaUtil.UNWIND} - |MERGE (n${getLabelsAsString(nodePatternConfiguration.labels)}{${ - getNodeMergeKeys("keys", nodePatternConfiguration.keys) - }}) - |SET n = event.properties - |SET n += event.keys - """.trimMargin() - - private val deleteNodeTemplate: String = """ - 
|${KafkaUtil.UNWIND} - |MATCH (n${getLabelsAsString(nodePatternConfiguration.labels)}{${ - getNodeMergeKeys("keys", nodePatternConfiguration.keys) - }}) - |DETACH DELETE n - """.trimMargin() - - override fun mergeNodeEvents(events: Collection): List { - val data = events - .mapNotNull { if (it.value != null) JSONUtils.asMap(it.value) else null } - .mapNotNull { toData(nodePatternConfiguration, it) } - return if (data.isEmpty()) { - emptyList() - } else { - listOf(QueryEvents(mergeNodeTemplate, data)) - } - } - - override fun deleteNodeEvents(events: Collection): List { - val data = events - .filter { it.value == null && it.key != null } - .mapNotNull { if (it.key != null) JSONUtils.asMap(it.key) else null } - .mapNotNull { toData(nodePatternConfiguration, it, false) } - return if (data.isEmpty()) { - emptyList() - } else { - listOf(QueryEvents(deleteNodeTemplate, data)) - } - } - - override fun mergeRelationshipEvents(events: Collection): List { - return emptyList() - } - - override fun deleteRelationshipEvents(events: Collection): List { - return emptyList() - } - - companion object { - fun toData(nodePatternConfiguration: NodePatternConfiguration, props: Map, withProperties: Boolean = true): Map>? { - val properties = props.flatten() - val containsKeys = nodePatternConfiguration.keys.all { properties.containsKey(it) } - return if (containsKeys) { - val filteredProperties = when (nodePatternConfiguration.type) { - PatternConfigurationType.ALL -> properties.filterKeys { !nodePatternConfiguration.keys.contains(it) } - PatternConfigurationType.EXCLUDE -> properties.filterKeys { key -> - val containsProp = containsProp(key, nodePatternConfiguration.properties) - !nodePatternConfiguration.keys.contains(key) && !containsProp - } - PatternConfigurationType.INCLUDE -> properties.filterKeys { key -> - val containsProp = containsProp(key, nodePatternConfiguration.properties) - !nodePatternConfiguration.keys.contains(key) && containsProp - } - } - if (withProperties) { - mapOf("keys" to properties.filterKeys { nodePatternConfiguration.keys.contains(it) }, - "properties" to filteredProperties) - } else { - mapOf("keys" to properties.filterKeys { nodePatternConfiguration.keys.contains(it) }) - } - } else { - null - } - } - - - } - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/PatternConfiguration.kt b/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/PatternConfiguration.kt deleted file mode 100644 index 8e6b36093d..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/PatternConfiguration.kt +++ /dev/null @@ -1,198 +0,0 @@ -package apoc.kafka.service.sink.strategy - -import apoc.kafka.extensions.quote - -enum class PatternConfigurationType { ALL, INCLUDE, EXCLUDE } - -private const val ID_PREFIX = "!" 
-private const val MINUS_PREFIX = "-" -private const val LABEL_SEPARATOR = ":" -private const val PROPERTIES_SEPARATOR = "," - -private fun getPatternConfiguredType(properties: List): PatternConfigurationType { - if (properties.isEmpty()) { - return PatternConfigurationType.ALL - } - return when (properties[0].trim()[0]) { - '*' -> PatternConfigurationType.ALL - '-' -> PatternConfigurationType.EXCLUDE - else -> PatternConfigurationType.INCLUDE - } -} - -private fun isHomogeneousPattern(type: PatternConfigurationType, properties: List, pattern: String, entityType: String) { - val isHomogeneous = when (type) { - PatternConfigurationType.INCLUDE -> properties.all { it.trim()[0].isJavaIdentifierStart() } - PatternConfigurationType.EXCLUDE -> properties.all { it.trim().startsWith(MINUS_PREFIX) } - PatternConfigurationType.ALL -> properties.isEmpty() || properties == listOf("*") - } - if (!isHomogeneous) { - throw IllegalArgumentException("The $entityType pattern $pattern is not homogeneous") - } -} - -private fun cleanProperties(type: PatternConfigurationType, properties: List): List { - return when (type) { - PatternConfigurationType.INCLUDE -> properties.map { it.trim() } - PatternConfigurationType.EXCLUDE -> properties.map { it.trim().replace(MINUS_PREFIX, "") } - PatternConfigurationType.ALL -> emptyList() - } -} - -interface PatternConfiguration - -data class NodePatternConfiguration(val keys: Set, val type: PatternConfigurationType, - val labels: List, val properties: List): PatternConfiguration { - companion object { - - // (:LabelA{!id,foo,bar}) - @JvmStatic private val cypherNodePatternConfigured = """\((:\w+\s*(?::\s*(?:\w+)\s*)*)\s*(?:\{\s*(-?[\w!\.]+\s*(?:,\s*-?[!\w\*\.]+\s*)*)\})?\)$""".toRegex() - // LabelA{!id,foo,bar} - @JvmStatic private val simpleNodePatternConfigured = """^(\w+\s*(?::\s*(?:\w+)\s*)*)\s*(?:\{\s*(-?[\w!\.]+\s*(?:,\s*-?[!\w\*\.]+\s*)*)\})?$""".toRegex() - fun parse(pattern: String): NodePatternConfiguration { - val isCypherPattern = pattern.startsWith("(") - val regex = if (isCypherPattern) cypherNodePatternConfigured else simpleNodePatternConfigured - val matcher = regex.matchEntire(pattern) - if (matcher == null) { - throw IllegalArgumentException("The Node pattern $pattern is invalid") - } else { - val labels = matcher.groupValues[1] - .split(LABEL_SEPARATOR) - .let { - if (isCypherPattern) it.drop(1) else it - } - .map { it.quote() } - val allProperties = matcher.groupValues[2].split(PROPERTIES_SEPARATOR) - val keys = allProperties - .filter { it.startsWith(ID_PREFIX) } - .map { it.trim().substring(1) }.toSet() - if (keys.isEmpty()) { - throw IllegalArgumentException("The Node pattern $pattern must contains at lest one key") - } - val properties = allProperties.filter { !it.startsWith(ID_PREFIX) } - val type = getPatternConfiguredType(properties) - isHomogeneousPattern(type, properties, pattern, "Node") - val cleanedProperties = cleanProperties(type, properties) - - return NodePatternConfiguration(keys = keys, type = type, - labels = labels, properties = cleanedProperties) - } - } - } -} - - -data class RelationshipPatternConfiguration(val start: NodePatternConfiguration, val end: NodePatternConfiguration, - val relType: String, val type: PatternConfigurationType, - val properties: List): PatternConfiguration { - companion object { - - // we don't allow ALL for start/end nodes in rels - // it's public for testing purpose - fun getNodeConf(pattern: String): NodePatternConfiguration { - val start = NodePatternConfiguration.parse(pattern) - return if 
(start.type == PatternConfigurationType.ALL) { - NodePatternConfiguration(keys = start.keys, type = PatternConfigurationType.INCLUDE, - labels = start.labels, properties = start.properties) - } else { - start - } - } - - // (:Source{!id})-[:REL_TYPE{foo, -bar}]->(:Target{!targetId}) - private val cypherRelationshipPatternConfigured = """^\(:(.*?)\)(<)?-\[(?::)([\w\_]+)(\{\s*(-?[\w\*\.]+\s*(?:,\s*-?[\w\*\.]+\s*)*)\})?\]-(>)?\(:(.*?)\)$""".toRegex() - // LabelA{!id} REL_TYPE{foo, -bar} LabelB{!targetId} - private val simpleRelationshipPatternConfigured = """^(.*?) ([\w\_]+)(\{\s*(-?[\w\*\.]+\s*(?:,\s*-?[\w\*\.]+\s*)*)\})? (.*?)$""".toRegex() - - data class RelationshipPatternMetaData(val startPattern: String, val endPattern: String, val relType: String, val properties: List) { - companion object { - - private fun toProperties(propGroup: String): List = if (propGroup.isNullOrBlank()) { - emptyList() - } else { - propGroup.split(PROPERTIES_SEPARATOR) - } - - fun create(isCypherPattern: Boolean, isLeftToRight: Boolean, groupValues: List): RelationshipPatternMetaData { - lateinit var start: String - lateinit var end: String - lateinit var relType: String - lateinit var props: List - - if (isCypherPattern) { - if (isLeftToRight) { - start = groupValues[1] - end = groupValues[7] - } else { - start = groupValues[7] - end = groupValues[1] - } - relType = groupValues[3] - props = toProperties(groupValues[5]) - } else { - if (isLeftToRight) { - start = groupValues[1] - end = groupValues[5] - } else { - start = groupValues[5] - end = groupValues[1] - } - relType = groupValues[2] - props = toProperties(groupValues[4]) - } - - return RelationshipPatternMetaData(startPattern = start, - endPattern = end, relType = relType, - properties = props) - } - } - } - - fun parse(pattern: String): RelationshipPatternConfiguration { - val isCypherPattern = pattern.startsWith("(") - val regex = if (isCypherPattern) { - cypherRelationshipPatternConfigured - } else { - simpleRelationshipPatternConfigured - } - val matcher = regex.matchEntire(pattern) - if (matcher == null) { - throw IllegalArgumentException("The Relationship pattern $pattern is invalid") - } else { - val isLeftToRight = (!isCypherPattern || isUndirected(matcher) || isDirectedToRight(matcher)) - val isRightToLeft = if (isCypherPattern) isDirectedToLeft(matcher) else false - - if (!isLeftToRight && !isRightToLeft) { - throw IllegalArgumentException("The Relationship pattern $pattern has an invalid direction") - } - - val metadata = RelationshipPatternMetaData.create(isCypherPattern, isLeftToRight, matcher.groupValues) - - val start = try { - getNodeConf(metadata.startPattern) - } catch (e: Exception) { - throw IllegalArgumentException("The Relationship pattern $pattern is invalid") - } - val end = try { - getNodeConf(metadata.endPattern) - } catch (e: Exception) { - throw IllegalArgumentException("The Relationship pattern $pattern is invalid") - } - val type = getPatternConfiguredType(metadata.properties) - isHomogeneousPattern(type, metadata.properties, pattern, "Relationship") - val cleanedProperties = cleanProperties(type, metadata.properties) - return RelationshipPatternConfiguration(start = start, end = end, relType = metadata.relType, - properties = cleanedProperties, type = type) - } - } - - private fun isDirectedToLeft(matcher: MatchResult) = - (matcher.groupValues[2] == "<" && matcher.groupValues[6] == "") - - private fun isDirectedToRight(matcher: MatchResult) = - (matcher.groupValues[2] == "" && matcher.groupValues[6] == ">") - - private fun 
isUndirected(matcher: MatchResult) = - (matcher.groupValues[2] == "" && matcher.groupValues[6] == "") - } -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/RelationshipPatternIngestionStrategy.kt b/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/RelationshipPatternIngestionStrategy.kt deleted file mode 100644 index f8188eb78e..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/RelationshipPatternIngestionStrategy.kt +++ /dev/null @@ -1,120 +0,0 @@ -package apoc.kafka.service.sink.strategy - -import apoc.kafka.extensions.flatten -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.utils.JSONUtils -import apoc.kafka.utils.KafkaUtil.containsProp -import apoc.kafka.utils.KafkaUtil.getLabelsAsString -import apoc.kafka.utils.KafkaUtil.getNodeMergeKeys -import apoc.kafka.utils.KafkaUtil - -class RelationshipPatternIngestionStrategy(private val relationshipPatternConfiguration: RelationshipPatternConfiguration): IngestionStrategy { - - private val mergeRelationshipTemplate: String = """ - |${KafkaUtil.UNWIND} - |MERGE (start${getLabelsAsString(relationshipPatternConfiguration.start.labels)}{${ - getNodeMergeKeys("start.keys", relationshipPatternConfiguration.start.keys) - }}) - |SET start = event.start.properties - |SET start += event.start.keys - |MERGE (end${getLabelsAsString(relationshipPatternConfiguration.end.labels)}{${ - getNodeMergeKeys("end.keys", relationshipPatternConfiguration.end.keys) - }}) - |SET end = event.end.properties - |SET end += event.end.keys - |MERGE (start)-[r:${relationshipPatternConfiguration.relType}]->(end) - |SET r = event.properties - """.trimMargin() - - private val deleteRelationshipTemplate: String = """ - |${KafkaUtil.UNWIND} - |MATCH (start${getLabelsAsString(relationshipPatternConfiguration.start.labels)}{${ - getNodeMergeKeys("start.keys", relationshipPatternConfiguration.start.keys) - }}) - |MATCH (end${getLabelsAsString(relationshipPatternConfiguration.end.labels)}{${ - getNodeMergeKeys("end.keys", relationshipPatternConfiguration.end.keys) - }}) - |MATCH (start)-[r:${relationshipPatternConfiguration.relType}]->(end) - |DELETE r - """.trimMargin() - - override fun mergeNodeEvents(events: Collection): List { - return emptyList() - } - - override fun deleteNodeEvents(events: Collection): List { - return emptyList() - } - - override fun mergeRelationshipEvents(events: Collection): List { - val data = events - .mapNotNull { if (it.value != null) JSONUtils.asMap(it.value) else null } - .mapNotNull { props -> - val properties = props.flatten() - val containsKeys = relationshipPatternConfiguration.start.keys.all { properties.containsKey(it) } - && relationshipPatternConfiguration.end.keys.all { properties.containsKey(it) } - if (containsKeys) { - val filteredProperties = when (relationshipPatternConfiguration.type) { - PatternConfigurationType.ALL -> properties.filterKeys { isRelationshipProperty(it) } - PatternConfigurationType.EXCLUDE -> properties.filterKeys { - val containsProp = containsProp(it, relationshipPatternConfiguration.properties) - isRelationshipProperty(it) && !containsProp - } - PatternConfigurationType.INCLUDE -> properties.filterKeys { - val containsProp = containsProp(it, relationshipPatternConfiguration.properties) - isRelationshipProperty(it) && containsProp - } - } - val startConf = relationshipPatternConfiguration.start - val endConf = relationshipPatternConfiguration.end - - val start = NodePatternIngestionStrategy.toData(startConf, props) - 
val end = NodePatternIngestionStrategy.toData(endConf, props) - - mapOf("start" to start, "end" to end, "properties" to filteredProperties) - } else { - null - } - } - return if (data.isEmpty()) { - emptyList() - } else { - listOf(QueryEvents(mergeRelationshipTemplate, data)) - } - } - - private fun isRelationshipProperty(propertyName: String): Boolean { - return (!relationshipPatternConfiguration.start.keys.contains(propertyName) - && !relationshipPatternConfiguration.start.properties.contains(propertyName) - && !relationshipPatternConfiguration.end.keys.contains(propertyName) - && !relationshipPatternConfiguration.end.properties.contains(propertyName)) - } - - override fun deleteRelationshipEvents(events: Collection): List { - val data = events - .filter { it.value == null && it.key != null } - .mapNotNull { if (it.key != null) JSONUtils.asMap(it.key) else null } - .mapNotNull { props -> - val properties = props.flatten() - val containsKeys = relationshipPatternConfiguration.start.keys.all { properties.containsKey(it) } - && relationshipPatternConfiguration.end.keys.all { properties.containsKey(it) } - if (containsKeys) { - val startConf = relationshipPatternConfiguration.start - val endConf = relationshipPatternConfiguration.end - - val start = NodePatternIngestionStrategy.toData(startConf, props) - val end = NodePatternIngestionStrategy.toData(endConf, props) - - mapOf("start" to start, "end" to end) - } else { - null - } - } - return if (data.isEmpty()) { - emptyList() - } else { - listOf(QueryEvents(deleteRelationshipTemplate, data)) - } - } - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/SchemaIngestionStrategy.kt b/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/SchemaIngestionStrategy.kt deleted file mode 100644 index daaf717017..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/SchemaIngestionStrategy.kt +++ /dev/null @@ -1,185 +0,0 @@ -package apoc.kafka.service.sink.strategy - -import apoc.kafka.events.* -import apoc.kafka.extensions.quote -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.utils.KafkaUtil -import apoc.kafka.utils.KafkaUtil.getLabelsAsString -import apoc.kafka.utils.KafkaUtil.getNodeKeysAsString -import apoc.kafka.utils.KafkaUtil.getNodeKeys -import apoc.kafka.utils.KafkaUtil.toStreamsTransactionEvent - - -class SchemaIngestionStrategy: IngestionStrategy { - - private fun prepareRelationshipEvents(events: List, withProperties: Boolean = true): Map>> = events - .mapNotNull { - val payload = it.payload as RelationshipPayload - - val startNodeConstraints = getNodeConstraints(it) { - it.type == StreamsConstraintType.UNIQUE && payload.start.labels.orEmpty().contains(it.label) - } - val endNodeConstraints = getNodeConstraints(it) { - it.type == StreamsConstraintType.UNIQUE && payload.end.labels.orEmpty().contains(it.label) - } - - if (constraintsAreEmpty(startNodeConstraints, endNodeConstraints)) { - null - } else { - createRelationshipMetadata(payload, startNodeConstraints, endNodeConstraints, withProperties) - } - } - .groupBy { it.first } - .mapValues { it.value.map { it.second } } - - private fun createRelationshipMetadata(payload: RelationshipPayload, startNodeConstraints: List, endNodeConstraints: List, withProperties: Boolean): Pair>>? 
{ - val startNodeKeys = getNodeKeys( - labels = payload.start.labels.orEmpty(), - propertyKeys = payload.start.ids.keys, - constraints = startNodeConstraints) - val endNodeKeys = getNodeKeys( - labels = payload.end.labels.orEmpty(), - propertyKeys = payload.end.ids.keys, - constraints = endNodeConstraints) - val start = payload.start.ids.filterKeys { startNodeKeys.contains(it) } - val end = payload.end.ids.filterKeys { endNodeKeys.contains(it) } - - return if (idsAreEmpty(start, end)) { - null - } else { - val value = if (withProperties) { - val properties = payload.after?.properties ?: payload.before?.properties ?: emptyMap() - mapOf("start" to start, "end" to end, "properties" to properties) - } else { - mapOf("start" to start, "end" to end) - } - val key = RelationshipSchemaMetadata( - label = payload.label, - startLabels = payload.start.labels.orEmpty().filter { label -> startNodeConstraints.any { it.label == label } }, - endLabels = payload.end.labels.orEmpty().filter { label -> endNodeConstraints.any { it.label == label } }, - startKeys = start.keys, - endKeys = end.keys - ) - key to value - } - } - - private fun idsAreEmpty(start: Map, end: Map) = - start.isEmpty() || end.isEmpty() - - private fun constraintsAreEmpty(startNodeConstraints: List, endNodeConstraints: List) = - startNodeConstraints.isEmpty() || endNodeConstraints.isEmpty() - - override fun mergeRelationshipEvents(events: Collection): List { - return prepareRelationshipEvents(events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.relationship - && it.meta.operation != OperationType.deleted } }) - .map { - val label = it.key.label.quote() - val query = """ - |${KafkaUtil.UNWIND} - |MERGE (start${getLabelsAsString(it.key.startLabels)}{${getNodeKeysAsString("start", it.key.startKeys)}}) - |MERGE (end${getLabelsAsString(it.key.endLabels)}{${getNodeKeysAsString("end", it.key.endKeys)}}) - |MERGE (start)-[r:$label]->(end) - |SET r = event.properties - """.trimMargin() - QueryEvents(query, it.value) - } - } - - override fun deleteRelationshipEvents(events: Collection): List { - return prepareRelationshipEvents(events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.relationship - && it.meta.operation == OperationType.deleted } }, false) - .map { - val label = it.key.label.quote() - val query = """ - |${KafkaUtil.UNWIND} - |MATCH (start${getLabelsAsString(it.key.startLabels)}{${getNodeKeysAsString("start", it.key.startKeys)}}) - |MATCH (end${getLabelsAsString(it.key.endLabels)}{${getNodeKeysAsString("end", it.key.endKeys)}}) - |MATCH (start)-[r:$label]->(end) - |DELETE r - """.trimMargin() - QueryEvents(query, it.value) - } - } - - override fun deleteNodeEvents(events: Collection): List { - return events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.node && it.meta.operation == OperationType.deleted } } - .mapNotNull { - val changeEvtBefore = it.payload.before as NodeChange - val constraints = getNodeConstraints(it) { it.type == StreamsConstraintType.UNIQUE } - if (constraints.isEmpty()) { - null - } else { - constraints to mapOf("properties" to changeEvtBefore.properties) - } - } - .groupBy({ it.first }, { it.second }) - .map { - val labels = it.key.mapNotNull { it.label } - val nodeKeys = it.key.flatMap { it.properties }.toSet() - val query = """ - |${KafkaUtil.UNWIND} - |MATCH (n${getLabelsAsString(labels)}{${getNodeKeysAsString(keys = nodeKeys)}}) - |DETACH DELETE n - """.trimMargin() - QueryEvents(query, it.value) - } - } - - override 
fun mergeNodeEvents(events: Collection): List { - val filterLabels: (List, List) -> List = { labels, constraints -> - labels.filter { label -> !constraints.any { constraint -> constraint.label == label } } - .map { it.quote() } - } - return events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.node && it.meta.operation != OperationType.deleted } } - .mapNotNull { - val changeEvtAfter = it.payload.after as NodeChange - val labelsAfter = changeEvtAfter.labels ?: emptyList() - val labelsBefore = (it.payload.before as? NodeChange)?.labels.orEmpty() - - val constraints = getNodeConstraints(it) { it.type == StreamsConstraintType.UNIQUE } - if (constraints.isEmpty()) { - null - } else { - val labelsToAdd = filterLabels((labelsAfter - labelsBefore), constraints) - val labelsToDelete = filterLabels((labelsBefore - labelsAfter), constraints) - - val propertyKeys = changeEvtAfter.properties?.keys ?: emptySet() - val keys = getNodeKeys(labelsAfter, propertyKeys, constraints) - - if (keys.isEmpty()) { - null - } else { - val key = NodeSchemaMetadata(constraints = constraints, - labelsToAdd = labelsToAdd, labelsToDelete = labelsToDelete, - keys = keys) - val value = mapOf("properties" to changeEvtAfter.properties) - key to value - } - } - } - .groupBy({ it.first }, { it.second }) - .map { map -> - var query = """ - |${KafkaUtil.UNWIND} - |MERGE (n${getLabelsAsString(map.key.constraints.mapNotNull { it.label })}{${getNodeKeysAsString(keys = map.key.keys)}}) - |SET n = event.properties - """.trimMargin() - if (map.key.labelsToAdd.isNotEmpty()) { - query += "\nSET n${getLabelsAsString(map.key.labelsToAdd)}" - } - if (map.key.labelsToDelete.isNotEmpty()) { - query += "\nREMOVE n${getLabelsAsString(map.key.labelsToDelete)}" - } - QueryEvents(query, map.value) - } - } - - private fun getNodeConstraints(event: StreamsTransactionEvent, - filter: (Constraint) -> Boolean): List = event.schema.constraints.filter { filter(it) } - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/SourceIdIngestionStrategy.kt b/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/SourceIdIngestionStrategy.kt deleted file mode 100644 index ac426953ae..0000000000 --- a/extended/src/main/kotlin/apoc/kafka/service/sink/strategy/SourceIdIngestionStrategy.kt +++ /dev/null @@ -1,110 +0,0 @@ -package apoc.kafka.service.sink.strategy - -import apoc.kafka.events.EntityType -import apoc.kafka.events.NodeChange -import apoc.kafka.events.OperationType -import apoc.kafka.events.RelationshipChange -import apoc.kafka.events.RelationshipPayload -import apoc.kafka.extensions.quote -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.utils.KafkaUtil -import apoc.kafka.utils.KafkaUtil.getLabelsAsString -import apoc.kafka.utils.KafkaUtil.toStreamsTransactionEvent - -data class SourceIdIngestionStrategyConfig(val labelName: String = "SourceEvent", val idName: String = "sourceId") - -class SourceIdIngestionStrategy(config: SourceIdIngestionStrategyConfig = SourceIdIngestionStrategyConfig()): IngestionStrategy { - - private val quotedLabelName = config.labelName.quote() - private val quotedIdName = config.idName.quote() - - override fun mergeRelationshipEvents(events: Collection): List { - return events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.relationship && it.meta.operation != OperationType.deleted } } - .map { data -> - val payload = data.payload as RelationshipPayload - val changeEvt = when (data.meta.operation) { - 
OperationType.deleted -> { - data.payload.before as RelationshipChange - } - else -> data.payload.after as RelationshipChange - } - payload.label to mapOf("id" to payload.id, - "start" to payload.start.id, "end" to payload.end.id, "properties" to changeEvt.properties) - } - .groupBy({ it.first }, { it.second }) - .map { - val query = """ - |${KafkaUtil.UNWIND} - |MERGE (start:$quotedLabelName{$quotedIdName: event.start}) - |MERGE (end:$quotedLabelName{$quotedIdName: event.end}) - |MERGE (start)-[r:${it.key.quote()}{$quotedIdName: event.id}]->(end) - |SET r = event.properties - |SET r.$quotedIdName = event.id - """.trimMargin() - QueryEvents(query, it.value) - } - } - - override fun deleteRelationshipEvents(events: Collection): List { - return events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.relationship && it.meta.operation == OperationType.deleted } } - .map { data -> - val payload = data.payload as RelationshipPayload - payload.label to mapOf("id" to data.payload.id) - } - .groupBy({ it.first }, { it.second }) - .map { - val query = "${KafkaUtil.UNWIND} MATCH ()-[r:${it.key.quote()}{$quotedIdName: event.id}]-() DELETE r" - QueryEvents(query, it.value) - } - } - - override fun deleteNodeEvents(events: Collection): List { - val data = events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.node && it.meta.operation == OperationType.deleted } } - .map { mapOf("id" to it.payload.id) } - if (data.isNullOrEmpty()) { - return emptyList() - } - val query = "${KafkaUtil.UNWIND} MATCH (n:$quotedLabelName{$quotedIdName: event.id}) DETACH DELETE n" - return listOf(QueryEvents(query, data)) - } - - override fun mergeNodeEvents(events: Collection): List { - return events - .mapNotNull { toStreamsTransactionEvent(it) { it.payload.type == EntityType.node && it.meta.operation != OperationType.deleted } } - .map { data -> - val changeEvtAfter = data.payload.after as NodeChange - val labelsAfter = changeEvtAfter.labels ?: emptyList() - val labelsBefore = if (data.payload.before != null) { - val changeEvtBefore = data.payload.before as NodeChange - changeEvtBefore.labels ?: emptyList() - } else { - emptyList() - } - val labelsToAdd = (labelsAfter - labelsBefore) - .toSet() - val labelsToDelete = (labelsBefore - labelsAfter) - .toSet() - NodeMergeMetadata(labelsToAdd = labelsToAdd, labelsToDelete = labelsToDelete) to mapOf("id" to data.payload.id, "properties" to changeEvtAfter.properties) - } - .groupBy({ it.first }, { it.second }) - .map { - var query = """ - |${KafkaUtil.UNWIND} - |MERGE (n:$quotedLabelName{$quotedIdName: event.id}) - |SET n = event.properties - |SET n.$quotedIdName = event.id - """.trimMargin() - if (it.key.labelsToDelete.isNotEmpty()) { - query += "\nREMOVE n${getLabelsAsString(it.key.labelsToDelete)}" - } - if (it.key.labelsToAdd.isNotEmpty()) { - query += "\nSET n${getLabelsAsString(it.key.labelsToAdd)}" - } - QueryEvents(query, it.value) - } - } - -} \ No newline at end of file diff --git a/extended/src/main/kotlin/apoc/kafka/utils/JSONUtils.kt b/extended/src/main/kotlin/apoc/kafka/utils/JSONUtils.kt index cbf167772a..1d1a65a790 100644 --- a/extended/src/main/kotlin/apoc/kafka/utils/JSONUtils.kt +++ b/extended/src/main/kotlin/apoc/kafka/utils/JSONUtils.kt @@ -11,7 +11,6 @@ import com.fasterxml.jackson.databind.ObjectMapper import com.fasterxml.jackson.databind.SerializationFeature import com.fasterxml.jackson.databind.SerializerProvider import com.fasterxml.jackson.databind.module.SimpleModule -import 
com.fasterxml.jackson.module.kotlin.convertValue import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper import com.fasterxml.jackson.module.kotlin.readValue import org.neo4j.driver.internal.value.PointValue @@ -124,10 +123,6 @@ object JSONUtils { return getObjectMapper().readValue(value) } - inline fun convertValue(value: Any, objectMapper: ObjectMapper = getObjectMapper()): T { - return objectMapper.convertValue(value) - } - fun asStreamsTransactionEvent(obj: Any): StreamsTransactionEvent { return try { val evt = when (obj) { diff --git a/extended/src/main/kotlin/apoc/kafka/utils/KafkaUtil.kt b/extended/src/main/kotlin/apoc/kafka/utils/KafkaUtil.kt index e809d5863b..7c2ea84efe 100644 --- a/extended/src/main/kotlin/apoc/kafka/utils/KafkaUtil.kt +++ b/extended/src/main/kotlin/apoc/kafka/utils/KafkaUtil.kt @@ -5,32 +5,20 @@ import apoc.ExtendedApocConfig.APOC_KAFKA_ENABLED import apoc.kafka.events.Constraint import apoc.kafka.events.RelKeyStrategy import apoc.kafka.events.StreamsConstraintType -import apoc.kafka.events.StreamsTransactionEvent -import apoc.kafka.extensions.execute import apoc.kafka.extensions.quote import apoc.kafka.service.StreamsSinkEntity -import kotlinx.coroutines.Dispatchers -import kotlinx.coroutines.GlobalScope import kotlinx.coroutines.delay -import kotlinx.coroutines.launch -import kotlinx.coroutines.runBlocking import org.apache.kafka.clients.CommonClientConfigs import org.apache.kafka.clients.admin.AdminClient import org.apache.kafka.clients.admin.AdminClientConfig -import org.apache.kafka.clients.consumer.ConsumerConfig -import org.apache.kafka.clients.producer.ProducerConfig import org.apache.kafka.common.config.ConfigResource import org.apache.kafka.common.config.SaslConfigs import org.apache.kafka.common.config.SslConfigs import org.apache.kafka.common.config.TopicConfig import org.neo4j.dbms.api.DatabaseManagementService import org.neo4j.dbms.systemgraph.TopologyGraphDbmsModel.HostedOnMode -import org.neo4j.exceptions.UnsatisfiedDependencyException import org.neo4j.graphdb.GraphDatabaseService -import org.neo4j.graphdb.QueryExecutionException import org.neo4j.kernel.internal.GraphDatabaseAPI -import org.neo4j.logging.Log -import org.neo4j.logging.internal.LogService import java.io.IOException import java.lang.invoke.MethodHandles import java.lang.invoke.MethodType @@ -40,15 +28,9 @@ import java.net.URI import java.util.* object KafkaUtil { - const val labelSeparator = ":" - const val keySeparator = ", " @JvmStatic val UNWIND: String = "UNWIND \$events AS event" - @JvmStatic val WITH_EVENT_FROM: String = "WITH event, from" - - @JvmStatic val LEADER = "LEADER" - @JvmStatic val SYSTEM_DATABASE_NAME = "system" @JvmStatic @@ -65,31 +47,6 @@ object KafkaUtil { .asType(MethodType.methodType(Boolean::class.java, Any::class.java)) } - fun clusterMemberRole(db: GraphDatabaseAPI): String { - val fallback: (Exception?) -> String = { e: Exception? 
-> - val userLog = db.dependencyResolver - .resolveDependency(LogService::class.java) - .getUserLog(KafkaUtil::class.java) - e?.let { userLog.warn("Cannot call the APIs, trying with the cypher query", e) } - ?: userLog.warn("Cannot call the APIs, trying with the cypher query") - db.execute("CALL dbms.cluster.role(\$database)", - mapOf("database" to db.databaseName()) - ) { it.columnAs("role").next() } - } - val execute = { - coreMetadata?.let { - try { - val raftMachine: Any = db.dependencyResolver.resolveDependency(coreMetadata) - val isLeader = isLeaderMethodHandle!!.invokeExact(raftMachine) as Boolean - if (isLeader) "LEADER" else "FOLLOWER" - } catch (e: UnsatisfiedDependencyException) { - "LEADER" - } - } ?: "LEADER" - } - return executeOrFallback(execute, fallback) - } - fun isCluster(db: GraphDatabaseAPI): Boolean = db.mode() != HostedOnMode.SINGLE && db.mode() != HostedOnMode.VIRTUAL fun isCluster(dbms: DatabaseManagementService): Boolean = dbms.listDatabases() @@ -103,32 +60,11 @@ object KafkaUtil { fallback(e) } - fun getLabelsAsString(labels: Collection): String = labels - .map { it.quote() } - .joinToString(labelSeparator) - .let { if (it.isNotBlank()) "$labelSeparator$it" else it } - - fun getNodeKeysAsString(prefix: String = "properties", keys: Set): String = keys - .map { toQuotedProperty(prefix, it) } - .joinToString(keySeparator) - private fun toQuotedProperty(prefix: String = "properties", property: String): String { val quoted = property.quote() return "$quoted: event.$prefix.$quoted" } - fun getNodeMergeKeys(prefix: String, keys: Set): String = keys - .map { - val quoted = it.quote() - "$quoted: event.$prefix.$quoted" - } - .joinToString(keySeparator) - - fun containsProp(key: String, properties: List): Boolean = if (key.contains(".")) { - properties.contains(key) || properties.any { key.startsWith("$it.") } - } else { - properties.contains(key) - } suspend fun retryForException(exceptions: Array>, retries: Int, delayTime: Long, action: () -> T): T { return try { @@ -233,10 +169,6 @@ object KafkaUtil { + getConfigProperties(TopicConfig::class.java) + getConfigProperties(SslConfigs::class.java)) - fun getProducerProperties() = ProducerConfig.configNames() - getBaseConfigs() - - fun getConsumerProperties() = ConsumerConfig.configNames() - getBaseConfigs() - fun getNodeKeys(labels: List, propertyKeys: Set, constraints: List, keyStrategy: RelKeyStrategy = RelKeyStrategy.DEFAULT): Set = constraints .filter { constraint -> @@ -258,16 +190,7 @@ object KafkaUtil { RelKeyStrategy.ALL -> it.flatMap { it.properties }.toSet() } } - - - fun toStreamsTransactionEvent(streamsSinkEntity: StreamsSinkEntity, - evaluation: (StreamsTransactionEvent) -> Boolean) - : StreamsTransactionEvent? = if (streamsSinkEntity.value != null) { - val data = JSONUtils.asStreamsTransactionEvent(streamsSinkEntity.value) - if (evaluation(data)) data else null - } else { - null - } + fun ignoreExceptions(action: () -> T, vararg toIgnore: Class): T? 
{ return try { @@ -284,58 +207,10 @@ object KafkaUtil { } } - fun blockUntilFalseOrTimeout(timeout: Long, delay: Long = 1000, action: () -> Boolean): Boolean = runBlocking { - val start = System.currentTimeMillis() - var success = action() - while (System.currentTimeMillis() - start < timeout && !success) { - delay(delay) - success = action() - } - success - } - fun getName(db: GraphDatabaseService) = db.databaseName() fun isWriteableInstance(db: GraphDatabaseAPI) = apoc.util.Util.isWriteableInstance(db) - private fun clusterHasLeader(db: GraphDatabaseAPI): Boolean = try { - db.execute(""" - |CALL dbms.cluster.overview() YIELD databases - |RETURN databases[${'$'}database] AS role - """.trimMargin(), mapOf("database" to db.databaseName())) { - it.columnAs("role") - .stream() - .toList() - .contains(KafkaUtil.LEADER) - } - } catch (e: QueryExecutionException) { - if (e.statusCode.equals("Neo.ClientError.Procedure.ProcedureNotFound", ignoreCase = true)) { - false - } - throw e - } - fun executeInWriteableInstance(db: GraphDatabaseAPI, - action: () -> T?): T? = if (isWriteableInstance(db)) { - action() - } else { - null - } - - fun isClusterCorrectlyFormed(dbms: DatabaseManagementService) = dbms.listDatabases() - .filterNot { it == KafkaUtil.SYSTEM_DATABASE_NAME } - .map { dbms.database(it) as GraphDatabaseAPI } - .all { clusterHasLeader(it) } - - fun waitForTheLeaders(dbms: DatabaseManagementService, log: Log, timeout: Long = 120000, action: () -> Unit) { - GlobalScope.launch(Dispatchers.IO) { - val start = System.currentTimeMillis() - val delay: Long = 2000 - while (!isClusterCorrectlyFormed(dbms) && System.currentTimeMillis() - start < timeout) { - log.info("${KafkaUtil.LEADER} not found, new check comes in $delay milliseconds...") - delay(delay) - } - action() - } - } + } \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/common/strategy/CUDIngestionStrategyTest.kt b/extended/src/test/kotlin/apoc/kafka/common/strategy/CUDIngestionStrategyTest.kt deleted file mode 100644 index e2216d6d2e..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/common/strategy/CUDIngestionStrategyTest.kt +++ /dev/null @@ -1,1185 +0,0 @@ -package apoc.kafka.common.strategy - -import apoc.kafka.extensions.quote -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.service.sink.strategy.CUDIngestionStrategy -import apoc.kafka.service.sink.strategy.CUDNode -import apoc.kafka.service.sink.strategy.CUDNodeRel -import apoc.kafka.service.sink.strategy.CUDOperations -import apoc.kafka.service.sink.strategy.CUDRelationship -import apoc.kafka.service.sink.strategy.QueryEvents -import apoc.kafka.utils.KafkaUtil -import org.junit.Test -import kotlin.test.assertEquals -import kotlin.test.assertTrue - -class CUDIngestionStrategyTest { - - private fun findEventByQuery(query: String, evts: List) = evts.find { it.query == query }!! 
- - private fun assertNodeEventsContainsKey(qe: QueryEvents, vararg keys: String) = assertTrue { - qe.events.all { - val ids = it[CUDIngestionStrategy.ID_KEY] as Map - ids.keys.containsAll(keys.toList()) - } - } - - private fun assertRelationshipEventsContainsKey(qe: QueryEvents, fromKey: String, toKey: String) = assertTrue { - qe.events.all { - val from = it["from"] as Map - val idsFrom = from[CUDIngestionStrategy.ID_KEY] as Map - val to = it["to"] as Map - val idsTo = to[CUDIngestionStrategy.ID_KEY] as Map - idsFrom.containsKey(fromKey) && idsTo.containsKey(toKey) - } - } - - @Test - fun `should create, merge and update nodes`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val key = "key" - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val (op, ids) = when (it) { - in mergeMarkers -> CUDOperations.merge to mapOf(key to it) - in updateMarkers -> CUDOperations.update to mapOf(key to it) - else -> CUDOperations.create to emptyMap() - } - val cudNode = CUDNode(op = op, - labels = labels, - ids = ids, - properties = properties) - StreamsSinkEntity(null, cudNode) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(6, nodeEvents.size) - assertEquals(10, nodeEvents.map { it.events.size }.sum()) - val createNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:Foo:Bar) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(3, createNodeFooBar.events.size) - val createNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:Foo:Bar:Label) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, createNodeFooBarLabel.events.size) - val mergeNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (n:Foo:Bar {${key.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${key.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, mergeNodeFooBar.events.size) - assertNodeEventsContainsKey(mergeNodeFooBar, key) - val mergeNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (n:Foo:Bar:Label {${key.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${key.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, mergeNodeFooBarLabel.events.size) - assertNodeEventsContainsKey(mergeNodeFooBarLabel, key) - val updateNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n:Foo:Bar {${key.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${key.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, updateNodeFooBar.events.size) - assertNodeEventsContainsKey(updateNodeFooBar, key) - val updateNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n:Foo:Bar:Label {${key.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${key.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, updateNodeFooBarLabel.events.size) - 
assertNodeEventsContainsKey(updateNodeFooBarLabel, key) - } - - @Test - fun `should create, merge, update and delete nodes with garbage data`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val deleteMarkers = listOf(10) - val key = "not..... SO SIMPLE!" - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("WellBehaved", "C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l") else listOf("WellBehaved", "C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l", "Label") - val properties = if (it in deleteMarkers) emptyMap() else mapOf("foo" to "foo-value-$it", "id" to it) - val (op, ids) = when (it) { - in mergeMarkers -> CUDOperations.merge to mapOf(key to it) - in updateMarkers -> CUDOperations.update to mapOf(key to it) - in deleteMarkers -> CUDOperations.delete to mapOf(key to it) - else -> CUDOperations.create to emptyMap() - } - val cudNode = CUDNode(op = op, - labels = labels, - ids = ids, - properties = properties) - StreamsSinkEntity(null, cudNode) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), relationshipEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(6, nodeEvents.size) - assertEquals(9, nodeEvents.map { it.events.size }.sum()) - val createNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, createNodeFooBar.events.size) - val createNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`:Label) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, createNodeFooBarLabel.events.size) - val mergeNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l` {`$key`: event.${CUDIngestionStrategy.ID_KEY}.`$key`}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, mergeNodeFooBar.events.size) - assertNodeEventsContainsKey(mergeNodeFooBar, key) - val mergeNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`:Label {`$key`: event.${CUDIngestionStrategy.ID_KEY}.`$key`}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, mergeNodeFooBarLabel.events.size) - assertNodeEventsContainsKey(mergeNodeFooBarLabel, key) - val updateNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l` {`$key`: event.${CUDIngestionStrategy.ID_KEY}.`$key`}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, updateNodeFooBar.events.size) - assertNodeEventsContainsKey(updateNodeFooBar, key) - val updateNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`:Label {`$key`: event.${CUDIngestionStrategy.ID_KEY}.`$key`}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, updateNodeFooBarLabel.events.size) - assertNodeEventsContainsKey(updateNodeFooBarLabel, key) - - assertEquals(1, nodeDeleteEvents.size) - val nodeDeleteEvent = 
nodeDeleteEvents.first() - assertEquals(""" - |${KafkaUtil.UNWIND} - |MATCH (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l` {`$key`: event.${CUDIngestionStrategy.ID_KEY}.`$key`}) - |DETACH DELETE n - """.trimMargin(), nodeDeleteEvent.query) - } - - @Test - fun `should create nodes only with valid CUD operations`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val invalidMarkers = listOf(3, 6, 9) - val list = (1..10).map { - val labels = listOf("Foo", "Bar", "Label") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val (op, ids) = when (it) { - in mergeMarkers -> CUDOperations.merge to mapOf("_id" to it) - in invalidMarkers -> CUDOperations.match to mapOf("_id" to it) - else -> CUDOperations.create to emptyMap() - } - val cudNode = CUDNode(op = op, - labels = labels, - ids = ids, - properties = properties) - StreamsSinkEntity(null, cudNode) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(2, nodeEvents.size) - assertEquals(7, nodeEvents.map { it.events.size }.sum()) - val createNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:Foo:Bar:Label) - |SET n = event.properties - """.trimMargin(),nodeEvents) - assertEquals(4, createNodeFooBarLabel.events.size) - val mergeNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n) WHERE id(n) = event.ids._id - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(3, mergeNodeFooBar.events.size) - assertNodeEventsContainsKey(mergeNodeFooBar, "_id") - } - - @Test - fun `should create, merge and update relationships only with valid node operations`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val invalidMarker = listOf(3, 4, 6, 9) - val key = "key" - val list = (1..10).map { - val labels = listOf("Foo", "Bar", "Label") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val op = when (it) { - in mergeMarkers -> CUDOperations.merge - else -> CUDOperations.create - } - val start = CUDNodeRel(ids = mapOf(key to it), labels = labels, op= if (it in invalidMarker) CUDOperations.delete else CUDOperations.create) - val end = CUDNodeRel(ids = mapOf(key to it + 1), labels = labels) - val rel = CUDRelationship(op = op, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(2, relationshipEvents.size) - assertEquals(6, relationshipEvents.map { it.events.size }.sum()) - val createRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - 
|${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(3, createRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(createRelFooBarLabel, key, key) - val mergeRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(3, mergeRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBarLabel, key, key) - } - - @Test - fun `should delete nodes with internal id reference`() { - // given - val detachMarkers = listOf(1, 3, 8, 10) - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val detach = it in detachMarkers - val properties = emptyMap() - val cudNode = CUDNode(op = CUDOperations.delete, - labels = labels, - ids = mapOf("_id" to it), - properties = properties, - detach = detach) - StreamsSinkEntity(null, cudNode) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), relationshipEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(4, nodeDeleteEvents.size) - assertEquals(10, nodeDeleteEvents.map { it.events.size }.sum()) - val deleteNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n) WHERE id(n) = event.${CUDIngestionStrategy.ID_KEY}._id - |DELETE n - """.trimMargin(), nodeDeleteEvents) - assertEquals(3, deleteNodeFooBar.events.size) - val key = "_id" - assertNodeEventsContainsKey(deleteNodeFooBar, key) - val deleteNodeFooBarDetach = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n) WHERE id(n) = event.${CUDIngestionStrategy.ID_KEY}._id - |DETACH DELETE n - """.trimMargin(), nodeDeleteEvents) - assertEquals(2, deleteNodeFooBarDetach.events.size) - assertNodeEventsContainsKey(deleteNodeFooBarDetach, key) - val deleteNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n) WHERE id(n) = event.${CUDIngestionStrategy.ID_KEY}._id - |DELETE n - """.trimMargin(), nodeDeleteEvents) - assertEquals(3, deleteNodeFooBarLabel.events.size) - assertNodeEventsContainsKey(deleteNodeFooBarLabel, key) - val deleteNodeFooBarLabelDetach = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n) WHERE id(n) = event.${CUDIngestionStrategy.ID_KEY}._id - |DETACH DELETE n - """.trimMargin(), nodeDeleteEvents) - assertEquals(2, deleteNodeFooBarLabelDetach.events.size) - assertNodeEventsContainsKey(deleteNodeFooBarLabelDetach, key) - } - - @Test - fun `should create, merge and update nodes with internal id reference`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val (op, ids) = when (it) { - in 
mergeMarkers -> CUDOperations.merge to mapOf("_id" to it) - in updateMarkers -> CUDOperations.update to mapOf("_id" to it) - else -> CUDOperations.create to emptyMap() - } - val cudNode = CUDNode(op = op, - labels = labels, - ids = ids, - properties = properties) - StreamsSinkEntity(null, cudNode) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(3, nodeEvents.size) - assertEquals(10, nodeEvents.map { it.events.size }.sum()) - val createNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:Foo:Bar) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(3, createNodeFooBar.events.size) - val createNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:Foo:Bar:Label) - |SET n = event.properties - """.trimMargin(),nodeEvents) - assertEquals(2, createNodeFooBarLabel.events.size) - val mergeNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n) WHERE id(n) = event.ids._id - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(5, mergeNodeFooBar.events.size) - assertNodeEventsContainsKey(mergeNodeFooBar, "_id") - } - - @Test - fun `should create, merge and update relationships`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val key = "key" - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val op = when (it) { - in mergeMarkers -> CUDOperations.merge - in updateMarkers -> CUDOperations.update - else -> CUDOperations.create - } - val start = CUDNodeRel(ids = mapOf(key to it), labels = labels) - val end = CUDNodeRel(ids = mapOf(key to it + 1), labels = labels) - val rel = CUDRelationship(op = op, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(6, relationshipEvents.size) - assertEquals(10, relationshipEvents.map { it.events.size }.sum()) - val createRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(3, createRelFooBar.events.size) - assertRelationshipEventsContainsKey(createRelFooBar, key, key) - val mergeRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: 
event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, mergeRelFooBar.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBar, key, key) - val updateRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, updateRelFooBar.events.size) - assertRelationshipEventsContainsKey(updateRelFooBar, key, key) - val createRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, createRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(createRelFooBarLabel, key, key) - val mergeRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, mergeRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBarLabel, key, key) - val updateRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, updateRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(updateRelFooBarLabel, key, key) - } - - @Test - fun `should create and delete relationship also without properties field`() { - val key = "key" - val startNode = "SourceNode" - val endNode = "TargetNode" - val relType = "MY_REL" - val start = CUDNodeRel(ids = mapOf(key to 1), labels = listOf(startNode)) - val end = CUDNodeRel(ids = mapOf(key to 2), labels = listOf(endNode)) - val list = listOf(CUDOperations.create, CUDOperations.delete, CUDOperations.update).map { - val rel = CUDRelationship(op = it, from = start, to = end, rel_type = relType) - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - assertEquals(2, relationshipEvents.size) - val createRel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:$startNode {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:$endNode {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:$relType]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, createRel.events.size) - val updateRel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:$startNode {key: 
event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:$endNode {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, updateRel.events.size) - - assertEquals(1, relationshipDeleteEvents.size) - val deleteRel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:$startNode {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:$endNode {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:$relType]->(to) - |DELETE r - """.trimMargin(), relationshipDeleteEvents) - assertEquals(1, deleteRel.events.size) - } - - @Test - fun `should create, merge and update relationships with merge op in 'from' and 'to' node`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val key = "key" - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val op = when (it) { - in mergeMarkers -> CUDOperations.merge - in updateMarkers -> CUDOperations.update - else -> CUDOperations.create - } - val start = CUDNodeRel(ids = mapOf(key to it), labels = labels, op = CUDOperations.merge) - val end = CUDNodeRel(ids = mapOf(key to it + 1), labels = labels, op = CUDOperations.merge) - val rel = CUDRelationship(op = op, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(6, relationshipEvents.size) - assertEquals(10, relationshipEvents.map { it.events.size }.sum()) - val createRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MERGE (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(3, createRelFooBar.events.size) - assertRelationshipEventsContainsKey(createRelFooBar, key, key) - val mergeRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MERGE (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, mergeRelFooBar.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBar, key, key) - val updateRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, updateRelFooBar.events.size) - assertRelationshipEventsContainsKey(updateRelFooBar, key, key) - val createRelFooBarLabel = findEventByQuery(""" - 
|${KafkaUtil.UNWIND} - |MERGE (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MERGE (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, createRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(createRelFooBarLabel, key, key) - val mergeRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MERGE (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, mergeRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBarLabel, key, key) - val updateRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, updateRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(updateRelFooBarLabel, key, key) - } - - @Test - fun `should create, merge and update relationships with match op in 'from' node and merge or create in 'to' node`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val key = "key" - val list = (1..10).map { - val labels = listOf("Foo", "Bar") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val op = when (it) { - in mergeMarkers -> CUDOperations.merge - in updateMarkers -> CUDOperations.update - else -> CUDOperations.create - } - val start = CUDNodeRel(ids = mapOf(key to it), labels = labels, op = CUDOperations.match) - val end = CUDNodeRel(ids = mapOf(key to it + 1), labels = labels, op = if (it <= 5) CUDOperations.merge else CUDOperations.create) - val rel = CUDRelationship(op = op, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(6, relationshipEvents.size) - assertEquals(10, relationshipEvents.map { it.events.size }.sum()) - val createRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MERGE (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, createRelFooBar.events.size) - assertRelationshipEventsContainsKey(createRelFooBar, key, key) - val mergeRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MERGE (to:Foo:Bar {key: 
event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, mergeRelFooBar.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBar, key, key) - val matchMergeAndMergeRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MERGE (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, matchMergeAndMergeRelFooBar.events.size) - assertRelationshipEventsContainsKey(matchMergeAndMergeRelFooBar, key, key) - val matchMergeAndCreateRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |CREATE (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, matchMergeAndCreateRelFooBar.events.size) - assertRelationshipEventsContainsKey(matchMergeAndCreateRelFooBar, key, key) - val updateRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, updateRelFooBar.events.size) - assertRelationshipEventsContainsKey(updateRelFooBar, key, key) - val mergeRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |CREATE (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(3, mergeRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBarLabel, key, key) - } - - @Test - fun `should create, merge and update relationships with match op in 'to' node and merge or create in 'from' node`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val key = "key" - val list = (1..10).map { - val labels = listOf("Foo", "Bar") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val op = when (it) { - in mergeMarkers -> CUDOperations.merge - in updateMarkers -> CUDOperations.update - else -> CUDOperations.create - } - val start = CUDNodeRel(ids = mapOf(key to it), labels = labels, op = if (it <= 5) CUDOperations.merge else CUDOperations.create) - val end = CUDNodeRel(ids = mapOf(key to it + 1), labels = labels, CUDOperations.match) - val rel = CUDRelationship(op = op, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(6, 
relationshipEvents.size) - assertEquals(10, relationshipEvents.map { it.events.size }.sum()) - val createRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, createRelFooBar.events.size) - assertRelationshipEventsContainsKey(createRelFooBar, key, key) - val mergeRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, mergeRelFooBar.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBar, key, key) - val matchMergeAndMergeRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, matchMergeAndMergeRelFooBar.events.size) - assertRelationshipEventsContainsKey(matchMergeAndMergeRelFooBar, key, key) - val matchMergeAndCreateRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, matchMergeAndCreateRelFooBar.events.size) - assertRelationshipEventsContainsKey(matchMergeAndCreateRelFooBar, key, key) - val updateRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(1, updateRelFooBar.events.size) - assertRelationshipEventsContainsKey(updateRelFooBar, key, key) - val mergeRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(3, mergeRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBarLabel, key, key) - } - - @Test - fun `should delete relationships`() { - // given - val key = "key" - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val properties = emptyMap() - val start = CUDNodeRel(ids = mapOf(key to it), labels = labels) - val end = CUDNodeRel(ids = mapOf(key to it + 1), labels = labels) - val rel = CUDRelationship(op = CUDOperations.delete, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = 
cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipEvents) - - assertEquals(2, relationshipDeleteEvents.size) - assertEquals(10, relationshipDeleteEvents.map { it.events.size }.sum()) - val deleteRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |DELETE r - """.trimMargin(), relationshipDeleteEvents) - assertEquals(5, deleteRelFooBar.events.size) - assertRelationshipEventsContainsKey(deleteRelFooBar, key, key) - val deleteRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from:Foo:Bar:Label {key: event.from.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (to:Foo:Bar:Label {key: event.to.${CUDIngestionStrategy.ID_KEY}.key}) - |MATCH (from)-[r:MY_REL]->(to) - |DELETE r - """.trimMargin(), relationshipDeleteEvents) - assertEquals(5, deleteRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(deleteRelFooBarLabel, key, key) - } - - @Test - fun `should delete relationships with internal id reference`() { - // given - val key = "_id" - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val properties = emptyMap() - val start = CUDNodeRel(ids = mapOf(key to it), labels = labels) - val relKey = if (it % 2 == 0) key else "key" - val end = CUDNodeRel(ids = mapOf(relKey to it + 1), labels = labels) - val rel = CUDRelationship(op = CUDOperations.delete, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipEvents) - - assertEquals(2, relationshipDeleteEvents.size) - assertEquals(10, relationshipDeleteEvents.map { it.events.size }.sum()) - val deleteRel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from) WHERE id(from) = event.from.${CUDIngestionStrategy.ID_KEY}._id - |MATCH (to) WHERE id(to) = event.to.${CUDIngestionStrategy.ID_KEY}._id - |MATCH (from)-[r:MY_REL]->(to) - |DELETE r - """.trimMargin(), relationshipDeleteEvents) - assertEquals(5, deleteRel.events.size) - assertRelationshipEventsContainsKey(deleteRel, key, key) - val relKey = "key" - val deleteRelFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from) WHERE id(from) = event.from.${CUDIngestionStrategy.ID_KEY}._id - |MATCH (to:Foo:Bar:Label {$relKey: event.to.${CUDIngestionStrategy.ID_KEY}.$relKey}) - |MATCH (from)-[r:MY_REL]->(to) - |DELETE r - """.trimMargin(), relationshipDeleteEvents) - assertEquals(5, deleteRelFooBarLabel.events.size) - assertRelationshipEventsContainsKey(deleteRelFooBarLabel, key, relKey) - } - - @Test - fun `should create, merge and update 
relationships with internal id reference`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("Foo", "Bar") else listOf("Foo", "Bar", "Label") - val properties = mapOf("foo" to "foo-value-$it", "id" to it) - val op = when (it) { - in mergeMarkers -> CUDOperations.merge - in updateMarkers -> CUDOperations.update - else -> CUDOperations.create - } - val start = CUDNodeRel(ids = mapOf("_id" to it), labels = labels) - val end = CUDNodeRel(ids = mapOf("_id" to it + 1), labels = labels) - val rel = CUDRelationship(op = op, properties = properties, from = start, to = end, rel_type = "MY_REL") - StreamsSinkEntity(null, rel) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), nodeEvents) - assertEquals(emptyList(), nodeDeleteEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(3, relationshipEvents.size) - assertEquals(10, relationshipEvents.map { it.events.size }.sum()) - val createRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from) WHERE id(from) = event.from.${CUDIngestionStrategy.ID_KEY}._id - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to) WHERE id(to) = event.to.${CUDIngestionStrategy.ID_KEY}._id - |CREATE (from)-[r:MY_REL]->(to) - |SET r = event.properties - """.trimMargin(), relationshipEvents) - assertEquals(5, createRelFooBar.events.size) - val key = "_id" - assertRelationshipEventsContainsKey(createRelFooBar, key, key) - val mergeRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from) WHERE id(from) = event.from.${CUDIngestionStrategy.ID_KEY}._id - |${KafkaUtil.WITH_EVENT_FROM} - |MATCH (to) WHERE id(to) = event.to.${CUDIngestionStrategy.ID_KEY}._id - |MERGE (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(3, mergeRelFooBar.events.size) - assertRelationshipEventsContainsKey(mergeRelFooBar, key, key) - val updateRelFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (from) WHERE id(from) = event.from.${CUDIngestionStrategy.ID_KEY}._id - |MATCH (to) WHERE id(to) = event.to.${CUDIngestionStrategy.ID_KEY}._id - |MATCH (from)-[r:MY_REL]->(to) - |SET r += event.properties - """.trimMargin(), relationshipEvents) - assertEquals(2, updateRelFooBar.events.size) - assertRelationshipEventsContainsKey(updateRelFooBar, key, key) - } - - @Test - fun `should create, merge, update and delete nodes with compound keys`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val deleteMarkers = listOf(10) - val firstKey = "not..... SO SIMPLE!" 
- val secondKey = "otherKey" - val list = (1..10).map { - val labels = if (it % 2 == 0) listOf("WellBehaved", "C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l") else listOf("WellBehaved", "C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l", "Label") - val properties = if (it in deleteMarkers) emptyMap() else mapOf("foo" to "foo-value-$it", "id" to it) - val (op, ids) = when (it) { - in mergeMarkers -> CUDOperations.merge to mapOf(firstKey to it, secondKey to it) - in updateMarkers -> CUDOperations.update to mapOf(firstKey to it, secondKey to it) - in deleteMarkers -> CUDOperations.delete to mapOf(firstKey to it, secondKey to it) - else -> CUDOperations.create to emptyMap() - } - val cudNode = CUDNode(op = op, - labels = labels, - ids = ids, - properties = properties) - StreamsSinkEntity(null, cudNode) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), relationshipEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(6, nodeEvents.size) - assertEquals(9, nodeEvents.map { it.events.size }.sum()) - val createNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, createNodeFooBar.events.size) - val createNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`:Label) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, createNodeFooBarLabel.events.size) - - val mergeNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l` {${firstKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${firstKey.quote()}, ${secondKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${secondKey.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, mergeNodeFooBar.events.size) - assertNodeEventsContainsKey(mergeNodeFooBar, firstKey, secondKey) - - val mergeNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`:Label {${firstKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${firstKey.quote()}, ${secondKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${secondKey.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, mergeNodeFooBarLabel.events.size) - assertNodeEventsContainsKey(mergeNodeFooBarLabel, firstKey, secondKey) - - val updateNodeFooBar = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l` {${firstKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${firstKey.quote()}, ${secondKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${secondKey.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, updateNodeFooBar.events.size) - assertNodeEventsContainsKey(updateNodeFooBar, firstKey, secondKey) - - val updateNodeFooBarLabel = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l`:Label {${firstKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${firstKey.quote()}, ${secondKey.quote()}: 
event.${CUDIngestionStrategy.ID_KEY}.${secondKey.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(1, updateNodeFooBarLabel.events.size) - assertNodeEventsContainsKey(updateNodeFooBarLabel, firstKey, secondKey) - - assertEquals(1, nodeDeleteEvents.size) - val nodeDeleteEvent = nodeDeleteEvents.first() - assertEquals(""" - |${KafkaUtil.UNWIND} - |MATCH (n:WellBehaved:`C̸r̵a̵z̵y̵ ̶.̵ ̶ ̴ ̸ ̶ ̶ ̵ ̴L̴a̵b̸e̶l` {${firstKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${firstKey.quote()}, ${secondKey.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${secondKey.quote()}}) - |DETACH DELETE n - """.trimMargin(), nodeDeleteEvent.query) - assertEquals(1, nodeDeleteEvent.events.size) - assertNodeEventsContainsKey(updateNodeFooBar, firstKey, secondKey) - } - - @Test - fun `should create, merge, update and delete nodes without labels`() { - // given - val mergeMarkers = listOf(2, 5, 7) - val updateMarkers = listOf(3, 6) - val deleteMarkers = listOf(10) - val key = "key" - val list = (1..10).map { - val labels = emptyList() - val properties = if (it in deleteMarkers) emptyMap() else mapOf("foo" to "foo-value-$it", "id" to it) - val (op, ids) = when (it) { - in mergeMarkers -> CUDOperations.merge to mapOf(key to it) - in updateMarkers -> CUDOperations.update to mapOf(key to it) - in deleteMarkers -> CUDOperations.delete to mapOf(key to it) - else -> CUDOperations.create to emptyMap() - } - val cudNode = CUDNode(op = op, - labels = labels, - ids = ids, - properties = properties) - StreamsSinkEntity(null, cudNode) - } - - // when - val cudQueryStrategy = CUDIngestionStrategy() - val nodeEvents = cudQueryStrategy.mergeNodeEvents(list) - val nodeDeleteEvents = cudQueryStrategy.deleteNodeEvents(list) - - val relationshipEvents = cudQueryStrategy.mergeRelationshipEvents(list) - val relationshipDeleteEvents = cudQueryStrategy.deleteRelationshipEvents(list) - - // then - assertEquals(emptyList(), relationshipEvents) - assertEquals(emptyList(), relationshipDeleteEvents) - - assertEquals(3, nodeEvents.size) - assertEquals(9, nodeEvents.map { it.events.size }.sum()) - val createNode = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |CREATE (n) - |SET n = event.properties - """.trimMargin(), nodeEvents) - assertEquals(4, createNode.events.size) - - val mergeNode = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MERGE (n {${key.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${key.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(3, mergeNode.events.size) - assertNodeEventsContainsKey(mergeNode, key) - - val updateNode = findEventByQuery(""" - |${KafkaUtil.UNWIND} - |MATCH (n {${key.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${key.quote()}}) - |SET n += event.properties - """.trimMargin(), nodeEvents) - assertEquals(2, updateNode.events.size) - assertNodeEventsContainsKey(updateNode, key) - - assertEquals(1, nodeDeleteEvents.size) - val nodeDeleteEvent = nodeDeleteEvents.first() - assertEquals(""" - |${KafkaUtil.UNWIND} - |MATCH (n {${key.quote()}: event.${CUDIngestionStrategy.ID_KEY}.${key.quote()}}) - |DETACH DELETE n - """.trimMargin(), nodeDeleteEvent.query) - assertEquals(1, nodeDeleteEvent.events.size) - assertNodeEventsContainsKey(updateNode, key) - } - -} \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/common/strategy/NodePatternIngestionStrategyTest.kt b/extended/src/test/kotlin/apoc/kafka/common/strategy/NodePatternIngestionStrategyTest.kt deleted file mode 100644 index 78ab9d1795..0000000000 --- 
a/extended/src/test/kotlin/apoc/kafka/common/strategy/NodePatternIngestionStrategyTest.kt +++ /dev/null @@ -1,196 +0,0 @@ -package apoc.kafka.common.strategy - -import org.junit.Test -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.service.sink.strategy.NodePatternConfiguration -import apoc.kafka.service.sink.strategy.NodePatternIngestionStrategy -import apoc.kafka.utils.KafkaUtil -import kotlin.test.assertEquals - -class NodePatternIngestionStrategyTest { - - @Test - fun `should get all properties`() { - // given - val config = NodePatternConfiguration.parse("(:LabelA:LabelB{!id})") - val strategy = NodePatternIngestionStrategy(config) - val data = mapOf("id" to 1, "foo" to "foo", "bar" to "bar", "foobar" to "foobar") - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeNodeEvents(events) - - // then - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (n:LabelA:LabelB{id: event.keys.id}) - |SET n = event.properties - |SET n += event.keys - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("keys" to mapOf("id" to 1), - "properties" to mapOf("foo" to "foo", "bar" to "bar", "foobar" to "foobar")) - ), - queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeRelationshipEvents(events)) - } - - @Test - fun `should get nested properties`() { - // given - val config = NodePatternConfiguration.parse("(:LabelA:LabelB{!id, foo.bar})") - val strategy = NodePatternIngestionStrategy(config) - val data = mapOf("id" to 1, "foo" to mapOf("bar" to "bar", "foobar" to "foobar")) - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeNodeEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (n:LabelA:LabelB{id: event.keys.id}) - |SET n = event.properties - |SET n += event.keys - """.trimMargin(), - queryEvents[0].query) - assertEquals(listOf(mapOf("keys" to mapOf("id" to 1), - "properties" to mapOf("foo.bar" to "bar"))), - queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeRelationshipEvents(events)) - } - - @Test - fun `should exclude nested properties`() { - // given - val config = NodePatternConfiguration.parse("(:LabelA:LabelB{!id, -foo})") - val strategy = NodePatternIngestionStrategy(config) - val map = mapOf("id" to 1, "foo" to mapOf("bar" to "bar", "foobar" to "foobar"), "prop" to 100) - - // when - val events = listOf(StreamsSinkEntity(map, map)) - val queryEvents = strategy.mergeNodeEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (n:LabelA:LabelB{id: event.keys.id}) - |SET n = event.properties - |SET n += event.keys - """.trimMargin(), - queryEvents[0].query) - assertEquals(listOf(mapOf("keys" to mapOf("id" to 1), - "properties" to mapOf("prop" to 100))), - queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeRelationshipEvents(events)) - } - - @Test - fun `should include nested properties`() { - // given - val config = NodePatternConfiguration.parse("(:LabelA:LabelB{!id, foo})") - val strategy = 
NodePatternIngestionStrategy(config) - val data = mapOf("id" to 1, "foo" to mapOf("bar" to "bar", "foobar" to "foobar"), "prop" to 100) - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeNodeEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (n:LabelA:LabelB{id: event.keys.id}) - |SET n = event.properties - |SET n += event.keys - """.trimMargin(), - queryEvents[0].query) - assertEquals(listOf(mapOf("keys" to mapOf("id" to 1), - "properties" to mapOf("foo.bar" to "bar", "foo.foobar" to "foobar"))), - queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeRelationshipEvents(events)) - } - - @Test - fun `should exclude the properties`() { - // given - val config = NodePatternConfiguration.parse("(:LabelA:LabelB{!id,-foo,-bar})") - val strategy = NodePatternIngestionStrategy(config) - val data = mapOf("id" to 1, "foo" to "foo", "bar" to "bar", "foobar" to "foobar") - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeNodeEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (n:LabelA:LabelB{id: event.keys.id}) - |SET n = event.properties - |SET n += event.keys - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("keys" to mapOf("id" to 1), "properties" to mapOf("foobar" to "foobar"))), queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeRelationshipEvents(events)) - } - - @Test - fun `should include the properties`() { - // given - val config = NodePatternConfiguration.parse("(:LabelA:LabelB{!id,foo,bar})") - val strategy = NodePatternIngestionStrategy(config) - val data = mapOf("id" to 1, "foo" to "foo", "bar" to "bar", "foobar" to "foobar") - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeNodeEvents(events) - - // then - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (n:LabelA:LabelB{id: event.keys.id}) - |SET n = event.properties - |SET n += event.keys - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("keys" to mapOf("id" to 1), "properties" to mapOf("foo" to "foo", "bar" to "bar"))), queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeRelationshipEvents(events)) - } - - @Test - fun `should delete the node`() { - // given - val config = NodePatternConfiguration.parse("(:LabelA:LabelB{!id})") - val strategy = NodePatternIngestionStrategy(config) - val data = mapOf("id" to 1, "foo" to "foo", "bar" to "bar", "foobar" to "foobar") - - // when - val events = listOf(StreamsSinkEntity(data, null)) - val queryEvents = strategy.deleteNodeEvents(events) - - // then - assertEquals(""" - |${KafkaUtil.UNWIND} - |MATCH (n:LabelA:LabelB{id: event.keys.id}) - |DETACH DELETE n - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("keys" to mapOf("id" to 1))), - queryEvents[0].events) - assertEquals(emptyList(), strategy.mergeNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), 
strategy.mergeRelationshipEvents(events)) - } - -} \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/common/strategy/PatternConfigurationTest.kt b/extended/src/test/kotlin/apoc/kafka/common/strategy/PatternConfigurationTest.kt deleted file mode 100644 index ccbf862c78..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/common/strategy/PatternConfigurationTest.kt +++ /dev/null @@ -1,492 +0,0 @@ -package apoc.kafka.common.strategy - -import apoc.kafka.service.sink.strategy.NodePatternConfiguration -import apoc.kafka.service.sink.strategy.PatternConfigurationType -import apoc.kafka.service.sink.strategy.RelationshipPatternConfiguration -import org.junit.Test -import kotlin.test.assertEquals - -class NodePatternConfigurationTest { - - @Test - fun `should extract all params`() { - // given - val pattern = "(:LabelA:LabelB{!id,*})" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.ALL, - labels = listOf("LabelA", "LabelB"), properties = emptyList()) - assertEquals(expected, result) - } - - @Test - fun `should extract all fixed params`() { - // given - val pattern = "(:LabelA{!id,foo,bar})" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.INCLUDE, - labels = listOf("LabelA"), properties = listOf("foo", "bar")) - assertEquals(expected, result) - } - - @Test - fun `should extract complex params`() { - // given - val pattern = "(:LabelA{!id,foo.bar})" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.INCLUDE, - labels = listOf("LabelA"), properties = listOf("foo.bar")) - assertEquals(expected, result) - } - - @Test - fun `should extract composite keys with fixed params`() { - // given - val pattern = "(:LabelA{!idA,!idB,foo,bar})" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("idA", "idB"), type = PatternConfigurationType.INCLUDE, - labels = listOf("LabelA"), properties = listOf("foo", "bar")) - assertEquals(expected, result) - } - - @Test - fun `should extract all excluded params`() { - // given - val pattern = "(:LabelA{!id,-foo,-bar})" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.EXCLUDE, - labels = listOf("LabelA"), properties = listOf("foo", "bar")) - assertEquals(expected, result) - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because of mixed configuration`() { - // given - val pattern = "(:LabelA{!id,-foo,bar})" - - try { - // when - NodePatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Node pattern $pattern is not homogeneous", e.message) - throw e - } - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because of invalid pattern`() { - // given - val pattern = "(LabelA{!id,-foo,bar})" - - try { - // when - NodePatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Node pattern $pattern is invalid", e.message) - throw e - } - } - - @Test(expected = IllegalArgumentException::class) - fun `should 
throw an exception because the pattern should contains a key`() { - // given - val pattern = "(:LabelA{id,-foo,bar})" - - try { - // when - NodePatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Node pattern $pattern must contains at lest one key", e.message) - throw e - } - } - - @Test - fun `should extract all params - simple`() { - // given - val pattern = "LabelA:LabelB{!id,*}" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.ALL, - labels = listOf("LabelA", "LabelB"), properties = emptyList()) - assertEquals(expected, result) - } - - @Test - fun `should extract all fixed params - simple`() { - // given - val pattern = "LabelA{!id,foo,bar}" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.INCLUDE, - labels = listOf("LabelA"), properties = listOf("foo", "bar")) - assertEquals(expected, result) - } - - @Test - fun `should extract complex params - simple`() { - // given - val pattern = "LabelA{!id,foo.bar}" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.INCLUDE, - labels = listOf("LabelA"), properties = listOf("foo.bar")) - assertEquals(expected, result) - } - - @Test - fun `should extract composite keys with fixed params - simple`() { - // given - val pattern = "LabelA{!idA,!idB,foo,bar}" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("idA", "idB"), type = PatternConfigurationType.INCLUDE, - labels = listOf("LabelA"), properties = listOf("foo", "bar")) - assertEquals(expected, result) - } - - @Test - fun `should extract all excluded params - simple`() { - // given - val pattern = "LabelA{!id,-foo,-bar}" - - // when - val result = NodePatternConfiguration.parse(pattern) - - // then - val expected = NodePatternConfiguration(keys = setOf("id"), type = PatternConfigurationType.EXCLUDE, - labels = listOf("LabelA"), properties = listOf("foo", "bar")) - assertEquals(expected, result) - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because of mixed configuration - simple`() { - // given - val pattern = "LabelA{!id,-foo,bar}" - - try { - // when - NodePatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Node pattern $pattern is not homogeneous", e.message) - throw e - } - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because the pattern should contains a key - simple`() { - // given - val pattern = "LabelA{id,-foo,bar}" - - try { - // when - NodePatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Node pattern $pattern must contains at lest one key", e.message) - throw e - } - } -} - -class RelationshipPatternConfigurationTest { - - @Test - fun `should extract all params`() { - // given - val startPattern = "LabelA{!id,aa}" - val endPattern = "LabelB{!idB,bb}" - val pattern = "(:$startPattern)-[:REL_TYPE]->(:$endPattern)" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = NodePatternConfiguration.parse(startPattern) - val end = 
NodePatternConfiguration.parse(endPattern) - val properties = emptyList() - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = properties, type = PatternConfigurationType.ALL - ) - assertEquals(expected, result) - } - - @Test - fun `should extract all params with reverse source and target`() { - // given - val startPattern = "LabelA{!id,aa}" - val endPattern = "LabelB{!idB,bb}" - val pattern = "(:$startPattern)<-[:REL_TYPE]-(:$endPattern)" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = NodePatternConfiguration.parse(startPattern) - val end = NodePatternConfiguration.parse(endPattern) - val properties = emptyList() - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = end, end = start, relType = relType, - properties = properties, type = PatternConfigurationType.ALL - ) - assertEquals(expected, result) - } - - @Test - fun `should extract all fixed params`() { - // given - val startPattern = "LabelA{!id}" - val endPattern = "LabelB{!idB}" - val pattern = "(:$startPattern)-[:REL_TYPE{foo, BAR}]->(:$endPattern)" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = RelationshipPatternConfiguration.getNodeConf(startPattern) - val end = RelationshipPatternConfiguration.getNodeConf(endPattern) - val properties = listOf("foo", "BAR") - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = properties, type = PatternConfigurationType.INCLUDE - ) - assertEquals(expected, result) - } - - @Test - fun `should extract complex params`() { - // given - val startPattern = "LabelA{!id}" - val endPattern = "LabelB{!idB}" - val pattern = "(:$startPattern)-[:REL_TYPE{foo.BAR, BAR.foo}]->(:$endPattern)" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = RelationshipPatternConfiguration.getNodeConf(startPattern) - val end = RelationshipPatternConfiguration.getNodeConf(endPattern) - val properties = listOf("foo.BAR", "BAR.foo") - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = properties, type = PatternConfigurationType.INCLUDE - ) - assertEquals(expected, result) - } - - @Test - fun `should extract all excluded params`() { - // given - val startPattern = "LabelA{!id}" - val endPattern = "LabelB{!idB}" - val pattern = "(:$startPattern)-[:REL_TYPE{-foo, -BAR}]->(:$endPattern)" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = RelationshipPatternConfiguration.getNodeConf(startPattern) - val end = RelationshipPatternConfiguration.getNodeConf(endPattern) - val properties = listOf("foo", "BAR") - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = properties, type = PatternConfigurationType.EXCLUDE - ) - assertEquals(expected, result) - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because of mixed configuration`() { - // given - val pattern = "(:LabelA{!id})-[:REL_TYPE{foo, -BAR}]->(:LabelB{!idB})" - - try { - // when - RelationshipPatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Relationship pattern $pattern is not homogeneous", e.message) - throw e - } - } - 
- @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because the pattern should contains nodes with only ids`() { - // given - val pattern = "(:LabelA{id})-[:REL_TYPE{foo,BAR}]->(:LabelB{!idB})" - - try { - // when - RelationshipPatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Relationship pattern $pattern is invalid", e.message) - throw e - } - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because the pattern is invalid`() { - // given - val pattern = "(LabelA{!id})-[:REL_TYPE{foo,BAR}]->(:LabelB{!idB})" - - try { - // when - RelationshipPatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Relationship pattern $pattern is invalid", e.message) - throw e - } - } - - @Test - fun `should extract all params - simple`() { - // given - val startPattern = "LabelA{!id,aa}" - val endPattern = "LabelB{!idB,bb}" - val pattern = "$startPattern REL_TYPE $endPattern" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = NodePatternConfiguration.parse(startPattern) - val end = NodePatternConfiguration.parse(endPattern) - val properties = emptyList() - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = properties, type = PatternConfigurationType.ALL - ) - assertEquals(expected, result) - } - - @Test - fun `should extract all fixed params - simple`() { - // given - val startPattern = "LabelA{!id}" - val endPattern = "LabelB{!idB}" - val pattern = "$startPattern REL_TYPE{foo, BAR} $endPattern" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = RelationshipPatternConfiguration.getNodeConf(startPattern) - val end = RelationshipPatternConfiguration.getNodeConf(endPattern) - val properties = listOf("foo", "BAR") - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = properties, type = PatternConfigurationType.INCLUDE - ) - assertEquals(expected, result) - } - - @Test - fun `should extract complex params - simple`() { - // given - val startPattern = "LabelA{!id}" - val endPattern = "LabelB{!idB}" - val pattern = "$startPattern REL_TYPE{foo.BAR, BAR.foo} $endPattern" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = RelationshipPatternConfiguration.getNodeConf(startPattern) - val end = RelationshipPatternConfiguration.getNodeConf(endPattern) - val properties = listOf("foo.BAR", "BAR.foo") - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = properties, type = PatternConfigurationType.INCLUDE - ) - assertEquals(expected, result) - } - - @Test - fun `should extract all excluded params - simple`() { - // given - val startPattern = "LabelA{!id}" - val endPattern = "LabelB{!idB}" - val pattern = "$startPattern REL_TYPE{-foo, -BAR} $endPattern" - - // when - val result = RelationshipPatternConfiguration.parse(pattern) - - // then - val start = RelationshipPatternConfiguration.getNodeConf(startPattern) - val end = RelationshipPatternConfiguration.getNodeConf(endPattern) - val properties = listOf("foo", "BAR") - val relType = "REL_TYPE" - val expected = RelationshipPatternConfiguration(start = start, end = end, relType = relType, - properties = 
properties, type = PatternConfigurationType.EXCLUDE - ) - assertEquals(expected, result) - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because of mixed configuration - simple`() { - // given - val pattern = "LabelA{!id} REL_TYPE{foo, -BAR} LabelB{!idB}" - - try { - // when - RelationshipPatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Relationship pattern $pattern is not homogeneous", e.message) - throw e - } - } - - @Test(expected = IllegalArgumentException::class) - fun `should throw an exception because the pattern should contains nodes with only ids - simple`() { - // given - val pattern = "LabelA{id} REL_TYPE{foo,BAR} LabelB{!idB}" - - try { - // when - RelationshipPatternConfiguration.parse(pattern) - } catch (e: IllegalArgumentException) { - // then - assertEquals("The Relationship pattern $pattern is invalid", e.message) - throw e - } - } -} \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/common/strategy/RelationshipPatternIngestionStrategyTest.kt b/extended/src/test/kotlin/apoc/kafka/common/strategy/RelationshipPatternIngestionStrategyTest.kt deleted file mode 100644 index de0ef4fb3a..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/common/strategy/RelationshipPatternIngestionStrategyTest.kt +++ /dev/null @@ -1,196 +0,0 @@ -package apoc.kafka.common.strategy - -import org.junit.Test -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.service.sink.strategy.RelationshipPatternConfiguration -import apoc.kafka.service.sink.strategy.RelationshipPatternIngestionStrategy -import apoc.kafka.utils.KafkaUtil -import kotlin.test.assertEquals - -class RelationshipPatternIngestionStrategyTest { - - @Test - fun `should get all properties`() { - // given - val startPattern = "LabelA{!idStart}" - val endPattern = "LabelB{!idEnd}" - val pattern = "(:$startPattern)-[:REL_TYPE]->(:$endPattern)" - val config = RelationshipPatternConfiguration.parse(pattern) - val strategy = RelationshipPatternIngestionStrategy(config) - val data = mapOf("idStart" to 1, "idEnd" to 2, - "foo" to "foo", - "bar" to "bar") - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeRelationshipEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (start:LabelA{idStart: event.start.keys.idStart}) - |SET start = event.start.properties - |SET start += event.start.keys - |MERGE (end:LabelB{idEnd: event.end.keys.idEnd}) - |SET end = event.end.properties - |SET end += event.end.keys - |MERGE (start)-[r:REL_TYPE]->(end) - |SET r = event.properties - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("start" to mapOf("keys" to mapOf("idStart" to 1), "properties" to emptyMap()), - "end" to mapOf("keys" to mapOf("idEnd" to 2), "properties" to emptyMap()), - "properties" to mapOf("foo" to "foo", "bar" to "bar"))), queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeNodeEvents(events)) - } - - @Test - fun `should get all properties - simple`() { - // given - val startPattern = "LabelA{!idStart}" - val endPattern = "LabelB{!idEnd}" - val pattern = "$startPattern REL_TYPE $endPattern" - val config = RelationshipPatternConfiguration.parse(pattern) - val strategy = RelationshipPatternIngestionStrategy(config) - val data = 
mapOf("idStart" to 1, "idEnd" to 2, - "foo" to "foo", - "bar" to "bar") - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeRelationshipEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (start:LabelA{idStart: event.start.keys.idStart}) - |SET start = event.start.properties - |SET start += event.start.keys - |MERGE (end:LabelB{idEnd: event.end.keys.idEnd}) - |SET end = event.end.properties - |SET end += event.end.keys - |MERGE (start)-[r:REL_TYPE]->(end) - |SET r = event.properties - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("start" to mapOf("keys" to mapOf("idStart" to 1), "properties" to emptyMap()), - "end" to mapOf("keys" to mapOf("idEnd" to 2), "properties" to emptyMap()), - "properties" to mapOf("foo" to "foo", "bar" to "bar"))), queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeNodeEvents(events)) - } - - @Test - fun `should get all properties with reverse start-end`() { - // given - val startPattern = "LabelA{!idStart}" - val endPattern = "LabelB{!idEnd}" - val pattern = "(:$endPattern)<-[:REL_TYPE]-(:$startPattern)" - val config = RelationshipPatternConfiguration.parse(pattern) - val strategy = RelationshipPatternIngestionStrategy(config) - val data = mapOf("idStart" to 1, "idEnd" to 2, - "foo" to "foo", - "bar" to "bar") - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeRelationshipEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (start:LabelA{idStart: event.start.keys.idStart}) - |SET start = event.start.properties - |SET start += event.start.keys - |MERGE (end:LabelB{idEnd: event.end.keys.idEnd}) - |SET end = event.end.properties - |SET end += event.end.keys - |MERGE (start)-[r:REL_TYPE]->(end) - |SET r = event.properties - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("start" to mapOf("keys" to mapOf("idStart" to 1), "properties" to emptyMap()), - "end" to mapOf("keys" to mapOf("idEnd" to 2), "properties" to emptyMap()), - "properties" to mapOf("foo" to "foo", "bar" to "bar"))), queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeNodeEvents(events)) - } - - @Test - fun `should get nested properties`() { - // given - val startPattern = "LabelA{!idStart, foo.mapFoo}" - val endPattern = "LabelB{!idEnd, bar.mapBar}" - val pattern = "(:$startPattern)-[:REL_TYPE]->(:$endPattern)" - val config = RelationshipPatternConfiguration.parse(pattern) - val strategy = RelationshipPatternIngestionStrategy(config) - val data = mapOf("idStart" to 1, "idEnd" to 2, - "foo" to mapOf("mapFoo" to "mapFoo"), - "bar" to mapOf("mapBar" to "mapBar"), - "rel" to 1, - "map" to mapOf("a" to "a", "inner" to mapOf("b" to "b"))) - - // when - val events = listOf(StreamsSinkEntity(data, data)) - val queryEvents = strategy.mergeRelationshipEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MERGE (start:LabelA{idStart: event.start.keys.idStart}) - |SET start = event.start.properties - |SET start += event.start.keys - |MERGE (end:LabelB{idEnd: event.end.keys.idEnd}) - |SET end = event.end.properties - 
|SET end += event.end.keys - |MERGE (start)-[r:REL_TYPE]->(end) - |SET r = event.properties - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf( - mapOf("start" to mapOf("keys" to mapOf("idStart" to 1), "properties" to mapOf("foo.mapFoo" to "mapFoo")), - "end" to mapOf("keys" to mapOf("idEnd" to 2), "properties" to mapOf("bar.mapBar" to "mapBar")), - "properties" to mapOf("rel" to 1, "map.a" to "a", "map.inner.b" to "b")) - ), queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.deleteRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeNodeEvents(events)) - } - - @Test - fun `should delete the relationship`() { - // given - val startPattern = "LabelA{!idStart}" - val endPattern = "LabelB{!idEnd}" - val pattern = "(:$startPattern)-[:REL_TYPE]->(:$endPattern)" - val config = RelationshipPatternConfiguration.parse(pattern) - val strategy = RelationshipPatternIngestionStrategy(config) - val data = mapOf("idStart" to 1, "idEnd" to 2, - "foo" to "foo", - "bar" to "bar") - - // when - val events = listOf(StreamsSinkEntity(data, null)) - val queryEvents = strategy.deleteRelationshipEvents(events) - - // then - assertEquals(1, queryEvents.size) - assertEquals(""" - |${KafkaUtil.UNWIND} - |MATCH (start:LabelA{idStart: event.start.keys.idStart}) - |MATCH (end:LabelB{idEnd: event.end.keys.idEnd}) - |MATCH (start)-[r:REL_TYPE]->(end) - |DELETE r - """.trimMargin(), queryEvents[0].query) - assertEquals(listOf(mapOf("start" to mapOf("keys" to mapOf("idStart" to 1), "properties" to emptyMap()), - "end" to mapOf("keys" to mapOf("idEnd" to 2), "properties" to emptyMap()))), queryEvents[0].events) - assertEquals(emptyList(), strategy.deleteNodeEvents(events)) - assertEquals(emptyList(), strategy.mergeRelationshipEvents(events)) - assertEquals(emptyList(), strategy.mergeNodeEvents(events)) - } - -} \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/common/strategy/SchemaIngestionStrategyTest.kt b/extended/src/test/kotlin/apoc/kafka/common/strategy/SchemaIngestionStrategyTest.kt deleted file mode 100644 index 3f9503cbf0..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/common/strategy/SchemaIngestionStrategyTest.kt +++ /dev/null @@ -1,496 +0,0 @@ -package apoc.kafka.common.strategy - -import org.junit.Test -import apoc.kafka.events.* -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.service.sink.strategy.SchemaIngestionStrategy -import apoc.kafka.utils.KafkaUtil -import kotlin.test.assertEquals -import kotlin.test.assertTrue - -class SchemaIngestionStrategyTest { - - @Test - fun `should create the Schema Query Strategy for mixed events`() { - // given - val constraints = listOf(Constraint(label = "User", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("name", "surname"))) - val nodeSchema = Schema(properties = mapOf("name" to "String", "surname" to "String", "comp@ny" to "String"), constraints = constraints) - val relSchema = Schema(properties = mapOf("since" to "Long"), constraints = constraints) - val cdcDataStart = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 0, - txEventsCount = 3, - operation = OperationType.created - ), - payload = NodePayload(id = "0", - before = null, - after = NodeChange(properties = mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA"), labels = listOf("User")) - ), - schema = nodeSchema - ) - val cdcDataEnd = 
StreamsTransactionEvent(meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 1, - txEventsCount = 3, - operation = OperationType.created - ), - payload = NodePayload(id = "1", - before = null, - after = NodeChange(properties = mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j"), labels = listOf("User")) - ), - schema = nodeSchema - ) - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.created - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "0", labels = listOf("User", "NewLabel"), ids = mapOf("name" to "Andrea", "surname" to "Santurbano")), - end = RelationshipNodeChange(id = "1", labels = listOf("User", "NewLabel"), ids = mapOf("name" to "Michael", "surname" to "Hunger")), - after = RelationshipChange(properties = mapOf("since" to 2014)), - before = null, - label = "KNOWS WHO" - ), - schema = relSchema - ) - val cdcQueryStrategy = SchemaIngestionStrategy() - val txEvents = listOf(StreamsSinkEntity(cdcDataStart, cdcDataStart), - StreamsSinkEntity(cdcDataEnd, cdcDataEnd), - StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val nodeEvents = cdcQueryStrategy.mergeNodeEvents(txEvents) - val nodeDeleteEvents = cdcQueryStrategy.deleteNodeEvents(txEvents) - - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(0, nodeDeleteEvents.size) - assertEquals(1, nodeEvents.size) - val nodeQuery = nodeEvents[0].query - val expectedNodeQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (n:User{surname: event.properties.surname, name: event.properties.name}) - |SET n = event.properties - """.trimMargin() - assertEquals(expectedNodeQuery, nodeQuery.trimIndent()) - val eventsNodeList = nodeEvents[0].events - assertEquals(2, eventsNodeList.size) - val expectedNodeEvents = listOf( - mapOf("properties" to mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA")), - mapOf("properties" to mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j")) - ) - assertEquals(expectedNodeEvents, eventsNodeList) - - assertEquals(0, relationshipDeleteEvents.size) - assertEquals(1, relationshipEvents.size) - val relQuery = relationshipEvents[0].query - val expectedRelQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (start:User{name: event.start.name, surname: event.start.surname}) - |MERGE (end:User{name: event.end.name, surname: event.end.surname}) - |MERGE (start)-[r:`KNOWS WHO`]->(end) - |SET r = event.properties - """.trimMargin() - assertEquals(expectedRelQuery, relQuery.trimIndent()) - val eventsRelList = relationshipEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf( - mapOf("start" to mapOf("name" to "Andrea", "surname" to "Santurbano"), - "end" to mapOf("name" to "Michael", "surname" to "Hunger"), "properties" to mapOf("since" to 2014)) - ) - assertEquals(expectedRelEvents, eventsRelList) - } - - @Test - fun `should create the Schema Query Strategy for nodes`() { - // given - val nodeSchema = Schema(properties = mapOf("name" to "String", "surname" to "String", "comp@ny" to "String"), - constraints = listOf(Constraint(label = "User", type = StreamsConstraintType.UNIQUE, properties = setOf("name", "surname")))) - val cdcDataStart = 
StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 0, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = NodePayload(id = "0", - before = NodeChange(properties = mapOf("name" to "Andrea", "surname" to "Santurbano"), labels = listOf("User", "ToRemove")), - after = NodeChange(properties = mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA"), labels = listOf("User", "NewLabel")) - ), - schema = nodeSchema - ) - val cdcDataEnd = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 1, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = NodePayload(id = "1", - before = NodeChange(properties = mapOf("name" to "Michael", "surname" to "Hunger"), labels = listOf("User", "ToRemove")), - after = NodeChange(properties = mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j"), labels = listOf("User", "NewLabel")) - ), - schema = nodeSchema - ) - val cdcQueryStrategy = SchemaIngestionStrategy() - val txEvents = listOf( - StreamsSinkEntity(cdcDataStart, cdcDataStart), - StreamsSinkEntity(cdcDataEnd, cdcDataEnd)) - - // when - val nodeEvents = cdcQueryStrategy.mergeNodeEvents(txEvents) - val nodeDeleteEvents = cdcQueryStrategy.deleteNodeEvents(txEvents) - - // then - assertEquals(0, nodeDeleteEvents.size) - assertEquals(1, nodeEvents.size) - val nodeQuery = nodeEvents[0].query - val expectedNodeQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (n:User{surname: event.properties.surname, name: event.properties.name}) - |SET n = event.properties - |SET n:NewLabel - |REMOVE n:ToRemove - """.trimMargin() - assertEquals(expectedNodeQuery, nodeQuery.trimIndent()) - val eventsNodeList = nodeEvents[0].events - assertEquals(2, eventsNodeList.size) - val expectedNodeEvents = listOf( - mapOf("properties" to mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA")), - mapOf("properties" to mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j")) - ) - assertEquals(expectedNodeEvents, eventsNodeList) - } - - @Test - fun `should create the Schema Query Strategy for relationships`() { - // given - val relSchema = Schema(properties = mapOf("since" to "Long"), constraints = listOf( - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("name", "surname")), - Constraint(label = "Product Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("name")))) - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "1", labels = listOf("User Ext", "NewLabel"), ids = mapOf("name" to "Michael", "surname" to "Hunger")), - end = RelationshipNodeChange(id = "2", labels = listOf("Product Ext", "NewLabelA"), ids = mapOf("name" to "My Awesome Product")), - after = RelationshipChange(properties = mapOf("since" to 2014)), - before = null, - label = "HAS BOUGHT" - ), - schema = relSchema - ) - val cdcQueryStrategy = SchemaIngestionStrategy() - val txEvents = listOf(StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = 
cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(0, relationshipDeleteEvents.size) - assertEquals(1, relationshipEvents.size) - val relQuery = relationshipEvents[0].query - val expectedRelQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (start:`User Ext`{name: event.start.name, surname: event.start.surname}) - |MERGE (end:`Product Ext`{name: event.end.name}) - |MERGE (start)-[r:`HAS BOUGHT`]->(end) - |SET r = event.properties - """.trimMargin() - assertEquals(expectedRelQuery, relQuery.trimIndent()) - val eventsRelList = relationshipEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf( - mapOf("start" to mapOf("name" to "Michael", "surname" to "Hunger"), - "end" to mapOf("name" to "My Awesome Product"), - "properties" to mapOf("since" to 2014)) - ) - assertEquals(expectedRelEvents, eventsRelList) - } - - @Test - fun `should create the Schema Query Strategy for relationships with multiple unique constraints`() { - // the Schema Query Strategy leverage the first constraint with lowest properties - // with the same size, we take the first sorted properties list alphabetically - - // given - // we shuffle the constraints to ensure that the result doesn't depend from the ordering - val constraintsList = listOf( - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("address")), - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("country")), - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("name", "surname")), - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("profession", "another_one")), - Constraint(label = "Product Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("code")), - Constraint(label = "Product Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("name")) - ).shuffled() - - val relSchema = Schema(properties = mapOf("since" to "Long"), constraints = constraintsList) - val idsStart = mapOf("name" to "Sherlock", - "surname" to "Holmes", - "country" to "UK", - "profession" to "detective", - "another_one" to "foo", - "address" to "Baker Street") - val idsEnd = mapOf("name" to "My Awesome Product", "code" to 17294) - - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "1", labels = listOf("User Ext", "NewLabel"), ids = idsStart), - end = RelationshipNodeChange(id = "2", labels = listOf("Product Ext", "NewLabelA"), ids = idsEnd), - after = RelationshipChange(properties = mapOf("since" to 2014)), - before = null, - label = "HAS BOUGHT" - ), - schema = relSchema - ) - val cdcQueryStrategy = SchemaIngestionStrategy() - val txEvents = listOf(StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(0, relationshipDeleteEvents.size) - assertEquals(1, relationshipEvents.size) - val relQuery = relationshipEvents[0].query - val expectedRelQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (start:`User Ext`{address: event.start.address}) - |MERGE (end:`Product Ext`{code: event.end.code}) - |MERGE 
(start)-[r:`HAS BOUGHT`]->(end) - |SET r = event.properties - """.trimMargin() - assertEquals(expectedRelQuery, relQuery.trimIndent()) - val eventsRelList = relationshipEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf( - mapOf("start" to mapOf("address" to "Baker Street"), - "end" to mapOf("code" to 17294), - "properties" to mapOf("since" to 2014)) - ) - assertEquals(expectedRelEvents, eventsRelList) - } - - @Test - fun `should create the Schema Query Strategy for relationships with multiple unique constraints and labels`() { - // the Schema Query Strategy leverage the first constraint with lowest properties - // with the same size, we take the first label in alphabetical order - // finally, with same label name, we take the first sorted properties list alphabetically - - // given - // we shuffle the constraints to ensure that the result doesn't depend from the ordering - val constraintsList = listOf( - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("address")), - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("country")), - Constraint(label = "User AAA", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("another_two")), - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("name", "surname")), - Constraint(label = "User Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("profession", "another_one")), - Constraint(label = "Product Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("code")), - Constraint(label = "Product Ext", type = StreamsConstraintType.UNIQUE, properties = linkedSetOf("name")) - ).shuffled() - - val relSchema = Schema(properties = mapOf("since" to "Long"), constraints = constraintsList) - val idsStart = mapOf("name" to "Sherlock", - "surname" to "Holmes", - "country" to "UK", - "profession" to "detective", - "another_one" to "foo", - "address" to "Baker Street", - "another_two" to "Dunno") - val idsEnd = mapOf("name" to "My Awesome Product", "code" to 17294) - - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "1", labels = listOf("User Ext", "User AAA", "NewLabel"), ids = idsStart), - end = RelationshipNodeChange(id = "2", labels = listOf("Product Ext", "NewLabelA"), ids = idsEnd), - after = RelationshipChange(properties = mapOf("since" to 2014)), - before = null, - label = "HAS BOUGHT" - ), - schema = relSchema - ) - val cdcQueryStrategy = SchemaIngestionStrategy() - val txEvents = listOf(StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(0, relationshipDeleteEvents.size) - assertEquals(1, relationshipEvents.size) - val relQuery = relationshipEvents[0].query - val expectedRelQueryOne = """ - |${KafkaUtil.UNWIND} - |MERGE (start:`User AAA`:`User Ext`{another_two: event.start.another_two}) - |MERGE (end:`Product Ext`{code: event.end.code}) - |MERGE (start)-[r:`HAS BOUGHT`]->(end) - |SET r = event.properties - """.trimMargin() - val expectedRelQueryTwo = """ - |${KafkaUtil.UNWIND} - |MERGE (start:`User 
Ext`:`User AAA`{another_two: event.start.another_two}) - |MERGE (end:`Product Ext`{code: event.end.code}) - |MERGE (start)-[r:`HAS BOUGHT`]->(end) - |SET r = event.properties - """.trimMargin() - assertTrue { listOf(expectedRelQueryOne, expectedRelQueryTwo).contains(relQuery.trimIndent()) } - val eventsRelList = relationshipEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf( - mapOf("start" to mapOf("another_two" to "Dunno"), - "end" to mapOf("code" to 17294), - "properties" to mapOf("since" to 2014)) - ) - assertEquals(expectedRelEvents, eventsRelList) - } - - @Test - fun `should create the Schema Query Strategy for node deletes`() { - // given - val nodeSchema = Schema(properties = mapOf("name" to "String", "surname" to "String", "comp@ny" to "String"), - constraints = listOf(Constraint(label = "User", type = StreamsConstraintType.UNIQUE, properties = setOf("name", "surname")))) - val cdcDataStart = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 0, - txEventsCount = 3, - operation = OperationType.deleted - ), - payload = NodePayload(id = "0", - before = NodeChange(properties = mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA"), labels = listOf("User")), - after = null - ), - schema = nodeSchema - ) - val cdcDataEnd = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 1, - txEventsCount = 3, - operation = OperationType.deleted - ), - payload = NodePayload(id = "1", - before = NodeChange(properties = mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j"), labels = listOf("User")), - after = null - ), - schema = nodeSchema - ) - val cdcQueryStrategy = SchemaIngestionStrategy() - val txEvents = listOf( - StreamsSinkEntity(cdcDataStart, cdcDataStart), - StreamsSinkEntity(cdcDataEnd, cdcDataEnd)) - - // when - val nodeEvents = cdcQueryStrategy.mergeNodeEvents(txEvents) - val nodeDeleteEvents = cdcQueryStrategy.deleteNodeEvents(txEvents) - - // then - assertEquals(1, nodeDeleteEvents.size) - assertEquals(0, nodeEvents.size) - val nodeQuery = nodeDeleteEvents[0].query - val expectedNodeQuery = """ - |${KafkaUtil.UNWIND} - |MATCH (n:User{surname: event.properties.surname, name: event.properties.name}) - |DETACH DELETE n - """.trimMargin() - assertEquals(expectedNodeQuery, nodeQuery.trimIndent()) - val eventsNodeList = nodeDeleteEvents[0].events - assertEquals(2, eventsNodeList.size) - val expectedNodeEvents = listOf( - mapOf("properties" to mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA")), - mapOf("properties" to mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j")) - ) - assertEquals(expectedNodeEvents, eventsNodeList) - } - - @Test - fun `should create the Schema Query Strategy for relationships deletes`() { - // given - val relSchema = Schema(properties = mapOf("since" to "Long"), - constraints = listOf(Constraint(label = "User", type = StreamsConstraintType.UNIQUE, properties = setOf("name", "surname")))) - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.deleted - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "0", labels = listOf("User", "NewLabel"), ids = mapOf("name" to "Andrea", "surname" to "Santurbano")), - end = 
RelationshipNodeChange(id = "1", labels = listOf("User", "NewLabel"), ids = mapOf("name" to "Michael", "surname" to "Hunger")), - after = RelationshipChange(properties = mapOf("since" to 2014, "foo" to "label")), - before = RelationshipChange(properties = mapOf("since" to 2014)), - label = "KNOWS WHO" - ), - schema = relSchema - ) - val cdcQueryStrategy = SchemaIngestionStrategy() - val txEvents = listOf(StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(1, relationshipDeleteEvents.size) - assertEquals(0, relationshipEvents.size) - val relQuery = relationshipDeleteEvents[0].query - val expectedRelQuery = """ - |${KafkaUtil.UNWIND} - |MATCH (start:User{name: event.start.name, surname: event.start.surname}) - |MATCH (end:User{name: event.end.name, surname: event.end.surname}) - |MATCH (start)-[r:`KNOWS WHO`]->(end) - |DELETE r - """.trimMargin() - assertEquals(expectedRelQuery, relQuery.trimIndent()) - val eventsRelList = relationshipDeleteEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf( - mapOf("start" to mapOf("name" to "Andrea", "surname" to "Santurbano"), - "end" to mapOf("name" to "Michael", "surname" to "Hunger")) - ) - assertEquals(expectedRelEvents, eventsRelList) - } - -} \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/common/strategy/SourceIdIngestionStrategyTest.kt b/extended/src/test/kotlin/apoc/kafka/common/strategy/SourceIdIngestionStrategyTest.kt deleted file mode 100644 index 773992dc1d..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/common/strategy/SourceIdIngestionStrategyTest.kt +++ /dev/null @@ -1,331 +0,0 @@ -package apoc.kafka.common.strategy - -import org.junit.Test -import apoc.kafka.events.* -import apoc.kafka.service.StreamsSinkEntity -import apoc.kafka.service.sink.strategy.SourceIdIngestionStrategy -import apoc.kafka.service.sink.strategy.SourceIdIngestionStrategyConfig -import apoc.kafka.utils.KafkaUtil -import kotlin.test.assertEquals - -class SourceIdIngestionStrategyTest { - - @Test - fun `should create the Merge Query Strategy for mixed events`() { - // given - val cdcDataStart = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 0, - txEventsCount = 3, - operation = OperationType.created - ), - payload = NodePayload(id = "0", - before = null, - after = NodeChange(properties = mapOf("name" to "Andrea", "comp@ny" to "LARUS-BA"), labels = listOf("User")) - ), - schema = Schema() - ) - val cdcDataEnd = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 1, - txEventsCount = 3, - operation = OperationType.created - ), - payload = NodePayload(id = "1", - before = null, - after = NodeChange(properties = mapOf("name" to "Michael", "comp@ny" to "Neo4j"), labels = listOf("User")) - ), - schema = Schema() - ) - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.created - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "0", labels = listOf("User"), ids = emptyMap()), - end = RelationshipNodeChange(id = "1", labels = listOf("User"), ids = emptyMap()), - 
after = RelationshipChange(properties = mapOf("since" to 2014)), - before = null, - label = "KNOWS WHO" - ), - schema = Schema() - ) - val config = SourceIdIngestionStrategyConfig(labelName = "Custom SourceEvent", idName = "custom Id") - val cdcQueryStrategy = SourceIdIngestionStrategy(config) - val txEvents = listOf( - StreamsSinkEntity(cdcDataStart, cdcDataStart), - StreamsSinkEntity(cdcDataEnd, cdcDataEnd), - StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val nodeEvents = cdcQueryStrategy.mergeNodeEvents(txEvents) - val nodeDeleteEvents = cdcQueryStrategy.deleteNodeEvents(txEvents) - - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(0, nodeDeleteEvents.size) - assertEquals(1, nodeEvents.size) - val nodeQuery = nodeEvents[0].query - val expectedNodeQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (n:`Custom SourceEvent`{`custom Id`: event.id}) - |SET n = event.properties - |SET n.`custom Id` = event.id - |SET n:User - """.trimMargin() - assertEquals(expectedNodeQuery, nodeQuery.trimIndent()) - val eventsNodeList = nodeEvents[0].events - assertEquals(2, eventsNodeList.size) - val expectedNodeEvents = listOf( - mapOf("id" to "0", "properties" to mapOf("name" to "Andrea", "comp@ny" to "LARUS-BA")), - mapOf("id" to "1", "properties" to mapOf("name" to "Michael", "comp@ny" to "Neo4j")) - ) - assertEquals(expectedNodeEvents, eventsNodeList) - - assertEquals(0, relationshipDeleteEvents.size) - assertEquals(1, relationshipEvents.size) - val relQuery = relationshipEvents[0].query - val expectedRelQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (start:`Custom SourceEvent`{`custom Id`: event.start}) - |MERGE (end:`Custom SourceEvent`{`custom Id`: event.end}) - |MERGE (start)-[r:`KNOWS WHO`{`custom Id`: event.id}]->(end) - |SET r = event.properties - |SET r.`custom Id` = event.id - """.trimMargin() - assertEquals(expectedRelQuery, relQuery.trimIndent()) - val eventsRelList = relationshipEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf( - mapOf("id" to "2", "start" to "0", "end" to "1", "properties" to mapOf("since" to 2014)) - ) - assertEquals(expectedRelEvents, eventsRelList) - } - - @Test - fun `should create the Merge Query Strategy for node updates`() { - // given - val nodeSchema = Schema() - // given - val cdcDataStart = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 0, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = NodePayload(id = "0", - before = NodeChange(properties = mapOf("name" to "Andrea", "surname" to "Santurbano"), labels = listOf("User", "ToRemove")), - after = NodeChange(properties = mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA"), labels = listOf("User", "NewLabel")) - ), - schema = nodeSchema - ) - val cdcDataEnd = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 1, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = NodePayload(id = "1", - before = NodeChange(properties = mapOf("name" to "Michael", "surname" to "Hunger"), labels = listOf("User", "ToRemove")), - after = NodeChange(properties = mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j"), labels = listOf("User", "NewLabel")) - ), - schema = nodeSchema - ) - val cdcQueryStrategy 
= SourceIdIngestionStrategy() - val txEvents = listOf( - StreamsSinkEntity(cdcDataStart, cdcDataStart), - StreamsSinkEntity(cdcDataEnd, cdcDataEnd)) - - // when - val nodeEvents = cdcQueryStrategy.mergeNodeEvents(txEvents) - val nodeDeleteEvents = cdcQueryStrategy.deleteNodeEvents(txEvents) - - // then - assertEquals(0, nodeDeleteEvents.size) - assertEquals(1, nodeEvents.size) - val nodeQuery = nodeEvents[0].query - val expectedNodeQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (n:SourceEvent{sourceId: event.id}) - |SET n = event.properties - |SET n.sourceId = event.id - |REMOVE n:ToRemove - |SET n:NewLabel - """.trimMargin() - assertEquals(expectedNodeQuery, nodeQuery.trimIndent()) - val eventsNodeList = nodeEvents[0].events - assertEquals(2, eventsNodeList.size) - val expectedNodeEvents = listOf( - mapOf("id" to "0", "properties" to mapOf("name" to "Andrea", "surname" to "Santurbano", "comp@ny" to "LARUS-BA")), - mapOf("id" to "1", "properties" to mapOf("name" to "Michael", "surname" to "Hunger", "comp@ny" to "Neo4j")) - ) - assertEquals(expectedNodeEvents, eventsNodeList) - } - - @Test - fun `should create the Merge Query Strategy for relationships updates`() { - // given - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.updated - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "0", labels = listOf("User"), ids = emptyMap()), - end = RelationshipNodeChange(id = "1", labels = listOf("User"), ids = emptyMap()), - after = RelationshipChange(properties = mapOf("since" to 2014, "foo" to "label")), - before = RelationshipChange(properties = mapOf("since" to 2014)), - label = "KNOWS WHO" - ), - schema = Schema() - ) - val cdcQueryStrategy = SourceIdIngestionStrategy() - val txEvents = listOf(StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(0, relationshipDeleteEvents.size) - assertEquals(1, relationshipEvents.size) - val relQuery = relationshipEvents[0].query - val expectedRelQuery = """ - |${KafkaUtil.UNWIND} - |MERGE (start:SourceEvent{sourceId: event.start}) - |MERGE (end:SourceEvent{sourceId: event.end}) - |MERGE (start)-[r:`KNOWS WHO`{sourceId: event.id}]->(end) - |SET r = event.properties - |SET r.sourceId = event.id - """.trimMargin() - assertEquals(expectedRelQuery, relQuery.trimIndent()) - val eventsRelList = relationshipEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf( - mapOf("id" to "2", "start" to "0", "end" to "1", "properties" to mapOf("since" to 2014, "foo" to "label")) - ) - assertEquals(expectedRelEvents, eventsRelList) - } - - @Test - fun `should create the Merge Query Strategy for node deletes`() { - // given - val nodeSchema = Schema() - // given - val cdcDataStart = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 0, - txEventsCount = 3, - operation = OperationType.deleted - ), - payload = NodePayload(id = "0", - before = NodeChange(properties = mapOf("name" to "Andrea", "surname" to "Santurbano"), labels = listOf("User")), - after = null - ), - schema = nodeSchema - ) - val cdcDataEnd = StreamsTransactionEvent( - meta = Meta(timestamp = 
System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 1, - txEventsCount = 3, - operation = OperationType.deleted - ), - payload = NodePayload(id = "1", - before = NodeChange(properties = mapOf("name" to "Michael", "surname" to "Hunger"), labels = listOf("User")), - after = null - ), - schema = nodeSchema - ) - val cdcQueryStrategy = SourceIdIngestionStrategy() - val txEvents = listOf( - StreamsSinkEntity(cdcDataStart, cdcDataStart), - StreamsSinkEntity(cdcDataEnd, cdcDataEnd)) - - // when - val nodeEvents = cdcQueryStrategy.mergeNodeEvents(txEvents) - val nodeDeleteEvents = cdcQueryStrategy.deleteNodeEvents(txEvents) - - // then - assertEquals(1, nodeDeleteEvents.size) - assertEquals(0, nodeEvents.size) - val nodeQuery = nodeDeleteEvents[0].query - val expectedNodeQuery = """ - |${KafkaUtil.UNWIND} MATCH (n:SourceEvent{sourceId: event.id}) DETACH DELETE n - """.trimMargin() - assertEquals(expectedNodeQuery, nodeQuery.trimIndent()) - val eventsNodeList = nodeDeleteEvents[0].events - assertEquals(2, eventsNodeList.size) - val expectedNodeEvents = listOf( - mapOf("id" to "0"), - mapOf("id" to "1") - ) - assertEquals(expectedNodeEvents, eventsNodeList) - } - - @Test - fun `should create the Merge Query Strategy for relationships deletes`() { - // given - val cdcDataRelationship = StreamsTransactionEvent( - meta = Meta(timestamp = System.currentTimeMillis(), - username = "user", - txId = 1, - txEventId = 2, - txEventsCount = 3, - operation = OperationType.deleted - ), - payload = RelationshipPayload( - id = "2", - start = RelationshipNodeChange(id = "0", labels = listOf("User"), ids = emptyMap()), - end = RelationshipNodeChange(id = "1", labels = listOf("User"), ids = emptyMap()), - after = RelationshipChange(properties = mapOf("since" to 2014, "foo" to "label")), - before = RelationshipChange(properties = mapOf("since" to 2014)), - label = "KNOWS WHO" - ), - schema = Schema() - ) - val cdcQueryStrategy = SourceIdIngestionStrategy() - val txEvents = listOf(StreamsSinkEntity(cdcDataRelationship, cdcDataRelationship)) - - // when - val relationshipEvents = cdcQueryStrategy.mergeRelationshipEvents(txEvents) - val relationshipDeleteEvents = cdcQueryStrategy.deleteRelationshipEvents(txEvents) - - // then - assertEquals(1, relationshipDeleteEvents.size) - assertEquals(0, relationshipEvents.size) - val relQuery = relationshipDeleteEvents[0].query - val expectedRelQuery = """ - |${KafkaUtil.UNWIND} MATCH ()-[r:`KNOWS WHO`{sourceId: event.id}]-() DELETE r - """.trimMargin() - assertEquals(expectedRelQuery, relQuery.trimIndent()) - val eventsRelList = relationshipDeleteEvents[0].events - assertEquals(1, eventsRelList.size) - val expectedRelEvents = listOf(mapOf("id" to "2")) - assertEquals(expectedRelEvents, eventsRelList) - } - -} - diff --git a/extended/src/test/kotlin/apoc/kafka/common/support/KafkaTestUtils.kt b/extended/src/test/kotlin/apoc/kafka/common/support/KafkaTestUtils.kt index 9f3e4c0eb2..33ad395e18 100644 --- a/extended/src/test/kotlin/apoc/kafka/common/support/KafkaTestUtils.kt +++ b/extended/src/test/kotlin/apoc/kafka/common/support/KafkaTestUtils.kt @@ -56,7 +56,7 @@ object KafkaTestUtils { fun getDbServices(dbms: DatabaseManagementService): GraphDatabaseService { val db = dbms.database(GraphDatabaseSettings.DEFAULT_DATABASE_NAME) - TestUtil.registerProcedure(db, StreamsSinkProcedures::class.java, GlobalProcedures::class.java, PublishProcedures::class.java); + TestUtil.registerProcedure(db, GlobalProcedures::class.java, PublishProcedures::class.java); return db } } 
\ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/consumer/kafka/SchemaRegistryContainer.kt b/extended/src/test/kotlin/apoc/kafka/consumer/kafka/SchemaRegistryContainer.kt deleted file mode 100644 index fe17a6b78e..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/consumer/kafka/SchemaRegistryContainer.kt +++ /dev/null @@ -1,44 +0,0 @@ -package apoc.kafka.consumer.kafka - -import org.testcontainers.containers.GenericContainer -import org.testcontainers.containers.KafkaContainer -import org.testcontainers.containers.Network -import org.testcontainers.containers.SocatContainer -import java.util.stream.Stream - - -class SchemaRegistryContainer(version: String): GenericContainer("confluentinc/cp-schema-registry:$version") { - - private lateinit var proxy: SocatContainer - - override fun doStart() { - val networkAlias = networkAliases[0] - proxy = SocatContainer() - .withNetwork(network) - .withTarget(PORT, networkAlias) - - proxy.start() - super.doStart() - } - - fun withKafka(kafka: KafkaContainer): SchemaRegistryContainer? { - return kafka.network?.let { withKafka(it, kafka.networkAliases.map { "PLAINTEXT://$it:9092" }.joinToString(",")) } - } - - fun withKafka(network: Network, bootstrapServers: String): SchemaRegistryContainer { - withNetwork(network) - withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry") - withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", bootstrapServers) - return self() - } - - fun getSchemaRegistryUrl() = "http://${proxy.containerIpAddress}:${proxy.firstMappedPort}" - - override fun stop() { - Stream.of(Runnable { super.stop() }, Runnable { proxy.stop() }).parallel().forEach { it.run() } - } - - companion object { - @JvmStatic val PORT = 8081 - } -} \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterBaseTSE.kt b/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterBaseTSE.kt index cef4b1ea1a..f89d7833bc 100644 --- a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterBaseTSE.kt +++ b/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterBaseTSE.kt @@ -38,20 +38,6 @@ open class KafkaEventRouterBaseTSE { // TSE (Test Suit Element) KafkaEventRouterSuiteIT.tearDownContainer() } } - - // common methods - fun isValidRelationship(event: StreamsTransactionEvent, type: OperationType) = when (type) { - OperationType.created -> event.payload.before == null - && event.payload.after?.let { it.properties?.let { it.isNullOrEmpty() } } ?: false - && event.schema.properties == emptyMap() - OperationType.updated -> event.payload.before?.let { it.properties?.let { it.isNullOrEmpty() } } ?: false - && event.payload.after?.let { it.properties == mapOf("type" to "update") } ?: false - && event.schema.properties == mapOf("type" to "String") - OperationType.deleted -> event.payload.before?.let { it.properties == mapOf("type" to "update") } ?: false - && event.payload.after == null - && event.schema.properties == mapOf("type" to "String") - else -> throw IllegalArgumentException("Unsupported OperationType") - } } lateinit var kafkaConsumer: KafkaConsumer diff --git a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterProcedureTSE.kt b/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterProcedureTSE.kt index 85be632ccf..9906637601 100644 --- a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterProcedureTSE.kt +++ 
b/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterProcedureTSE.kt @@ -2,14 +2,12 @@ package apoc.kafka.producer.integrations import apoc.kafka.events.StreamsEvent import apoc.kafka.extensions.execute -// import apoc.kafka.support.start import apoc.kafka.utils.JSONUtils import apoc.util.ExtendedTestUtil import org.apache.kafka.clients.admin.AdminClient import org.apache.kafka.clients.admin.NewTopic import org.junit.Test import org.neo4j.graphdb.QueryExecutionException -import org.neo4j.graphdb.Result import java.util.* import kotlin.test.assertEquals import kotlin.test.assertFailsWith diff --git a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterTestCommon.kt b/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterTestCommon.kt deleted file mode 100644 index 8407423b7a..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventRouterTestCommon.kt +++ /dev/null @@ -1,53 +0,0 @@ -package apoc.kafka.producer.integrations - -import apoc.kafka.extensions.execute -import apoc.kafka.common.support.Assert -import org.apache.kafka.clients.admin.AdminClient -import org.apache.kafka.clients.admin.NewTopic -import org.apache.kafka.clients.consumer.ConsumerRecords -import org.apache.kafka.clients.consumer.KafkaConsumer -import org.hamcrest.Matchers -import org.neo4j.function.ThrowingSupplier -import org.neo4j.graphdb.GraphDatabaseService -import java.time.Duration -import java.util.concurrent.TimeUnit - -object KafkaEventRouterTestCommon { - - private fun createTopic(topic: String, numTopics: Int, withCompact: Boolean) = run { - val newTopic = NewTopic(topic, numTopics, 1) - if (withCompact) { - newTopic.configs(mapOf( - "cleanup.policy" to "compact", - "segment.ms" to "10", - "retention.ms" to "1", - "min.cleanable.dirty.ratio" to "0.01")) - } - newTopic - } - - fun createTopic(topic: String, bootstrapServerMap: Map, numTopics: Int = 1, withCompact: Boolean = true) { - AdminClient.create(bootstrapServerMap).use { - val topics = listOf(createTopic(topic, numTopics, withCompact)) - it.createTopics(topics).all().get() - } - } - - fun assertTopicFilled(kafkaConsumer: KafkaConsumer, - fromBeginning: Boolean = false, - timeout: Long = 30, - assertion: (ConsumerRecords) -> Boolean = { it.count() == 1 } - ) { - Assert.assertEventually(ThrowingSupplier { - if(fromBeginning) { - kafkaConsumer.seekToBeginning(kafkaConsumer.assignment()) - } - val records = kafkaConsumer.poll(Duration.ofSeconds(5)) - assertion(records) - }, Matchers.equalTo(true), timeout, TimeUnit.SECONDS) - } - - fun initDbWithLogStrategy(db: GraphDatabaseService, strategy: String, otherConfigs: Map? = null, constraints: List? 
= null) { - constraints?.forEach { db.execute(it) } - } -} diff --git a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventSinkSuiteIT.kt b/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventSinkSuiteIT.kt deleted file mode 100644 index b89bc44c85..0000000000 --- a/extended/src/test/kotlin/apoc/kafka/producer/integrations/KafkaEventSinkSuiteIT.kt +++ /dev/null @@ -1,59 +0,0 @@ -package apoc.kafka.producer.integrations - -import apoc.kafka.consumer.kafka.SchemaRegistryContainer -import apoc.kafka.utils.KafkaUtil -import org.junit.AfterClass -import org.junit.Assume.assumeTrue -import org.junit.BeforeClass -import org.testcontainers.containers.KafkaContainer -import org.testcontainers.containers.Network -import org.testcontainers.utility.DockerImageName - -class KafkaEventSinkSuiteIT { - companion object { - /** - * Kafka TestContainers uses Confluent OSS images. - * We need to keep in mind which is the right Confluent Platform version for the Kafka version this project uses - * - * Confluent Platform | Apache Kafka - * | - * 4.0.x | 1.0.x - * 4.1.x | 1.1.x - * 5.0.x | 2.0.x - * - * Please see also https://docs.confluent.io/current/installation/versions-interoperability.html#cp-and-apache-kafka-compatibility - */ - private const val confluentPlatformVersion = "7.6.2" - @JvmStatic lateinit var kafka: KafkaContainer - @JvmStatic lateinit var schemaRegistry: SchemaRegistryContainer - - var isRunning = false - - @BeforeClass - @JvmStatic - fun setUpContainer() { - kafka = KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.6.2")) - .withNetwork(Network.newNetwork()) - kafka.start() - schemaRegistry = SchemaRegistryContainer(confluentPlatformVersion) - .withExposedPorts(8081) - .dependsOn(kafka) - .withKafka(kafka)!! - schemaRegistry.start() - isRunning = true - assumeTrue("Kafka must be running", ::kafka.isInitialized && kafka.isRunning) - assumeTrue("Schema Registry must be running", schemaRegistry.isRunning) - assumeTrue("isRunning must be true", isRunning) - } - - @AfterClass - @JvmStatic - fun tearDownContainer() { - KafkaUtil.ignoreExceptions({ - kafka.stop() - schemaRegistry.stop() - isRunning = false - }, UninitializedPropertyAccessException::class.java) - } - } -} \ No newline at end of file diff --git a/extended/src/test/kotlin/apoc/nlp/azure/AzureProceduresAPITest.kt b/extended/src/test/kotlin/apoc/nlp/azure/AzureProceduresAPITest.kt deleted file mode 100755 index e69de29bb2..0000000000