Releases: marklogic/marklogic-spark-connector

2.4.2

17 Oct 18:44
afd19a3

This patch release addresses the following two issues:

  1. The new spark.marklogic.read.snapshot option allows a user to disable the use of a consistent snapshot when reading documents by setting the option to false. This avoids errors in scenarios where a consistent snapshot is not feasible and reading at multiple points in time is not a concern.
  2. Issues with importing JSON Lines files via Flux - such as keys being reordered and added - can be avoided by setting the existing spark.marklogic.read.files.type option to a value of json_lines. The connector will then read each line as a separate JSON document without modifying it, thereby avoiding the issue in Flux of JSON documents being unexpectedly altered. See the sketch after this list for example usage of both options.
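A minimal sketch of how these two options might be used, assuming an existing SparkSession named spark, placeholder connection details, a placeholder collection name, and a placeholder file path:

```scala
// Minimal sketch; the connection string, collection, and path are placeholders.
// Read documents from MarkLogic without requiring a consistent snapshot.
val docs = spark.read.format("marklogic")
  .option("spark.marklogic.client.uri", "user:password@localhost:8000")
  .option("spark.marklogic.read.documents.collections", "example")
  .option("spark.marklogic.read.snapshot", "false")
  .load()

// Read each line of a JSON Lines file as a separate, unmodified JSON document.
val lines = spark.read.format("marklogic")
  .option("spark.marklogic.read.files.type", "json_lines")
  .load("path/to/data.jsonl")
```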

2.4.1

17 Oct 17:06
7ff9e0f

This patch release addresses a single issue:

  • The org.slf4j:slf4j-api transitive dependency is forced to version 2.0.13, ensuring that no 1.x version of that dependency is included in the connector jar. This resolves a logging issue in the Flux application.

2.4.0

02 Oct 19:25
168cf5f

This minor release addresses the following items:

  1. Can now stream regular files, ZIP files, gzip files, and archive files into MarkLogic by setting the new spark.marklogic.streamFiles option to a value of true. Using this option in the reader phase defers the reading of files until the writer phase. Using this option in the writer phase results in each file being streamed to MarkLogic in a separate request, thus avoiding ever reading the contents of a file or zip entry into memory. See the sketch after this list.
  2. Can now stream documents from MarkLogic to regular files, ZIP files, gzip files, and archive files by setting the same option - spark.marklogic.streamFiles - to a value of true. Using this option in the reader phase defers the reading of documents until the writer phase. Using this option in the writer phase results in each document being streamed from MarkLogic to a file or zip entry, thus avoiding ever reading the contents of a document into memory.
  3. Files with spaces in the path are now handled correctly when reading files into MarkLogic. However, when streaming files into MarkLogic, the spaces in the path will be encoded due to a pending server fix.
  4. Archive files - zip files containing content and metadata - now contain the metadata entry followed by the content entry for each document. This supports streaming archive files. Archive files generated by version 2.3.x of the connector - with the content entry followed by the metadata entry - can still be read, though they cannot be streamed.
  5. Now compiled and tested against Spark 3.5.3.
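A minimal sketch of streaming files into MarkLogic, assuming an existing SparkSession named spark and placeholder connection details and paths:

```scala
// Minimal sketch; the connection string and input path are placeholders.
// The reader defers reading file contents; the writer then streams each file
// to MarkLogic in its own request.
spark.read.format("marklogic")
  .option("spark.marklogic.streamFiles", "true")
  .load("path/to/input")
  .write.format("marklogic")
  .option("spark.marklogic.client.uri", "user:password@localhost:8000")
  .option("spark.marklogic.streamFiles", "true")
  .mode("append")
  .save()
```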

2.3.1

22 Aug 10:10
0235348

This patch release addresses the following issues:

  1. Can now read document URIs that include non-US-ASCII characters. This was fixed via an upgrade of the Java Client to its 7.0.0 release, whose breaking changes do not impact this connector release.
  2. Registered collatedString as a known TDE type, thereby avoiding warnings when reading rows from a TDE that uses that type.
  3. Significantly improved performance when reading aggregate XML files and extracting a URI value from an element.
  4. Fixed bug where a message of "Wrote failed documents to archive file at" was logged when no documents failed.

2.3.0

26 Jul 17:55

This minor release provides significant new functionality in support of the 1.0.0 release of the new MarkLogic Flux data movement tool. Much of this functionality is documented in the Flux documentation. Complete documentation of all the new options will soon be available in this repository's documentation as well.

In the meantime, the new options in this release are listed below.

Read Options

  1. spark.marklogic.read.javascriptFile and spark.marklogic.read.xqueryFile allow for custom code to be read from a file path.
  2. spark.marklogic.read.partitions.javascriptFile and spark.marklogic.read.partitions.xqueryFile allow for custom code to be read from a file path.
  3. Can now read document rows by specifying a list of newline-delimited URIs via the spark.marklogic.read.documents.uris option.
  4. Can now read rows containing semantic triples in MarkLogic via spark.marklogic.read.triples.graphs, spark.marklogic.read.triples.collections, spark.marklogic.read.triples.query, spark.marklogic.read.triples.stringQuery, spark.marklogic.read.triples.uris, spark.marklogic.read.triples.directory, spark.marklogic.read.triples.options, spark.marklogic.read.triples.filtered, and spark.marklogic.read.triples.baseIri. See the sketch after this list for example usage.
  5. Can now read Flux and MLCP archives by setting spark.marklogic.read.files.type to archive or mlcp_archive.
  6. Can control which categories of metadata are read from Flux archives via spark.marklogic.read.archives.categories.
  7. Can now specify the encoding of a file to read via spark.marklogic.read.files.encoding.
  8. Progress can now be logged while reading data from MarkLogic via spark.marklogic.read.logProgress.
  9. Can specify whether to fail on a file read error via spark.marklogic.read.files.abortOnFailure.
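A minimal sketch of reading semantic triples, assuming an existing SparkSession named spark, placeholder connection details, and a placeholder collection name:

```scala
// Minimal sketch; the connection string and collection name are placeholders.
// Each returned row represents a semantic triple in MarkLogic.
val triples = spark.read.format("marklogic")
  .option("spark.marklogic.client.uri", "user:password@localhost:8000")
  .option("spark.marklogic.read.triples.collections", "my-triples-collection")
  .load()
triples.show()
```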

Write Options

  1. spark.marklogic.write.threadCount has been altered to reflect the common user understanding of "number of threads used to connect to MarkLogic". If you need to specify a thread count per partition, use spark.marklogic.write.threadCountPerPartition.
  2. Progress can now be logged while writing data to MarkLogic via spark.marklogic.write.logProgress.
  3. spark.marklogic.write.javascriptFile and spark.marklogic.write.xqueryFile allow for custom code to be read from a file path.
  4. Setting spark.marklogic.write.archivePathForFailedDocuments to a file path will result in any failed documents being added to an archive zip file at that file path.
  5. spark.marklogic.write.jsonRootName allows for a root field to be added to a JSON document constructed from an arbitrary row.
  6. spark.marklogic.write.xmlRootName and spark.marklogic.write.xmlNamespace allow for an XML document to be constructed from an arbitrary row; see the sketch after this list.
  7. Options starting with spark.marklogic.write.json. will be used to configure how the connector serializes a Spark row into a JSON object.
  8. Can use spark.marklogic.write.graph and spark.marklogic.write.graphOverride to specify the graph when writing RDF triples to MarkLogic.
  9. Deprecated spark.marklogic.write.fileRows.documentType in favor of using spark.marklogic.write.documentType to force a document type on documents written to MarkLogic with an extension unrecognized by MarkLogic.
  10. Can use spark.marklogic.write.files.prettyPrint to pretty-print JSON and XML files written by the connector.
  11. Can use spark.marklogic.write.files.encoding to write files in a different encoding.
  12. Can use spark.marklogic.write.files.rdf.format to specify an RDF type when writing triples to RDF files.
  13. Can use spark.marklogic.write.files.rdf.graph to specify a graph when writing RDF files.
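A minimal sketch of writing arbitrary rows as XML documents, assuming a placeholder DataFrame, placeholder connection details, and placeholder element and namespace names:

```scala
// Minimal sketch; someDataFrame, the connection string, and the element and
// namespace names are placeholders. Each row is written as an XML document
// with the given root element and namespace.
someDataFrame.write.format("marklogic")
  .option("spark.marklogic.client.uri", "user:password@localhost:8000")
  .option("spark.marklogic.write.xmlRootName", "person")
  .option("spark.marklogic.write.xmlNamespace", "org:example")
  .mode("append")
  .save()
```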

2.2.0

22 Feb 21:10
f1bbf9c

This minor release provides several enhancements.

2.1.0

17 Nov 15:03
9546666

This minor release provides two significant new enhancements, which can be mixed with the existing capabilities for reading rows via Optic and writing rows as documents.
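For reference, a minimal sketch of reading rows via Optic, assuming an existing SparkSession named spark, placeholder connection details, and a placeholder TDE schema and view:

```scala
// Minimal sketch; the connection string and the schema/view names are placeholders.
val rows = spark.read.format("marklogic")
  .option("spark.marklogic.client.uri", "user:password@localhost:8000")
  .option("spark.marklogic.read.opticQuery", "op.fromView('example', 'employees')")
  .load()
rows.show()
```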

Please see the user guide for more information.

2.0.0

21 Jun 14:10

Initial release of the MarkLogic connector for Apache Spark 3. The previous MarkLogic connector was designed for Apache Spark 2 and required use of the MarkLogic Data Hub Framework. This connector requires Apache Spark 3 and does not depend on the Data Hub Framework.

Please see the user guide for further information.