Iceberg is a high-performance format for huge analytic tables. Iceberg brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time.
Background and documentation are available at https://iceberg.apache.org
Iceberg is under active development at the Apache Software Foundation.
The core Java library that tracks table snapshots and metadata is complete, but still evolving. Current work is focused on adding row-level deletes and upserts, and on integration with new engines like Flink and Hive.
The Iceberg format specification is being actively updated and is open for comment. Until the specification is complete and released, it carries no compatibility guarantees. The spec is currently evolving as the Java reference implementation changes.
Java API javadocs are available for the master branch.
Iceberg tracks issues in GitHub and prefers to receive contributions as pull requests.
Community discussions happen primarily on the dev mailing list or on specific issues.
Iceberg is built using Gradle with Java 8, 11, or 17.
- To invoke a build and run tests: `./gradlew build`
- To skip tests: `./gradlew build -x test -x integrationTest`
- To fix code style for default versions: `./gradlew spotlessApply`
- To fix code style for all versions of Spark/Hive/Flink: `./gradlew spotlessApply -DallVersions`
Iceberg table support is organized in library modules:

- `iceberg-common` contains utility classes used in other modules
- `iceberg-api` contains the public Iceberg API
- `iceberg-core` contains implementations of the Iceberg API and support for Avro data files; this is what processing engines should depend on (see the sketch after this list)
- `iceberg-parquet` is an optional module for working with tables backed by Parquet files
- `iceberg-arrow` is an optional module for reading Parquet into Arrow memory
- `iceberg-orc` is an optional module for working with tables backed by ORC files
- `iceberg-hive-metastore` is an implementation of Iceberg tables backed by the Hive metastore Thrift client
- `iceberg-data` is an optional module for working with tables directly from JVM applications
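As a minimal sketch of how these modules fit together, the example below uses the public types from `iceberg-api` with the `HadoopTables` catalog from `iceberg-core` to create a table. The table location, field names, and partitioning are placeholders chosen for illustration, not a prescribed layout.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.types.Types;

public class CreateTableExample {
  public static void main(String[] args) {
    // Table schema defined with the iceberg-api type system
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));

    // Partition the table by a bucket of the id column
    PartitionSpec spec = PartitionSpec.builderFor(schema)
        .bucket("id", 16)
        .build();

    // HadoopTables (from iceberg-core) stores table metadata directly under a file system path.
    // The location below is a placeholder for this example.
    HadoopTables tables = new HadoopTables(new Configuration());
    Table table = tables.create(schema, spec, "/tmp/iceberg/example_table");

    System.out.println("Created table at: " + table.location());
  }
}
```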
Iceberg also has modules for adding Iceberg support to processing engines:

- `iceberg-spark` is an implementation of Spark's Datasource V2 API for Iceberg, with submodules for each Spark version (use the runtime jars for a shaded version; see the sketch after this list)
- `iceberg-flink` contains classes for integrating with Apache Flink (use iceberg-flink-runtime for a shaded version)
- `iceberg-mr` contains an InputFormat and other classes for integrating with Apache Hive
- `iceberg-pig` is an implementation of Pig's LoadFunc API for Iceberg
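For example, with the `iceberg-spark` runtime jar for the matching Spark version on the classpath, an Iceberg table can be read through Spark's DataFrame API. This is a minimal sketch; the table path is a placeholder (e.g. the table created in the earlier sketch), and running Spark in local mode is only for illustration.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadTableExample {
  public static void main(String[] args) {
    // Assumes the iceberg-spark runtime jar for this Spark version is on the classpath
    SparkSession spark = SparkSession.builder()
        .appName("iceberg-read-example")
        .master("local[*]")
        .getOrCreate();

    // Load an Iceberg table by its file system path through the DataSource V2 integration.
    // The path below is a placeholder for this example.
    Dataset<Row> df = spark.read().format("iceberg").load("/tmp/iceberg/example_table");
    df.show();

    spark.stop();
  }
}
```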
See the Multi-Engine Support page to learn about Iceberg compatibility with different Spark, Flink, and Hive versions. For other engines such as Presto or Trino, please visit their websites for Iceberg integration details.