Improve Snowflake bind array insert operations using adbc_ingest() #1322
Comments
Forgot to add: adbc_ingest() needs something similar to https://arrow.apache.org/docs/python/generated/pyarrow.dataset.write_dataset.html, which supports max_partitions=None, max_open_files=None, max_rows_per_file=None, min_rows_per_group=None, and max_rows_per_group=None, to control/optimize bulk insert throughput.
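For reference, those sizing knobs on `pyarrow.dataset.write_dataset()` look like this in use; the data and output path here are illustrative placeholders, not anything from the original report:

```python
# Illustrative only: write_dataset() exposes the row-group/file sizing parameters
# the comment above is asking adbc_ingest() to offer for bulk-insert throughput.
import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"l_orderkey": range(1_000_000)})  # placeholder data
ds.write_dataset(
    table,
    "/tmp/lineitem_out",          # placeholder output path
    format="parquet",
    max_partitions=1024,
    max_open_files=900,
    max_rows_per_file=10_000_000,
    min_rows_per_group=100_000,
    max_rows_per_group=1_000_000,
)
```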
@davlee1972 Can you please share the Arrow schema of the table you ingested in this example? I ask because the underlying Snowflake client makes certain optimizations with the stream that is handed to it, but the behavior is currently limited to only certain datatypes.
The original sources are CSV files and every column in the schema is just a string type.
I can confirm that when reading the CSV files using multithreaded Arrow, you end up with a pyarrow table / record batch reader that produces record batch lengths of 12k, 13k, 114k, etc. The adbc_ingest function just happens to send multiple Snowflake `insert ? ? ? ?` array inserts that match the 12k, 13k, 114k, etc. rows inserted in the history log. By merging multiple record batches into lengths of 1 million before calling adbc_ingest, I can see 1-million-row bind inserts in Snowflake. Each bind insert takes 5 seconds, so 36 bind inserts take 3 minutes. Before, with 3,000 record batches, the same ingestion was taking 3 hours.
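A minimal sketch of that rechunking, assuming pyarrow and the `adbc_driver_snowflake` DB-API bindings (the target table name is a placeholder and the connection URI is left to the caller):

```python
# Sketch of the workaround described above: re-slice many small record batches
# into ~1M-row batches before handing them to adbc_ingest().
import pyarrow as pa
import adbc_driver_snowflake.dbapi


def rechunk(table: pa.Table, rows_per_batch: int = 1_000_000) -> pa.RecordBatchReader:
    # to_batches(max_chunksize=...) yields larger, uniform batches from the table.
    return pa.RecordBatchReader.from_batches(
        table.schema, table.to_batches(max_chunksize=rows_per_batch)
    )


def ingest(table: pa.Table, uri: str) -> int:
    with adbc_driver_snowflake.dbapi.connect(uri) as conn, conn.cursor() as cur:
        rows = cur.adbc_ingest("TARGET_TABLE", rechunk(table), mode="append")
        conn.commit()
        return rows
```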
Got it. So it does appear likely that your Arrow table meets the Snowflake Connector's conditions linked above to be optimized for bulk ingestion. For context, if those conditions are not met then an even slower approach is taken, which renders all the values into the query itself. Most likely in your case the connector is taking the faster approach of uploading the Arrow records to a temporary stage and then inserting from there. In part, certain limitations on datatypes are imposed because CSV is always used as the format of the temporary stage and all types are first converted to built-in Go types. Since you are using CSV as input, this doesn't appear to be an issue. Given that this code is already making use of the Snowflake Connector's optimized ingestion path but is still experiencing limitations, I do think it makes sense to handle this on the ADBC side rather than delegate to the connector. I'm bringing some follow-up discussion on potential solutions to #1327.
# What

- Replace Snowflake bulk ingestion with a Parquet-based approach with higher throughput and better type support
- Previously: INSERT bind parameters were uploaded to a CSV-based stage, once per record batch
- Now: Parquet files written concurrently to stage independently of record batch size. Parquet logical types are used to infer schema on COPY.
- Tests to validate type support and consistency through Arrow -> Parquet -> Snowflake -> Arrow roundtrip
- Improved type mapping between Arrow <-> Snowflake timestamps. [TIMESTAMP_LTZ](https://docs.snowflake.com/en/sql-reference/data-types-datetime#timestamp-ltz-timestamp-ntz-timestamp-tz) is more consistent with Arrow timestamp semantics than TIMESTAMP_TZ, which can lead to lossy roundtrips.
- Minor bugfix where Snowflake local timestamps with timezone set to UTC were being interpreted as non-local.

# Why

- Implements #1327, which comes from improvement request #1322
- BindStream ingestion is significantly faster
- Arrow type support is improved

# Methodology

The general approach for ingestion is most clearly demonstrated by the path taken when `stmt.Bind()` for a single record is used:

### IngestRecord

```mermaid
flowchart LR
  A(Record) --> B(Write Parquet)
  B --> C(Upload File)
  C --> D(Execute COPY)
  D --> E(Check Row Count)
```

The Arrow record is written to a Parquet file due to its logical type support, compressibility, and native Snowflake support. The file is then uploaded to a temporary Snowflake stage via a PUT query, and then loaded into the target table via a COPY query. Once the COPY has finished, one more query is dispatched to check the resulting row count and accurately return the number of rows affected. This is used instead of counting the Arrow rows written in case there are any undetected losses when importing the uploaded file into Snowflake.

A similar approach is taken when ingesting an arbitrarily large stream of records via `stmt.BindStream()`, but it makes use of several opportunities to parallelize the work involved at different stages:

### IngestStream

```mermaid
flowchart LR
  A(Read Records) --> B(Write Parquet)
  A --> C(Write Parquet)
  A --> D(Write Parquet)
  A --> E(Write Parquet)
  B --> J(Buffer Pool)
  C --> J
  D --> J
  E --> J
  J --> K(Upload File)
  J --> L(Upload File)
  K --> M(Finalize COPY)
  L --> M
  M --> N(Check Row Count)
  O(File Ready) --> P(Execute COPY)
  P --> O
```

The same steps are used, but the stream of records is now distributed among a pool of Parquet writers. This step is inherently CPU-bound, so it is desirable for it to scale independently with the availability of logical cores for writing/compression. These Parquet files are written to a buffer pool in memory to help decouple the upload stage from writing, and so that a writer can start working on the next file _while_ the last file it wrote is being uploaded. Uploads from the buffer pool also benefit from parallelism, but more so to maximize network utilization by limiting idle time between uploads and amortizing potential slowdown in any one upload.

Technically, only a single COPY command is required after the last file is uploaded in order to load the Parquet files into the Snowflake table. However, on many warehouses this operation takes as long as or even longer than the upload itself, though it can be made faster by paying for a larger warehouse. Given the batched approach taken and the fact that the COPY command is idempotent, we can execute COPY repeatedly as files are uploaded to load them into the table on an ongoing basis. These COPY queries are executed asynchronously and listen for an upload-completed callback to ensure at least one file will be loaded by the query (otherwise it will no-op, so this just prevents spamming Snowflake with a bunch of no-op COPYs).

Empirically, ingestion works reasonably well on an XS warehouse. COPY speed is no longer a bottleneck with an S warehouse and high-speed home internet, or on an M warehouse with same-region data center networking.
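The driver implements this pipeline in Go; purely as an illustration of the shape described above (a writer pool feeding a buffer pool feeding parallel uploads), a rough Python sketch might look like the following. `upload_to_stage` is a hypothetical stand-in for the PUT/COPY work, not part of any real API.

```python
# Conceptual sketch only -- not the driver's actual (Go) implementation.
# A pool of writers serializes record batches to in-memory Parquet buffers,
# while a separate pool "uploads" finished buffers so the two stages overlap.
import io
from concurrent.futures import ThreadPoolExecutor

import pyarrow as pa
import pyarrow.parquet as pq


def write_parquet(batch: pa.RecordBatch) -> bytes:
    """CPU-bound stage: compress one batch into an in-memory Parquet file."""
    buf = io.BytesIO()
    pq.write_table(pa.Table.from_batches([batch]), buf, compression="zstd")
    return buf.getvalue()


def upload_to_stage(payload: bytes) -> None:
    """Hypothetical network-bound stage standing in for the PUT query."""
    ...


def ingest_stream(reader: pa.RecordBatchReader, writers: int = 4, uploaders: int = 2) -> None:
    with ThreadPoolExecutor(writers) as write_pool, ThreadPoolExecutor(uploaders) as upload_pool:
        uploads = [
            upload_pool.submit(upload_to_stage, payload)
            for payload in write_pool.map(write_parquet, reader)
        ]
        for fut in uploads:
            fut.result()
    # A final COPY plus a row-count check would follow here, per the diagram.
```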
# Performance

Running on GCP e2-medium (shared-core 1 vCPU, 4GB RAM)
Snowflake warehouse size M, same GCP region as Snowflake account
Default ingestion settings

Benchmarking TPC-H Lineitem @ SF1 (6M Rows):
- Current: 11m50s
- New: 14s

Benchmarking TPC-H Lineitem @ SF10 (60M Rows):
- Current: Didn't attempt
- New: 1m16s

_This configuration is CPU bound, so I did another attempt with more cores available..._

Now with GCP e2-standard-4 (4 vCPU, 16GB RAM)

Benchmarking TPC-H Lineitem @ SF1 (6M Rows):
- Current: 11m17s
- New: 9.5s

Benchmarking TPC-H Lineitem @ SF10 (60M Rows):
- Current: 1h47m
- New: 45s

# Considerations

- Snowflake [guides](https://community.snowflake.com/s/article/How-to-Load-Terabytes-Into-Snowflake-Speeds-Feeds-and-Techniques) indicate that ingestion via CSV is the fastest. Experimentally, it does appear to be true that a COPY query on staged CSV files executes much faster than for Parquet files. However, by distributing the COPY workloads _in parallel to_ the batched file uploads, overall performance is better with Parquet, since it can be compressed _much_ more efficiently, allowing the upload to complete in less time and with fewer bytes transferred than with CSV. Type support is also much better.
- Single-record ingestion performance is slightly worse than the previous INSERT-bind approach. As a rough idea, a record that previously ingested in about 1.7s now ingests in about 2.5s. However, the new approach does come with expanded type support and better consistency with the streaming approach.
- An ingestion run that fails part-way through may leave the table with partial results. Transaction semantics may be added in the future by overriding the CopyConcurrency parameter to be 0, in which case only the final COPY will execute.

# Additional Work

### Blocking

- ~Timestamps will roundtrip properly after Arrow [GH-39466](apache/arrow#39466) is closed. A test is included but skipped for now.~
- ~Date64 will roundtrip properly after Arrow [GH-39456](apache/arrow#39456) is closed. A test is included but skipped for now.~

### Non-Blocking

- Compression codec and level are included in `ingestOptions` but are not configurable using `stmt.SetOption()`. It is trivial to add this, but it would be nice to be able to use the currently internal [CompressionCodecFromString](https://github.com/apache/arrow/blob/e6323646558ee01234ce58af273c5a834745f298/go/parquet/internal/gen-go/parquet/parquet.go#L387-L399) method to automatically pick up support for any other codecs added in the future. Captured in #1473.
- List and Map types have some issues on ingestion. Snowflake returns `SQL execution internal error` whenever the repetition level is greater than 0. Still some more investigation to do here. This is non-blocking because neither type was previously supported for ingestion.
- Context cancelation is supported for all goroutines and queries executed as part of ingestion, _except_ for the PUT query (i.e. file uploads). This issue is being tracked in gosnowflake [1028](snowflakedb/gosnowflake#1028). In practice, it likely takes just a few seconds for in-progress uploads to complete and properly conclude cancelation. Once this issue is fixed, the queries would be canceled in Snowflake, allowing the process to exit faster and reduce unnecessary work.
- ~The code previously meant to map Snowflake types to Go types is no longer used. It may still be useful for binding an Arrow record to an arbitrary Update query, but `stmt.Prepare` should be implemented first to follow the ADBC spec for binding parameters.~
@davlee1972 The PR (#1456) including this improvement merged last month and is slated for the upcoming 0.10.0 release. The details are available on the current dev docs build.
@joellubi - I just installed and tested it. For 100 million records x 8 columns across 1,000 CSV files, 0.9.0 took 5.5 hours. Thanks. Marking this closed.
Problem: Bulk inserting a pyarrow table using adbc_ingest creates a bind array insert operation for every record batch. It takes 3 hours to insert 36 million rows across 3,000 record batches created by the pyarrow multithreaded CSV reader using adbc_ingest().
If you reorganize the data into 36 record batches of 1 million rows each, the same adbc_ingest() call only takes 3 minutes.
Sample code:
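The original sample code was not captured in this thread. A rough sketch of the pattern described above (multithreaded CSV reads followed by a single adbc_ingest() call; the file glob, DSN, and table name are placeholders) would be:

```python
# Illustrative sketch, not the reporter's original script.
import glob

import pyarrow as pa
import pyarrow.csv as pv
import adbc_driver_snowflake.dbapi

# pyarrow's CSV reader is multithreaded and produces many small record batches.
tables = [pv.read_csv(path) for path in glob.glob("data/*.csv")]
table = pa.concat_tables(tables)

SNOWFLAKE_DSN = "user:password@account/database/schema"  # placeholder connection string

with adbc_driver_snowflake.dbapi.connect(SNOWFLAKE_DSN) as conn:
    with conn.cursor() as cur:
        # Before this change, each record batch became one bind-array INSERT.
        cur.adbc_ingest("TARGET_TABLE", table, mode="create")
    conn.commit()
```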