What is it?
Based on some previous testing I've done (see #2430), we can actually get the metrics to run in a semi-performant way with a very large DuckDB instance. Due to the way the SQLMesh rolling windows ran in our initial version, deletes and writes into Trino were exceedingly slow. Using DuckDB as a pre-warmed cache, we can instead distribute the metrics calculation across a cluster of pre-warmed DuckDB instances and then write the results back to the Trino warehouse.
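To make the proposed flow concrete, here's a minimal sketch, assuming the `duckdb` and `trino` Python packages. The host, table names, parquet path, and metric query are all hypothetical placeholders, not the actual SQLMesh models:

```python
import duckdb
import trino  # trino-python-client

# 1. Pre-warm a local DuckDB cache with the source data the metric needs.
#    (The parquet path is a placeholder for however the cache gets seeded.)
cache = duckdb.connect("metrics_cache.duckdb")
cache.execute("""
    CREATE OR REPLACE TABLE events AS
    SELECT * FROM read_parquet('warehouse_export/events/*.parquet')
""")

# 2. Compute a rolling-window metric entirely inside DuckDB, where the
#    repeated scans a rolling window implies are cheap.
rows = cache.execute("""
    SELECT bucket_day, project_id,
           SUM(amount) OVER (
               PARTITION BY project_id
               ORDER BY bucket_day
               RANGE BETWEEN INTERVAL 30 DAYS PRECEDING AND CURRENT ROW
           ) AS amount_30d
    FROM events
""").fetchall()

# 3. Write the finished result back to the Trino warehouse in one batch
#    rather than issuing many small deletes and writes against Trino.
conn = trino.dbapi.connect(
    host="trino.internal",  # placeholder host
    port=8080,
    user="metrics",
    catalog="warehouse",
    schema="metrics",
)
cur = conn.cursor()
cur.executemany("INSERT INTO metrics_30d VALUES (?, ?, ?)", rows)
```

Each worker in the cluster would run the DuckDB portion against its own pre-warmed cache, so the only Trino traffic is the final batched insert.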