I'm really excited to see the great read throughput improvements from Icechunk on Zarr v3! I'm curious how the read throughput compares to TensorStore.
Thanks for this, Alex! While we're excited about the initial benchmarking results, performance is not our current focus. We're still mainly focused on correctness, feature implementation, and stabilization of the file formats, so we won't be prioritizing this comparison in the near term.
We welcome you and anyone else to do comparisons if you wish, bearing in mind that essentially no effort has gone into performance optimization in Icechunk yet, so there is still plenty of low-hanging fruit.
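For anyone who wants to attempt such a comparison, here is a minimal local micro-benchmark sketch, assuming recent zarr-python (v3) and tensorstore releases and that TensorStore's zarr3 driver can read the codecs zarr-python writes by default. The path, array shape, and chunk sizes are placeholders, and the Icechunk store setup is not shown, so treat this as a starting point rather than a definitive benchmark.

```python
# Hypothetical micro-benchmark (not from this thread): read the same local
# Zarr v3 array once with zarr-python and once with TensorStore's zarr3
# driver, and report rough read throughput. Paths, shapes, and chunk sizes
# are made up; an Icechunk-backed store would need its own setup on the
# zarr-python side and is omitted here.
import time

import numpy as np
import tensorstore as ts
import zarr

PATH = "/tmp/bench.zarr"  # hypothetical location
SHAPE, CHUNKS = (4096, 4096), (512, 512)

# Write a test array once with zarr-python (Zarr v3 format).
arr = zarr.create_array(PATH, shape=SHAPE, chunks=CHUNKS,
                        dtype="float32", overwrite=True)
arr[:] = np.random.default_rng(0).random(SHAPE, dtype=np.float32)
nbytes = arr.nbytes


def report(label, read_fn):
    """Time a full-array read and print approximate throughput in MB/s."""
    t0 = time.perf_counter()
    read_fn()
    dt = time.perf_counter() - t0
    print(f"{label}: {nbytes / dt / 1e6:.1f} MB/s")


# Full read through zarr-python.
report("zarr-python", lambda: zarr.open_array(PATH, mode="r")[:])

# Full read through TensorStore (zarr3 driver over the local filesystem).
spec = {"driver": "zarr3", "kvstore": {"driver": "file", "path": PATH}}
report("tensorstore", lambda: ts.open(spec).result().read().result())
```

At this scale, filesystem caching will dominate the numbers; a more realistic comparison would read from object storage with larger-than-memory arrays and concurrency tuned for each library.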
Having a place for consistent comparisons would be really useful for Xarray data loaders like xbatcher (xarray-contrib/xbatcher#42), or eventually neuralgcm/neuralgcm#97.
xref: google/tensorstore#49