When a DA node is syncing via the proactive fetching task, we go through each block sequentially, and the fetcher automatically spawns requests for blocks that are missing. This would perform much better if we fetched batches of blocks at a time instead of firing a separate request for each one. Unfortunately, this is a bit tricky to implement, because the streaming logic is very decoupled from the fetching logic (intentionally: this separation allows us to write very simple code without worrying about how or when objects are being fetched).
There are a couple of approaches we could take:
couple the streaming logic more tightly with the fetching logic (an unfortunate breakdown of the abstraction)
at the fetching level, buffer fetch requests and then combine anything in the buffer that falls within a consecutive range
optimize for the streaming workflow by prefetching
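The second approach (buffering at the fetching level) can be illustrated with a small sketch. This is not code from the fetcher itself; it is a hypothetical helper showing how buffered per-block requests could be coalesced into consecutive ranges, so the fetcher could issue one batch request per range instead of one request per block:

```rust
/// Hypothetical sketch: merge a buffer of requested block heights into
/// maximal consecutive ranges. Names and signatures are illustrative, not
/// taken from the actual fetcher.
fn coalesce(mut heights: Vec<u64>) -> Vec<(u64, u64)> {
    heights.sort_unstable();
    heights.dedup();

    let mut ranges: Vec<(u64, u64)> = Vec::new();
    for h in heights {
        match ranges.last_mut() {
            // Extend the current range if this height is adjacent to it.
            Some((_, end)) if *end + 1 == h => *end = h,
            // Otherwise start a new single-block range.
            _ => ranges.push((h, h)),
        }
    }
    ranges
}

fn main() {
    // Blocks 5..=7 and 10..=11 were requested individually; after
    // coalescing, the fetcher would only need three batch requests.
    let buffered = vec![7, 5, 10, 6, 11, 42];
    let ranges = coalesce(buffered);
    assert_eq!(ranges, vec![(5, 7), (10, 11), (42, 42)]);
    println!("{:?}", ranges);
}
```

The trade-off is latency: requests sit in the buffer for some window before being flushed, so the buffering interval would need tuning against how quickly the streaming side consumes blocks.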