sui-field-count: const instead of fn and use it in sui-indexer-alt #20143
base: main
Conversation
```diff
@@ -22,13 +22,6 @@ use crate::{
     schema::sum_coin_balances,
 };

-/// Each insert or update will include at most this many rows -- the size is chosen to maximize the
-/// rows without hitting the limit on bind parameters.
-const UPDATE_CHUNK_ROWS: usize = i16::MAX as usize / 5;
```
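For context, the removed constant caps each statement's row count so the total number of bind parameters stays under the 16-bit limit. A minimal sketch of the arithmetic, assuming the documented `i16::MAX` parameter ceiling and five bound fields per row (the field count here is an assumption for illustration):

```rust
// Sketch: derive the per-statement row cap from the bind-parameter ceiling.
// FIELDS_PER_ROW = 5 is an assumed field count for this table.
const BIND_PARAM_LIMIT: usize = i16::MAX as usize; // 32,767
const FIELDS_PER_ROW: usize = 5;

const UPDATE_CHUNK_ROWS: usize = BIND_PARAM_LIMIT / FIELDS_PER_ROW;

fn main() {
    // 32,767 / 5 = 6,553 rows; 6,553 * 5 = 32,765 binds, just under the limit.
    assert_eq!(UPDATE_CHUNK_ROWS, 6553);
    assert!(UPDATE_CHUNK_ROWS * FIELDS_PER_ROW <= BIND_PARAM_LIMIT);
}
```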
@amnn I saw that the `*_CHUNK_ROWS` constants only exist in these two files -- shouldn't this be applied to all tables?
So having seen how things are set up, I think we'll want to make use of `FIELD_COUNT` slightly differently -- sorry for the churn:

- We need to handle the integration of `FIELD_COUNT`s differently for concurrent and sequential pipelines. This means we can't do the integration once, in the `Processor` trait. We need to do it in the concurrent and sequential `Handler` traits respectively.
- The `concurrent::Handler` is easiest to integrate with -- it already has a `MAX_CHUNK_ROWS` parameter whose definition we can replace as you've done here, and this parameter is used by the framework automatically.
- You will also need to remove the overrides for `MAX_CHUNK_ROWS` in `concurrent::Handler` impls.
- Sequential pipelines (like this one) don't automatically chunk their rows -- that's done by each implementation of `sequential::Handler::commit`, so for these, the best thing to do is to keep the constants per handler implementation. (We also can't say that e.g. all deletions in sequential pipelines need exactly one bind per row.)
```diff
@@ -159,7 +152,7 @@ impl Handler for SumCoinBalances {
         }
     }

     let update_chunks = updates.chunks(UPDATE_CHUNK_ROWS).map(|chunk| {
```
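Since sequential handlers chunk their rows explicitly in `commit`, the line above amounts to slicing the update buffer with `slice::chunks`. A standalone sketch, with `u64` rows standing in for the real stored-update type:

```rust
// Standalone sketch of the chunking pattern above; u64 rows are stand-ins
// for the real stored-update type, and 5 is an assumed binds-per-row count.
const UPDATE_CHUNK_ROWS: usize = i16::MAX as usize / 5; // 6,553 rows

fn main() {
    let updates: Vec<u64> = (0..20_000).collect();
    let chunks: Vec<&[u64]> = updates.chunks(UPDATE_CHUNK_ROWS).collect();
    // 20,000 rows split into 3 full chunks of 6,553 plus a 341-row remainder.
    assert_eq!(chunks.len(), 4);
    // Every chunk stays under the bind-parameter ceiling.
    assert!(chunks.iter().all(|c| c.len() * 5 <= i16::MAX as usize));
}
```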
Added temporary tests to ensure values are the same before and after; removed them to avoid overhead later on.
Thanks @gegaowp -- the change from a function to a constant looks good, but the integration needs some tweaking, PTAL.
Description
Test plan
Release notes
Check each box that your changes affect. If none of the boxes relate to your changes, release notes aren't required.
For each box you select, include information after the relevant heading that describes the impact of your changes that a user might notice and any actions they must take to implement updates.