Optimise memory deallocation for stable log entries #96
base: main
Conversation
… underlying-array utilisation
Signed-off-by: Aditya-Sood <[email protected]>
(force-pushed from dcbd0a2 to ac11fb4)
hi @pavelkalinnikov, could you please review the PR?
hi @pavelkalinnikov, does the PR need any changes?
hi @pavelkalinnikov, could you please review the PR?
@Aditya-Sood Sorry, been super busy recently. Will review this PR this week.
Thanks for the PR. Posting some initial thoughts.
This change needs unit tests to check correctness and to demonstrate the situations in which the compaction heuristic kicks in.
const lenMultiple = 2

if len(u.entries) == 0 {

func (u *unstable) shrinkEntriesSlice(nextUnstableEntryIndex int) {
	if nextUnstableEntryIndex == len(u.entries) { //all log entries have been successfully stored
nit: Please add a whitespace between `//` and the comment. Same below.
countStableEntries := nextUnstableEntryIndex
if countStableEntries > cap(u.entries)/lenMultiple {
	return true
}
This heuristic is different from the proposed idea in #71. It still honours only "visible" entries in deciding when to compact; and so it is susceptible to the same problem. The idea that was proposed is that we should instead track the full allocated size of the underlying slice.
This heuristic also seems different from the original: it checks the ratio of stable entries vs. cap; previously it checked the ratio of all entries vs. cap. Any particular reason why you went with this heuristic?
hi @pavelkalinnikov, thanks for the review
I forgot that `cap()` measures the backing array only from the slice's start index to the array's end, not from the beginning of the backing array
I had another thought - at the time of shrinking the entries slice, if we remove references to the shrunk `pb.Entry` elements, then two things will happen:
- with no more references, the 'shrunk' `pb.Entry` values will immediately become available for GC (if not in use)
- the `append()` method will consider creating an entirely new backing array at the time of adding new elements when capacity is low, and then the preceding unreachable values (which would hold empty `pb.Entry` values after the above change) will also become available for GC

this seemed like a much cleaner solution to me
if this approach seems feasible, then we can optimise the design further by replacing `pb.Entry` values with pointers instead, to further minimise the memory cost of holding shrunk values in the array until `append()` replaces the backing array
hi @pavelkalinnikov, could you please share your review of the above approach?
hi @pavelkalinnikov, could you please share your thoughts on this approach - #96 (comment)
hi @ahrtr, would you be able to weigh in on #96 (comment)?
hi @mitake, @serathius, @ptabor, @spzala |
hi @pav-kv |
QQ: for any optimisation, do we have a benchmark that confirms the improvement?
hi @serathius, no, I haven't implemented the proposed change yet
Increment: Modify unstable.shrinkEntriesArray() to better account for underlying-array utilisation
Fixes #71