The usage pattern for HLocks will generate many tombstones, from both expiry and deletion. Access to this data needs to be fast, so ideally we want to keep the number of tombstones small (assuming the row cache is not enabled).
Hence the default of 10 seconds for gc_grace at HLocks CF creation is not ideal.
HLockManagerConfigurator.java already defines a number of default settings, so we should add a default gc_grace there as well and make it configurable via its API.
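A minimal sketch of what that could look like, assuming hypothetical field names and default values rather than the configurator's actual API:

```java
// Sketch only -- names and defaults are illustrative assumptions,
// not the actual Hector HLockManager API.
public class HLockManagerConfigurator {

    // Assumed existing-style default: how long a lock row lives before expiring.
    private int lockTtlInSeconds = 5;

    // Proposed addition: a configurable gc_grace, defaulting to the lock TTL
    // instead of a hard-coded value at HLocks CF creation time.
    private int gcGraceSeconds = lockTtlInSeconds;

    public int getGcGraceSeconds() {
        return gcGraceSeconds;
    }

    public void setGcGraceSeconds(int gcGraceSeconds) {
        this.gcGraceSeconds = gcGraceSeconds;
    }
}
```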
Food for thought: do we need to make sure ttl <= gc_grace? Why or why not?
On "Food for thoughts: do we need to make sure ttl <= gc_grace? Why or why not?"
OK, I've checked with Nick Bailey, and we agreed that if data is always inserted with a TTL and is otherwise only ever deleted, then setting gc_grace = TTL is safe.
On a side note, if data is only ever inserted with a TTL and never explicitly deleted, then gc_grace can be set to 0.
The reason we can't do that for locks is the explicit delete. Here is a scenario:
Assume RF=3 and reads/writes at CL.QUORUM.
If node C misses the delete but nodes A and B receive it, then before the TTL is reached A and B compact away the tombstone (since gc_grace = 0). When repair runs, node C sends the deleted data back to A and B, resurrecting the lock.
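Given that scenario, a guard along these lines could enforce ttl <= gc_grace at configuration time (a sketch only; the method and parameter names are hypothetical, not part of the existing configurator):

```java
// Hypothetical guard: if lock rows can be explicitly deleted, gc_grace must be
// at least the TTL, otherwise a replica that missed the delete can resurrect
// the row via repair after the other replicas have purged the tombstone.
static void validateGcGrace(int lockTtlSeconds, int gcGraceSeconds) {
    if (gcGraceSeconds < lockTtlSeconds) {
        throw new IllegalArgumentException(
            "gc_grace (" + gcGraceSeconds + "s) must be >= lock TTL ("
                + lockTtlSeconds + "s) when rows are explicitly deleted");
    }
}
```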