[dr-autosync] online recovery times out after switching to backup cluster in sync_recover mode #6803
Comments
/assign @v01dstar
@mayjiang0203: GitHub didn't allow me to assign the following users: v01dstar. Note that only tikv members with read permissions, repo collaborators, and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
/assign @v01dstar
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @Connor1996
From the log, I find that a peer is applying a snapshot, so it temporarily won't send a vote response back to the force leader. Since the snapshot apply is very slow, the pre-force-leader phase can't finish in time, and the online recovery eventually times out. The behavior is as expected.
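To make the timing interaction easier to follow, here is a minimal, hypothetical Rust sketch of the behavior described above; it is not TiKV's actual implementation, and the names (PeerState, handle_vote_request, pre_force_leader) and the timeout values are illustrative assumptions. It only shows how a peer that withholds vote responses while applying a snapshot causes the waiting phase to hit its deadline.

```rust
// Hypothetical sketch (not TiKV code): a peer applying a snapshot withholds
// its vote response, so the pre-force-leader phase waits until it times out.

use std::time::{Duration, Instant};

enum PeerState {
    Normal,
    ApplyingSnapshot,
}

struct Peer {
    state: PeerState,
}

impl Peer {
    /// Returns Some(response) only when the peer can vote right now.
    fn handle_vote_request(&self) -> Option<&'static str> {
        match self.state {
            // While a snapshot is being applied, the vote response is withheld.
            PeerState::ApplyingSnapshot => None,
            PeerState::Normal => Some("vote granted"),
        }
    }
}

/// Simulates the pre-force-leader wait: keep asking for a vote until the
/// deadline, then give up, mirroring the observed online-recovery timeout.
fn pre_force_leader(peer: &Peer, timeout: Duration) -> Result<(), &'static str> {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if peer.handle_vote_request().is_some() {
            return Ok(());
        }
        std::thread::sleep(Duration::from_millis(10));
    }
    Err("pre force leader timed out waiting for vote responses")
}

fn main() {
    // A peer stuck in a slow snapshot apply never answers, so the phase times out.
    let slow_peer = Peer { state: PeerState::ApplyingSnapshot };
    println!("{:?}", pre_force_leader(&slow_peer, Duration::from_millis(100)));
}
```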
/remove-severity critical
/severity major
/remove-severity major
Recreated it in the tikv repo as tikv/tikv#15346.
Bug Report
What did you do?
What did you expect to see?
What did you see instead?
client logs:
pd3-peer logs:
What version of PD are you using (pd-server -V)?