Feature: auto retry when rate limit exceeded or server error encountered #86
Conversation
Codecov Report

Additional details and impacted files

@@ Coverage Diff @@
## master #86 +/- ##
==========================================
+ Coverage 35.23% 35.25% +0.01%
==========================================
Files 2414 2416 +2
Lines 124893 124971 +78
==========================================
+ Hits 44007 44053 +46
- Misses 80886 80918 +32
Flags with carried forward coverage won't be shown.
It seems the concurrency restriction should consider both sync/async usage and multi-instance usage. The concurrency restriction mechanism may need an external service like Redis. For now, I would like to add simple retry logic when a rate limit is exceeded or a server error is encountered.
Considering both sync and async usage should be possible; I will try to add that. The benefit also depends a lot on what each instance is doing with the API.
Multiple instances are commonly used in multi-process mode or cluster mode. octokit also uses Redis to schedule requests in cluster mode. Maybe an abstract storage layer should be implemented to cache information and restrict concurrency. I will try to implement an in-memory storage and a Redis one.
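A minimal sketch of such a storage abstraction, assuming the shared state is just the rate-limit reset timestamp. The class and method names are hypothetical, not part of this library:

```python
from abc import ABC, abstractmethod
from typing import Optional


class RateLimitStorage(ABC):
    """Shared state so multiple client instances can coordinate
    around a single rate-limit window."""

    @abstractmethod
    def get_reset_time(self) -> Optional[float]:
        """Return the epoch timestamp when the limit resets, if known."""

    @abstractmethod
    def set_reset_time(self, timestamp: float) -> None:
        """Record the reset timestamp reported by the server."""


class InMemoryStorage(RateLimitStorage):
    """Per-process storage; sufficient for a single instance."""

    def __init__(self) -> None:
        self._reset: Optional[float] = None

    def get_reset_time(self) -> Optional[float]:
        return self._reset

    def set_reset_time(self, timestamp: float) -> None:
        self._reset = timestamp


# A Redis-backed variant would store the same value under a shared key
# (e.g. SET/GET on something like "ratelimit:reset"), so separate
# processes and machines observe the same state.
```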
The advised "best practice" is to not do anything concurrently at all. |
octokit uses the bottleneck library to schedule request jobs; you can see the rate-limit logic in that library.
Due to the complexity of this feature, I'm going to split it into two PRs. In this PR, rate-limit auto retry will be implemented, and the concurrency limit will be implemented in the next version.
This PR introduces a mechanism that limits the number of concurrent requests.
Additionally, if a RateLimitExceeded response is encountered, new requests are not started for a while. I'm not sure if halting new requests is really needed, but it seems the right thing to do.
The print statements are there currently for seeing that the mechanism works.
Should they be replaced with a logger, or removed entirely?
Related to #66
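The mechanism described above (cap concurrent requests, pause new ones after a rate-limit response) can be sketched for the async side with standard asyncio primitives. This is an illustrative outline, not the PR's actual code:

```python
import asyncio


class Throttle:
    """Caps concurrent requests with a semaphore and pauses new
    requests for a while after a rate-limit response."""

    def __init__(self, max_concurrent: int = 10) -> None:
        self._sem = asyncio.Semaphore(max_concurrent)
        self._resume = asyncio.Event()
        self._resume.set()  # requests are allowed initially

    async def run(self, coro_factory):
        # New requests wait here while we are paused after a
        # RateLimitExceeded response.
        await self._resume.wait()
        # At most max_concurrent requests proceed past this point.
        async with self._sem:
            return await coro_factory()

    def pause_for(self, seconds: float) -> None:
        """Call when a rate-limit response is seen: stop starting new
        requests, then resume automatically after `seconds`."""
        self._resume.clear()
        loop = asyncio.get_running_loop()
        loop.call_later(seconds, self._resume.set)
```

A sync counterpart could use `threading.Semaphore` and `threading.Event` in the same shape, which is one way the sync/async split mentioned earlier might be handled.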