Implement "Secondary rate limit" behavior to internally throttle querying #1975
Comments
I'm getting this error from two different locations whenever I attempt to search. Something appears to have gone sideways with the implementation. I understand the need for rate limiting, but this doesn't seem like it was tested thoroughly enough prior to release.
I'm not sure what you're referring to. This is a general task. No changes were made in this library, so it couldn't have been "pushed too quickly". If you want to be angry, take it up with GitHub. Or you could submit PRs to help this library handle the changes. In the meantime, I suggest you file a separate issue describing the problem and including the code to reproduce it. Then we might be able to help you work around the problem.
@bitwiseman I think it would actually make sense to add retries with backoff to calls to GitHub. Ideally, if we can tell from GitHub's response that the error relates to ephemeral rate limiting, we would retry. On the developer side, the number of retries could be controlled through some parameters.
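The retry-with-backoff idea above could be sketched roughly as follows. This is a hypothetical illustration, not this library's API; the names `BackoffRetry`, `callWithRetries`, `maxRetries`, and `baseDelayMillis` are all made up for the example, and it treats any `IOException` as retryable for simplicity.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a call with exponential backoff.
// None of these names exist in the library; they illustrate the suggestion
// of controlling the retry count and delay through parameters.
public final class BackoffRetry {

    // Delay for a given attempt: base * 2^attempt, capped at maxDelayMillis.
    static long delayMillis(int attempt, long baseDelayMillis, long maxDelayMillis) {
        long delay = baseDelayMillis << Math.min(attempt, 30); // cap shift to avoid overflow
        return Math.min(delay, maxDelayMillis);
    }

    static <T> T callWithRetries(Callable<T> call, int maxRetries,
                                 long baseDelayMillis, long maxDelayMillis) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return call.call();
            } catch (IOException e) {
                if (attempt >= maxRetries) {
                    throw e; // out of retries, surface the error
                }
                // Wait longer after each failed attempt before retrying.
                Thread.sleep(delayMillis(attempt, baseDelayMillis, maxDelayMillis));
            }
        }
    }
}
```

A real implementation would also need to distinguish rate-limit errors from other failures, which (as noted below) is only possible to the degree GitHub's responses allow.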
GitHub expects clients to avoid exceeding the primary or secondary rate limit. Accordingly, they make the penalty for exceeding it very high. We already accurately handle primary rate limits because GitHub provides actionable information in the response headers. No such information is provided for secondary rate limits. Reacting after we exceed the secondary rate limit is what we already do. We already have retries and waits. The number and duration of the waits are configurable. We already attempt to determine whether errors are due to secondary rate limits, to the degree GitHub lets us.
@bitwiseman I didn't know there was a possibility to retry failed calls. Are there some examples?
See #1805
See #1842
This docs page describes secondary rate limit behavior:
https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28#about-secondary-rate-limits
As of this reading, the limits it describes are incredibly loosely defined, and you cannot query for them ahead of time. 👎 It looks like we need to take the path some users have suggested and make rate limiting much more resilient, potentially allowing users to write their own rate limit strategies for handling secondary rate limits.
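One concrete hook the linked docs do provide: a secondary-rate-limit response may include a `retry-after` header giving a number of seconds to wait. A minimal, self-contained parsing helper might look like this; the class and method names are invented for the sketch, and it handles only the seconds form of the header, not the HTTP-date form.

```java
import java.time.Duration;
import java.util.Optional;

// Sketch based on the documented behavior: secondary-rate-limit responses
// may carry a "retry-after" header containing a number of seconds.
// Class and method names here are hypothetical, not library API.
public final class RetryAfter {

    // Returns the wait duration if the header is present and parseable.
    static Optional<Duration> parse(String retryAfterHeader) {
        if (retryAfterHeader == null) {
            return Optional.empty();
        }
        try {
            return Optional.of(Duration.ofSeconds(Long.parseLong(retryAfterHeader.trim())));
        } catch (NumberFormatException e) {
            return Optional.empty(); // ignore the HTTP-date form in this sketch
        }
    }
}
```

When the header is absent, the docs only suggest waiting "at least" some time before retrying, which is exactly the loosely defined guidance complained about above.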
The current internal `GitHubRateLimitChecker` would need to be replaced by a `PrimaryGitHubRateLimiter` which extends a new `GitHubRateLimiter` class/interface. Then each of the above bullet points would become a new rate limiter class. All of them would need to be called before and after each query, and maintain their own configuration and calculated state. `GitHubRateLimiter` would provide the API and possibly helper functions to make that easier to do right.

I think the basic API would be that the method called before a request is sent would return an `Optional<Duration>`, and if more than one limiter returns a `Duration`, the longest one is used. Or maybe return an option record that includes a `reason` message and a duration, perhaps also a logLevel/severity, to make it easier to produce meaningful output.
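The proposed API could be sketched like this. Note that `GitHubRateLimiter`, `beforeRequest`, and `CompositeRateLimiter` are only names floated in this discussion, not existing library classes; the sketch shows just the `Optional<Duration>` contract and the "longest wait wins" combination rule.

```java
import java.time.Duration;
import java.util.List;
import java.util.Optional;

// Hypothetical interface per the proposal above: each limiter is consulted
// before a request, and may ask the client to wait before sending it.
interface GitHubRateLimiter {
    // A returned Duration means "wait this long before the request".
    Optional<Duration> beforeRequest();
}

// Combines several limiters: if more than one asks for a wait,
// the longest requested wait is used.
final class CompositeRateLimiter implements GitHubRateLimiter {
    private final List<GitHubRateLimiter> limiters;

    CompositeRateLimiter(List<GitHubRateLimiter> limiters) {
        this.limiters = limiters;
    }

    @Override
    public Optional<Duration> beforeRequest() {
        return limiters.stream()
                .map(GitHubRateLimiter::beforeRequest)
                .filter(Optional::isPresent)
                .map(Optional::get)
                .max(Duration::compareTo);
    }
}
```

A richer variant could return a small record carrying a reason message and severity alongside the duration, as suggested above, so callers can log why a request was delayed.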