feat: ratelimit prediction #2188
Conversation
Signed-off-by: VincentRPS <[email protected]>
…ts to bucketstorage also adds the ability to modify bucket storage for distribution or shared data, using something like Redis.
Please add a changelog entry
Changelog requirements met
Co-authored-by: JustaSqu1d <[email protected]> Signed-off-by: VincentRPS <[email protected]>
pain. it was all pain.
Forgot to add bucket storage to slots. Signed-off-by: VincentRPS <[email protected]>
Please add a changelog entry
Changelog requirements met
Co-authored-by: Emre Terzioglu <[email protected]> Signed-off-by: VincentRPS <[email protected]>
PR is mostly finalized by this point. The final goal now is to test this system on larger bots to identify any possible faults before it is merged into master. This PR will not be refactoring webhooks, at least for now; that will be the goal of a second PR in 2.6 or 2.7, as the work for that would be a bit more complicated, especially with interaction followups. There is a chance we could make interactions use the HTTPClient instead of Webhooks, though, which would be much easier to handle since it would just integrate with the current rate limit prediction system.
Signed-off-by: Dorukyum <[email protected]>
rate_limited is now set to False by default. There is a lock on .use to prevent multiple pending requests from all being released at the same time. Signed-off-by: VincentRPS <[email protected]>
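The locking behavior described in that commit message can be sketched roughly as follows. This is an illustrative assumption, not the PR's actual code: the class name `Bucket`, the method name `use`, and the polling interval are all hypothetical.

```python
import asyncio


class Bucket:
    """Hypothetical sketch of a per-route rate limit bucket.

    Mirrors the commit message above: ``rate_limited`` defaults to
    ``False``, and an asyncio lock on ``use`` serializes callers so
    multiple pending requests cannot all be released at once.
    """

    def __init__(self) -> None:
        self.rate_limited = False  # False by default, per the commit message
        self._lock = asyncio.Lock()

    async def use(self) -> None:
        # Only one caller at a time passes through the lock; the rest
        # queue up instead of being released simultaneously.
        async with self._lock:
            while self.rate_limited:
                # Wait for the rate limit window to clear (interval is arbitrary).
                await asyncio.sleep(0.05)
```

Serializing on the lock is what prevents a thundering herd: when a rate limit lifts, queued requests proceed one by one rather than all firing at the same instant.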
You will need to redo this PR if it is based on Bluenix's comment, which describes an incorrect implementation of rate limiting. You can't predict rate limits based on when your bot sends requests, because your bot isn't the single source of truth for the rate limit. The Discord server determines which bucket (window) the request applies to, and thus how the rate limit works. Your bot can send 50 requests per second at a technical level and still be rate limited, depending on the timing of the rate limit bucket window and when the server receives the request. You can read switchupcb/disgo#14 and switchupcb/disgo#22 for more information that validates these claims. But I wouldn't read all that. Bluenix admitted in the same thread that his solution didn't work. discord/discord-api-docs#5144 (comment)
Yeah, no thanks. Where were you when other Discord.py forks updated their ratelimit handlers?
The answer to that question doesn't change the truth or the tests used to assert the truth.
I want to know more about how "reliable" your method is. How many tests have you conducted? Have you tested your implementation alongside Bluenix's proposal? I'm really just curious at this point, because you seem to have dominated that original Discord API Docs issue I linked to with a bunch of your research (which was a weird thing to do, by the way; an issue in an issue tracker is not the right place for that).
https://github.com/switchupcb/disgo can handle over 50 RPS, but doesn't, due to a correct rate limit implementation. The rate limit test is located here: https://github.com/switchupcb/disgo/blob/v10/wrapper/tests/integration/ratelimit_test.go You can also view the actions run on every commit, and read the information I linked, which ran a test indicating that Discord can't be consumed using a "rolling" rate limit implementation.
The entire point of that thread was to determine the actual rate limit implementation, so it could be included in the documentation.
Bluenix admitted in the same thread his solution didn't work. discord/discord-api-docs#5144 (comment)
Oh my god, I guess I missed that part. I suppose you're right then, sorry. I'm reading your documentation from switchupcb/disgo#14 now.
needs a redo
This is outdated and needs a rework - Closing |
Summary
This replaces the previous rate limit system, which only handled rate limits after they occurred, with
a new rate limit system which not only eases and handles rate limits but also tries to predict rate limits before they can happen and affect the bot. This system is made to be scalable, and its bucket storage is replaceable with something like Redis.
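A minimal sketch of what a replaceable bucket storage backend could look like. The class and method names here are illustrative assumptions based on the summary and commit messages, not the PR's real interface:

```python
import asyncio
from typing import Dict, Optional


class BucketStorage:
    """Illustrative in-memory bucket store (assumed interface).

    Because all access goes through async ``get``/``set`` methods, a
    subclass could swap the internal dict for a shared backend such as
    Redis without changing any calling code.
    """

    def __init__(self) -> None:
        self._buckets: Dict[str, dict] = {}

    async def get(self, key: str) -> Optional[dict]:
        # Look up the stored rate limit state for a route/bucket key.
        return self._buckets.get(key)

    async def set(self, key: str, bucket: dict) -> None:
        # Persist updated rate limit state for this key.
        self._buckets[key] = bucket


async def demo() -> Optional[dict]:
    # Hypothetical key and state shape, for illustration only.
    storage = BucketStorage()
    await storage.set(
        "GET:/channels/{channel_id}",
        {"remaining": 4, "reset_after": 1.5},
    )
    return await storage.get("GET:/channels/{channel_id}")
```

Keeping the interface async even for the in-memory case is what makes the backend swappable: a Redis-backed subclass would simply await network calls where this one touches a dict.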
Information
Checklist
If type: ignore comments were used, a comment is also left explaining why.