Some APIs have multiple throttle limits, e.g. no more than 4 requests per second, no more than 20 requests per minute, and no more than 200 requests per hour.
Current behavior (as far as I've been able to verify) is that placing multiple req_throttle() policies on a request results in only the last policy being enforced.
A desired behavior (if feasible) is a pool of throttles, where the delay must satisfy all throttles simultaneously. For the scenario above, this produces a stepwise increase in throttle delays: 20 requests could happen in as little as 5 seconds, but once 20 requests have been sent, the throttle engages the next limit and holds until a full minute has passed before sending further requests. The same applies to respecting the 200 requests/hour limit.
The benefit is that bursts of activity can complete quickly, while the larger-scale limit(s) are still respected when users are doing more substantial API access tasks.
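The pool-of-throttles idea can be sketched as a set of sliding-window limiters where a request may proceed only when every limit is satisfied. This is a minimal Python illustration of the algorithm, not httr2's implementation; the class and method names are hypothetical:

```python
import time
from collections import deque

class MultiThrottle:
    """Pool of sliding-window rate limits; a request proceeds only
    when every limit is satisfied (illustrative sketch only)."""

    def __init__(self, limits):
        # limits: list of (max_requests, window_seconds) pairs,
        # e.g. [(4, 1), (20, 60), (200, 3600)]
        self.limits = limits
        self.history = deque()  # timestamps of past requests, oldest first

    def delay_needed(self, now=None):
        """Seconds to wait so that sending one request now
        would violate none of the limits."""
        now = time.monotonic() if now is None else now
        delay = 0.0
        for max_req, window in self.limits:
            # timestamps still inside this window, in chronological order
            recent = [t for t in self.history if now - t < window]
            if len(recent) >= max_req:
                # wait until the oldest of the last max_req requests
                # has aged out of the window
                oldest = recent[-max_req]
                delay = max(delay, oldest + window - now)
        return delay

    def record(self, now=None):
        """Record that a request was just sent."""
        now = time.monotonic() if now is None else now
        self.history.append(now)
        # drop timestamps older than the largest window
        horizon = max(w for _, w in self.limits)
        while self.history and now - self.history[0] > horizon:
            self.history.popleft()
```

With limits of 2/second and 3/10 seconds, two quick requests trigger only the short-window delay; the third exhausts the long window and the required delay jumps accordingly, which is the stepwise behavior described above.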
This would require splitting the parameters in two (i.e. number of requests and time limit). But are you sure you need this? Most modern APIs will return a rate-limit header that you can respond to dynamically with req_retry().
I know it's common for more "official" APIs to send Retry-After responses, but there are lots of community data sources (particularly in sports analytics, where I spend most of my time) that publish rate-limit guidelines or rules but may not implement Retry-After headers once a user is over the limit (just a 403 or an unspecific 429 response).
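The two cases above, a server that sends Retry-After versus one that returns a bare 403/429, lead to a common retry pattern: obey the header when present, otherwise fall back to capped exponential backoff. A rough Python sketch of that pattern (not httr2's req_retry() internals; the function name and fallback constants are assumptions):

```python
import email.utils
import time

def retry_after_seconds(headers, attempt, base=1.0, cap=60.0):
    """Seconds to wait before retrying a throttled response.

    Prefers the server's Retry-After header (delta-seconds or
    HTTP-date form); falls back to capped exponential backoff
    when the header is absent, e.g. a bare 403/429 from a
    community API that documents limits but doesn't signal them.
    """
    value = headers.get("Retry-After")
    if value is not None:
        try:
            return float(value)  # delta-seconds form, e.g. "120"
        except ValueError:
            # HTTP-date form, e.g. "Wed, 21 Oct 2015 07:28:00 GMT"
            when = email.utils.parsedate_to_datetime(value)
            return max(0.0, when.timestamp() - time.time())
    # no header at all: exponential backoff, capped
    return min(cap, base * 2 ** attempt)
```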