```rust
pub async fn list_failed_requests(pool: &SqlitePool) -> Result<Vec<QueuedRequest>> {
    tracing::trace!("list_failed_requests");
    let mut conn = pool.acquire().await?;

    // FIXME - we currently tick the retry queue every second, so this effectively gives a
    // rate limit of 5 requests per second. This should probably be configurable on a
    // per-origin basis.
    let query = r#"
        SELECT *
        FROM requests
        WHERE state IN (?, ?, ?, ?)
          AND retry_ms_at <= strftime('%s','now') || substr(strftime('%f','now'), 4)
        ORDER BY retry_ms_at ASC
        LIMIT 5;
    "#;
    ...
```
Currently we only fetch the top 5 failed requests, ordered by earliest retry time. This means that if a single origin/domain has many failing requests at the same time, it can delay retries for failing requests from other origins/domains. I suggest we instead fetch x requests per domain/origin, ordered by earliest retry time, to make the system fair. What are your thoughts?
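The per-origin fairness suggested above could be expressed with a SQL window function: rank due requests within each origin by retry time, then keep only the first N per origin. Here is a minimal runnable sketch of that query shape using Python's bundled SQLite driver, with a hypothetical simplified `requests(id, origin, state, retry_ms_at)` schema; the real table, states, and timestamp encoding in the project differ.

```python
import sqlite3

# Hypothetical cap on how many due requests we take per origin each tick.
PER_ORIGIN_LIMIT = 2

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE requests (
        id INTEGER PRIMARY KEY,
        origin TEXT NOT NULL,
        state TEXT NOT NULL,
        retry_ms_at INTEGER NOT NULL
    );
    -- Five due requests from a.example, one later one from b.example.
    INSERT INTO requests (origin, state, retry_ms_at) VALUES
        ('a.example', 'failed', 100),
        ('a.example', 'failed', 200),
        ('a.example', 'failed', 300),
        ('a.example', 'failed', 400),
        ('a.example', 'failed', 500),
        ('b.example', 'failed', 600);
""")

rows = conn.execute(
    """
    SELECT id, origin FROM (
        SELECT id, origin,
               ROW_NUMBER() OVER (
                   PARTITION BY origin ORDER BY retry_ms_at ASC
               ) AS rn
        FROM requests
        WHERE state = ?
          AND retry_ms_at <= ?
    )
    WHERE rn <= ?
    ORDER BY origin, rn
    """,
    ("failed", 1000, PER_ORIGIN_LIMIT),
).fetchall()

# b.example's single request is no longer starved by a.example's backlog.
print(rows)  # [(1, 'a.example'), (2, 'a.example'), (6, 'b.example')]
```

With a global `LIMIT 5` the same data would return only `a.example` rows; the partitioned version guarantees every origin with due requests gets a slot. Window functions require SQLite 3.25+, which sqlx's bundled SQLite satisfies.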