Hammer pushback support #106
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main     #106      +/-   ##
==========================================
- Coverage   35.80%   26.17%   -9.64%
==========================================
  Files          16       19       +3
  Lines        1363     1857     +494
==========================================
- Hits          488      486       -2
- Misses        801     1297     +496
  Partials       74       74

☔ View full report in Codecov by Sentry.
hammer/client.go
Outdated
	// Continue below
case http.StatusServiceUnavailable, http.StatusBadGateway, http.StatusGatewayTimeout:
	// These status codes may indicate a delay before retrying, so handle that here:
	retryDelay(resp.Header.Get("RetryAfter"), time.Second)
This just computes a value and then throws it away, right? Where does the retry happen?
d'oh, yes - it used to actually delay in the method.
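For context, a minimal sketch of what a delaying helper along these lines might look like. The retryDelay name and the fallback duration come from the snippet above; everything else (package name, header parsing) is an assumption, not the PR's actual code:

package hammer

import (
	"net/http"
	"strconv"
	"time"
)

// retryDelay sleeps for the duration suggested by a Retry-After style
// header value, falling back to defaultDelay when the value is missing
// or unparseable. Sketch only; the PR's implementation may differ.
func retryDelay(retryAfter string, defaultDelay time.Duration) {
	d := defaultDelay
	if retryAfter != "" {
		if secs, err := strconv.Atoi(retryAfter); err == nil {
			// Header value given as a number of seconds.
			d = time.Duration(secs) * time.Second
		} else if t, err := http.ParseTime(retryAfter); err == nil {
			// Header value given as an HTTP-date.
			d = time.Until(t)
		}
	}
	if d > 0 {
		time.Sleep(d)
	}
}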
Force-pushed from 709caab to 03cc388
hammer/workers.go
Outdated
@@ -201,7 +201,9 @@ func (w *LogWriter) Run(ctx context.Context) {
 		lt := leafTime{queuedAt: time.Now()}
 		index, err := w.writer(ctx, newLeaf)
 		if err != nil {
-			w.errChan <- fmt.Errorf("failed to create request: %v", err)
+			if !errors.Is(ErrRetry, err) {
+				w.errChan <- fmt.Errorf("failed to create request: %w", err)
I think it deserves a comment to say that as implemented this retry uses a token from the throttle for each attempt. So if we had a throttle of 100 writes per second, but one thread retried 99 times in a second, we'd only have 1 leaf written and no errors propagated down w.errChan to explain where all the writes were going.
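To make the concern concrete, here is a small self-contained sketch (all names are hypothetical, not taken from the hammer code) showing how silent retries drain a token-per-attempt throttle:

package main

import (
	"errors"
	"fmt"
)

var errRetry = errors.New("pushback from log") // stand-in for the hammer's ErrRetry

func main() {
	// A budget of 100 write tokens, as in the example above.
	throttle := make(chan struct{}, 100)
	for i := 0; i < 100; i++ {
		throttle <- struct{}{}
	}

	written := 0
	writeLeaf := func() error {
		if written == 0 {
			written++ // the first attempt succeeds
			return nil
		}
		return errRetry // every later attempt is pushed back
	}

	for len(throttle) > 0 {
		<-throttle // one token per attempt, retries included
		if err := writeLeaf(); err != nil && errors.Is(err, errRetry) {
			continue // silent retry: nothing sent down an error channel
		}
	}
	// Prints: spent 100 tokens, wrote 1 leaf
	fmt.Printf("spent 100 tokens, wrote %d leaf\n", written)
}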
I've de-quietened it a bit; now it'll "squash" reporting of pushback errors to once per second (with a count), so you can see that it's happening, but can also still see what else is going on.
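As a rough illustration of that squashing behaviour (the type and field names here are hypothetical, not lifted from the PR), pushback errors could be batched into a once-per-second report with a count, while other errors pass straight through:

package main

import (
	"errors"
	"fmt"
	"time"
)

var ErrRetry = errors.New("pushback from log") // illustrative stand-in

// pushbackReporter surfaces pushback errors at most once per second,
// together with a count of how many were suppressed in the meantime.
type pushbackReporter struct {
	lastReport time.Time
	suppressed int
}

func (r *pushbackReporter) report(errChan chan<- error, err error) {
	if !errors.Is(err, ErrRetry) {
		errChan <- err // non-pushback errors are reported immediately
		return
	}
	r.suppressed++
	if time.Since(r.lastReport) < time.Second {
		return // still inside the quiet window
	}
	errChan <- fmt.Errorf("%d pushback error(s) in the last second, e.g.: %w", r.suppressed, err)
	r.suppressed = 0
	r.lastReport = time.Now()
}

func main() {
	errChan := make(chan error, 16)
	r := &pushbackReporter{}
	for i := 0; i < 5; i++ {
		r.report(errChan, ErrRetry) // only the first is surfaced this second
	}
	close(errChan)
	for err := range errChan {
		fmt.Println(err)
	}
}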
Force-pushed from 9967644 to 754034b
Force-pushed from 754034b to 1bf6dae
This PR adds support in the hammer for recognising and reacting to pushback from the target log.
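As a rough sketch of the "recognising" half, a write response can be mapped to a retryable sentinel error. The status codes come from the client.go snippet above; the function and error names here are assumptions, not the PR's actual API:

package main

import (
	"errors"
	"fmt"
	"net/http"
)

var ErrRetry = errors.New("pushback from log")

// checkPushback inspects a write response and returns a wrapped ErrRetry
// when the log is signalling that the client should back off and retry.
func checkPushback(resp *http.Response) error {
	switch resp.StatusCode {
	case http.StatusOK:
		return nil
	case http.StatusServiceUnavailable, http.StatusBadGateway, http.StatusGatewayTimeout:
		return fmt.Errorf("log returned %d: %w", resp.StatusCode, ErrRetry)
	default:
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
}

func main() {
	resp := &http.Response{StatusCode: http.StatusServiceUnavailable}
	err := checkPushback(resp)
	fmt.Println(err, "retryable:", errors.Is(err, ErrRetry))
}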