[BUG] Extreme latency in BulkIndexer #113
Comments
@VijayanB @VachaShah Any ideas?
Poke!
@dokterbob This looks visibly problematic, but it doesn't look like the folks here got around to looking into it. Let's try to move this forward. First, what's the easiest way to reproduce this (maybe post code similar to the benchmarks in this project)? Second, are you able to bulk-load data a lot faster into this instance with other mechanisms (i.e., is this definitely a client issue)?
@dokterbob Do you mind posting some of the code you were using to help pinpoint this issue?
Sorry, I didn't see the messages. The code is at https://github.com/ipfs-search/ipfs-search/, but of course you'll need a more detailed test case. After increasing the number of workers, the problem seems to have become less severe. Now that it's been picked up, I'll see if I can get more concrete feedback over the next couple of weeks.
@dokterbob Hey, I'd like to work on solving this issue. Is it still relevant?
What is the bug?
It seems that with a BulkIndexer configured with 2 workers, I am getting unexpected latency on BulkIndexer.Add(). Somehow the workers are not consuming the queue within any reasonable timeframe; I'm seeing delays of over 20 s. For example, in the last hour I've seen 53 cases of >1 s latency on just Add() out of a total of 174 calls.
How can one reproduce the bug?
With 2 workers running, adding items from different goroutines and a relatively busy search cluster.
What is the expected behavior?
Sub-millisecond latencies, basically the time it takes to shove something into a channel.
What is your host/environment?
Do you have any screenshots?