bug(batch): distributed query holds hummock version/snapshot for long time #9732
Comments
We do observe a potential batch task leak in the compute node: the running task count is nonzero on the compute node, while the query count is zero according to the frontend. (But we cannot be sure yet; see #9841.)
Yes, there is a potential task leak; could this be what causes the hummock version to stay pinned?
Discussed offline. See #9848.
We encountered another version unexpectedly pinned by a compute node:
@liurenjie1024 @ZENOTME Let's dig into this.
So does this mean that SeqScanExecutor is not dropped? You can add some logs to it to see whether it's dropped.
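A minimal sketch of that instrumentation, assuming the `tracing` crate; the struct below is a stand-in for the real executor, not RisingWave's actual `SeqScanExecutor` definition:

```rust
use tracing::info;

// Hypothetical stand-in for the real executor; only the field needed
// to identify the instance in logs is shown.
struct SeqScanExecutor {
    task_id: u64,
}

impl Drop for SeqScanExecutor {
    // Log when the executor is dropped. If this line never appears for a
    // query the frontend reports as finished, the executor (and whatever
    // snapshot it holds) has leaked.
    fn drop(&mut self) {
        info!(task_id = self.task_id, "SeqScanExecutor dropped");
    }
}
```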
Did some investigation with no clue. Added #13589 to make the metrics more accurate, so we can see whether this is really caused by an MPP task.
Some more hints:
No occurrences recently after introducing statement_timeout.
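For context, a minimal sketch of why a statement timeout bounds how long a query can pin a snapshot, assuming a tokio-based runtime; `run_query` and the 60-second budget are illustrative, not RisingWave's actual API. Cancelling the query future drops everything it owns, including any snapshot guard:

```rust
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical query future; in reality this would be the distributed
// query's execution future, which owns the pinned snapshot.
async fn run_query() -> Result<(), &'static str> {
    // ... execute plan fragments ...
    Ok(())
}

async fn run_with_statement_timeout() {
    // If the query exceeds the budget, its future is dropped, which in
    // turn releases whatever it pinned (version/snapshot guards).
    match timeout(Duration::from_secs(60), run_query()).await {
        Ok(res) => println!("query finished: {res:?}"),
        Err(_) => println!("query cancelled by statement_timeout"),
    }
}
```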
Describe the bug

We observe both the min version and the min snapshot being stuck for days, co-occurring with a change in distributed_running_query_num. Is it just a very slow query, or is something wrong with the execution? (A pin-guard sketch illustrating the suspected mechanism follows at the end of this report.)

To Reproduce
No response
Expected behavior
No response
Additional context
No response
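To illustrate the suspected mechanism, here is a sketch of an RAII-style pin guard, assuming the version is unpinned on drop; the names are illustrative, not hummock's actual API. As long as any leaked batch task still owns such a guard, the min pinned version cannot advance:

```rust
use std::sync::{Arc, Mutex};

// Illustrative pin registry: tracks which version ids are still pinned.
#[derive(Default)]
struct PinRegistry {
    pinned: Mutex<Vec<u64>>,
}

impl PinRegistry {
    fn pin(self: Arc<Self>, version_id: u64) -> PinGuard {
        self.pinned.lock().unwrap().push(version_id);
        PinGuard { registry: self, version_id }
    }

    // The minimum pinned version id; this is what gets stuck if a guard leaks.
    fn min_pinned(&self) -> Option<u64> {
        self.pinned.lock().unwrap().iter().copied().min()
    }
}

struct PinGuard {
    registry: Arc<PinRegistry>,
    version_id: u64,
}

impl Drop for PinGuard {
    // Unpin on drop: a batch task that is never dropped never reaches this,
    // so its version stays pinned indefinitely.
    fn drop(&mut self) {
        let mut pinned = self.registry.pinned.lock().unwrap();
        if let Some(i) = pinned.iter().position(|&v| v == self.version_id) {
            pinned.swap_remove(i);
        }
    }
}

fn main() {
    let registry = Arc::new(PinRegistry::default());
    let guard = Arc::clone(&registry).pin(42);
    // Simulate a leaked batch task: the guard is intentionally forgotten,
    // so Drop never runs and version 42 stays pinned forever.
    std::mem::forget(guard);
    assert_eq!(registry.min_pinned(), Some(42));
}
```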