ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below): clickhouse-1 (#5707)
Comments
Seeing the same thing on 24.3.0. It is unclear when and how it started, but it was not there when we set up the instance initially, nor right after upgrading to 24.3.0. It is also unclear if it has any actual impact on functionality.
@csvan I suspect that …
Looking at our internal graphs, I have not noticed any significant deviations in CPU usage.
Do you think having too many projects can cause CPU spikes? I have a total of 67 projects on Sentry, and 23 of them are actively used for monitoring.
I also came across this, and also saw an IPv6-related error in the logs early on while booting, which makes sense as there is no IPv6 in our Docker setup; I've added a workaround for that. I've since gotten another error from ClickHouse (which has unfortunately scrolled out of my terminal's history) about not being able to bind to several ports, but some prodding at it didn't turn up anything conclusive. Note that I'm not seeing any CPU spikes either. (All of this is on 24.3.0.)
A duplicate of this error is at getsentry/self-hosted#2876. Have you tried updating to a nightly build past the PR listed there?
@azaslavsky For now I have just rolled back to 24.1.0 and ClickHouse has stopped throwing the error, but I am still seeing a lot of CPU spikes on the server.
@mahesh1b To answer this: no, having too many projects doesn't cause CPU spikes. I have 100+ projects on only an 8-core CPU, and the average CPU usage is around 19%–24%.
@jap There is IPv6 support in Docker, though it's not enabled by default yet: https://docs.docker.com/config/daemon/ipv6/
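For reference, a minimal sketch of what enabling that looks like in `/etc/docker/daemon.json`, based on the linked docs; the CIDR below is just a documentation-style example subnet, not something specific to Sentry:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```

The daemon needs a restart for this to take effect, and it only changes the default bridge network; user-defined networks have to opt in separately.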
Wild guess, but try changing every `rust-consumer` command in `docker-compose.yml` to `consumer`.
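For anyone unsure where that change lives, here is a sketch of the kind of edit meant above in the self-hosted `docker-compose.yml`; the service name, YAML anchor, and flags are illustrative placeholders, not copied from the actual file:

```yaml
# Sketch only: swap the Snuba command from the Rust consumer to the Python one.
snuba-errors-consumer:
  <<: *snuba_defaults
  # was: command: rust-consumer --storage errors --consumer-group snuba-consumers
  command: consumer --storage errors --consumer-group snuba-consumers
```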
It's up to you; using the plain `consumer` instead of `rust-consumer` is a valid option.
@aldy505 I have set up a new Sentry server with version 23.4.0; I will try it and let you know. Thanks.
Just upgraded from 23.9.1 to 24.3.0 and am seeing this connection error. Events are also not being processed by the instance; it seems very broken. I followed the instructions in getsentry/self-hosted#2876 (comment) to stop using the rust-consumer and add the billing worker, and that seems to have fixed the issues for now.
EDIT: I do still see the log messages, just at a much lower frequency than before. They're still annoying, but at least they don't destroy the logs or fill up my disk anymore. @onewland For what it's worth, I'm no longer seeing the log messages with self-hosted version 24.5.1 and the Snuba Rust consumer. (I also don't see any abnormal CPU usage, but I don't believe I ever have.) In case it's interesting, I'm running this fork of the self-hosted project, which adds a few environment variables and runs against an external Postgres instance: https://github.com/folio-as/sentry-self-hosted/tree/24.5.1-folio-rust-consumer-2. I don't think my fork affects this issue at all. I did recently make one change to my setup, though, which I think may be related: some containers (I can't remember which, unfortunately) were failing to start due to low memory, so I "upgraded" my VM from the recommended 16 GB to 32 GB (this also added 2 vCPUs). With 32 GB of memory available, everything managed to start properly, and the full Docker Compose stack now runs comfortably at around 14 GB of resident memory on my Debian 11 (bullseye) VM. This makes it seem to me like the log errors from ClickHouse are, in fact, exposing a real issue, and that 16 GB of memory is just not enough to start the full Compose stack anymore. It might be helpful if someone else in this thread checked whether any of their containers are failing to start, so we could narrow down the root cause of the issue. @lcsvcn, for example, or @christopherowen?
I run 24.5.1 on an 8-core, 32 GB VM and am still being absolutely spammed by these logs, so I am not sure the VM size is related.
I see. Well, it was worth a shot, thanks!
Any update on this? I have the same problems after upgrading Sentry to 24.6.0. I find replacing `rust-consumer` with `consumer` works around it for now.
I just upgraded my self-hosted stack to 24.6.0, and I'm still seeing the error messages a whole bunch 😞
I had the same; changing `rust-consumer` to `consumer` fixed it for me.
Same issue here; the ClickHouse log was full of the same `Net Exception: Socket is not connected` errors.
The main issue for me, and why I ended up here with this workaround, is disk usage from log spam. Both ClickHouse log files fill up with the same error message at a rate of multiple gigabytes per day (see comment above). The high CPU usage may just be another symptom of whatever is going on. It's the error log from the original message, but at a constant rate of multiple entries per second. Their frequency may depend on the type and amount of activity, so it may not show up on an idle test instance.
You can cap the size of log files in your Docker daemon config, for example via the logging driver's rotation options:
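A minimal sketch of such a config in `/etc/docker/daemon.json`, assuming the default `json-file` logging driver; the size and file-count values are arbitrary examples:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```

Note that this only applies to containers created after the Docker daemon is restarted; existing containers keep their previous logging settings.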
Hi (yeah, I have the same issue 😺). Though you can cap the size of the logs, that is not really a solution, nor even a workaround, when using centralized log storage. For example, we use the Grafana stack (Promtail + Loki) and it takes a lot of storage; ignoring logs for the container is not a solution either.
The solution to the logging system being hammered is not to reduce the log file size!
@stumbaumr It's a workaround until the issue is resolved; nobody said it was a solution.
Any progress on this issue, or is there a schedule? It has been several months since this issue was created.
I'm also still experiencing this issue with the most recent 24.7.0. Adjusting the compose file to use `consumer` instead of `rust-consumer` helps. I'm now running the following after every update/install and before starting the stack:
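A sketch of that kind of post-install step, assuming it is run from the self-hosted repository root and that the stock `docker-compose.yml` still references `rust-consumer`:

```sh
# Swap the Rust-based Snuba consumer for the Python one in the compose file.
# Writes docker-compose.yml.bak as a backup before editing in place.
sed -i.bak 's/rust-consumer/consumer/g' docker-compose.yml
```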
I only replaced the consumer; no other changes were made. RAM usage was also slightly reduced. So it seems there is indeed an issue with the new consumer written in Rust.
Thanks, this worked for me as well and also resolved some other nasty DuplicateKeyExceptions in Postgres. It should be integrated into master as soon as possible.
It seems ClickHouse has fixed the logging issue, and the fix has been released in a newer ClickHouse version.
Same issue with 24.10.0
There is a known bug that has yet to be fixed that causes lots of errors and excess resource consumption in ClickHouse. Ref: getsentry/snuba#5707 (comment)
In version 24.11.0 this issue still persists. I applied the suggested fix (https://github.com/getsentry/snuba/issues/5707#issuecomment-2027710056) and everything appears to be working now. @patschi seems to be correct: there is indeed an issue with rust-consumer. I had previously fixed it a few months ago by replacing the Rust version with the standard one, but unfortunately the issue has not been fully resolved.
I still have duplicated … and I also get errors from Postgres about duplicates that I cannot get rid of.
Self-Hosted Version
24.4.0.dev
CPU Architecture
x86_64
Docker Version
26.0.0
Docker Compose Version
2.25.0
Steps to Reproduce
1. Run `./install.sh` in the GitHub folder.
2. Check the `clickhouse` container logs (a command for this is sketched below).
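One way to watch those logs, assuming the default compose service name `clickhouse`:

```sh
# Follow the last 100 lines of the ClickHouse container's logs.
docker compose logs --tail=100 -f clickhouse
```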
Expected Result
The `clickhouse` container should work without throwing any errors in the logs, and CPU consumption should be normal.

Actual Result
The `clickhouse` container logs are flooded with the `Net Exception: Socket is not connected` error shown in the title, and CPU usage spikes.
Event ID
No response