
client: Tweak the keepalive interval and timeout #548

Merged: 1 commit into main on Dec 3, 2024

Conversation

@cdecker (Collaborator) commented Nov 29, 2024

As reported by the Breez team, the cause of the transport error when
attempting a trampoline_pay appears to be that even short
interruptions in connectivity kill the TCP connections. The long 90s
timeout causes gl-client to patiently sit there for the full 90s,
ignoring all errors in the hope of getting the TCP connection
back. When it then attempts to re-establish the connection, any
pending call gets the dreaded transport error.

Reducing this to a 5s timeout and a 1s interval should minimize
the chances of issuing a command during the broken phase.

Suggested-by: Roei Erez <@roeierez>
Suggested-by: Jesse De Wit <@JssDWt>
Closes #428

-pub(crate) const TCP_KEEPALIVE: Duration = Duration::from_secs(5);
-pub(crate) const TCP_KEEPALIVE_TIMEOUT: Duration = Duration::from_secs(90);
+pub(crate) const TCP_KEEPALIVE: Duration = Duration::from_secs(1);
+pub(crate) const TCP_KEEPALIVE_TIMEOUT: Duration = Duration::from_secs(5);
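
For context, a minimal sketch (my own illustration, not the actual gl-client code) of how constants like these are typically wired into a tonic gRPC channel; the connect_node helper and the URI argument are assumptions, while the Endpoint builder methods shown are standard tonic API:

use std::time::Duration;
use tonic::transport::{Channel, Endpoint, Error};

pub(crate) const TCP_KEEPALIVE: Duration = Duration::from_secs(1);
pub(crate) const TCP_KEEPALIVE_TIMEOUT: Duration = Duration::from_secs(5);

// Hypothetical helper: build a channel with the new keepalive settings.
async fn connect_node(uri: &'static str) -> Result<Channel, Error> {
    Endpoint::from_static(uri)
        // Probe the TCP socket every second.
        .tcp_keepalive(Some(TCP_KEEPALIVE))
        // Send HTTP/2 PING frames at the same cadence...
        .http2_keep_alive_interval(TCP_KEEPALIVE)
        // ...and declare the connection dead if a PING goes unanswered for 5s.
        .keep_alive_timeout(TCP_KEEPALIVE_TIMEOUT)
        // Keep probing even while no RPC is in flight.
        .keep_alive_while_idle(true)
        .connect()
        .await
}
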
Contributor:

I think it makes sense to set the keepalive timeout to no more than the interval; otherwise we may end up sending a second keepalive packet before the first one is ack'd.
Any reason you see not to set it to 1 second?

@cdecker (Collaborator, author):

It's a balance of eager timeouts vs. keeping connections stable. The timeout is counted per packet: the interval tells us how often to send a ping, while the timeout tells us how long to wait for the corresponding pong. Here we pipeline up to 5 pings before we require the first pong to be back. So if we send 2 pings and the first one gets replied to but the second one doesn't, then at second 6 we reconnect. If we set the interval and the timeout to the same value (5s), we'd wait for up to 10 seconds, because we'd send a ping only every 5s and then wait 5s for its reply.

Pipelining pings can help reduce the time to recovery without increasing the sensitivity to latency. The timeout, on the other hand, controls latency sensitivity, and as mentioned I'm hoping that for interruptions of less than 5s the TCP connection can simply resume, delivering a late pong that still lands inside the timeout. Overall I think this will minimize the number of failed gRPC calls.
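
To put rough numbers on that trade-off, here is a small self-contained sketch (my own illustration, not part of the PR): an outage can begin just after a pong arrives, so detection takes up to one interval (until the next ping goes out) plus the timeout.

use std::time::Duration;

// Worst case: the outage begins right after a successful pong, so we wait up
// to one full interval before the next ping is sent, then the full timeout
// for the pong that never comes back.
fn worst_case_detection(interval: Duration, timeout: Duration) -> Duration {
    interval + timeout
}

fn main() {
    // Previous defaults, this PR, and the suggested 1s/1s variant respectively.
    let old = worst_case_detection(Duration::from_secs(5), Duration::from_secs(90));
    let pr = worst_case_detection(Duration::from_secs(1), Duration::from_secs(5));
    let aggressive = worst_case_detection(Duration::from_secs(1), Duration::from_secs(1));
    println!("old: up to {:?}, PR: up to {:?}, 1s/1s: up to {:?}", old, pr, aggressive);
    // Note that a pong slower than the timeout (e.g. a 1.5s RTT against a 1s
    // timeout) also counts as a dead connection, so a shorter timeout trades
    // faster detection for more spurious reconnects on high-latency links.
}
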

@cdecker (Collaborator, author):

I guess long story short, it's a parameter that we should likely tweak as we learn more about the network constraints we work with.

Contributor:

> If we set the interval and the timeout to the same value (5s), we'd wait for up to 10 seconds, because we'd send a ping only every 5s and then wait 5s for its reply.

I was actually suggesting making the timeout the same as the interval (both 1 second), which means sending every 1 second and waiting up to 1 second for the ack. That lowers the maximum time to detect a network change to roughly 1 second overall and further reduces the chance of getting transport errors (if the call falls into that time frame).
I guess the main question is: do we foresee that, on a stable connection, replying to a single TCP packet will take more than 1 second?
I agree that this fix by itself is a significant improvement.

@cdecker (Collaborator, author):

The best measure we currently have is the RTT we get from the signer, which is admittedly a rather large request and response, so bandwidth also influences it. We see RTTs of up to 1.5s relatively regularly; those would be the nodes getting disconnected with timeout=1s, if ping messages (on otherwise idle connections) have a similar RTT.

Contributor:

In my tests tracing the keepalive packets I see responses within milliseconds, but perhaps my bandwidth is better than average and we should prepare for higher latency. I'm OK with starting at 5 seconds and gathering some feedback from the field.

@cdecker enabled auto-merge (rebase) on December 3, 2024 09:21
@cdecker force-pushed the 202448-tcp-timeout branch from 6fbb9aa to 6803f70 on December 3, 2024 09:21
@cdecker merged commit 1fa8735 into main on Dec 3, 2024 (12 checks passed)
@cdecker deleted the 202448-tcp-timeout branch on December 3, 2024 09:33
Merging this pull request may close: Connection is getting dropped with transport error