
[Bug] Remote socket changing to 'readonly' state in localtunnel client after 60s #673

Open
adearriba opened this issue Aug 31, 2024 · 3 comments


@adearriba

Hi @TheBoroer

This issue is based on my comment in: #664 (comment)

I deployed it on my own server, but there is a strange connection bug I've been trying to fix for 2 weeks now without luck. After some time, the remote socket switches to read-only mode and the local socket ends. However, the remote server is unable to end the connection even if I force it. Not sure why. Maybe your latest version has fixed this somehow. When using your server it all works without issues.

I've tried multiple things. Even a full refactor. Nothing. It's frustrating.

Problem

After 60s (approx.) the remote socket in the localtunnel client switches to the 'readOnly' state, triggering the local socket to close. However, the corresponding socket on the localtunnel server is still in the 'open' state (read & write). The server has no way of knowing when this happens, since the localtunnel client can no longer write to it. Once this happens, all local sockets are closed immediately and all remote sockets become useless. The localtunnel server (self-deployed) responds with 408 since it can't reach the localtunnel client anymore, and sockets keep burning until there are none left and the connection is shut down.
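For anyone who wants to observe the transition, here is a minimal standalone sketch (host and port are placeholders, and the polling is only for debugging; it is not part of the localtunnel code). It just logs the remote socket's readyState, which flips from 'open' to 'readOnly' when the writable side goes away:

    const net = require('net');

    // Placeholder host/port: point this at the tunnel server's TCP port
    const remote = net.connect({ host: 'my-tunnel-server.example.com', port: 12345 });

    const monitor = setInterval(() => {
        // readyState is one of 'opening' | 'open' | 'readOnly' | 'writeOnly' | 'closed'
        console.log(new Date().toISOString(), 'remote readyState:', remote.readyState);
        if (remote.readyState === 'readOnly') {
            console.log('socket is still readable but no longer writable');
        }
    }, 5000);

    remote.on('close', () => clearInterval(monitor));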

Solution

The only solution I found is to set a timeout of 60 seconds on available sockets, which is the time it takes for the issue to appear. The change is in the _onConnection(socket) method inside TunnelAgent.js:

        // Timeout sockets after 60 seconds to prevent unsync between localtunnel client and server
        socket.setTimeout(this.availableSocketTimeout);

        socket.on('data', () => {
            // Extend timeout if there's data 
            socket.setTimeout(this.availableSocketTimeout);
        });

        socket.on('timeout', () => {
            this.debug('Socket timeout');
            socket.end();
        });

Not sure if you solved it with another solution, but note that this requires the client to reconnect all of its sockets (maxTcpSockets) every 60 seconds.

Full method code

_onConnection(socket) {
        // no more socket connections allowed
        if (this.connectedSockets >= this.maxTcpSockets) {
            this.debug('no more sockets allowed');
            socket.destroy();
            return false;
        }

        socket.once('close', (hadError) => {
            this.debug('closed socket (error: %s)', hadError);
            this.connectedSockets -= 1;
            // remove the socket from available list
            const idx = this.availableSockets.indexOf(socket);
            if (idx >= 0) {
                this.availableSockets.splice(idx, 1);
            }

            this.debug('connected sockets: %s', this.connectedSockets);
            this.debug('available sockets: %s', this.availableSockets.length);
            this.debug('waiting sockets: %s', this.waitingCreateConn.length);
            if (this.connectedSockets <= 0) {
                this.debug('all sockets disconnected');
                this.emit('offline');
            }
        });

        // close will be emitted after this
        socket.once('error', (err) => {
            // we do not log these errors, sessions can drop from clients for many reasons
            // these are not actionable errors for our server
            socket.destroy();
        });

        if (this.connectedSockets === 0) {
            this.emit('online');
        }

        this.connectedSockets += 1;
        this.debug('new connection from: %s:%s', socket.address().address, socket.address().port);

        // if there are queued callbacks, give this socket now and don't queue into available
        const fn = this.waitingCreateConn.shift();
        if (fn) {
            this.debug('giving socket to queued conn request');
            setTimeout(() => {
                fn(null, socket);
            }, 0);
            return;
        }

        // Timeout sockets after 60 seconds to prevent unsync between localtunnel client and server
        socket.setTimeout(this.availableSocketTimeout);

        socket.on('data', () => {
            // Extend timeout if there's data 
            socket.setTimeout(this.availableSocketTimeout);
        });

        socket.on('timeout', () => {
            this.debug('Socket timeout');
            socket.end();
        });

        // make socket available for those waiting on sockets
        this.availableSockets.push(socket);
    }
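Note that availableSocketTimeout is not defined in the upstream TunnelAgent, so it has to be set somewhere. A minimal sketch (the option name and the 60000 ms default are my own choice, matching the ~60 s window described above) is to initialize it in the constructor:

    constructor(options = {}) {
        super(options);
        // ...existing TunnelAgent initialization...

        // Sketch: timeout used by _onConnection() above (default 60 s)
        this.availableSocketTimeout = options.availableSocketTimeout || 60 * 1000;
    }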
@ruscon commented Sep 6, 2024

@adearriba I checked your solution.
Unfortunately, in our case the error reappears after about 50 minutes of server operation.
We have additionally set up a cronjob that restarts the server every 15 minutes in order to work around the problem somehow.
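Something like this (sketch; the service name is just an example for our deployment):

    # crontab entry: restart the tunnel server every 15 minutes
    */15 * * * * systemctl restart localtunnel-server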

I think the problem is the number of sockets being opened; the server is hitting its maximum limit.
Maybe it's worth looking into adding keep-alive somewhere.
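If anyone wants to try that, a minimal sketch (an untested assumption on my part) would be to enable TCP keep-alive on each tunnel socket, next to the setTimeout calls in _onConnection():

    // Sketch: let the OS probe idle tunnel sockets so dead peers are detected
    // instead of sitting in the available pool forever (the 30 s delay is arbitrary)
    socket.setKeepAlive(true, 30 * 1000);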

@adearriba (Author)

Hi @ruscon, with the 60-second timeout, all the remote connections from the localtunnel client to the localtunnel server are closed and recreated, so you shouldn't run into that. My server has been working for a couple of days without an issue, with permanent tunnels open.

I just saw your edit:
In that case it is a server issue and not a code issue. However, the ports are freed once the remote connection is closed and recreated. So it might be that you have too many tunnels (each with many connections) open, or some other issue. I have set max_conn to 10, so each client keeps 10 connections to the localtunnel server.

@ruscon commented Sep 6, 2024

@adearriba
We don't have too many tunnels (let's say a maximum of 16 real tunnels).
But we use a Docker healthcheck with a 2s interval. Maybe that has some effect; I'll try playing with it.
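For reference, this is the kind of change I mean (docker-compose sketch; keep whatever test command we already use, only the timing is relevant):

    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:80/"]
      interval: 30s   # instead of 2s, so the check opens far fewer short-lived connections
      timeout: 5s
      retries: 3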
