I deployed it on my own server, but for some reason there is a strange connection bug I've been trying to fix for two weeks now without luck. After some time, the remote socket switches to read-only mode and the local socket ends. However, the socket on the remote server is unable to end, even if I force it. Not sure why. Maybe your latest version has fixed this somehow. When using your server, everything works without issues.
I've tried multiple things. Even a full refactor. Nothing. It's frustrating.
Problem
After roughly 60 seconds, the remote socket in the localtunnel client switches to the 'readOnly' state, which triggers the local socket to close. However, the corresponding socket on the localtunnel server is still in the 'open' state (read & write). The server has no way of knowing this has happened, since the localtunnel client can no longer write to it. Once this happens, all local sockets are closed immediately and all remote sockets become useless. The (self-deployed) localtunnel server responds with 408 because it can no longer reach the localtunnel client, and sockets keep burning until none are left and the connection shuts down.
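For context, here is a small illustration of the Node.js socket states involved (this is not localtunnel code; the host and port are placeholders): 'readOnly' is the net.Socket readyState you see once the writable side has ended while the readable side is still open, and the peer can keep reporting 'open' if it never learns about the half-close.

const net = require('net');

// Sketch only: connect somewhere and half-close our side.
const socket = net.connect({ host: 'example.com', port: 1234 });

socket.once('connect', () => {
    console.log(socket.readyState); // 'open' (read & write)
    socket.end();                   // end only our writable side (sends FIN)
    console.log(socket.readyState); // 'readOnly' until the peer closes its side too
});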
Solution
The only solution I found is to set a timeout of 60 seconds on available sockets, which is the time it takes for the issue to appear. The change is in the _onConnection(socket) method inside TunnelAgent.js:
// Timeout sockets after 60 seconds to prevent unsync between localtunnel client and server
socket.setTimeout(this.availableSocketTimeout);
socket.on('data', () => {
    // Extend timeout if there's data
    socket.setTimeout(this.availableSocketTimeout);
});
socket.on('timeout', () => {
    this.debug('Socket timeout');
    socket.end();
});
I'm not sure if you solved it some other way, but note that this requires the client to reconnect all of its sockets (maxTcpSockets) every 60 seconds.
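For completeness, this.availableSocketTimeout is not defined anywhere in the snippet above. A minimal way to introduce it (the option name and the 60000 ms default are my assumption, not part of the original patch) is in the TunnelAgent constructor:

constructor(options = {}) {
    super(options);
    // ... existing TunnelAgent field initialization ...

    // Assumption: make the 60 s window configurable instead of hard-coding it.
    this.availableSocketTimeout = options.availableSocketTimeout || 60000;
}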
Full method code
_onConnection(socket) {
    // no more socket connections allowed
    if (this.connectedSockets >= this.maxTcpSockets) {
        this.debug('no more sockets allowed');
        socket.destroy();
        return false;
    }

    socket.once('close', (hadError) => {
        this.debug('closed socket (error: %s)', hadError);
        this.connectedSockets -= 1;
        // remove the socket from available list
        const idx = this.availableSockets.indexOf(socket);
        if (idx >= 0) {
            this.availableSockets.splice(idx, 1);
        }

        this.debug('connected sockets: %s', this.connectedSockets);
        this.debug('available sockets: %s', this.availableSockets.length);
        this.debug('waiting sockets: %s', this.waitingCreateConn.length);
        if (this.connectedSockets <= 0) {
            this.debug('all sockets disconnected');
            this.emit('offline');
        }
    });

    // close will be emitted after this
    socket.once('error', (err) => {
        // we do not log these errors, sessions can drop from clients for many reasons
        // these are not actionable errors for our server
        socket.destroy();
    });

    if (this.connectedSockets === 0) {
        this.emit('online');
    }

    this.connectedSockets += 1;
    this.debug('new connection from: %s:%s', socket.address().address, socket.address().port);

    // if there are queued callbacks, give this socket now and don't queue into available
    const fn = this.waitingCreateConn.shift();
    if (fn) {
        this.debug('giving socket to queued conn request');
        setTimeout(() => {
            fn(null, socket);
        }, 0);
        return;
    }

    // Timeout sockets after 60 seconds to prevent unsync between localtunnel client and server
    socket.setTimeout(this.availableSocketTimeout);
    socket.on('data', () => {
        // Extend timeout if there's data
        socket.setTimeout(this.availableSocketTimeout);
    });
    socket.on('timeout', () => {
        this.debug('Socket timeout');
        socket.end();
    });

    // make socket available for those waiting on sockets
    this.availableSockets.push(socket);
}
@adearriba I checked your solution.
Unfortunately, in our case the error reappears after about 50 minutes of server operation.
As a workaround, we added a cron job that restarts the server every 15 minutes.
I think the problem lies in the number of sockets being opened; the server is hitting the maximum limit.
Maybe it's worth looking into adding keep-alive somewhere.
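In case it helps, a minimal sketch of what that could look like on the server side (the placement inside _onConnection and the 30 s delay are guesses, not a tested fix):

// Enable TCP keep-alive probes so a dead peer is eventually detected
// even when no application data is flowing over the tunnel socket.
socket.setKeepAlive(true, 30000);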
Hi @ruscon, with the 60-second timeout, all remote connections from the localtunnel client to the localtunnel server are closed and recreated, so you shouldn't run into this issue. My server has been running for a couple of days without problems, with permanent tunnels open.
I just saw your edit:
In that case it's a server issue and not a code issue. However, the ports are freed once the remote connection is closed and recreated, so it might be that you have too many tunnels open (each with many connections), or it's something else. I have set max_conn to 10, so each client keeps 10 connections to the localtunnel server.
@adearriba
We don't have too many tunnels (let's say a maximum of 16 real tunnels).
But we use a Docker healthcheck with a 2 s interval. Maybe this has some effect; I'll try playing with it.
Hi @TheBoroer
This issue is based on my comment in: #664 (comment)