This repository has been archived by the owner on Jul 2, 2020. It is now read-only.

ConnectionPoolTimeoutException was thrown after reaching the open connections limit #2

Open · shayts7 opened this issue Oct 27, 2015 · 0 comments

shayts7 (Contributor) commented Oct 27, 2015

This is the full stacktrace:

Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: DefaultExceptionHandler: Consumer adm.gaia.events.indexer.consume.EventIndexerConsumer@25a9b451 (amq.ctag-ujMOEsjHBoLNS4BBvPRYrg) method handleDelivery for channel AMQChannel(amqp://[email protected]:5672/,2) threw an exception for channel AMQChannel(amqp://[email protected]:5672/,2):
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: javax.ws.rs.ProcessingException: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at io.dropwizard.client.DropwizardApacheConnector.apply(DropwizardApacheConnector.java:111)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:245)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:671)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:668)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:444)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:668)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:428)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:334)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at adm.gaia.events.indexer.consume.EventIndexerConsumer.handleDelivery(EventIndexerConsumer.java:42)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:144)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:99)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at java.lang.Thread.run(Thread.java:745)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:254)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:231)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:173)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:195)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:86)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: at io.dropwizard.client.DropwizardApacheConnector.apply(DropwizardApacheConnector.java:89)
Oct 27 09:20:32 ip-10-60-73-23.ec2.internal docker[7034]: ... 16 more

This bug looks very similar to the connection leak described in: http://phillbarber.blogspot.co.il/2014/02/lessons-learned-from-connection-leak-in.html

When I counted the failed 404 requests to InfluxDB that happened before the exceptions started, I arrived at the unsurprising number of 1024, which is the default open connections limit :-)

To fix it we need to close the response after every request, so the connection is returned to the pool (sketch below)...
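A minimal sketch of the kind of fix I mean, assuming the consumer posts to InfluxDB through a Dropwizard/Jersey client (the class name `EventPostSketch`, the field names and the JSON entity below are placeholders, not the actual code in EventIndexerConsumer.handleDelivery): even a 404 response keeps its pooled connection checked out until close() is called, so the post should always close the Response in a finally block.

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical sketch only - "client" and "influxDbUri" are placeholders.
public final class EventPostSketch {

    private final Client client;
    private final String influxDbUri;

    public EventPostSketch(Client client, String influxDbUri) {
        this.client = client;
        this.influxDbUri = influxDbUri;
    }

    public void postEvent(String body) {
        Response response = client.target(influxDbUri)
                .request(MediaType.APPLICATION_JSON_TYPE)
                .post(Entity.json(body));
        try {
            if (response.getStatus() >= 400) {
                // A failed request (e.g. the 404s we saw) still holds its
                // pooled connection until the response is closed.
                // Log / handle the error here.
            }
        } finally {
            // Always release the connection back to the Apache pool.
            response.close();
        }
    }
}
```

Note that if the JAX-RS version in use is 2.0, Response does not implement AutoCloseable, so try/finally rather than try-with-resources is the pattern to use.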

Also - we need to check how the channel recovery works and why it did not kick in after this exception (see the sketch below)...
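For the recovery question, a minimal sketch of the setting to check, assuming the consumer is built on the stock RabbitMQ Java client (the URI below is a placeholder, not our real broker address). Keep in mind that automatic recovery is aimed at connection/network failures; an exception thrown out of handleDelivery is routed to the client's ExceptionHandler (the DefaultExceptionHandler line in the log above), so this alone may not cover our case.

```java
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// Hypothetical sketch - the URI is a placeholder.
public final class RecoveryCheckSketch {

    public static Connection connect() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setUri("amqp://guest:guest@rabbitmq-host:5672/");

        // Automatic recovery re-opens the connection and its channels after
        // a network failure; topology recovery re-declares exchanges, queues,
        // bindings and consumers on the recovered channels.
        factory.setAutomaticRecoveryEnabled(true);
        factory.setTopologyRecoveryEnabled(true);

        return factory.newConnection();
    }
}
```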

@shayts7 shayts7 added the bug label Oct 27, 2015