.acquire() doesn't reject when resource creation fails #175
Comments
Hi! Very short answer: some of this is a "known problem". If every promise returned by the factory's `create` rejects, the pool keeps retrying internally, so the `acquire()` call itself never rejects. It is possible to get at the underlying factory errors via the pool's events, as mentioned in the docs.
I hope this helps.
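For reference, a minimal sketch of picking those errors up through the pool's events (assuming generic-pool v3's `createPool` API and its `factoryCreateError` event):

```js
const genericPool = require('generic-pool');

// A factory whose create() always rejects, e.g. a backend that is down.
const factory = {
  create: () => Promise.reject(new Error('connection refused')),
  destroy: () => Promise.resolve()
};

const pool = genericPool.createPool(factory, { max: 2, min: 0 });

// Failed creates do not reject acquire(); they surface as events on the pool.
pool.on('factoryCreateError', (err) => {
  console.error('factory failed to create a resource:', err.message);
});
```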
The example doesn't work; it loops forever. Isn't `acquireTimeoutMillis` meant for a single request that takes longer than that? Something like `maxAcquireRetries` would be very useful.
hmmm.. I haven't actually tested it myself (the retry logic bit). I'll try to put a demo together and see what's happening with it (although if you have any code you can paste here or in a gist, that would be very helpful).
The design of the pool is such that factory errors are isolated away from the `acquire()` callers; they are reported on the pool itself instead, which could be used to detect constant failures and react to them.
Here is the example with acquireTimeoutMillis (set to 1 sec): Note that even after 1 second it still calls _createResource()
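A sketch of that kind of setup (an always-failing `create` plus `acquireTimeoutMillis: 1000`, written against generic-pool v3; not the original gist):

```js
const genericPool = require('generic-pool');

const factory = {
  create: () => {
    console.log('create called');     // reportedly keeps firing even after the timeout
    return Promise.reject(new Error('boom'));
  },
  destroy: () => Promise.resolve()
};

const pool = genericPool.createPool(factory, {
  max: 1,
  min: 0,
  acquireTimeoutMillis: 1000          // fail the waiting acquire after 1s
});

pool.acquire()
  .then((res) => pool.release(res))
  .catch((err) => console.error('acquire failed:', err.message));
```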
@alexpenev-s thanks! I'll try and take a look tonight.
Ok, I forked your gist and modified it a bit so I could keep up with the output :-) and ran it. Once the acquire timeout fires, the waiting acquire does get rejected, but the pool carries on trying to create resources. I'm still not sure what the best way to handle a constantly failing create is. In case you are interested, my vague idea involved counting the consecutive factory errors.
Hi all, would one of you be able to take a quick look at my Stack Overflow post? http://stackoverflow.com/questions/41902030/node-js-connection-pool-produces-infinite-loop-on-error
hey @whiteatom - yep, you've hit "known problem" land (there's some history above). So to sum up slightly: the `acquireTimeoutMillis` option can be used to make a single `acquire()` call fail after a timeout. There is still the problem that it's possible for the pool to end up in some infinite loop if your factory's `create` keeps failing.
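One way to guard against that loop, sketching the "count the factory errors" idea mentioned earlier (the threshold, the `connectSomehow()` helper, and the drain-on-failure handling are assumptions, not an existing pool feature):

```js
const genericPool = require('generic-pool');

const factory = {
  create: () => connectSomehow(),          // assumed helper; may keep rejecting
  destroy: () => Promise.resolve()
};
const pool = genericPool.createPool(factory, { max: 5, min: 0 });

// Give up after N consecutive factory failures instead of retrying forever.
const MAX_CONSECUTIVE_FAILURES = 5;
let failures = 0;

pool.on('factoryCreateError', (err) => {
  failures += 1;
  if (failures >= MAX_CONSECUTIVE_FAILURES) {
    console.error('create keeps failing, draining pool:', err.message);
    pool.drain().then(() => pool.clear());
  }
});

// In real code you would reset `failures` again after a successful acquire().
```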
Prevent permanent waits, by ensuring that we continue creating resources if we need them
* There are many errors due to pools being configured with min: 0 but then running out of workers while there is still work to do
* This results in missed work, infinite loops and timeouts
* A solution is to ensure that if we still have work to do, we make sure we still have some workers to do it

See:
* brianc/node-pg-pool#48
* loopbackio/loopback-connector-postgresql#231
* coopernurse#175 (Seems related)
This is an issue for us now as well... if MariaDB fails, our create factory is called multiple times, making the server unresponsive.
We have something like this as well. We reject if the connection fails, and until now we expected that the rejection would be passed back to the acquire function; however, it is caught and we never get the rejection, which is hard to deal with from our code since we expect a rejection in the error case. Instead, it tries to reconnect all the time. Is there a way to somehow disable the event emitter so that the rejection is returned instead of being caught?
When I encountered this problem, my first attempted solution didn't get me there. It seems what's necessary is some API to cancel and reject all waiting client acquisition requests.
Yes, this is exactly what I've found. I opened #256 to track that.
There currently isn't any way to "ungracefully" stop the pool, and there isn't any way to dump the waiting clients either. I could probably envisage adding a method that would do both.
I'm not sure if the pool would wait for borrowed resources to be returned, or just abandon them.
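In the absence of such a method, a userland stopgap could look roughly like this (purely a sketch, not part of the pool's API; leaked-resource caveat noted in the comments):

```js
// Race each acquire() against a shared abort promise so all waiting callers
// can be rejected at once; the pool itself keeps running in the background.
let abortWaiters;
const aborted = new Promise((resolve, reject) => { abortWaiters = reject; });
aborted.catch(() => {});   // avoid an unhandled-rejection warning before use

function acquireOrAbort(pool) {
  // Note: if the underlying acquire() later resolves, that resource is
  // effectively leaked unless it is released/destroyed elsewhere.
  return Promise.race([pool.acquire(), aborted]);
}

// Later, when the backend is declared dead:
// abortWaiters(new Error('giving up: backend unreachable'));
```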
My view (having not thought it through particularly thoroughly) would be that if you want to close the pool, then any resources that haven't been released (or created) should not be accepted back after that point. If the pool is closed, a resource should never be released into it.
The following solution, originally posted here by ImHype, works for me like a charm.
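For reference, one workaround in that spirit (a sketch only, not necessarily the original snippet; `connectToDb()` and `resource.close()` are placeholders) is to have `create()` resolve with the `Error` instead of rejecting, so the failure reaches the caller after `acquire()`:

```js
const genericPool = require('generic-pool');

const factory = {
  // Never reject here: resolve with the Error so the pool hands it to the caller.
  create: () => connectToDb().catch((err) => err),
  destroy: (resource) =>
    resource instanceof Error ? Promise.resolve() : resource.close()
};

const pool = genericPool.createPool(factory, { max: 10, min: 0 });

async function getConnection() {
  const resource = await pool.acquire();
  if (resource instanceof Error) {
    await pool.destroy(resource);   // throw the broken placeholder away
    throw resource;                 // ...and surface the failure to the caller
  }
  return resource;
}
```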
Hi,
Consider this code:
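A minimal sketch of the kind of setup being described (assuming generic-pool v3's `createPool` and a `create()` that always rejects; not the exact original code):

```js
const genericPool = require('generic-pool');

const factory = {
  // Simulate a backend that is down: creation always fails.
  create: () => Promise.reject(new Error('cannot connect')),
  destroy: () => Promise.resolve()
};

const myPool = genericPool.createPool(factory, { max: 5, min: 0 });

myPool.acquire()
  .then((conn) => myPool.release(conn))
  .catch((err) => console.error(err));   // never called: acquire() does not reject
```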
Here I get an infinite loop of _dispense() calling _createResource() calling _dispense().
How do I get myPool.acquire() to fail?