Lazy connection pool - pipeline issue #90

Open
irshad-qb opened this issue Nov 13, 2015 · 4 comments

Comments

@irshad-qb

txredisapi (trying the master branch, which has the fix for the pipeline pool) hangs when more concurrent pipeline commands are executed than the maximum connection pool size.

How to reproduce :
1- Create a lazyConnectionPool with pool size 2
2- Execute more than 10 pipeline requests
Simple sample code to reproduce the above is added below. The code should print results for all 100 requests, but it only outputs around 60.

import txredisapi as redis

from twisted.internet import defer
from twisted.internet import reactor
def print_result(result, count=0):
    print "got the result for count", count

def process(pipeline):
    pipeline.smembers('key1')
    return pipeline.execute_pipeline()

@defer.inlineCallbacks
def main():
    rc = redis.lazyConnectionPool(poolsize=1)

    for i in range(100):
        rc.pipeline().addCallback(process).addCallback(print_result, count=i)
    yield rc.disconnect()


if __name__ == "__main__":
    main()
    reactor.run()
irshad-qb referenced this issue Nov 14, 2015
Modeled after how transactions are handled.

When a pipeline started, it was not getting removed from the
ConnectionPool, so if you made enough requests to cycle through your
ConnectionPool before the pipeline executed, you could end up sticking
non-pipeline requests in a pipeline unintentionally.
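
Roughly, the pattern that commit describes, sketched here with made-up names (this is not txredisapi's actual internals): when a pipeline begins, its connection is taken out of the pool so other commands cannot be routed onto it, and it is handed back once the pipeline finishes.

# Sketch only -- pool.connection_queue and pool.in_use are hypothetical names.
def start_pipeline(pool):
    d = pool.connection_queue.get()       # claim a free connection (a Deferred)
    def begin(conn):
        pool.in_use.add(conn)             # keep it out of normal rotation
        return conn
    return d.addCallback(begin)

def finish_pipeline(pool, conn):
    pool.in_use.discard(conn)
    pool.connection_queue.put(conn)       # hand it back to waiting callers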
@steiza
Contributor

steiza commented Nov 16, 2015

Hmmm, that's odd. I was not able to reproduce this on my machine (using the HEAD of master on fiorix/txredisapi).

I tried setting the poolsize to 1, 2, and 10, with an empty set at key1 and with a set containing several elements, but each time all 100 requests completed successfully. Can you supply additional details about your test setup?

The DeferredQueue (https://github.com/fiorix/txredisapi/blob/3834aeee670720c807c0de9a220a76aa984a303e/txredisapi.py#L2051) should block if we try to get a connection and none are available (https://github.com/fiorix/txredisapi/blob/3834aeee670720c807c0de9a220a76aa984a303e/txredisapi.py#L2104-L2116).
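
For reference, here is a minimal standalone sketch (plain Twisted, not txredisapi's code) of how a DeferredQueue-backed pool makes callers wait: get() returns a Deferred that only fires once a connection is put back, so an empty pool blocks rather than fails.

from twisted.internet import defer, reactor, task

pool = defer.DeferredQueue()
pool.put("conn-1")   # pretend this is the single pooled connection

@defer.inlineCallbacks
def use_connection(name):
    conn = yield pool.get()    # waits (asynchronously) while the pool is empty
    print "%s got %s" % (name, conn)
    yield task.deferLater(reactor, 0.1, lambda: None)   # simulate a request
    pool.put(conn)             # return the connection to the pool

@defer.inlineCallbacks
def demo():
    # Both callers share one connection; the second waits for the first.
    yield defer.gatherResults([use_connection("a"), use_connection("b")])
    reactor.stop()

if __name__ == "__main__":
    demo()
    reactor.run()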

@irshad-qb
Author

@steiza I was testing my test case against the 1.3 branch of txredisapi, which does not have the pool fix you made. That made me think this was an issue with the pipeline/pool combination.
Actually, the issue I mentioned was in my application code. Some connections were being requested while a redis pipeline had been created but not yet executed. With enough concurrent users this produced the blocking (deadlock-like waiting for a connection from the pool) scenario you mentioned in the comment above. I fixed it by avoiding code that uses more than one connection concurrently. Thanks a lot for helping me find the root cause, and apologies for wasting your time with my test case.
Please close this issue; it seems I don't have the privilege to.
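
For anyone else hitting this, a hedged sketch of the anti-pattern described above (the second key and the helper name are made up): with poolsize=1 the open pipeline holds the pool's only connection, and the plain command waits for a connection that is never released, because execute_pipeline() is only reached after that command completes.

import txredisapi as redis
from twisted.internet import defer

@defer.inlineCallbacks
def deadlock_prone(rc):
    # Anti-pattern: the pipeline claims the pool's only connection ...
    pipe = yield rc.pipeline()
    pipe.smembers('key1')
    # ... and this plain command needs a second connection before
    # execute_pipeline() releases the first one, so both wait forever.
    yield rc.get('some_other_key')
    result = yield pipe.execute_pipeline()
    defer.returnValue(result)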

@offero

offero commented May 22, 2017

I also see this behavior in my applications: when there is a burst of redis activity, it causes pauses in the rest of my program. I did not expect that. I am still investigating.

We're running txredisapi version 1.4.3

@IlyaSkriblovsky
Owner

IlyaSkriblovsky commented May 23, 2017

I believe that pipeline() is mostly useless. Thanks to the async nature of txredisapi, you can issue as many parallel requests as you like, even with a single connection, and they will be pipelined automatically. There won't be any performance gain from batching requests together with pipeline() (or it will be very small).

This doesn't excuse deadlocks, though. And if the problem really is in txredisapi code, it may affect watch()/multi() too, because the implementation is somewhat similar.
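
To illustrate that point, a hedged sketch (the key name is made up) of issuing many commands concurrently on a single lazy connection and gathering the results; txredisapi writes them as they come and matches replies in order, which is effectively pipelining without pipeline():

import txredisapi as redis
from twisted.internet import defer, reactor

@defer.inlineCallbacks
def main():
    rc = redis.lazyConnectionPool(poolsize=1)
    # Fire 100 commands without waiting on each one individually.
    deferreds = [rc.smembers('key1') for _ in range(100)]
    results = yield defer.gatherResults(deferreds)
    print "got %d replies" % len(results)
    yield rc.disconnect()
    reactor.stop()

if __name__ == "__main__":
    main()
    reactor.run()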
