Memory usage of webkit never stops growing #41
Hi @bjmb. It is quite possible that there is a memory leak in webkit-server; I don't think many people have used it in a persistent way. If your use case allows it, a simple workaround would be to restart webkit-server every once in a while by killing it and creating a new session... Of course, a better fix would be to track down the leak, but I'm not sure this is something I will be able to do in the next few weeks or so. Niklas
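The restart workaround described above can be sketched generically. This is a minimal sketch, not dryscrape's API: `make_session` is a hypothetical factory (in real use, something like `lambda: dryscrape.Session()`), and dropping all references to the old session is assumed to let its backing webkit_server process be reaped.

```python
# Sketch of the "restart every N pages" workaround (assumption: the session
# object exposes visit(), and replacing it discards the old server process).
def visit_in_batches(make_session, urls, batch_size=30):
    """Visit urls, recreating the session every batch_size visits so the
    backing webkit_server process (and its leaked memory) is thrown away."""
    session = make_session()
    results = []
    for i, url in enumerate(urls):
        if i > 0 and i % batch_size == 0:
            # Fresh process: leaked memory from the previous batch is freed.
            session = make_session()
        results.append(session.visit(url))
    return results
```

Tuning `batch_size` trades restart overhead against peak memory; the 30-iteration figure mentioned later in this thread is a reasonable starting point.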
Thanks for the answer. I will try your advice. Best
Hello, another thanks for this awesome package, and unfortunately another report of this condition. You mentioned killing webkit_server and restarting it as a workaround. I'm new and can't think of a relatively seamless way to do this. Any suggestions? Thanks again!
This appears to be a QtWebKit bug; see ariya/phantomjs#11390 for more info. I've been unable to find a solution with dryscrape, and because of the instability issues I mentioned in #43, I've switched over to using PhantomJS with selenium, where I've solved it with:
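The actual wrapper code was not captured in this thread. A generic sketch of the idea it describes (restart the browser every N page loads) follows; the names are illustrative, and `driver_factory` is an assumption standing in for something like `lambda: webdriver.PhantomJS()` in selenium.

```python
# Generic sketch of a "restart the browser every N page loads" wrapper.
# driver_factory is any callable returning an object with get(url) and quit().
class RestartingDriver:
    def __init__(self, driver_factory, max_gets=50):
        self._factory = driver_factory
        self._max_gets = max_gets
        self._count = 0
        self._driver = driver_factory()

    def get(self, url):
        if self._count >= self._max_gets:
            # Throw away the leaky browser process and start a fresh one.
            self._driver.quit()
            self._driver = self._factory()
            self._count = 0
        self._count += 1
        return self._driver.get(url)
```

The wrapper keeps the calling code unchanged: it still just calls `get(url)`, and the process recycling happens behind that interface.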
The memory leaks are huge! For 70 visited pages, almost 15 GB of virtual memory...
Hi, how can I restart the webkit-server? Thanks!
I'm with that guy from 4 hours ago. Is there a trick to restarting the server? I could never get my OSErrors resolved.
Hello guys, I'm experiencing similar issues here. I'm iterating over approx. 300 URLs, and after 70-80 URLs webkit_server takes up about 3 GB of memory. However, it is not really the memory that is the problem for me; it seems that dryscrape/webkit_server gets slower with each iteration. After those 70-80 iterations dryscrape is so slow that it raises a timeout error (timeout set to 10 sec) and I need to abort.

Restarting the webkit_server (e.g. after every 30 iterations) might help and would free the memory, but I'm unsure whether the memory leaks are really responsible for dryscrape getting slower and slower. Does anyone know how to restart the webkit_server so I could test that?

I have not found an acceptable workaround for this issue, but I also don't want to switch to another solution (selenium/phantomjs, ghost.py), as I simply love dryscrape for its simplicity. Dryscrape works great, by the way, if one is not iterating over too many URLs in one session.
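Since the symptom reported here is a timeout rather than memory pressure alone, one variant of the restart idea is to recreate the session only when a visit actually times out. This is a sketch under assumptions, not dryscrape's API: `make_session` is a hypothetical factory, and `timeout_exc` stands in for whatever exception the real library raises on timeout.

```python
# Sketch: recreate the session when a visit times out, retrying each url once.
def scrape_all(make_session, urls, timeout_exc=Exception):
    """Visit every url; if a visit times out, discard the session so the
    retry (and subsequent urls) get a fresh webkit_server process."""
    session = make_session()
    results = {}
    for url in urls:
        try:
            results[url] = session.visit(url)
        except timeout_exc:
            session = make_session()  # fresh process, leaked memory freed
            try:
                results[url] = session.visit(url)  # one retry per url
            except timeout_exc:
                results[url] = None   # give up on this url, keep going
    return results
```

Compared with restarting on a fixed schedule, this only pays the restart cost when the slowdown has actually become visible, which also makes it a cheap way to test whether the leak and the slowdown are connected.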
It seems like a lot of people could really use the simple workaround of just restarting the server. After my exams for this semester are over I will try to implement this in a way that should be useful to most people with this particular issue. |
Much appreciated, Niklas! |
I see this

```python
def reset(self):
    """ Resets the current web session. """
    self.conn.issue_command("Reset")
```

in the `Client` class in webkit_server.py. Is this the point of focus if one is attempting a workaround fix for the memory leak issue? If so, I guess I could try to play around with implementing it here:

```python
class Driver(webkit_server.Client,
             dryscrape.mixins.WaitMixin,
             dryscrape.mixins.HtmlParsingMixin):
    """ Driver implementation wrapping a ``webkit_server`` driver.

    Keyword arguments are passed through to the underlying
    ``webkit_server.Client`` constructor. By default, `node_factory_class`
    is set to use the dryscrape node implementation. """

    def __init__(self, **kw):
        kw.setdefault('node_factory_class', NodeFactory)
        super(Driver, self).__init__(**kw)
```
@trendsetter37 I think that pretty much only triggers https://github.com/niklasb/webkit-server/blob/master/src/NetworkAccessManager.cpp#L50, which clears request headers and the username + password for HTTP auth. So no, that doesn't really do anything interesting in this context.
Also, that command is already exposed here: https://github.com/niklasb/webkit-server/blob/master/webkit_server.py#L254 and can be called as `Client.reset()`.
Ahh, OK, I see. I will keep poking around. Server.cpp catches my eye, but I am still unsure if that's the right direction. I am willing to take a crack at it... would it be better if I sent an email?
If I could just bump this: it takes ~20 minutes to scrape 30 or so links, and memory usage climbs exponentially, hitting almost 80-90 MB per new link towards the end of those 30. JavaScript execution also slows down immensely as the number of links grows.
@siddarth So why not start Xvfb once manually and run your Python script inside it? What you describe seems to have nothing to do with the rest of this thread.
@niklasb Thanks and agree |
I ended up using PhantomJS and the approach mentioned by 13steinj. P.S. There is a bug in his wrapper: the `get` function is missing `self`. The correct signature is `get(self, url)`.
Whoops, thanks for the catch. It may be important to note, however, that the only thing that wrapper "solves" is the memory usage (and some things have changed since I wrote that; I believe the driver's
There is a bug in Qt up to 5.2.1 where memory grows very fast if auto-loading of images is off: https://bugreports.qt.io/browse/QTBUG-34494. See if this helps.
First of all, thanks for this great package.
The memory usage of the webkit_server process seems to increase with each call to session.visit().
It happens to me using the following script:
I see the memory usage with this command:
Maybe I'm not doing something right?