Bandwidth usage #2746
Good point! With a small tweak, you could probably read the named pipe that GoAccess uses to get the data directly.
Also, take a look at mod_deflate; I think it can handle application/json content types.
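On a Debian-style Apache front end serving the report, that would look roughly like this (an untested sketch; adjust it to however the report is actually served in your setup):

```
# Untested sketch: assumes Apache fronts the GoAccess HTML report on Debian/Ubuntu.
a2enmod deflate
# then, in the relevant vhost or conf snippet:
#   AddOutputFilterByType DEFLATE text/html application/json
systemctl reload apache2
```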
Ok, the bandwidth usage is a little crazy .. is there any way to limit the frequency of updates, so no more than once every 5s? I know the whole idea is "live", but this will chew up my monthly bandwidth allowance in a matter of days ..
It would appear html-refresh sends the first WS packet then stops .. and for some reason my persist and restore seem not to be working; logrotate just ran, and after restarting the goaccess instances I'm seeing blank charts. Very odd .. the cache folders are not populated .. although I have an old cache folder that is. Might have to call it a night, will look again tomorrow.
Mmm.. works when I launch it from the command line, but when I launch goaccess from a python script, persist doesn't write the database file in the cache folder .. no error. Will investigate tomorrow, I guess maybe it's a pty issue. |
Just a quick heads up: if you're piping data or running it from a script, be sure to include
Ok, so I seem to have resolved a number of issues. I'd not appreciated that the cache is only written on a clean exit, and my sub-process shutdown was obviously a little too severe. I'm now sending a SIGINT and that seems to be writing the cache files on exit. However, what happens if the application (or server) has a hard crash? This seems to imply that logging information might be lost if there was a logrotate since it was last restarted? Should I be restarting all goaccess instances following a logrotate (to ensure the cache is updated)? html-refresh now seems to work for me, I was obviously doing something wrong there.
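For reference, the shutdown I've switched to is roughly the following (a sketch only; the path and process matching are illustrative, not my actual setup):

```
# Illustrative sketch: assumes the instance was started with --persist --restore
# and a per-site --db-path. SIGINT lets GoAccess exit cleanly and write the DB.
pkill -INT -f 'goaccess .*--db-path=/var/cache/goaccess/site1'
sleep 2   # give it a moment to flush before anything restarts it
```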
Hi @oddjobz. What do you mean by "cache is only written ..."? Hmm... I see: you want to run in real-time mode and also generate persistent data storage. Well, I process the logs day by day, so I also use scripts to start and stop GoAccess... in fact, I created a systemd service for that. But the price is that the service has to be stopped, and of course you then need to restart it. And yes, the short answer is that a hard crash does risk losing your persistent data. I hope that's clear, and that it helps you.
Ok, so this would be a feature request :-) Please can we have an option for goaccess to flush its persistent storage cache to disk periodically .. say every minute .. so the most data that could be lost would be one minute's worth? In the meantime I'm going to set up an hourly restart :) Just as a matter of interest, this is how I'm using it, embedded within a Vue application ... so it automatically creates and maintains a live stats instance for every site it tracks. Another useful feature would be the ability to manipulate the side-bar a little more easily .. ;-)
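(The stop-gap restart is just a crontab entry along these lines; the script name below is purely illustrative:)

```
# Illustrative stop-gap: restart the GoAccess instances hourly so that at most
# an hour of persisted data can be lost on a hard crash.
0 * * * * /usr/local/bin/restart-goaccess-instances
```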
Hi @oddjobz. Well... flushing the persistent storage to disk every minute is impossible in practice. But why is it impractical? Hmm... what I propose here instead is that you use an offline pass for the persistent storage. Well, I hope that's clear. Again, feel free to make your point.
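Something like this rough sketch, with made-up paths (check `goaccess --help` for the exact option names in your build):

```
# 1) Live dashboard: a real-time instance, no persistence involved.
goaccess /var/log/nginx/site1.log -o /srv/www/site1/report.html --real-time-html

# 2) Persistent storage: a separate offline pass, e.g. after each log rotation,
#    which parses the rotated log into the on-disk DB and then exits.
goaccess /var/log/nginx/site1.log.1 --persist --restore \
  --db-path=/var/lib/goaccess/site1 --process-and-exit
```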
Ok, so the statistics need to be relatively accurate, but losing a small percentage of the information is not an issue. So when you talk about data safety, I think you are missing the point. Here is the operational scenario:
Problems:
What I do to try to mitigate this:

0 0 * * * /usr/local/bin/mms_weblogs --restart

This restarts all my GoAccess instances and forces them onto new logs. It works, and in the event of a crash the current logs would still be available, so little or no information would be lost. So, don't say the issue can't be solved; I've already solved it. What I'm suggesting is that having to do this with cron is a bit of a messy / poor solution, and it would be a lot "cleaner" if GoAccess had this ability itself.
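For what it's worth, the wrapper boils down to something like the following (a simplified sketch, not the actual script; the site names and paths are invented):

```
#!/bin/sh
# Simplified sketch of a restart wrapper: clean-stop each instance so --persist
# writes its DB, force a rotation, then relaunch against the fresh logs.
for site in site1 site2; do
  pkill -INT -f "goaccess .*/${site}.log" || true
done
sleep 2
logrotate -f /etc/logrotate.d/nginx
for site in site1 site2; do
  goaccess "/var/log/nginx/${site}.log" -o "/srv/www/${site}/report.html" \
    --real-time-html --persist --restore --db-path="/var/cache/goaccess/${site}" &
done
```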
@oddjobz Well, as I already said above, if you separate this into two processes, all the problems you quoted will be solved!
Well, I think the opposite. I believe in something closer to the Unix philosophy: do one thing, and do it well. Again, I hope that's clear.
If you're happy with it the way it is, that's great. For me, although it's all there and looks great, it's operationally problematic. While I accept the bandwidth issue is complex and not easy to solve (and something I can probably do myself), simply storing the data in a way that doesn't involve excessive reprocessing or data loss seems to be a fundamental issue.
Hi @oddjobz. Well... for me, a backup exists in order to never be used... but if something does happen, it's great to have one!
No, I didn't say that. You don't need to reprocess 30 days of logs again.
What I propose is offline processing. Well, it is true that a database server can snapshot every few seconds, but it has a dedicated mechanism for that.
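Concretely, something along these lines (a rough sketch with invented paths, assuming daily rotated logs and the --persist/--restore/--process-and-exit options of recent builds):

```
# Nightly: parse yesterday's rotated log into the persistent DB and exit.
# The live real-time instance is left untouched.
15 0 * * * goaccess /var/log/nginx/access.log.1 --persist --restore --db-path=/var/lib/goaccess/db --process-and-exit
```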
So, I've just noticed that the websocket updates seem to be a complete data refresh every couple of seconds .. this is generating ~120 KB per refresh, so maybe 2-3 MBytes per minute .. which works out at something like 150-200 MB per hour, or roughly 4-5 GB per day, per connected client. Which is kinda huge in the context of scaling to many users (which is what I'm looking at)
My current websocket client/server code just transfers deltas, so I was wondering whether there's any scope in the code for outputting to a local file / key-value store rather than a websocket, in order to hook in a more efficient WS mechanism?
(or alternatively, a way of cutting down the data packets ... other than disabling a lot of charts? or maybe compress the data?)
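(For what it's worth, the only knobs I've spotted so far are the ones below; a hedged sketch, so check `goaccess --help` in the installed build before relying on the exact names:)

```
# Slow the pushed updates down to every 5s and drop a couple of heavier panels
# to shrink each payload.
goaccess access.log -o /srv/www/report.html --real-time-html \
  --html-refresh=5 --ignore-panel=REFERRERS --ignore-panel=KEYPHRASES
```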