PostgreSQL questions #11
Well, moving to anything other than 9.3 is going to involve extra work anyway, as 9.3 is what 14.04 ships and it won't get 9.4 even when that comes out.
2b: also prevent flapping. I think since we're volunteers we'd trade some uptime for fewer headaches, rather than seeking the perfect 100%-uptime solution. Is now the time to ask for advice regarding peeling off GPX traces - e.g. using FDW on 9.3+? Or should we save that topic for later?
FDW?
2b: yes, I think even if we had automatic failover, we wouldn't want that to happen more than once (i.e. if A fails over to B, we wouldn't want B to think it could fail back to A without human intervention). However, I'd like to think we should be able to handle one database / site failure without human intervention. Personally, I'm of the opinion that the GPX stuff can be peeled off into a completely different database with no links to the original database. Using an FDW (foreign data wrapper) would mean a rather stronger coupling between them than I was hoping for.
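For reference, a minimal sketch of the kind of postgres_fdw coupling being discussed; the server, host, and role names are hypothetical and the gpx_files columns are abbreviated:

```sh
# Run against the main database; links it to a separate GPX database.
# All names below (gpx_server, gpx-host.example, gpx_db, gpx_reader)
# are made up for illustration.
psql -d openstreetmap <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER gpx_server
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'gpx-host.example', dbname 'gpx_db');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER gpx_server
  OPTIONS (user 'gpx_reader', password 'secret');

-- A foreign table is queryable locally but its rows live remotely;
-- this is the "stronger coupling" being argued against above.
CREATE FOREIGN TABLE gpx_files (
  id      bigint,
  user_id bigint,
  name    text
) SERVER gpx_server OPTIONS (schema_name 'public', table_name 'gpx_files');
SQL
```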
Yes, I was just going to make it a separate connection at the Rails level rather than federate it at the Postgres level.
I've been reading http://blog.pgaddict.com/posts/performance-since-postgresql-7-4-to-9-4-pgbench My takeaway is that moving to newer Postgres releases probably won't buy us much. The newer Postgres versions help with CPU contention if all the data is cached in RAM (the "small" datasets in the blog post). For our dataset the "large" benchmarks are more appropriate (250GB RAM vs 5TB dataset). The limiting factor in this case continues to be IO. If we continue with SAS drives then these will limit the overall performance. I think we can see this in the Katla "vmstat" and "system" graphs, which show that IO is a more limiting factor than CPU usage. Changing to SSDs would be an obvious win but raises questions about cost, capacity & reliability.
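One quick way to sanity-check the IO-bound theory from the database side is the buffer cache hit ratio; the database name below is an assumption:

```sh
# Rough gauge of how often reads are served from shared_buffers vs disk;
# a low hit ratio on a 5TB dataset with 250GB RAM supports the IO story.
# (This only sees shared_buffers, not the OS page cache, so it
# understates total caching.)
psql -d openstreetmap -c "
  SELECT datname,
         round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 1)
           AS cache_hit_pct
  FROM pg_stat_database
  WHERE datname = current_database();"
```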
9.4 offers minor performance gains with GIN indexes and isn't a huge gain over 9.3 for what we have. That being said, if we're upgrading we should probably go to 9.4. apt.postgresql.org packages it the same way the Ubuntu packages do, so it should be relatively easy to use.
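A sketch of enabling that repository on 14.04; the suite name and signing key are as the PGDG wiki documented them at the time:

```sh
# Add the PGDG apt repository for Ubuntu 14.04 (trusty) and install 9.4.
echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" \
  | sudo tee /etc/apt/sources.list.d/pgdg.list
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc \
  | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-9.4
```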
Yes: set up a slave on the new version, replicate from the master to it, fail over to it, then upgrade the master. This has no downtime. pg_upgrade is quicker than dump/reload, but it would probably need an almighty amount of disk space to do so in a way that can be backed out.
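For reference, this is what the standby setup itself looks like on 9.x; hostnames and the replication role are assumptions, and, as comes up just below, the built-in streaming replication only works between the same major version:

```sh
# On the standby: clone the master, then configure it to follow.
# Assumes the master already has wal_level = hot_standby,
# max_wal_senders > 0, and a pg_hba.conf replication entry.
pg_basebackup -h db-master -U replicator \
  -D /var/lib/postgresql/9.3/main -P -X stream

cat > /var/lib/postgresql/9.3/main/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=db-master user=replicator password=secret'
EOF
```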
You can't replicate across different versions, can you?
As long as the slave is at least as new, I think you can. All that being said, I've never run replication myself, as I only have one server.
Well, the built-in replication is streaming the binary logfiles, so if the format of those changes at all between versions I don't see how it can work? Unless new versions maintain the ability to read old-version logs?
I picked a few brains this weekend, and the consensus was that the built-in streaming replication is binary and can't span major versions. So we need an outage to transition to a newer version, but in the future we might not.
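Presumably the "in the future" part means logical replication, which replicates row changes rather than binary WAL and so can span major versions; it became built in with PostgreSQL 10. A minimal sketch, with hostnames and the role assumed:

```sh
# On the old master: publish every table.
psql -h old-master -d openstreetmap \
  -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

# On the new (newer-major-version) server: subscribe. Initial table
# sync plus ongoing changes flow while both servers stay live.
# Assumes the schema has already been loaded on the new server.
psql -h new-server -d openstreetmap \
  -c "CREATE SUBSCRIPTION upgrade_sub
        CONNECTION 'host=old-master dbname=openstreetmap user=replicator'
        PUBLICATION upgrade_pub;"
```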
The question is, what are our chances of doing an in-place upgrade rather than a dump+reload, and if we can do that, how long might it take...
pg_upgrade vs pg_dump+pg_dumpall from the old version and then pg_restore + psql on the new version? I haven't heard recent cases of problems with pg_upgrade, but I'll ask around the next chance I get. The dump/restore route has the advantage of forcing a vacuum full and reindex of everything.
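A sketch of that dump/restore route; paths, database name, and job count are assumptions:

```sh
# From the old cluster: globals (roles, tablespaces) via pg_dumpall,
# the database itself via pg_dump in custom format.
pg_dumpall --globals-only > globals.sql
pg_dump -Fc -f openstreetmap.dump openstreetmap

# Against the new cluster: restore globals with psql, then the data
# in parallel; everything gets rewritten and reindexed in the process.
psql -f globals.sql postgres
createdb openstreetmap
pg_restore -j 4 -d openstreetmap openstreetmap.dump
```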
I've had issues in the past with 3GL functions, though those can usually be worked around. I think Fedora, at least, also makes a copy of the database when using pg_upgrade, but I suspect that may just be their wrapper rather than anything pg_upgrade itself does.
pg_upgrade has a hardlink mode that upgrades the database in place, losing only some metadata/statistics.
Yes, I know (it's not the default on Ubuntu; in fact dump and reload is the default...) and we've already tested it on the (much larger) main database.
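Concretely, hardlink mode is pg_upgrade's --link flag; on Ubuntu the pg_upgradecluster wrapper drives it, with dump-and-reload indeed being that wrapper's default method. Versions, paths, and cluster name below are assumptions:

```sh
# Plain pg_upgrade in hardlink mode: no data copy, near-instant, but the
# old cluster must not be started again once the new one has run.
# Assumes the 9.4 cluster has already been initdb'd.
pg_upgrade \
  --old-bindir=/usr/lib/postgresql/9.3/bin \
  --new-bindir=/usr/lib/postgresql/9.4/bin \
  --old-datadir=/var/lib/postgresql/9.3/main \
  --new-datadir=/var/lib/postgresql/9.4/main \
  --link

# Ubuntu's wrapper: -m upgrade selects pg_upgrade instead of the default
# dump-and-reload (--link may require a newer postgresql-common).
sudo pg_upgradecluster -m upgrade --link 9.3 main
```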
I deleted a spam comment from here.
This 9-year-old ticket doesn't have a clearly defined "done"... But here goes:
We're now on Postgres 14+ ;-) With all this in mind, I am going to close.
List & discuss the topics that we want to know about:
And some questions for us: