Ticket to track Migration of a Root CA #813
Comments
Define "safe". So long as you don't publish the TAL you use during testing, nobody but us will ever know about it, so at worst you might have to Trac comment by sra on 2016-04-25T01:25:31Z |
While I had originally expected to do this sort of transition using Django migrations, this is a big enough jump and likely enough to involve multiple machines that I think there's a simpler solution:
As a refinement, we might run the pickle through some compression program, both for size and, more importantly, for some kind of internal checksum to detect transfer errors while moving the pickle around. Heck, we could use

It turns out that rendering the contents of

While this could be generalized into some kind of back-up-the-entire-CA mechanism, that would be mission creep. At the moment, I'm focused on a specific nasty transition, which includes the raw-MySQL to Django ORM jump, which is enough of a challenge for one script.

Another thing I like about this besides its (relative) simplicity is that one can save the intermediate format. Assuming we can keep the part that generates the pickle simple enough, it should be straightforward to reassure ourselves that it has all the data we intended to save. Given that, we can isolate the more complex problem (unpacking the data into the new database) as a separate task, which we can run repeatedly until we get it right if that's what it takes: so long as the pickle is safe, no data has been lost.

Yes, of course we also tell the user to back up every freaking thing possible in addition to generating the pickle, even though we hope and intend that the pickle contains everything we need.

This scheme does assume that everything in a CA instance will fit in memory. That's not a safe assumption in the general case, but I think it's safe for everything we're likely to care about for this particular transition given state of play to date. There are variants on this scheme we could use if this were a problem, but I don't think it is.

Trac comment by sra on 2016-04-27T13:41:41Z
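For concreteness, here is a minimal sketch of the scheme described above, using Python's pickle and lzma modules. The snapshot layout, table names, and file name are invented for illustration; this is not the actual migration script.

```python
# Minimal sketch of the "ad hoc pickled dictionary" transfer format.
# The snapshot layout and file name are hypothetical.

import lzma
import pickle

def save_snapshot(snapshot, filename="pickled-rpki.xz"):
    # CHECK_SHA256 embeds an integrity check in the .xz container,
    # giving the "internal checksum to detect transfer errors".
    with lzma.open(filename, "wb", check=lzma.CHECK_SHA256, preset=9) as f:
        pickle.dump(snapshot, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_snapshot(filename="pickled-rpki.xz"):
    # Decompression fails loudly (lzma.LZMAError) if the file was damaged in transit.
    with lzma.open(filename, "rb") as f:
        return pickle.load(f)

# One big dictionary holding every table we care about, keyed by database
# and table name, with rows as plain dicts (contents entirely made up here).
snapshot = {
    "rpkid": {"self": [], "parent": [], "child": []},
    "pubd":  {"client": []},
    "irdbd": {"registrant": []},
}
save_snapshot(snapshot)
assert load_snapshot() == snapshot
```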
for transfer check, just sha1 it on both ends. do not care about efficiency. one does not do this daily.

being as tolerant of input issues on the /trunk side may be helpful.

not sure we need backup entire CA, as we can back up machine.

as you i think the largest dataset that would migrate would be jpnic or cnnic.

Trac comment by randy on 2016-04-27T14:15:07Z
Piping through "xz -C sha256" automates this.
Right.
Input side is just data capture, no analysis.
You've been getting that on and off for years, with various causes,
I think we are quibbling about "entire CA". Intent is to capture data
Seems likely.

Trac comment by sra on 2016-04-27T20:48:49Z
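As a small illustration of the two transfer checks being weighed here (hashing the file by hand on both ends versus the check that "xz -C sha256" embeds), a sketch; the file name carries over from the earlier example and is an assumption.

```python
# The two ways to detect transfer corruption being discussed.

import hashlib
import lzma

def sha1_of_file(path, chunk_size=1 << 20):
    # "just sha1 it on both ends": run this on sender and receiver, compare.
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_xz(path):
    # With a SHA-256 check stored in the .xz container, simply reading the
    # stream to the end verifies it; corruption raises lzma.LZMAError.
    with lzma.open(path, "rb") as f:
        while f.read(1 << 20):
            pass

print(sha1_of_file("pickled-rpki.xz"))
verify_xz("pickled-rpki.xz")
```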
In changeset 6395:

Transfer format is an ad hoc Python dictionary, encoded in Python's

Trac comment by sra on 2016-04-27T22:20:20Z
except the mysql server is running. and if i restart mysql-server, the

it is not that i have not tried

    Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> Bye

Trac comment by randy on 2016-04-27T23:09:28Z
I'm considering taking advantage of this pickled migration process to make one schema change which appears to be beyond the capabilities of Django's migration system.

Task::
Users Affected::
Details::
When::

Any objections to making this change, before Randy attempts to move ca0.rpki.net?

Trac comment by sra on 2016-04-29T03:20:30Z
On Fri, Apr 29, 2016 at 03:20:31AM -0000, Trac Ticket System wrote:
Sounds good to me.

Trac comment by melkins on 2016-04-29T16:21:33Z
sure

Trac comment by randy on 2016-04-29T22:49:22Z
uh, any progress?

Trac comment by randy on 2016-05-05T07:13:26Z
One last bug....

Trac comment by sra on 2016-05-05T11:52:16Z
OK, in theory it's ready for an alpha tester.

This is a two stage process: the first stage runs on the machine

In addition to the usual tools, you need two scripts:

You can fetch these using svn if you want to pull the whole source

On the old machine:

On the new machine:

Notes on the long script above:

As to what's really going on here:

Trac comment by sra on 2016-05-06T01:04:34Z
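For a rough conceptual picture of the second (new-machine) stage, here is a sketch assuming the transfer file is the compressed pickle from the earlier example; the ORM model names mentioned in comments are placeholders, not the real schema or the real script.

```python
# Rough sketch of the new-machine stage: read the compressed pickle and feed
# it to the new database. Django ORM calls are only indicated in comments
# because the real model names live in the rpki code, not here.

import lzma
import pickle

def load_snapshot(filename="pickled-rpki.xz"):
    # Decompressing also re-verifies the integrity check embedded by xz.
    with lzma.open(filename, "rb") as f:
        return pickle.load(f)

def unpack(snapshot):
    # In the real script this would presumably run inside one database
    # transaction, so a failed attempt leaves the new database untouched and
    # can simply be retried, e.g.:
    #
    #   with django.db.transaction.atomic():
    #       for row in snapshot["irdbd"]["registrant"]:
    #           models.Registrant.objects.create(**row)   # placeholder names
    #
    # Here we just walk the snapshot and report what it contains.
    for database, tables in sorted(snapshot.items()):
        for table, rows in sorted(tables.items()):
            print("{}.{}: {} rows".format(database, table, len(rows)))

unpack(load_snapshot())
```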
how long is it expected to run? it's been maybe 15 minutes. and mysql-server is running

    Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> quit;

Trac comment by randy on 2016-05-07T06:28:01Z
ignore. it finally finished.

Trac comment by randy on 2016-05-07T06:32:33Z
Out of curiosity, please post size of the xz file.

I don't think I've seen ca-pickle take more than five or ten seconds,

Trac comment by sra on 2016-05-07T06:39:04Z
    ca0.rpki.net:/root# l -h pickled-rpki.xz

Trac comment by randy on 2016-05-07T06:40:00Z
Removed confused instructions which led to #815. That part of the instructions was just plain wrong.

Trac comment by sra on 2016-05-09T05:44:11Z
Added

so that the name of the entity created from the salvaged rootd data

If you already have an entity named "Root", this will fail with a SQL

Trac comment by sra on 2016-05-09T17:55:55Z
Noting something I figured out while writing a report for Sandy:

If we need to take this pickled database hack beyond what will easily fit in memory, one relatively simple way of breaking the problem up into chunks would be to use the Python

Transfer format in this case would be a gdbm database, which we could then ship to another machine in portable format using the

None of this is worth worrying about until and unless we hit a case which needs it, just making note of the technique while I remember it.

Trac comment by sra on 2016-05-10T18:06:40Z
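A minimal sketch of that chunked variant, assuming something like Python's shelve module over a gdbm file (with gdbm's own dump/load tooling presumably providing the portable copy for shipping); the keys, values, and file name below are invented.

```python
# Chunked variant: instead of holding one giant dictionary in memory, write
# each record to an on-disk key/value store as it is produced. shelve pickles
# each value individually, so only one row is in memory at a time.

import shelve

def dump_in_chunks(rows, filename="pickled-rpki.shelf"):
    with shelve.open(filename, flag="n") as db:   # "n": create a fresh store
        for key, row in rows:
            db[key] = row

def read_in_chunks(filename="pickled-rpki.shelf"):
    with shelve.open(filename, flag="r") as db:   # read-only on the far end
        for key in db:
            yield key, db[key]

# Hypothetical usage: keys name the table and primary key of each row.
dump_in_chunks(("rpkid.child.{}".format(i), {"child_handle": "c{}".format(i)})
               for i in range(3))
for key, row in read_in_chunks():
    print(key, row)
```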
is it safe to go through the root creation, pubd credentials, ... and patch in the migrated root key and resultant TAL later and then migrate the resources and users?
Trac ticket #807: component rpkid, priority major, owner None, created by randy on 2016-04-25T01:02:57Z, last modified 2016-05-10T18:06:40Z