Migrate auctions to new database table #3067
Conversation
@@ -802,6 +802,92 @@ impl Persistence {
        ex.commit().await?;
        Ok(())
    }

    pub async fn populate_historic_auctions(&self) -> Result<(), DatabaseError> {
Is it possible to test this beforehand in a unit test or e2e test?
We could even set up a snapshot of the actual DB and run a local DB migration with that to see how it behaves with real data.
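A minimal sketch of what such a test could look like, assuming sqlx's `#[sqlx::test]` fixture against the snapshot DB and a hypothetical `Persistence::new_for_tests` constructor; the table schema is simplified and none of these helper names are confirmed by the PR:

```rust
use sqlx::PgPool;

// Hypothetical round-trip test: seed the legacy table, run the migration,
// and assert that every row arrived in the new table. Table and column
// names are simplified; `Persistence::new_for_tests` is an assumed helper.
#[sqlx::test]
async fn populates_historic_auctions(pool: PgPool) {
    // Seed a handful of legacy competitions (schema simplified for the sketch).
    for id in 0..10_i64 {
        sqlx::query("INSERT INTO solver_competition (id, json) VALUES ($1, '{}')")
            .bind(id)
            .execute(&pool)
            .await
            .unwrap();
    }

    let persistence = Persistence::new_for_tests(pool.clone()).await;
    persistence.populate_historic_auctions().await.unwrap();

    let (migrated,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM auctions")
        .fetch_one(&pool)
        .await
        .unwrap();
    assert_eq!(migrated, 10);
}
```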
        ex = self.postgres.pool.begin().await?;

        // update the current auction id
        current_auction_id = competitions.last().unwrap().id;
What happens if this process gets interrupted in the middle of it?
Another point for a separate script, which could store the counter on disk.
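A sketch of that checkpointing idea; the file name and id type are assumptions, nothing here is from the PR:

```rust
use std::fs;

// Persist the last migrated auction id after every batch so an interrupted
// run can pick up where it left off instead of starting over.
const CHECKPOINT: &str = "auction_migration.checkpoint";

fn load_checkpoint() -> i64 {
    fs::read_to_string(CHECKPOINT)
        .ok()
        .and_then(|s| s.trim().parse().ok())
        .unwrap_or(0) // first run: start from the beginning
}

fn store_checkpoint(id: i64) -> std::io::Result<()> {
    // Write to a temp file and rename, so a crash mid-write cannot leave a
    // corrupt checkpoint behind (rename is atomic on POSIX file systems).
    let tmp = format!("{CHECKPOINT}.tmp");
    fs::write(&tmp, id.to_string())?;
    fs::rename(tmp, CHECKPOINT)
}
```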
@@ -449,6 +449,11 @@ pub async fn run(args: Arguments) {
            .instrument(tracing::info_span!("order_events_cleaner")),
    );

    if args.migrate_auctions {
        let persistence_clone = persistence.clone();
        tokio::spawn(async move { persistence_clone.populate_historic_auctions().await });
When we spawn this I assume we are still populating both tables, so the migration will never end, right? 🤔
Rather than injecting this code into the code base, where it relies on a restart to run, wouldn't it be more convenient to build a separate small CLI tool for that? That way the code can be run and tested independently of the services.
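For illustration, a standalone CLI along those lines might look like the following; clap (with its env feature), the flag names, and `migrate_batch` are all assumptions, not code from this PR:

```rust
use clap::Parser;

/// One-shot migration tool, run manually and independently of the services.
#[derive(Parser)]
struct Args {
    /// Postgres connection string of the target database.
    #[arg(long, env = "DATABASE_URL")]
    db_url: String,
    /// Number of competitions to move per transaction.
    #[arg(long, default_value_t = 1000)]
    batch_size: i64,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let args = Args::parse();
    let pool = sqlx::PgPool::connect(&args.db_url).await?;
    // Keep running batches until nothing is left to migrate.
    while migrate_batch(&pool, args.batch_size).await? > 0 {}
    Ok(())
}

// Stub: the real body would hold the same per-batch logic as
// `populate_historic_auctions` and return the number of rows moved.
async fn migrate_batch(pool: &sqlx::PgPool, batch_size: i64) -> anyhow::Result<u64> {
    let _ = (pool, batch_size);
    Ok(0)
}
```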
            .clone(),
        };

        if let Err(err) = database::auction::save(&mut ex, auction).await {
Is there a reason we only populate one of the new tables? AFAICS we can also populate proposed_solutions and proposed_trade_executions (at least partially).
I think it might be good to migrate as much data as possible if we are considering deleting the old table entirely some time in the future.
We have a separate issue for the proposed solutions migration: #3056
The migration of solver competition data is a bit more complicated, and I plan to do it in a separate step -> smaller PRs
I'll second the idea of a separate script. I'm not sure why we need this in the codebase. I understand it is required only to populate the new table with historical data, so it is essentially a one-shot operation.
Looks like there is a consensus that this is not the right approach. Let me summarize the options:

- Why not a simple Python script? Too time-consuming? ChatGPT should work well for generating something from the existing resources.
- With a separate tool you don't necessarily have to set up everything to have it automatically be spawned by Kubernetes. You could have the tool and still run it manually inside the cluster.
Closing as it will be executed via an external tool.
Description
Fixes #3055
Data migration is controlled by a configuration parameter, so it can be enabled network by network.
Data migration is done in a background task. Separate batched transactions are executed until all data from the solver_competition table has been moved to the new auction table.
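The batching pattern described above could look roughly like the sketch below; the table names, batch size, and function name are assumptions based on the discussion, not the PR's exact code. Each batch commits its own transaction, so a restart loses at most the in-flight batch:

```rust
// Sketch of the batched copy loop from the description above.
async fn populate_in_batches(pool: &sqlx::PgPool) -> anyhow::Result<()> {
    let mut current_auction_id = 0_i64;
    loop {
        let mut ex = pool.begin().await?;
        // Fetch the next batch of legacy competitions after the last
        // migrated id (query simplified for the sketch).
        let ids: Vec<(i64,)> = sqlx::query_as(
            "SELECT id FROM solver_competition WHERE id > $1 ORDER BY id LIMIT 1000",
        )
        .bind(current_auction_id)
        .fetch_all(&mut *ex)
        .await?;
        if ids.is_empty() {
            return Ok(()); // everything has been migrated
        }
        // ... convert each competition and save it into the new auction table ...
        ex.commit().await?;
        // update the current auction id, as in the diff above
        current_auction_id = ids.last().unwrap().0;
    }
}
```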
How to test
Manually on Sepolia staging, then the rest.