Awesome tool. We had a use case where we needed to pull from our customer addresses table and obfuscate the data to test why a query run in our production environment could not be replicated in staging. To do this, I pulled out all of the addresses, and only those, via a dump. ~8 million rows took about an hour to dump, which is fine; it was about 4 GB of data. Restoring the dump, however, is where I ran into issues. I started yesterday around 1pm EST, it's now 10am EST the following day, and there are only about 1.7 million records in the replica DB.
Here is what my config file looks like:
Are there any other destination options that could help speed this up? I took a look through the code base and nothing jumped out at me, but I wanted to ask in case I was missing something about how it batches its requests.
Edit: here is where I was looking:
Replibyte/replibyte/src/config.rs, line 222 (commit 1476dd7)