
Request for Update on Unresolved Database Corruption Issue (#41 and Related) #264

Open
copy2018 opened this issue Nov 16, 2024 · 2 comments

Comments


copy2018 commented Nov 16, 2024

I would like to ask whether there are any updates on the database corruption issue originally reported in #41, and on its relation to #155, which reports the "File size is not a multiple of recordSize" error.

The discussion under #41 highlights:

- A note that a fix was planned for a future release. It was also advised to avoid `--fast-sync`, as it is experimental and resource-intensive, and it was mentioned that the corruption is not limited to the initial sync but also affects already-synced devices.

- A follow-up query in June 2023 asking for updates on both issues (#41 and #155), emphasizing that ungraceful shutdowns force full resyncs, which has been a concern for StartOS.

Given the above context, is there any progress or resolution for this problem? Are there updated best practices for preventing database corruption in the current versions of Fulcrum?
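For anyone triaging the "File size is not a multiple of recordSize" error, a check along these lines may help confirm which data file is truncated. This is only a minimal sketch; the file path and record size are placeholders to be taken from Fulcrum's own error log, since the actual record sizes are internal to Fulcrum's storage format:

```python
# Minimal diagnostic sketch: verify that a fixed-record data file's size
# is an exact multiple of its record size. The path and record size are
# placeholders -- substitute the values reported in Fulcrum's error log.
import os
import sys

def check_record_file(path: str, record_size: int) -> bool:
    """Return True if the file length is a whole number of records."""
    size = os.path.getsize(path)
    records, remainder = divmod(size, record_size)
    if remainder:
        print(f"{path}: {size} bytes is NOT a multiple of {record_size} "
              f"({records} full records, {remainder} trailing bytes)")
        return False
    print(f"{path}: OK ({records} records of {record_size} bytes)")
    return True

if __name__ == "__main__":
    # Usage: python check_records.py <file> <record_size>
    ok = check_record_file(sys.argv[1], int(sys.argv[2]))
    sys.exit(0 if ok else 1)
```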

Thank you!

@JamesKrolak

I'm dealing with this problem during an initial sync as well, on Windows Server 2022. Every time, it gets close to the end and then apparently eats up all available memory on the system, causing bitcoind to crash and the Fulcrum DB to be corrupted. In the latest instance, Bitcoin Core reported that its block index needed to be re-indexed, which took over 24 hours. I had no swap file configured in Windows, and there was speculation in one of the other threads that this happens on systems with no swap enabled. I've given this VM more memory and set up a swap file to see if that prevents it.

@JamesKrolak

One of the two changes, setting up a small swap file or doubling the server's memory, prevented this resource exhaustion issue from happening again for me.
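In case it helps anyone else, below is a rough monitoring sketch you could run alongside the initial sync to see memory pressure building before bitcoind or Fulcrum falls over. It assumes Python with the third-party psutil package; the poll interval and the 90% warning threshold are arbitrary choices, not anything Fulcrum-specific:

```python
# Rough memory/swap monitor to run alongside an initial Fulcrum sync.
# Assumes the third-party psutil package is installed; the poll interval
# and 90% warning threshold are arbitrary choices, not Fulcrum defaults.
import time
import psutil

POLL_SECONDS = 30
WARN_PERCENT = 90.0

def snapshot() -> str:
    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    line = (f"RAM {mem.percent:5.1f}% of {mem.total // 2**20} MiB | "
            f"swap {swap.percent:5.1f}% of {swap.total // 2**20} MiB")
    if mem.percent >= WARN_PERCENT:
        line += "  <-- memory pressure, sync may be at risk"
    return line

if __name__ == "__main__":
    while True:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), snapshot(), flush=True)
        time.sleep(POLL_SECONDS)
```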
