Folks... sharing a little hard-won experience here to hopefully help other Synology PWD users avoid a very time-consuming and hairy situation I found myself in.
TL;DR
Synology Hyperbackup does not support Docker.
Even though Hyperbackup does grab the "Docker" shared folder, it does not capture any of the Docker config or the Docker directory structure.
If you ever have to recover a PWD instance from Hyperbackup, you are toast without the Docker config.
This is because without the Docker configs & files, you have to clean-sheet Docker and install PWD from scratch.
Installing PWD from scratch blows away all the handles to your previous DB data, rendering it unrecoverable.
Synology users should be running a supplemental script to tar up all the relevant Docker directories.
You have to decide whether to (1) do this before running Hyperbackup and put the tar file somewhere Hyperbackup will find it, or (2) do it after Hyperbackup finishes and manually copy the tar file to the backup media.
Explanation
I have been running PWD for probably almost 2 years on a Synology DS1618+ and I have gone back and pulled all the history from my Powerwalls back to their install date in 2019. The DS1618+ was an upgrade from an older Synology box and so my filesystem inherited the old ext4 instead of the newer btrfs that has self-healing. ext4 volumes are limited in their ability to expand. In the course of significantly upgrading my array size I discovered that I couldn't access all the new storage without going btrfs and that meant migrating my entire system by hand. You really don't want to know all the gory details of ext4 to btrfs migration on Synology. It was a complete bitch -- and that was before I discovered I had blown up my PWD databases.
Bottom line is that for the first time I was in a restore situation with my PWD instance.
I had foolishly thought I was covered because I was storing the shared folder that Docker set up, which includes the PWD hierarchy.
I was aware that Docker is not on the list of supported Hyperbackup apps, but I assumed that was OK because I could just reinstall Docker, point it at my PWD shared folder, and be right back up. Notably, Docker does not appear on Hyperbackup's list of applications it can back up.
But what I didn't think through was that doing a clean Docker install also forces a clean PWD install. And a clean PWD install basically means blank versions of all the supporting containers for InfluxDB, Grafana, etc., so when I got the new PWD running, all my history was gone. And that's when I hit the brick wall, because I hadn't saved any of my Docker config files.
After banging my head with multiple failed attempts to hack the new clean-sheet PWD containers to point at my old PWD data directories, the obvious solution was to just go get my original Docker files/directories. But of course they didn't exist in any of my backups, for the reasons stated above! Fortunately, I still had the old HDAs, so I dismounted all the new drives, re-installed the old drives (thereby confusing the Synology OS, which definitely does not like having that done), saved off a copy of my Docker files, and then reinstalled the new, bigger disks and fired the whole new system back up (confusing Synology all over again).
THE LESSON IS: SYNOLOGY USERS MUST MANUALLY BACK UP THEIR DOCKER DIRECTORIES & FILES IF THEY WANT TO BE ABLE TO RESTORE IN THE FUTURE.
Procedure - What to Tar Up
What you need for this to work is to grab the "Docker" subdirectory in each of the "@app" directories, plus the entirety of the "@docker" folder at the root of your volume. It will look something like this:
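For illustration, something like the following (assuming the volume is mounted at /volume1 and trimming the output to the relevant entries; your volume name and package set will differ):
$ ls -d /volume1/@app* /volume1/@docker /volume1/docker
/volume1/@appconf   /volume1/@appdata   /volume1/@apphome   /volume1/@appstore   /volume1/@apptemp   /volume1/@docker   /volume1/docker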
The "@docker" directory is where all your container binaries and data are stored. The 5 "@app" directories are generic Synology folders that contain applications you install on your NAS. Depending on your particular setup, some or most of these will already be supported by Hyperbackup so no need to deal with them. Here are the contents of my "@appconf" folder for example. Of these 7 applications, only AntiVirus, DNSServer and HyperBackup are supported by HyperBackup for automatic backup. Docker, Git, Python3.9 and VPNCenter would not be captured.
And if we look at the "@appconf/Docker" directory, voilà: the keys to the kingdom as far as Powerwall Dashboard is concerned.
The "@appdata/Docker" directory contains a critical symlink:
The "@apphome/Docker" and "@apptemp/Docker" directories are generally empty (but required with correct permissions). And the "@appstore/Docker" folder contains critical binaries and other files for Docker operation.
Therefore, here is the minimum list of what needs to be tar'ed up:
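These are the same paths that appear in the tar command further down:
@appconf/Docker
@appdata/Docker
@apphome/Docker
@appstore/Docker
@apptemp/Docker
@docker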
I have also gotten into the habit of adding a copy of my /[VOLUME_NAME]/docker directory (that includes the Powerwall_Dashboard subfolder). In theory Hyperbackup is getting this already with the shared-folder backups, but having been bitten once, I'd rather have two copies, and disk is cheap. I've gone back and recovered all my historical Powerwall data, so my database goes all the way back to 2019, and I never want to have to repeat that process.
The full command I run (from the root directory of the volume with Docker on it) is as follows, using today's date as an example:
sudo tar cvf 2024-09-25-docker-PWdash-volume2-fullbackup.tar ./@appconf/Docker ./@appdata/Docker ./@apphome/Docker ./@appstore/Docker ./@apptemp/Docker ./@docker ./docker
It has to be run with root privileges (hence the sudo) for obvious reasons. Right now that is generating a tar file of about 9.5GB given all my history.
Procedure - When to Run Tar & Where to Put the Tar File
You have a couple of choices here. The easiest would be to move the shiny new tar file into the shared folder with your PWD instance. In this case Hyperbackup will pick it up without any further work.
I do not recommend this method, however. Hyperbackup is notoriously slow (backing up my 20TB array takes almost 3 full days), so by the time the backup is finished the PWD data is already out of date. Also, having been through this once, if I ever need it again I don't want the extra step of pulling the tar file out of the Hyperbackup image, which is only machine-readable. I prefer having a standalone copy of the tar file on the backup medium.
Therefore, what I do is let the Hyperbackup finish over however many days it takes. Only then do I run the 'tar' command, so I have up-to-the-minute PWD data. For backup media I use an external 20TB HDA plugged into an eSATA dock. When the tar finishes I 'cp' the tar file over to the external SATA disk before I dismount it and rotate in the next backup drive.
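For reference, here is a minimal sketch of that post-Hyperbackup step as a single script (the volume path and the eSATA mount point below are assumptions, not verified paths; adjust them for your setup):
#!/bin/bash
# Minimal sketch: date-stamped tar of the Docker directories, then copy to external media.
# /volume2 and /volumeSATA1/satashare are example paths for illustration only.
set -e
cd /volume2
STAMP=$(date +%F)                        # e.g. 2024-09-25
BACKUP="${STAMP}-docker-PWdash-volume2-fullbackup.tar"
sudo tar cvf "$BACKUP" ./@appconf/Docker ./@appdata/Docker ./@apphome/Docker ./@appstore/Docker ./@apptemp/Docker ./@docker ./docker
cp "$BACKUP" /volumeSATA1/satashare/     # copy to the external eSATA drive before rotating it out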
Good hunting!
Hey @jasonacox - having raised this issue, let me ask you to weigh in on a procedure question. Should PWD be administratively shut down before running the tar to create the backup?
If so, that would add a couple of commands, and the full sequence becomes:
cd /[VOLUME_NAME]
./docker/Powerwall_Dashboard/compose-dash.sh down     # stop the PWD stack so the databases are quiescent
sudo tar cvf [BACKUP_FILE_NAME] ./@appconf/Docker ./@appdata/Docker ./@apphome/Docker ./@appstore/Docker ./@apptemp/Docker ./@docker ./docker
./docker/Powerwall_Dashboard/compose-dash.sh up -d    # bring the stack back up