The Nix config files should be copied to /nixfiles on the target machine, like:
rsync -ahP /nixfiles root@<host>:/
The sops secret files '/keys/age/<hostname>.txt' should be copied to the machine first, like:
rsync -ahP --relative /keys/age/<hostname>.txt root@<host>:/
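Optionally, sanity-check that both trees landed on the target:
ssh root@<host> 'ls -ld /nixfiles /keys/age/*.txt'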
SSH into the target host as root, run the install script to see its help, and then fill in the parameters.
bash /nixfiles/bootstrap/nixos-root-on-zfs/install-linux.sh
Optionally, create the Windows partitions:
bash /nixfiles/bootstrap/nixos-root-on-zfs/install-windows.sh
bash /nixfiles/bootstrap/nixos-root-on-zfs/install-windows.sh --separate_esp
After installing Windows on the 2nd disk, if Windows formats and labels the dummy ESP partition as ESP, relabel the partitions:
fatlabel <2nd_disk_esp_partition> ESP2
fatlabel <1st_disk_esp_partition> ESP
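To see which partition currently carries which FAT label before relabeling:
lsblk -o NAME,SIZE,FSTYPE,LABEL,PARTLABEL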
and consider using efibootmgr to clean up stale boot entries, like:
efibootmgr -b <entry_number> -B
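List the current boot entries first to find the numbers to remove:
efibootmgr -v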
Verify the contents of /mnt. Then unmount everything and export the ZFS pool:
umount -Rl /mnt
zpool export -a
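Before rebooting, confirm nothing is left mounted or imported:
findmnt -R /mnt
zpool list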
Reboot the machine. After the reboot, switch to a TTY and set up home-manager:
nix build --no-link path:/nixfiles#homeConfigurations.${USER}@$(hostname).activationPackage
"$(nix path-info path:/nixfiles#homeConfigurations.${USER}@$(hostname).activationPackage)"/activate
If the machine is a remote headless machine, we always want to deploy it via deploy-rs, so remove the nixfiles (/keys shouldn't be removed because sops decrypts secrets in activation scripts):
rm -rf /nixfiles
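Subsequent deployments then happen from a machine that still has the nixfiles checkout; a rough sketch, assuming the flake defines deploy.nodes.<hostname> for deploy-rs:
nix run github:serokell/deploy-rs -- path:/nixfiles#<hostname>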
For rescuing an existing system from the live USB, mount the datasets:
DISK=</dev/disk/by-path/...>
zpool import -a -f -N -R /mnt
zfs load-key rpool
zfs mount -a
ESP_PART=${DISK}-part1
mkdir -p /mnt/boot
mount -t vfat "$ESP_PART" /mnt/boot
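From there, the system can typically be entered with nixos-enter and, for example, the boot entries rebuilt from inside the chroot (assuming the flake is still present at /nixfiles on the target):
nixos-enter --root /mnt
nixos-rebuild boot --flake path:/nixfiles#<target_hostname>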
In install-linux.sh, before nixos-install, you might want to verify the configuration first:
nix build --experimental-features 'nix-command flakes' "path:/nixfiles#nixosConfigurations.<target_hostname>.config.system.build.toplevel"
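If the exact attribute name is unclear, the available configurations can be listed first:
nix flake show --experimental-features 'nix-command flakes' path:/nixfiles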
The neededForBoot=true in the sed replacement of hardware-configuration.nix is important to allow sops-nix to access the encryption keys in its activation script. (ref)
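One way to double-check the flag took effect, assuming the keys sit on a dedicated /keys filesystem (adjust the attribute path if the sed targets a different mount):
nix eval --experimental-features 'nix-command flakes' 'path:/nixfiles#nixosConfigurations.<target_hostname>.config.fileSystems."/keys".neededForBoot'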
To restore a home dataset from a backup, first destroy the existing dataset on the new machine:
zfs destroy -r rpool/data/home/sinkerine
On the machine that has the backup, run:
sudo zfs send -c <backup@snapshot> | ssh <root@new_machine_host> "zfs recv rpool/data/home/sinkerine"
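For later syncs, an incremental send only transfers what changed since the previous snapshot (snapshot names are placeholders):
sudo zfs send -c -I <backup@previous_snapshot> <backup@latest_snapshot> | ssh <root@new_machine_host> "zfs recv -F rpool/data/home/sinkerine"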
Recreate the datasets that are excluded from backups, as needed:
zfs create -o canmount=on rpool/data/home/sinkerine/.cache
zfs create -o canmount=on rpool/data/home/sinkerine/vmware
chown -R 1000:1000 /mnt/home/sinkerine/.cache /mnt/home/sinkerine/vmware
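Then confirm the restored layout:
zfs list -r -o name,used,canmount,mountpoint rpool/data/home/sinkerine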
zpool import can find more than one pool matching the pool name if there are leftover ZFS labels on a disk with old data. This usually happens when zpool labelclear or wipefs -a was skipped on the existing zpool device before a new partition table was created on it. dd zeros to the device to clear all data:
dd if=/dev/zero of=</dev/disk/by-path/disk> bs=1M status=progress
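Zeroing the whole disk can take a long time; clearing just the ZFS labels and filesystem signatures is usually enough:
zpool labelclear -f <device>
wipefs -a <device>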
nix-env error opening lock file
Solution: rm ~/.nix-profile (ref)
Activating dconfSettings. Error receiving data: Connection reset by peer
Solution: sudo systemctl restart [email protected]