Storage & Backups
Create zpool10 Storage Pool
The host's rpool can be used for templates and whatnot, but I want a big pool for storing all my data.
ashift=12: to use 4k blocks
failmode=continue: to let us keep reading if a drive goes bad
compression=lz4: to save space and increase speed
xattr=sa: to be more efficient for Linux attributes
encryption=aes-256-gcm: to use a fast/secure encryption algorithm
keyformat=passphrase: to unlock with a passphrase
This is the command I used to build my ZFS pool.
# zpool create -o ashift=12 -o failmode=continue -O compression=lz4 -O xattr=sa -O atime=off \
-O encryption=aes-256-gcm -O keyformat=passphrase \
-m /storage/zpool10 zpool10 raidz2 \
/dev/disk/by-id/ata-WDC_WD100EMAZ-00WJTA0_JEKH3DVZ \
/dev/disk/by-id/ata-WDC_WD100EMAZ-00WJTA0_2YK148SD \
/dev/disk/by-id/ata-WDC_WD100EMAZ-00WJTA0_JEKH8RWZ \
/dev/disk/by-id/ata-WDC_WD100EMAZ-00WJTA0_JEK6ESAN \
/dev/disk/by-id/ata-WDC_WD100EMAZ-00WJTA0_JEK53ZHN \
/dev/disk/by-id/ata-WDC_WD100EMAZ-00WJTA0_2YK0HL0D
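Since the pool uses keyformat=passphrase, the key has to be loaded by hand after every reboot before the datasets will mount. A quick sanity check right after creation, plus the unlock steps for after a reboot (roughly):
zpool status zpool10
zfs get compression,encryption,keyformat zpool10
# after a reboot, load the key before mounting
zfs load-key zpool10
zfs mount -a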
Setup ZFS Scrub (Data Integrity)
Automate ZFS scrubbing so the data integrity on the disks is actively monitored, repaired if necessary, and I'm alerted if it can't be repaired.
Create Service/Timer (source)
# /etc/systemd/system/zpool-scrub@.timer
+ [Unit]
+ Description=Scrub ZFS pool weekly
+
+ [Timer]
+ OnCalendar=weekly
+ Persistent=true
+
+ [Install]
+ WantedBy=timers.target
# /etc/systemd/system/zpool-scrub@.service
+ [Unit]
+ Description=Scrub ZFS Pool
+ Requires=zfs.target
+ After=zfs.target
+
+ [Service]
+ Type=oneshot
+ ExecStartPre=-/usr/sbin/zpool scrub -s %I
+ ExecStart=/usr/sbin/zpool scrub %I
Enable ZFS Scrub
systemctl daemon-reload
systemctl enable --now zpool-scrub@rpool.timer
systemctl enable --now zpool-scrub@zpool10.timer
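To confirm the timers were picked up and watch a scrub actually run, something like:
systemctl list-timers 'zpool-scrub@*'
systemctl start zpool-scrub@zpool10.service
zpool status zpool10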
Setup Storage Layout
Set up my dataset layout:
zpool10/backups: local backups of rpool
zpool10/downloads: landing zone for downloads
zpool10/downloads/incomplete: landing zone for bittorrent downloads (recordsize=16k for bittorrent)
zpool10/media: storage for audio/tv/movies
zpool10/proxmox: additional storage for proxmox
zpool10/proxmox/backups: backups for proxmox containers/vms
zpool10/services: storage for services (possibly databases, so use recordsize=16k)
zfs create zpool10/backups
zfs create zpool10/downloads
zfs create -o recordsize=16K zpool10/downloads/incomplete
zfs create zpool10/media
zfs create zpool10/proxmox
zfs create zpool10/proxmox/backups
zfs create -o recordsize=16K zpool10/services
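A quick way to double-check the layout and confirm the recordsize overrides took:
zfs list -o name,recordsize,mountpoint -r zpool10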
Setup Sanoid/Syncoid (Data Backup)
Run Sanoid for automating snapshots and Syncoid for remote backups.
Install (source)
apt-get install debhelper libcapture-tiny-perl libconfig-inifiles-perl pv lzop mbuffer
sudo git clone https://github.com/jimsalterjrs/sanoid.git
cd sanoid
ln -s packages/debian .
dpkg-buildpackage -uc -us
apt install ../sanoid_*_all.deb
Configure Sanoid
I want to take hourly snapshots because sometimes I am not as careful or thoughtful as I should be about what I am doing at any given moment.
# /etc/sanoid/sanoid.conf
+ [rpool]
+ recursive = yes
+ frequently = 0
+ hourly = 36
+ daily = 30
+ monthly = 3
+ yearly = 0
+ autosnap = yes
+ autoprune = yes
+
+ [zpool10/services]
+ recursive = yes
+ frequently = 0
+ hourly = 24
+ daily = 30
+ monthly = 1
+ yearly = 0
+ autosnap = yes
+ autoprune = yes
# /usr/lib/systemd/system/sanoid.service
[Service]
- Environment=TZ=UTC
+ Environment=TZ=EST
Enable Sanoid
systemctl daemon-reload
systemctl enable --now sanoid.service
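After the first hour or so, snapshots should start appearing, and sanoid's monitor mode gives a quick pass/fail on whether the policy is being met:
zfs list -t snapshot -r zpool10/services | tail
sanoid --monitor-snapshots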
Configure Syncoid
Backup rpool to zpool10/backups
Right now rpool is just running on a single non-redundant 512GB SSD disk. Even though it is only used for Proxmox (config, templates, ISOs) this isn't great practice and I'll work on this in the future. But in the meantime I am backing up everything on a daily timer to my main ZFS pool so I could recover very quickly if the SSD dies.
# /etc/systemd/system/rpool-backup.timer
+ [Unit]
+ Description=Backup rpool daily
+
+ [Timer]
+ OnCalendar=daily
+ Persistent=true
+
+ [Install]
+ WantedBy=timers.target
# /etc/systemd/system/rpool-backup.service
+ [Unit]
+ Description=Use syncoid to backup rpool to zpool10/backups/rpool
+ Requires=zfs.target
+ After=zfs.target
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/sbin/syncoid --recursive rpool zpool10/backups/rpool
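Before trusting the timer, it's worth kicking off one run by hand and making sure the replicated datasets land under zpool10/backups/rpool:
systemctl start rpool-backup.service
zfs list -r zpool10/backups/rpool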
Backup zpool10/services offsite
All my docker containers store their configuration and data under the zpool10/services dataset. It is imperative this is backed up offsite so if anything catastrophic ever happens I don't lose anything important and can get back up and running as quickly as I can download my backup.
# /etc/systemd/system/zpool10-services-backup.timer
+ [Unit]
+ Description=Backup zpool10/services daily
+
+ [Timer]
+ OnCalendar=daily
+ Persistent=true
+
+ [Install]
+ WantedBy=timers.target
# /etc/systemd/system/zpool10-services-backup.service
+ [Unit]
+ Description=Use syncoid to backup zpool10/services to backedup.swigg.net:bpool/zpool10/services
+ Requires=zfs.target
+ After=zfs.target
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/sbin/syncoid --recursive zpool10/services root@backedup.swigg.net:bpool/zpool10/services
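Syncoid replicates to the remote host over SSH, so root on this box needs non-interactive (key-based) access to backedup.swigg.net for the unit to run unattended. Roughly:
ssh-keygen -t ed25519
ssh-copy-id root@backedup.swigg.net
# verify a passwordless login works and the target pool is visible
ssh root@backedup.swigg.net zpool list bpool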
Enable Syncoid
systemctl daemon-reload
systemctl enable --now rpool-backup.timer
systemctl enable --now zpool10-services-backup.timer
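And a final check that both backup timers are scheduled:
systemctl list-timers '*-backup.timer'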