In preparation for the arrival of my HBA, I’m creating a backup of my server. As things currently stand:
Filesystem      Size  Used Avail Use% Mounted on
udev             79G     0   79G   0% /dev
tmpfs            16G   86M   16G   1% /run
rpool/root      6.9T  1.4T  5.5T  21% /
tmpfs            79G   28K   79G   1% /dev/shm
tmpfs           5.0M  8.0K  5.0M   1% /run/lock
tmpfs            79G     0   79G   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs            16G     0   16G   0% /run/user/1000
I’m using 1.4T. That’s less than the formatted capacity of a 2T hard drive. That’ll definitely fit!
Except it doesn’t. Left the rsync running overnight, got to work today, and the drive was full at approximately 1.8T.
Why? Because apparently ZFS compression is doing its job.
That answered a question I’d had about disk usage measurement with ZFS compression enabled: du (and df) report, surprise, how much space is used on the disk after compression, not how much data you actually have. In my case:
root@tnewman0:~# zfs get all rpool | grep compressratio
rpool  compressratio     1.17x  -
rpool  refcompressratio  1.00x  -
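(For the curious: ZFS can also report the pre-compression size directly via the logicalused property, which saves doing the multiplication below by hand. A minimal sketch, using my rpool as the example dataset:)

# Physical space consumed, logical (uncompressed) size, and the achieved ratio
zfs get used,logicalused,compressratio rpool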
1.17 x 1498796032 kilobytes is 1753591357 kilobytes, or roughly 1.8T. Tight fit. I probably could have done a bit of slimming down and squeezed it in, but where’s the fun in that?
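(If you’d rather not trust my arithmetic, here’s a quick sketch of the same sum. The assumption is that the used figure comes from df in 1K blocks on /:)

# Used 1K blocks on / (rpool/root)
df -k --output=used /
# Scale by the 1.17x compressratio using integer maths; result is in kilobytes
echo $(( 1498796032 * 117 / 100 ))   # -> 1753591357 KB, i.e. ~1.8T of actual data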
My solution:
root@tnewman0:~# zpool status
  pool: backup
 state: ONLINE
  scan: none requested
config:

	NAME                              STATE     READ WRITE CKSUM
	backup                            ONLINE       0     0     0
	  wwn-0x5000cca22de70c5e-part1    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 2h15m with 0 errors on Mon Jan 9 22:03:44 2017
config:

	NAME                              STATE     READ WRITE CKSUM
	rpool                             ONLINE       0     0     0
	  mirror-0                        ONLINE       0     0     0
	    scsi-22f6baa1200d00000-part1  ONLINE       0     0     0
	    scsi-22f4b9a2e00d00000-part1  ONLINE       0     0     0
	  mirror-1                        ONLINE       0     0     0
	    scsi-22f4be2f000d00000-part1  ONLINE       0     0     0
	    scsi-22f5b32bc00d00000-part1  ONLINE       0     0     0
	  mirror-2                        ONLINE       0     0     0
	    scsi-22f5b92a900d00000-part1  ONLINE       0     0     0
	    scsi-22f5bc2a900d00000-part1  ONLINE       0     0     0
	  mirror-3                        ONLINE       0     0     0
	    scsi-22f6b1ee800d00000-part1  ONLINE       0     0     0
	    scsi-22f6b5eb900d00000-part1  ONLINE       0     0     0
	logs
	  mirror-4                        ONLINE       0     0     0
	    scsi-22f7b0a1900d00000        ONLINE       0     0     0
	    scsi-22f7b4a0d00d00000        ONLINE       0     0     0
	cache
	  scsi-22f7bda1b00d00000          ONLINE       0     0     0
	spares
	  scsi-22f4b4ac400d00000-part1    AVAIL

errors: No known data errors
Make a compression-enabled pool on the external drive!
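(I didn’t keep the exact command, but creating a single-disk pool with compression looks something along these lines. The lz4 choice and the by-id path are assumptions; substitute your own external disk’s path:)

# Create a single-disk pool called "backup" with compression inherited by every
# dataset in it (-O sets the property on the pool's root dataset at creation).
# lz4 is assumed here; "compression=on" would pick the platform default instead.
zpool create -O compression=lz4 backup /dev/disk/by-id/wwn-0x5000cca22de70c5e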
Aaaand now we wait for rsync to do its business…
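(For completeness, this is the sort of invocation I mean; the exact flags, excludes, and the /backup mountpoint are assumptions rather than a record of what I actually ran:)

# Archive-style copy of the root filesystem onto the backup pool's mountpoint,
# preserving hard links, ACLs and xattrs, and skipping pseudo-filesystems.
rsync -aHAX --info=progress2 \
  --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/backup/*"} \
  / /backup/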
UPDATE: Interesting difference in I/O wait between destination filesystems. When copying from the ZFS pool to EXT4, the average I/O wait was ~13.14%; from ZFS pool to ZFS pool it was ~6.58%.
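(If you want to watch this on your own box, iostat from the sysstat package shows %iowait over time; this is just a generic way to observe it, not necessarily how the figures above were taken:)

# Print the CPU utilisation report (including %iowait) every 5 seconds
iostat -c 5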