I just attached a new volume to my VPS. Usually I follow the provider's instructions using parted and mkfs.ext4, but this time I decided to try ZFS.
The guides I've found online all differ quite a bit, and I'm not sure I did everything correctly enough to know the data will be safe.
What I mean is, running lsblk -o name,size,fstype,type,mountpoint shows this:
NAME     SIZE FSTYPE TYPE MOUNTPOINT
vdb      100G        disk
└─vdb1   100G ext4   part /mnt/storage
vdc      100G        disk
├─vdc1   100G        part
└─vdc9     8M        part
You can see the fstype and mountpoint of the previous (ext4) volume are listed, but the ZFS ones aren't.
Still, I can access the ZFS pool I created just fine, and I've already copied some test data onto it.
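For anyone comparing outputs: as far as I can tell, whether lsblk fills in the FSTYPE column for ZFS depends on the libblkid version, and since datasets don't map one-to-one to block devices the MOUNTPOINT column stays empty anyway. These generic checks (nothing specific to my setup) can cross-check it:

blkid /dev/vdc1
# typically reports TYPE="zfs_member" for a healthy pool label
zfs mount
# lists all currently mounted ZFS datasets and their mountpoints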
root@vps:~/services# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local-zfs  99.5G  6.88G  92.6G        -         -     0%     6%  1.00x    ONLINE  -
root@vps:~/services# zfs list
NAME        USED  AVAIL     REFER  MOUNTPOINT
local-zfs  6.88G  89.5G     6.88G  /mnt/zfs
The commands I ran were these:
parted -s /dev/vdc mklabel gpt
parted -s /dev/vdc unit mib mkpart primary 0% 100%
zpool create -o ashift=12 -O canmount=on -O atime=off -O recordsize=8k -O compression=lz4 -O mountpoint=/mnt/zfs local-zfs /dev/vdc
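As I understand it, zpool create given a whole disk like /dev/vdc writes its own GPT label (a large data partition plus an 8M reserved one), which is where the vdc1/vdc9 layout in lsblk above comes from, so the parted steps were most likely overwritten. The settings themselves can be verified after the fact, using only the property names from the command above:

zpool get ashift local-zfs
zfs get compression,atime,recordsize,mountpoint,canmount local-zfs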
Does this look good?
Should I do something else, like writing something to fstab?
The list of properties is very long; is there any one in particular you'd recommend looking into for a simple server that currently stores non-critical data?
(I already have a separate backup solution, which I may update later.)
I wouldn't set recordsize that small on a modern system; I left mine at 1M on a box that stores anything from small config files to full movies. Also set dedup=off (I can't remember exactly why now, but I believe it was bad to have it on when your drive is nearly full?). Another property worth looking at is xattr: I set it to 'sa' on all my systems because a lot of programs use extended attributes now. All of my pools are set up as raidz2 or mirror, but ZFS is an awesome filesystem for protecting the integrity of your files either way.
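If you want to adjust these on your existing pool, it's just zfs set; the usual caveat (which I'm fairly sure still applies) is that recordsize and xattr only affect newly written data:

zfs set recordsize=1M local-zfs
zfs set xattr=sa local-zfs
zfs set dedup=off local-zfs   # already the default unless you enabled it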
Otherwise it looks pretty good? Keep an eye on “zpool status” for any bad drives in your pool(s), and hopefully your system added a cron job to scrub your pools once a month.
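On Debian/Ubuntu I believe the zfsutils package ships that scrub job; something like this should show whether you have one (paths and unit names may differ by distro):

cat /etc/cron.d/zfsutils-linux   # Debian/Ubuntu monthly scrub job
systemctl list-timers '*zfs*'    # distros that use systemd timers instead
zpool scrub local-zfs            # or just run one by hand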
Dedup is apparently very resource intensive and not worth the load unless you're expecting a 5x+ reduction in space usage.
It probably depends, though.
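If you want numbers for your own data before deciding, zdb can simulate dedup without actually turning it on (it can take a while and a fair bit of RAM on big pools):

zdb -S local-zfs
# builds a simulated dedup table and prints a block histogram
# plus an estimated dedup ratio at the end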
Oh, for some reason I thought the reason to avoid it was that it limited or broke some of the built-in sanity checking that maintains data integrity. I do remember reading in a number of places that it should absolutely be turned off at all times, but it's been a while since I set up a new ZFS pool, so I was just going off my notes of the settings I use (and confirmed I still use them on all my current pools).