What happens if I have to change the underlying hardware behind the
zfs pool? Like the mobo / processor, what happens if that dies on me
in a year or two; can I port my zfs pool somehow?
A ZFS pool is not hardware dependent. Just make sure your HBA (Host Bus Adapter) isn’t doing something like encrypting your data at the hardware level. ZFS works best with an HBA like an LSI 9211-8i or an IBM M1015 cross-flashed to the 9211-8i firmware, not a full-blown “hardware” RAID card.
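Moving a pool to new hardware is just an export followed by an import. A rough sketch, assuming a pool named tank (hypothetical name; these commands need root and an installed ZFS):

```shell
# On the old machine (if it still boots), cleanly export the pool:
zpool export tank

# Move the disks to the new machine, then scan for importable pools:
zpool import

# Import by name; -d points ZFS at stable device names if needed:
zpool import -d /dev/disk/by-id tank

# If the old machine died before the pool could be exported,
# -f forces the import of a pool that was never released:
# zpool import -f tank
```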
I’ve got quite the set of different sized drives, and i’m trying to
get the most storage space out of it with redundancy. What is the best
setup for this config, and how much space will I be losing by using
these different size drives. I am not creating this for any speed
requirements, I just want a file server for multiple HTPCs. My
currently available drives for this are: 1x 500GB ‘Hybrid’ Drive 1x
1TB Drive 1x 3TB Drive 1x 4TB Drive (will be added to the pool later,
currently holding all the data from the drives listed above)
If I were you I would sell the smaller drives and put the money towards larger drives of all the same size. It will make your life a lot easier. Also, you cannot just add drives to a ZFS pool. There are constraints. Read here.
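To illustrate the constraint (a sketch with hypothetical pool and device names): you can add a whole new vdev to a pool, but you cannot grow an existing RAIDZ vdev by adding a single disk to it.

```shell
# Works: adds a new top-level vdev, striped with the existing one:
zpool add tank mirror sde sdf

# Does NOT do what you might hope: there is no way to turn a
# 3-disk raidz1 vdev into a 4-disk one by adding one drive.
# The command below would instead stripe sdg into the pool as a
# single-disk vdev with no redundancy at all:
# zpool add tank sdg
```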
Will adding the 4TB drive to the pool later be a problem of any kind?
Possibly. I am in a similar position. At some time in the future I will have to increase my storage capacity. At that time I plan on purchasing a second HBA and a new array of larger drives. I will then transfer all the data from my existing drives to the new drives, then sell the old ones. There may be other (cheaper) ways around this, but doing it this way:
- Keeps all of my drives the same size
- Only has the additional cost of an extra HBA, which isn’t a bad thing to have laying around anyhow
- Does not require me to replace my drives one at a time, resilvering after each replacement.
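For reference, the one-at-a-time route that last point avoids looks roughly like this (pool and device names are hypothetical):

```shell
# Let the vdev grow automatically once every member has been
# replaced with a larger drive:
zpool set autoexpand=on tank

# Replace each old drive with a larger one, waiting for the
# resilver to finish before starting the next replacement:
zpool replace tank old_disk1 new_disk1
zpool status tank          # watch until resilvering completes
zpool replace tank old_disk2 new_disk2
# ...repeat for every drive in the vdev
```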
Any recommendations on a Linux OS to run this all on, and should I use
a separate drive for the OS? I’m familiar with Ubuntu, RHEL, and
OpenSUSE / SLES.
Don’t use Linux; it does not have native ZFS support. Linux support for ZFS comes from the ZFS on Linux and zfs-fuse projects. The current state of ZFS is in flux as Oracle tries its best to ruin it. ZFS will likely branch at version 28 in the very near future, so don’t make your ZFS pool with any version greater than 28 unless you are 100% certain you want to stick with an Oracle solution. Currently FreeBSD and its spinoffs support ZFS version 28.
Since you are a self-proclaimed ZFS noob I would recommend FreeNAS. I have been using it for a while now and I’m pretty happy with it. It will definitely allow the most straightforward setup for you.
Make sure you choose the correct level of parity for your particular use case. Specifically, make sure you plan around UREs (unrecoverable read errors). Basically, you don’t want to use RAID 5 (RAIDZ1) if you are using anything larger than 2 TB drives. There are some other factors that may prompt you to increase your level of parity as well. Here is a good article on the subject.
It has been 1.5 years since I posted this answer, and in that time I have been giving ZFS on Linux (Ubuntu Server specifically) another chance. It has come a long way since I first tried it and I’m pretty happy so far. My reason for switching was the installation restrictions on FreeNAS and the jailing system. I wanted to use my server for more than just a NAS server, and FreeNAS makes that hard. The jailing system is good and very secure, but I didn’t really need that level of security in my home and I didn’t want to deal with logging into a jail every time I wanted to unzip a file. I think FreeNAS is still a good choice if you are just getting started with ZFS (because of the web interface) or if you just want a NAS appliance (i.e. no other server functionality needed).
1: there is no problem changing anything. The pool should be importable regardless of the CPU, mainboard or anything similar.
2: ZFS works best with devices of the same size. Moreover, since you want redundancy, devices larger than the smallest one will have their extra capacity wasted. Finally, you cannot add a device (e.g. the 4 TB disk) to a RAIDZ. If you only want metadata redundancy (which I doubt), you can create a stripe with all of your disks and add the 4 TB disk to the pool later.
Alternatively, you could first create a 500 GB pool containing a mirror with disk 1 and disk 2 and keep disk 3 for later, then add a second mirror when you have the 4 TB disk available with disk 3 and disk 4 making a 3.5 TB pool.
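The capacity math behind that layout, as a quick sanity check (a mirror’s usable size is that of its smallest member):

```shell
# Drive sizes in GB, from the question:
d1=500; d2=1000; d3=3000; d4=4000

# mirror(d1,d2) + mirror(d3,d4):
m1=$(( d1 < d2 ? d1 : d2 ))   # 500 GB usable from the first mirror
m2=$(( d3 < d4 ? d3 : d4 ))   # 3000 GB usable from the second mirror
echo "$(( m1 + m2 )) GB"      # 3500 GB, i.e. the 3.5 TB pool above
```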
3: yes, see #2
4: No recommendation.
If you want to create a RAID with ZFS using different disk sizes, you need to use “zpool create -f (name of your pool) raidz1 sdb sdc sdd”. The -f argument forces ZFS to accept disks of different sizes, for example a 500 GB, a 1 TB, and a 250 GB drive.
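Note what -f costs you: in a raidz1 vdev every member is treated as if it were the size of the smallest disk, so usable space is roughly (n − 1) × smallest. A quick check with the sizes from the example above:

```shell
# Disk sizes in GB: 500, 1000, 250 (the example above)
n=3; smallest=250
echo "$(( (n - 1) * smallest )) GB usable"   # 500 GB out of 1750 GB raw
```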
Turns out you can’t create with ashift=12 in zfs-fuse:
# zpool create -n -o ashift=12 test /dev/disk/by-id/scsi-SATA_......
property 'ashift' is not a valid pool property
But it works with the one from github zfs+spl 0.6.5.x:
# dd if=/dev/zero of=/tmp/testfile bs=1M count=64
# zpool create -o ashift=12 test /tmp/testfile
# dd if=/dev/zero of=/tmp/testfile8 bs=1M count=64
# zpool create -o version=8 test8 /tmp/testfile8
ubiquibacon’s answer covers all your direct questions, but I thought I’d chime in with some “first-hand experience”.
ZFS on FreeBSD is my primary area of experience, although most ZFS implementations are similar enough that the resources should be analogous. I chose FreeBSD for my installation because it gives me a general-purpose operating system that I can use for whatever devious purposes I choose, as opposed to a friendlier but special-purpose solution such as FreeNAS.

ZFS, configured correctly, can be a great system. ZFS configured incorrectly can be a total pain. It is a relatively new filesystem and is not as well understood as older, more established filesystems (like UFS2 on FreeBSD or ext2/3/4 on Linux). The mailing lists are fairly active, and it’s probably worth your while to at least scan them to understand what will be expected of you should you run into any problems. The people on them are generally friendly and very helpful as long as you are willing to help figure out what’s going on.

In exchange for this general “newness”, you get neat features like compression, which can be turned on in many circumstances with little lost, and dangerous features like deduplication, which can require a lot of resources, be impossible to turn off without copying all your data off, and can make your computer unbootable (as mine did, one happy day).
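As an example of the low-cost feature mentioned above, compression is enabled per dataset. A sketch with hypothetical pool and dataset names (lz4 availability depends on your ZFS version; older pools may only offer lzjb or gzip):

```shell
# Turn on lz4 compression for a dataset; only data written from
# now on is compressed, existing data is left untouched:
zfs set compression=lz4 tank/media

# Check how much you are actually saving:
zfs get compressratio tank/media
```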
It’s a great filesystem if any of the benefits outweigh the tradeoffs. I’ve been happy with mine overall.
An ad-hoc solution to this is to partition the disks into sets of equal-sized partitions and then create multiple pools, each built from one set of equal-sized partitions.
There may be some performance issues due to having multiple pools on the same physical disks, but you do get to use most of the space on your disks.
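A sketch of that approach with the drives from the question (device names are hypothetical, and parted syntax varies by platform): slice every disk into 500 GB partitions, then build each pool from one partition per physical disk so that losing a drive costs at most one member of each pool.

```shell
# Split the 1 TB disk into two 500 GB GPT partitions:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart zfs0 0% 50%
parted -s /dev/sdb mkpart zfs1 50% 100%
# ...do the same, with more partitions, on the 3 TB and 4 TB disks

# One pool per "layer" of partitions, never two partitions from
# the same physical disk in the same vdev:
zpool create pool0 raidz1 sda1 sdb1 sdc1 sdd1
zpool create pool1 raidz1 sdb2 sdc2 sdd2
```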