PatrickVogeli Wrote: Yeah, and with Unraid you don't care about RAID at all... what happens if your motherboard fries and you have to change it? Probably the RAID implementation of the new board won't be compatible, and maybe you can't rebuild the array.
Also, I've had some bad experiences with BIOS RAID... like once I swapped motherboards (same model), forgot to switch the BIOS back to RAID mode, booted, then activated it and well... bad stuff happened.
I don't know if/how this could affect FreeNAS / ZFS. Any ideas?
I have read the whole thread and it seems like the same concerns regarding Unraid and ZFS reappear quite often. So I will try to explain how they both work from what I gathered through this thread. I have to point out that my knowledge regarding ZFS nearly exclusively stems from this thread. I will edit the information if somebody points out flaws.
Similarities of how Unraid and ZFS function
1. Unraid and ZFS both work via SOFTWARE. So with either of them, you don't have to worry about losing all your data if you change your motherboard.
2. Both protect your data via parity drives. You can lose as many drives as you have parity drives without losing data.
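To make the parity idea concrete, here is a simplified sketch of how a single parity drive lets you rebuild one failed drive. This is illustrative only (the drive contents are made up, and real implementations work on whole disks with checksums and metadata on top), but the core XOR trick is the same:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

# Three "data drives" holding one small block each (made-up contents)
data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]

# The parity block is the XOR of all data blocks
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild it from parity + the surviving drives
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]  # the lost data comes back
```

Because XOR only carries one drive's worth of redundancy, a second simultaneous failure cannot be reconstructed this way; schemes with more parity drives (like RAIDZ2/RAIDZ3) use additional, more involved codes.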
Differences in how they protect data and hard drives are added
1. Unraid
a) Unraid only offers one parity drive.
b) Data is not spread among several drives. As a consequence, you can access the data directly from the drive without Unraid. Additionally, if you access a file, only one drive has to be spinning. As a negative side effect, the read/write speed is worse than with ZFS.
c) If you lose more than one non-parity drive, you lose the data on these drives. However, you can still access the remaining data (see b).
d) As long as the parity drive is as big as your biggest non-parity drive, you can add any hard drive you want, and your storage increases by the whole amount of storage offered by the new hard drive. So expanding storage is pretty inexpensive. The data stored on the new drive is protected by the parity drive already in place.
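The capacity rule in d) can be sketched in a few lines (the function name and the sizes-in-TB convention are mine, not anything from Unraid itself):

```python
def unraid_usable_tb(parity_tb, data_drives_tb):
    """Usable capacity under Unraid's model: every data drive contributes
    its full size, provided the parity drive is at least as large as the
    biggest data drive. Illustrative only; sizes in TB."""
    if parity_tb < max(data_drives_tb):
        raise ValueError("parity drive must be >= largest data drive")
    return sum(data_drives_tb)

# Mixed drive sizes all contribute fully:
print(unraid_usable_tb(4, [4, 3, 2, 1]))  # -> 10
```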
2. ZFS
a) ZFS arrays (the combined storage of several hard drives) are called vdevs.
b) One vdev offers up to 3 parity drives (RAIDZ3= 3 parity drives, RAIDZ2= 2 parity drives, etc.).
c) You store your data in pools, which you can assign any name. The data of one pool can be spread among several vdevs.
d) Data is spread among several drives. You cannot take out a single drive and read from it. If you read/write data, all drives of the corresponding vdev have to be spinning. Faster than Unraid when it comes to read/write of files.
e) If more drives fail in a vdev than you have parity (e.g., more than 3 drives in a RAIDZ3 vdev), all the data stored in that vdev will be lost.
f) If you create a vdev with drives of different sizes, each drive only contributes as much capacity as the smallest drive in the vdev.
Example: If you create a vdev with one 500 GB and three 2 TB drives, you will only be able to use 500 GB per drive.
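The smallest-drive rule can be checked with a small sketch (the function name is mine; this ignores ZFS metadata/padding overhead, and I picked RAIDZ1 for the example since the original doesn't specify a parity level):

```python
def raidz_usable_tb(parity, drives_tb):
    """Approximate usable capacity of a RAIDZ vdev: every drive is
    truncated to the smallest member, and `parity` drives' worth of
    space goes to parity. Illustrative only; sizes in TB."""
    return (len(drives_tb) - parity) * min(drives_tb)

# One 500 GB drive caps three 2 TB drives, RAIDZ1:
print(raidz_usable_tb(1, [0.5, 2, 2, 2]))  # -> 1.5
```

So instead of roughly 4.5 TB of usable space from the three 2 TB drives plus parity, the mismatched 500 GB member drags the whole vdev down to 1.5 TB.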
g) Adding hard drives: You cannot add hard drives to an existing vdev. Instead, you create a new vdev and add it to your pool.
Example: You already have a vdev with RAIDZ3 and six 2 TB hard drives. This means you have 6 TB of storage and another 6 TB for parity. Now you buy 4 new 2 TB drives which you would like to add to your storage. For this, you have to create a new vdev. The question is then whether you still want to use RAIDZ3 for the new vdev, as that would mean you would only get 2 TB of new storage since 6 TB would be used for parity. Consequently, you either settle for less parity protection or buy more drives. For the example, let us assume you create a RAIDZ2 vdev. So you now have a new vdev with 4 TB of storage and 4 TB of parity. In total you now have 5 disks used for storage and 5 disks used for parity. However, you do not have to lose more than 5 disks to lose data: if you lose more than 2 drives on the RAIDZ2 vdev, you will lose all of its data. So while there is additional safety in this setup, it is not as safe as 5 parity drives protecting all the data. Additionally, every time you add a new vdev, you have to pay extra for new parity disks and lose SATA ports to them. This makes expanding storage more expensive relative to Unraid.
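The arithmetic in the example above can be verified with a small sketch (the function name and the per-vdev tuples are my own convention; ZFS overhead is ignored):

```python
def pool_usable_tb(vdevs):
    """Pool capacity = sum of vdev capacities; each vdev is given as
    (parity_drive_count, [drive sizes in TB]). Illustrative only."""
    return sum((len(drives) - parity) * min(drives)
               for parity, drives in vdevs)

pool = [
    (3, [2] * 6),  # original RAIDZ3 vdev: 3 data drives -> 6 TB usable
    (2, [2] * 4),  # added RAIDZ2 vdev:    2 data drives -> 4 TB usable
]
print(pool_usable_tb(pool))  # -> 10
```

The pool grows by adding vdevs, but as the example explains, each vdev only survives as many failures as its own parity level allows.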
Data Protection
1. ZFS is superior regarding data protection when no drives fail because of additional features that prevent bit rot and other issues. This is not debatable.
2. DRIVE FAILURE
Well, this is a heated debate throughout this thread and really depends on what the end-user is more afraid of. For an illustrative example, let us assume a data-security-paranoid individual (there seem to be quite a few around here) who can choose between a RAIDZ3 vdev from ZFS (3 parity drives) and Unraid. Let us further assume that money is of no issue and no other means of backup are used, as they are available for both systems. Then the question boils down to what you are more afraid of:
a) The unlikely event of losing the data of 2 drives in the case of 2 non-parity drives failing with Unraid. You would not lose any data with your RAIDZ3 ZFS vdev in this scenario.
b) The even less likely event of losing the data of 3 drives in the case of 3 non-parity drives failing with Unraid. You would not lose any data with your RAIDZ3 ZFS vdev in this scenario.
c) The very unlikely event of losing all the data of your RAIDZ3 ZFS vdev if 4 drives fail. You would "only" lose the data of the 4 failed non-parity drives in the case of Unraid.
The risks are seen relatively. I do not know how probable it is that 2, 3, or 4 drives fail at the same time. But I think we can all agree that it is more likely for 2 drives to fail at the same time than 4. Furthermore, some people claim that the likelihood of multiple drive failures increases if you buy multiple drives from the same manufacturer and batch, as you often do with ZFS. However, all these additional risk considerations are more of a feeling thing than anything else. So far I have not seen any statistical tables exactly quantifying, for example, the threat of bit rot or multiple drive failures from a bad batch. If anybody reading this is looking for a master's thesis topic, data security of media NAS seems interesting.