Kodi Community Forum
FreeNAS versus unRAID as the operating-system for a DIY NAS? - Printable Version

+- Kodi Community Forum (https://forum.kodi.tv)
+-- Forum: Discussions (https://forum.kodi.tv/forumdisplay.php?fid=222)
+--- Forum: Hardware (https://forum.kodi.tv/forumdisplay.php?fid=112)
+--- Thread: FreeNAS versus unRAID as the operating-system for a DIY NAS? (/showthread.php?tid=82811)



- darkscout - 2010-10-16

gadgetman Wrote:Pros/Cons

RAID-Z3 is also available, allowing up to 3 disk failures (see the example further down).

All drives do NOT have to be online to access the data.

Code:
pfexec zpool offline tank c6t0d0p0
pfexec zpool offline tank c5t2d0p0

pfexec zpool status tank
pool: tank
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
scrub: scrub completed after 8h23m with 0 errors on Mon Oct 11 05:20:58 2010
config:

        NAME          STATE     READ WRITE CKSUM
        tank          DEGRADED     0     0     0
          raidz2-0    DEGRADED     0     0     0
            c6t0d0p0  OFFLINE      0     0     0
            c5t2d0p0  OFFLINE      0     0     0
            c5t3d0p0  ONLINE       0     0     0
            c5t4d0p0  ONLINE       0     0     0
            c5t5d0    ONLINE       0     0     0

ls /tank/
Movies Music Software TV Pictures
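# and to bring the offlined drives back into the pool afterwards:
pfexec zpool online tank c6t0d0p0
pfexec zpool online tank c5t2d0p0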

+ Also available on Nexenta, NexentaStor, GNU/kFreeBSD, etc.

You do not need a ton of spare drives: two at minimum (a mirror vdev).

Code:
zpool add tank mirror disk1 disk2
zpool add tank raidz2 disk1 disk2 disk3 disk4 disk5
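If you were building a pool from scratch, triple parity is just another vdev type; a rough sketch with placeholder disk names:

Code:
pfexec zpool create tank raidz3 disk1 disk2 disk3 disk4 disk5
pfexec zpool status tank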


http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide


- poofyhairguy - 2010-10-16

WHS

+Available on Pre-Built Systems
+Very easy to use
+Provides automatic backup of the Windows systems on the network to the WHS box
+Has a large plugin community and easy plugin development
+You can add any size or type of drive you want and it will use the maximum space on those drives
+Easy remote access
-Uses duplication for protection, not parity (almost a double minus)
-Costs money and is closed software
-Prebuilt systems usually have a low (4 or less) drive count
-Moderate performance when compared to a striped RAID solution


- froggit - 2010-10-16

poofyhairguy Wrote:ZFS
-Far more robust than hardware RAID solutions while delivering equivalent performance
-Includes real time protection against "bit rot."
-Allows you to put together many arrays (called vdevs) into a single storage pool. This allows you to customize how much redundancy you have and optimize a solution for your situation
-Can have up to three parity drives
-Faster than Unraid on writes and reads, sometimes by a large amount
-Array can be moved to any system that supports ZFS, and therefore is not OS dependent
-Designed for corporate use, so has some real money behind it
-Yet can be used by freely available OSes like FreeBSD
-Downsides include: data is striped, so data on a drive can't be read on other computers; you must use the same size drives or you waste any space larger than the smallest drive in the vdev; and when data is accessed from a vdev, all the drives in that vdev spin up because of the striped data
-Best Uses: Servers with some important non-media data. Media servers with many (4+) clients, anything commercial, places where you would usually use RAID 5/6 (basically ZFS makes those RAID levels obsolete).

Unraid
-Allows you to mix and match drives of different sizes and makes into a single array of pooled storage
-Allows you to pull the drives of the array out and read the data on them on another computer
-Allows drives to spin down if they are not being accessed, saving power and possibly prolonging their life
-Unraid allows for the growing of the array in size by replacing one drive at a time with full use of that drive after addition and no data loss
-Downsides include: Unraid costs money for real versions; Unraid's write speeds are pretty low without a cache drive; Unraid's read speeds are slightly lower than the drives by themselves; Unraid has no protection against "bit rot"; Unraid relies on its own OS, based on the primitive Slackware Linux; and Unraid currently only allows for one parity drive
-Best Uses: A media server that is grown periodically as storage is needed, adding the cheapest disk available one at a time.


How about that?


Sounds pretty good poofyhairguy. After reading it, my suggested additions/changes are below - what do you think?:

ZFS:
-Can have up to three parity drives per vdev, and n-drive mirrors (see the example sketch below this list)
-Yet can be used by freely available OSes like FreeBSD, OpenIndiana/Illumos, Linux* (* http://www.osnews.com/story/23416/Native_ZFS_Port_for_Linux)

-Downsides include:
* with RAID-Z1, RAID-Z2 and RAID-Z3 (but not mirrors) data is striped, so data on a drive can't be read on other computers that don't have ZFS read capability
* you must use the same size drives within a vdev or you waste any space larger than the smallest drive in the vdev
* and when data is accessed from a vdev, all the drives in that vdev spin up because of the striping or mirroring (*1)

-Best Uses: Servers with some important non-media data. Media servers benefit from ZFS too, with faster access for multiple HTPCs; anything commercial; places where you would usually use RAID 5/6 (basically ZFS makes those RAID levels obsolete).

*1: but the drives are already spinning unless power management is used to spin down the drives. Spinning drives are also a pro, as they give faster access because you don't need to wait for drive(s) to spin up.
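To make the vdev/pool idea concrete, here is a rough sketch (disk names are just placeholders) of one pool built from a triple-parity vdev plus a 3-way mirror vdev:

Code:
pfexec zpool create tank raidz3 disk1 disk2 disk3 disk4 disk5
# -f is needed because the new vdev's redundancy level differs from the existing raidz3 vdev
pfexec zpool add -f tank mirror disk6 disk7 disk8
pfexec zpool status tank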

----

unRAID:
-Allows drives to spin down if they are not being accessed, saving power and possibly prolonging their life (but what about spin-up/down wear and tear on drive components?)
Downsides include: when a non-parity drive dies you will probably lose all data on that drive, unless you are able to recover it using tools like (name them here or point to recovery URL perhaps?), or you have backups. In case of data loss you will need to use backups if you have them, or re-rip your media from the original media.


- froggit - 2010-10-16

gadgetman Wrote:@froggit: not to belittle any potential problem, but correct me if I'm wrong here: the probability of bitrot happening is far lower than that of a drive failure within the same amount of time.

Not to mention, bitrot should not happen on read-only access on drives, which is like 90% of the use of media content servers.

I don't remember the probability of bit rot occurring within a given time-frame, but I seem to remember that it's not that rare; in fact, it's reasonably common.

If I dig out something informative, I'll post it here. In the meantime, this might be interesting:
http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data
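For what it's worth, this is exactly what a periodic scrub is meant to catch: ZFS checksums every block, and a scrub reads everything back, repairs anything that fails its checksum from redundancy, and reports it per device. Something along these lines:

Code:
pfexec zpool scrub tank
pfexec zpool status tank   # silent corruption found (and repaired) shows up in the CKSUM column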


- froggit - 2010-10-16

harryzimm Wrote:Nice comparisons guys. It's good to know that some people can see the positives in options other than what they have chosen for their own setup. Stop taking this so personally (froggit). I think the amount of research you have put in has clouded your judgment. Take a step back once in a while. You don't need to prove your setup works for you, we believe you.

cheers

It was the research I did that led me to consider using ZFS: it was right for me because I did *almost* lose a lot of irreplaceable data, and I said 'never again'.

But I accept that other people will use whatever they want, and of course that is fine. I just wanted to throw ZFS into the discussion to show that it is another strong contender, and let others decide for themselves, armed with some facts about ZFS etc. Try not to get so uptight if someone else shows another solution backed up by some data and facts. No flames please. As you said, let's move on... it seems we are now moving on to creating a useful wiki of the pros and cons of the various solutions suggested in this thread, and I think that is a very useful and practical outcome.


- froggit - 2010-10-16

darkscout Wrote:Why does everyone keep saying this? It's not true. True, with ANY RAID (even unRAID) you're going to get better performance with matched drives. If you have mismatched drives, you either give up security or space. I have mismatched drives right now for my Xen virtual disks.




You seem to be confused. You can't expand a vdev. You can certainly expand pools. You should read up a bit more on how they all relate. If you have 14TB of data and want to expand the system, all you need to do is slap more drives into a new vdev and then add that vdev to the pool.

And giggity. I just re-read the Wiki. ZFS is in GNU/kFreeBSD. So that's yet another option.

Not to nit-pick, but you can 'expand' a vdev in capacity (though not in number of devices) by simply replacing the drives in that vdev with larger ones, one at a time, and letting each one resilver.
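Something like this (device names are placeholders), repeated for each drive in the vdev:

Code:
# swap one drive for a larger one and let the resilver finish before doing the next
pfexec zpool replace tank c5t3d0p0 c7t0d0p0
pfexec zpool status tank    # shows resilver progress
# or grow the pool itself by adding a whole new vdev
pfexec zpool add tank raidz2 c8t0d0p0 c8t1d0p0 c8t2d0p0 c8t3d0p0 c8t4d0p0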


- poofyhairguy - 2010-10-16

[quote=froggit]Sounds pretty good poofyhairguy. After reading it, my suggested additions/changes are below - what do you think?[/quote]

Looks good overall.

[quote]
-Yet can be used by freely available OSes like FreeBSD, OpenIndiana/Illumos, Linux* (* http://www.osnews.com/story/23416/Native_ZFS_Port_for_Linux)[/quote]

I think mentioning the open source nature of ZFS is important.

[quote]
Spinning drives is also a pro as it gives faster access because you don't need to wait for drive(s) to spin up.[/quote]

Very true, good point. My wife still hasn't gotten completely used to the extra three seconds it takes to spin up a drive when she clicks on something in XBMC.

[quote](but what about spin-up/down wear and tear on drive components?)[/quote]

Again good point, two sides of the same coin I guess.

[quote]
Downside include: when a non-parity drive dies you will probably lose all data on that drive[/quote]

That needs to be changed to "when two non-parity drives die at once you will lose the data on those drives" but otherwise looks great!


- harryzimm - 2010-10-16

froggit Wrote:It was the research I did that led me to consider using ZFS: it was right for me because I did *almost* lose a lot of irreplaceable data, and I said 'never again'.

But I accept that other people will use whatever they want, and of course that is fine. I just wanted to throw ZFS into the discussion to show that it is another strong contender, and let others decide for themselves, armed with some facts about ZFS etc. Try not to get so uptight if someone else shows another solution backed up by some data and facts. No flames please. As you said, let's move on... it seems we are now moving on to creating a useful wiki of the pros and cons of the various solutions suggested in this thread, and I think that is a very useful and practical outcome.

I agree, let's make this thread as useful as we can. Sorry for picking on you Smile; hopefully the wiki and XBMC users will benefit from this discussion.

cheers


- maxinc - 2010-10-16

froggit Wrote:unRAID:
-Allows drives to spin down if they are not being accessed, saving power and possibly prolonging their life (but what about spin-up/down wear and tear on drive components?)
Downsides include: when a non-parity drive dies you will probably lose all data on that drive, unless you are able to recover it using tools like (name them here or point to recovery URL perhaps?), or you have backups. In case of data loss you will need to use backups if you have them, or re-rip your media from the original media.

I think you must be confusing unRAID with something else because none of that applies to unRAID. What would be the purpose of a parity drive if not for offering redundancy in case one of the non-parity drives decides to die one day?


- froggit - 2010-10-16

harryzimm Wrote:I agree, Lets make this thread as useful as we can. Sorry for picking on you Smile, hopefully the wiki and xbmc users will benefit from this discussion.

cheers

No probs Wink

And yes, I think the info in this thread will benefit others. And we can put it into the wiki too.


- froggit - 2010-10-16

poofyhairguy Wrote:[quote=froggit]Sounds pretty good poofyhairguy. After reading it, my suggested additions/changes are below - what do you think?[/quote]

Looks good overall.



I think mentioning the open source nature of ZFS is important.



Very true, good point. My wife still hasn't gotten completely used to the extra three seconds it takes to spin up a drive when she clicks on something in XBMC.



Again good point, two sides of the same coin I guess.



That needs to be changed to "when two non-parity drives die at once you will lose the data on those drives" but otherwise looks great!

Wow, you have quick drives!

What happens if one non-parity drive dies?


- froggit - 2010-10-16

maxinc Wrote:I think you must be confusing unRAID with something else because none of that applies to unRAID. What would be the purpose of a parity drive if not for offering redundancy in case one of the non-parity drives decides to die one day?

Yes, that's quite possible. I thought that's what an unRAID user posted here, but I have asked poofyhairguy what happens if one drive dies, so I expect he'll let me know.


- maxinc - 2010-10-16

froggit Wrote:What happens if one non-parity drive dies?

What would you expect to happen?


- Flomaster - 2010-10-16

If one drive fails, that's when parity steps in to save your data off that disk. If another drive fails before you can replace the failed drive and rebuild the data, you end up loosing data on both of those failed drives. Your other 1-17 drives remain intact with all data in place.
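Roughly speaking, the parity drive just stores the XOR of all the data drives, so any single missing drive can be rebuilt from the parity plus the surviving drives. A toy example with made-up byte values:

Code:
# parity byte for three data bytes, then rebuilding the byte from a "dead" second drive
printf 'parity:  %x\n' $(( 0xA5 ^ 0x3C ^ 0xF0 ))   # 69
printf 'rebuilt: %x\n' $(( 0x69 ^ 0xA5 ^ 0xF0 ))   # 3c, the missing byte back again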

-=Jason=-


- darkscout - 2010-10-16

Never at any time have I loosened any of my data.
I right a tight setup.