FreeNAS versus unRAID as the operating-system for a DIY NAS? - Printable Version

Kodi Community Forum (https://forum.kodi.tv) > Discussions > Hardware
Thread: FreeNAS versus unRAID as the operating-system for a DIY NAS? (/showthread.php?tid=82811)
- darkscout - 2010-10-16

gadgetman Wrote:Pros/Cons

RAID-Z3 is also available, allowing up to 3 disk failures. All drives do NOT have to be online to access the data.

Code:
pfexec zpool offline tank c6t0d0p0

ZFS is also available on Nexenta, NexentaStor, GNU/kFreeBSD, etc. You do not need a ton of spare drives: 2 at minimum.

Code:
zpool add tank mirror disk1 disk2
zpool add tank raidz2 disk1 disk2 disk3 disk4 disk5

http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide

- poofyhairguy - 2010-10-16

WHS
+Available on pre-built systems
+Very easy to use
+Provides automatic backup of the Windows systems on the network with the WHS box
+Has a large plugin community and easy plugin development
+You can add any size or type of drive you want and it will use the maximum space on those drives
+Easy remote access
-Uses duplication for protection, not parity (almost a double minus)
-Costs money and is closed software
-Pre-built systems usually have a low (4 or fewer) drive count
-Moderate performance when compared to a striped RAID solution

- froggit - 2010-10-16

poofyhairguy Wrote:ZFS

Sounds pretty good poofyhairguy. After reading it, my suggested additions/changes are below - what do you think?

ZFS:
-Can have up to three parity drives per vdev, and n-drive mirrors
-Yet can be used by freely available OSes like FreeBSD, OpenIndiana/Illumos, Linux* (* http://www.osnews.com/story/23416/Native_ZFS_Port_for_Linux)
-Downsides include:
 * with RAID-Z1, RAID-Z2 and RAID-Z3 (but not mirrors), data is striped, so data on a drive can't be read on other computers that don't have ZFS read capability
 * you must use the same size drives within a vdev, or you waste any space larger than the smallest drive in the vdev
 * when data is accessed from a vdev, all the drives in that vdev spin up because of the striped data or mirror (*1)
-Best Uses: Servers with some important non-media data.
Mediaservers benefit from ZFS too, with faster access for multiple HTPCs, anything commercial, and places where you would usually use RAID 5/6 (basically ZFS makes those RAID levels obsolete).

*1: but the drives are already spinning unless power management is used to spin the drives down. Spinning drives are also a pro, as they give faster access because you don't need to wait for drive(s) to spin up.

----

unRAID:
-Allows drives to spin down if they are not being accessed, saving power and possibly prolonging their life (but what about spin-up/down wear and tear on drive components?)
-Downsides include: when a non-parity drive dies you will probably lose all data on that drive, unless you are able to recover it using tools like (name them here or point to a recovery URL perhaps?), or you have backups. In case of data loss you will need to use backups if you have them, or re-rip your media from the original media.

- froggit - 2010-10-16

gadgetman Wrote:@froggit: not to belittle any potential problem, but correct me if I'm wrong here: the possibility of bitrot happening is far less than a drive failure, within the same amount of time.

I don't remember the probability of bit rot occurring in a certain time-frame, but I seem to remember that it's not that rare - in fact, reasonably common. If I dig out something informative, I'll post it here. In the meantime, this might be interesting: http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data

- froggit - 2010-10-16

harryzimm Wrote:Nice comparisons guys. It's good to know that some people can see the positives in other than what they have chosen for their own setup. Stop taking this so personally (froggit). I think the amount of research you have put in has clouded your judgment. Take a step back once in a while. You don't need to prove your setup works for you, we believe you.
It was the research I did that led me to consider using ZFS: it was right for me because I did *almost* lose a lot of irreplaceable data, and I said 'never again'. But I accept other people will use whatever they want, and of course that is fine; I just wanted to throw ZFS into the discussion to show that it is another strong contender, and let others decide for themselves, armed with some facts about ZFS etc. Try not to get so uptight if someone else shows some other solution backed up by some data and facts. No flames please. As you said, let's move on... it seems we are now moving on to creating a useful wiki of pros and cons of the various solutions suggested in this thread, and I think that is a very useful and practical outcome.

- froggit - 2010-10-16

darkscout Wrote:Why does everyone keep saying this? It's not true. True, with ANY RAID (even unRAID) you're going to get better performance with matched drives. If you have mismatched drives, you either give up security or space. I have mismatched drives right now for my Xen virtual disks.

Not to nit-pick, but you can 'expand' a vdev in capacity, though not in number of devices, by simply replacing the drives in a vdev with larger ones and resilvering (scrub).

- poofyhairguy - 2010-10-16

[quote=froggit]Sounds pretty good poofyhairguy. After reading it, my suggested additions/changes are below - what do you think?[/quote]

Looks good overall.

[quote]-Yet can be used by freely available OSes like FreeBSD, OpenIndiana/Illumos, Linux* (* http://www.osnews.com/story/23416/Native_ZFS_Port_for_Linux)[/quote]

I think mentioning the open source nature of ZFS is important.

[quote]Spinning drives is also a pro as it gives faster access because you don't need to wait for drive(s) to spin up.[/quote]

Very true, good point. My wife still hasn't gotten completely used to the extra three seconds it takes to spin up a drive when she clicks on something in XBMC.
[quote](but what about spin-up/down wear and tear on drive components?)[/quote]

Again a good point - two sides of the same coin, I guess.

[quote]Downsides include: when a non-parity drive dies you will probably lose all data on that drive[/quote]

That needs to be changed to "when two non-parity drives die at once you will lose the data on those drives", but otherwise it looks great!

- harryzimm - 2010-10-16

froggit Wrote:It was the research I did that led me to consider using ZFS: it was right for me because I did *almost* lose a lot of irreplaceable data, and I said 'never again'.

I agree, let's make this thread as useful as we can. Sorry for picking on you; hopefully the wiki and XBMC users will benefit from this discussion. Cheers

- maxinc - 2010-10-16

froggit Wrote:unRAID:

I think you must be confusing unRAID with something else, because none of that applies to unRAID. What would be the purpose of a parity drive if not to offer redundancy in case one of the non-parity drives decides to die one day?

- froggit - 2010-10-16

harryzimm Wrote:I agree, let's make this thread as useful as we can. Sorry for picking on you; hopefully the wiki and XBMC users will benefit from this discussion.

No probs. And yes, I think the info in this thread will benefit others. And we can put it into the wiki too.

- froggit - 2010-10-16

poofyhairguy Wrote:[quote=froggit]Sounds pretty good poofyhairguy. After reading it, my suggested additions/changes are below - what do you think?[/quote]

Wow, you have quick drives! What happens if one non-parity drive dies?

- froggit - 2010-10-16

maxinc Wrote:I think you must be confusing unRAID with something else, because none of that applies to unRAID. What would be the purpose of a parity drive if not to offer redundancy in case one of the non-parity drives decides to die one day?

Yes, that's quite possible.
I thought that's what an unRAID user posted here, but I have asked poofyhairguy what happens if one drive dies, so I expect he'll let me know.

- maxinc - 2010-10-16

froggit Wrote:What happens if one non-parity drive dies?

What would you expect to happen?

- Flomaster - 2010-10-16

If one drive fails, that's when parity steps in to save your data off that disk. If another drive fails before you can replace the failed drive and rebuild its data, you end up losing the data on both of those failed drives. Your other 1-17 drives remain intact with all data in place.

-=Jason=-

- darkscout - 2010-10-16

Never at any time have I loosened any of my data. I right a tight setup.
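Flomaster's point - one parity drive can rebuild any single failed data drive, but a second simultaneous failure loses data - comes down to XOR parity. The sketch below is purely illustrative (the drive contents and function names are made up, and this is not unRAID's actual on-disk implementation), but it shows why one parity block is enough to reconstruct exactly one missing block:

```python
# Illustrative sketch of single-parity (XOR) protection, the general idea
# behind an unRAID-style parity drive. Hypothetical data, not real on-disk format.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data "drives", each holding one block of (made-up) data.
drives = [b"AAAA", b"BBBB", b"CCCC"]

# The parity drive stores the XOR of all data drives.
parity = xor_blocks(drives)

# Simulate losing drive 1: XOR the survivors with parity to rebuild it,
# because A ^ C ^ (A ^ B ^ C) == B.
surviving = [drives[0], drives[2]]
rebuilt = xor_blocks(surviving + [parity])

assert rebuilt == drives[1]  # the lost drive's data is fully recovered
```

If two data drives die at once, the XOR equation has two unknowns and one parity value, so neither drive can be recovered - which is exactly the failure mode discussed above.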