FreeNAS versus unRAID as the operating-system for a DIY NAS?
darkscout Wrote:It was a case to illustrate a point.

What if you have 3 people watching different things that happen to be on the same hard drive? Something at the beginning of the 1st platter, something in the middle, and something at the end of the 3rd platter?

Technically it should work given that SATA is 3Gb/s. But given you have 3 different clients trying to access 3 different points on the same hard drive at the same time...

I did that too. In fact, I did that first, before I tried playing the same thing. At the time the Unraid test box only had two data HDs; I put HD movies on one and HD TV on the other to test.

Unraid was able to deliver three 10-14GB mkv movies from the same drive at the same time. As I said, maybe with really high-bitrate stuff you couldn't get away with it, but I know at least 2 Blu-ray-level streams from the same drive work in Unraid, because I tried that more recently on my Pro box.

In my experience, when it comes to reading data, Unraid can outdo my cheap gigabit network. I would have to put some real money into my network to support enough clients to be able to outdo my Unraid box.

On writes, though, it's a different story, and a good reason for a ZFS box.
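(A rough back-of-envelope check, assuming roughly 40 Mb/s per high-bitrate Blu-ray-class stream; that per-stream figure is an assumption for illustration, not something measured above:

3 streams x ~40 Mb/s = ~120 Mb/s, or roughly 15 MB/s

A single modern SATA drive sustains on the order of 80-100+ MB/s sequentially, so raw throughput isn't the limit; the real cost of three simultaneous readers on one drive is the seeking between file positions, which drive caching and read-ahead largely absorb at these rates.)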

Reply
froggit Wrote:I can't say for all RAID systems, but with ZFS you can choose your preferred level of redundancy. For example, if you're happy with the capacity of one disk for parity data then you could choose a simple 2-drive mirror, or RAID-Z1. If you want more safety then you can choose double-parity like RAID-Z2. If you want triple-parity then you can choose RAID-Z3, which allows three drives to fail before you lose any data.

So depending on your wallet / paranoia level, you can choose a level of data safety that suits your own situation.
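(For readers following along, the redundancy levels froggit describes map onto zpool vdev types roughly as below. The disk names are placeholders and this is only a sketch of the syntax, not a recommended layout:

# zpool create tank mirror disk1 disk2
# zpool create tank raidz1 disk1 disk2 disk3 disk4
# zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5
# zpool create tank raidz3 disk1 disk2 disk3 disk4 disk5 disk6

Mirror and raidz1 survive one failed drive per vdev, raidz2 survives two, raidz3 survives three.)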

So if I choose a single parity drive and two drives fail, what's left? What data-recovery tools do I have available for the failed drives that I pulled? How many drives' worth of data did I have to sacrifice in your example? Must all of those drives be spinning for me to watch a movie?

froggit Wrote:Why not use enterprise-level data safety mechanisms that are *free* and available within ZFS? What happens with unRAID if your parity drive dies, or one or two of your data drives dies? With ZFS, using RAID-Z2, if *any* one or two drives die you lose no data, and you can rebuild any dead drives so *no* data is lost.

If my parity drive fails I lose no data. If I lose TWO drives I lose two drives' worth of data, but can use standard repair tools or services to recover them. If I lose a parity drive and a data drive I lose one data drive's worth of data. You don't understand how unRAID works, do you? It shows.


froggit Wrote:USB sticks are not generally suitable for booting OSes from, as the OS tends to write to the device frequently (logs etc.), and this will reduce the life of the USB stick. They are also fairly slow to boot from.

My USB stick isn't written to during the normal course of operation, and I reboot my server maybe once every couple of weeks to months; it boots in about 2 minutes. I lose ZERO storage space to my OS, which boots from a USB stick. I take it you're telling me that's not the case for you?

froggit Wrote:Solaris has corporate support from Oracle.


Solaris X86 is corporately supported still?

froggit Wrote:Also, for ZFS newbies there is a great forum available on the OpenSolaris.org site which is frequented by very knowledgeable & helpful ZFS users - see here, including Oracle staffers: http://opensolaris.org/jive/forum.jspa?forumID=80

Umm, unRAID is supported by folks too, including the guy who wrote the software. Mind you, the forum is dedicated mostly to folks using the software to run a NAS and not a general OS.

froggit Wrote:ZFS is simple - here I will create a 6-drive array which can survive any two drives failing before any data is lost:
# zpool create tank raidz2 drive1 drive2 drive3 drive4 drive5 drive6

How much data space are you losing for that level of redundancy? Must all drives be spinning if I watch a movie? ZFS spans drives, right?
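(A rough answer to the space question, assuming six equal-sized drives as in froggit's example: raidz2 keeps two drives' worth of parity, so usable capacity is roughly 4/6 of the raw total, minus some filesystem overhead. What a pool actually reports can be checked with:

# zpool list tank
# zfs list tank

zpool list shows the raw pool size; zfs list shows the space usable after parity and overhead. As for spinning: a raidz vdev stripes every block across its member drives, so a read generally touches all the drives in that vdev.)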

froggit Wrote:<snip>
Voila, what's hard about that?

Now, for one of the killer features of ZFS, that virtually no other RAID system has -- the ability to find and repair all files that have become corrupt due to 'bit rot':
# zpool scrub tank

This last command will read the contents of *all* files within your storage system and compare each block with a 256-bit block checksum. Any blocks not matching the checksum will be recovered by using the parity data stored when the file was originally created. This feature is priceless and gives you peace of mind that your data is 100% correct as it should be. Can unRAID or FreeNAS do that?

I guess my question is: do I need it to do that? Yeah, I CAN run a parity check and I CAN get data errors corrected. Mine runs monthly, as I recall. How much space are you dedicating to storing that checksum?
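(Two small factual notes here. The ZFS checksums don't get a dedicated disk or partition; each block pointer carries a 256-bit checksum for the block it points to, so the overhead is a few tens of bytes per block, a tiny fraction of the data for large blocks. And the outcome of a scrub can be inspected afterwards with:

# zpool status -v tank

which shows scrub progress and lists any files with unrecoverable errors.)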

Look, my needs are pretty simple. I store mostly video and I want to use as much disk space as possible while using as little electricity as possible. I want the array to be fast enough to stream my data and I want it to be able to suffer the inevitable drive failure. For the fool who calls those of us using this system "fanboys" for defending it, it's pretty apparent you don't understand that this system meets all of our needs and has in my case for something like 5 YEARS now. No
Openelec Gotham, MCE remote(s), Intel i3 NUC, DVDs fed from unRAID cataloged by DVD Profiler. HD-DVD encoded with Handbrake to x.264. Yamaha receiver(s)
Reply
froggit Wrote:Not if the data is in cache. ZFS uses available free memory to store files read.

However, it will need to spin the drives if data is not in cache.

Ah, so all drives need to spin; an answer at last. My movies are far larger than the 2 gigs of memory in my systems.

froggit Wrote:However, for energy reduction, you may use the OS power management features to spin-down drives after a set time period like 5 minutes, and a lot of the 'green' drives auto-park heads for reduced power usage anyway.

And then they all spin back up when I read a file, because the system spans files across multiple disks, right? You realize that's not the case with unRAID, right?


froggit Wrote:You are very generous. I am not willing to lose any data. And the price of 2 drives to give peace of mind helps me sleep really well. And I have a backup of everything.

No, it is you who are being very generous, using multiple disks to provide parity. Why stop at two drives? Why not throw away three or four?


froggit Wrote:Read my previous reply about ZFS to get an idea of how easy it is to use and setup, and the data safety mechanisms built in. You can look at parity data a bit like insurance: have no parity data and you lose data when drive(s) die, have parity data and your loss is zero to low, depending on how much parity data you have and how many drives die. It's not rocket science. Once this is understood well, one can plan the level of safety required.

I DO have parity data. I am willing to use one disk to store parity for all of my disks. So if I have 15 data disks I am using ONE parity disk. If you do this with ZFS and lose two disks, what's left? If you are not left with 14 perfectly good, ready-to-go disks then I'm sorry, I see that as a disadvantage. Losing an entire array because I've surpassed the amount of parity disk space I was willing to sacrifice is not in my best interest....
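(To make the trade-off concrete, take a hypothetical 16-bay box with equal-sized drives; the layout is an assumption for illustration only:

unRAID, 15 data + 1 parity: 15 drives usable; 1 failure loses nothing; 2 data drives failing lose those 2 drives' contents, while the other 13 stay readable as normal filesystems.
raidz1, one 16-drive vdev: 15 drives usable; 1 failure loses nothing; 2 failures lose the entire pool.
raidz2, one 16-drive vdev: 14 drives usable; 1 or 2 failures lose nothing; 3 failures lose the entire pool.

Which of those looks worse depends on whether the data is re-rippable media or something irreplaceable, which is really what is being argued about here.)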
Openelec Gotham, MCE remote(s), Intel i3 NUC, DVDs fed from unRAID cataloged by DVD Profiler. HD-DVD encoded with Handbrake to x.264. Yamaha receiver(s)
Reply
darkscout Wrote:I finally got unRAID running on VirtualBox (no small feat since mlabel c:UNRAID would add random glyphs to the end "UNRAIDHuh". So I had to do label X, edit the bzroot to mount LABEL=X, etc).
Nonstandard setup, skip the bitching.

darkscout Wrote:I can't explain how unimpressed I am.
I'm running "version: 5.0-beta2."

Under AFP & FTP it lists "Coming soon!". So there goes my TimeMachine backups.
You missed the beta label? Huh

darkscout Wrote:When adding a user I can't specify the UID, so NFS... well must just be global read/write (I'm on my XP laptop at work so no proper NFS).

(I know you can do "user shares") but by default all I see are "disk1" and "disk2" folders.
Create a folder at root and set it as a share. RTFM

darkscout Wrote:I swapped out one of my virtual 10GB drives for a virtual 20GB drive (The scenario you keep repeating). I instantly got the error "Stopped. Disk in parity slot is not biggest." "If this is a new array, move the largest disk into the parity slot. If you are adding a new disk or replacing a disabled disk, try Parity-Swap." Except I can't find the Parity-Swap button.
RTFM, the parity drive must be as big as or bigger than all other drives. Why don't you start with a 2TB parity drive and two 10GB data drives, then start swapping data drives?

darkscout Wrote:I also notice that all my data isn't there. Isn't the parity drive just supposed to rebuild the missing data? I mean with ZFS I just do a disk swap and I have full coverage, and in the meantime NONE of my data is missing.
Eh? Huh I can yank single drives from my system all day and still see the data <shrug>

darkscout Wrote:Fine. Power down. Put the 20GB drive into the parity disk slot... still have only 20GB available (disk1 & disk2).
Umm yeah, you didn't add a larger data disk did you?

darkscout Wrote:At which point I get the fun message "WARNING: canceling Parity-Sync will leave the array unprotected!" For another 37 minutes. Meaning if a drunk driver hits a power line, a neighbor digs into his power line, or who knows what else happens: my data is unprotected.


Lol, yes, if you stop the array from creating parity then it cannot protect the data. Novel concept: the server software doesn't have ESP. Does ZFS have no process for preparing a disk or recovering when a new drive is inserted in place of a failed one?
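(For the record, the ZFS-side procedure when a drive dies is a replace-and-resilver, roughly as below, with placeholder device names:

# zpool replace tank c1t3d0 c1t6d0
# zpool status tank

zpool status shows the resilver progress; the pool stays readable but degraded until it completes, and during that window a raidz1 pool is just as exposed to a further failure as an unRAID array is during a parity sync.)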

darkscout Wrote:The Free version only supported 2 disks (3 if you count parity). Free version of NexentaStor supports up to 12TB.
Your point? The developer chose a different path for his paid version. <shrug>

darkscout Wrote:I suppose if I wanted to slap a bunch of drives in a case and have a NAS for unimportant stuff, maybe... then again I'd have to pay for using more than 2 drives. So I'd go with FreeNAS.... at which point I'd just use ZFS.

Ah, so this was your conclusion before you started, and you looked for ways of producing results to support it. You're right, those of us who chose this system were obviously somehow stupid and couldn't see all of these failings you're finding. Rolleyes
Openelec Gotham, MCE remote(s), Intel i3 NUC, DVDs fed from unRAID cataloged by DVD Profiler. HD-DVD encoded with Handbrake to x.264. Yamaha receiver(s)
Reply
PANiCnz Wrote:I hate to fire up an already heated debate, but everyone also seems to be ignoring the performance benefits that ZFS has, especially when coupled with ARC and L2ARC. When researching unRAID I'm pretty sure it's widely acknowledged that its performance isn't great.


I think it might just be a case of us three being the only ones who've taken the time to read all the ZFS articles on the net and fully understand them.

I agree that ZFS's real-time checking to guard against bit rot is probably one of its biggest selling points; until you've seen a few hundred gigs of data slowly corrupt without your knowledge you won't truly appreciate it.

unRAID is just JBOD with parity, I guess it serves its purpose.

Performance is fine now that Samba is faster. I can stream my movies on 100Mb Ethernet and they are high-bitrate HD, so how much bandwidth do you really need, exactly? I know I saw a serious performance DECREASE when I put one of my unRAID boxes on a 100Mb segment; upgrading the segment to gigabit Ethernet brought my speeds back to normal. I can stream multiple HD movies at once without issue. Since movies, music, and backups are what my servers are used for, I see no reason to use a system that parallelizes the disks. I'm not running an enterprise-sized database or SAN here. I guess saying unRAID performance isn't "great" depends on what you are comparing it to and what it is you NEED. :p
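(For context on the ARC/L2ARC point quoted above: ARC is simply ZFS's read cache in RAM, and L2ARC is an optional second-level cache device, typically an SSD, added to a pool; the device name below is a placeholder:

# zpool add tank cache da2

For sequential playback of movies far larger than the cache, neither helps much, which is arguably why the raw-performance argument matters less for a pure media box than for databases or small random reads.)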
Openelec Gotham, MCE remote(s), Intel i3 NUC, DVDs fed from unRAID cataloged by DVD Profiler. HD-DVD encoded with Handbrake to x.264. Yamaha receiver(s)
Reply
GJones Wrote:Which is horrible, since you are commenting on the lack of quality of community support on the community support forum for another piece of software.

You seem to be forgetting the main (and huge) difference: unRAID is commercial, proprietary software. You pay for it (I bought the Pro version), and I would say it's unfair (to the community) to depend wholly on community support for it. It's certainly different from xbmc.

Quote:I've seen few people have a problem with unRAID: buy hardware that is certified and it works. I've asked quite a few questions along the way and never had to wait more than an hour or two to get not just an answer but the correct one.

Perhaps you would want to ask about my specific issues with them before making false assumptions?

I should add that their list of officially 'certified hardware' is quite outdated, and I did buy the best hardware that is 'community certified, which works'.
Reply
BLKMGK Wrote:<snip>

Thanks for the entertainment - very amusing Laugh

So:
- you hate the idea of spending money on spinning disks up
- you hate the idea of spending money on buying one extra disk*

but:
- you like to spend money on something like unRAID which has inferior data protection
- you like to spend money on leaving your unRAID NAS on 24/7

Thanks for clearing that up.

*The money you refused to spend on one extra disk means a 2-drive failure will lose you 2 drives' worth of data (movies).

My system would lose no data but yours loses 2 drives of data. I can see now why you love unRAID Laugh
Reply
Man, this bit made me die laughing...

BLKMGK Wrote:I DO have parity data. I am willing to use one disk to store parity for all of my disks. So if I have 15 data disks I am using ONE parity disk.

I take it all back... you are not generous at all.

If you had one parity disk for 15 data disks I think you would:
a. Put Ebenezer Scrooge out of business
b. Be taking a ridiculous gamble with your data.

I wish you luck, you'll probably need it, or at least a lot of time to spend attempting to recover your data when those drives get cranky. And just imagine all that bit rot goodness... Rolleyes
Reply
froggit Wrote:<snip>

Why bother debating over this?
Who cares if someone even stores their media on twenty 80GB USB drives connected through USB 1.1 hubs, set up with Windows JBOD?

Seriously though, I run 2 storage servers at home: a 12TB ZFS pool on FreeNAS, and a 27TB unRAID.

Guess which one is for my media?

The ZFS machine is used for IP cameras and as a personal backup system, where I need the highest performance and immediate access and want to avoid any bit rot.

unRAID is just GREAT for building a media cabinet. I started out with something like 4TB a few years ago and slowly added a drive at a time to expand the unRAID volumes. None of the original drives are used anymore, yet the original data and volumes are still the ones in use, just expanded through more than a dozen steps. I have a few email triggers on it (temperature, SMART checks, and daily reports) to help avoid a slim-chance catastrophe like 2 drives failing; but if that happens I could just re-rip the data on those 2 drives. No big deal, the benefits far outweigh the cons for me.

Horses for courses.

froggit Wrote:<snip>

Are you guys talking about a media server or something to hold much higher-valued commodities?

Do you expect people to take you seriously with all those ignorant and derisive remarks?
Reply
I think it would be awesome if this thread could stay above a schoolground level of maturity. I don't see anyone flocking to a particular solution based on what's being displayed here, frankly.

One of the things that concerns me most about the ZFS solution is the part of Simon's blog described as RAIDZ expansion. If I have 14TB of data, I have to have at least 28TB of storage to increase the size of the storage pool. Actually, it's 28TB, plus whatever you want to grow the pool by. I'd have a hard time justifying buying, building and managing a spare box just to keep a spare, empty 14TB of space around.

To someone like me, who uses unRAID, but is interested in ZFS's robustness, it seems a better idea might be to have two boxes, an unRAID box for replaceable media rips that grows pretty effortlessly as needed and another using ZFS for storing irreplaceable documents, photos, to-do lists (which my wife claims are critically important).
OpenELEC 2.95.5
Reply
gadgetman Wrote:<snip>

If you read BLKMGK's derisive putdowns of various people in his last posts you might start to appreciate my humour... then again, perhaps not Big Grin
Reply
froggit Wrote:So:
- you hate the idea of spending money on spinning disks up
- you hate the idea of spending money on buying one extra disk*

I think you are misconstruing things here. It is not always about straight costs of power and disks.

On the first point, I personally don't care about the power costs of spinning disks up. Here in Texas my power comes from wind turbines, so I can waste as much as I want and all it hurts is my wallet, not Mother Nature.

But I am scared that running consumer drives ALL THE TIME wears them out, based on my experience with my RAID 5 server, and I think the only way you can get four years out of consumer drives is to spin them down whenever you can.

There is a cost component there, because I could easily just buy enterprise drives made for RAID (and therefore ZFS) instead, but personally the idea of spending twice as much on storage as I have to seems like a poor deal when I can just use different software that leaves the drives spun down most of the time.


On the second point, buying an extra disk is no big deal. Heck, I have an extra 2TB drive on my desk that isn't in my Unraid server yet because I don't need the space yet.

The real problem is that the extra disk takes an extra slot that I could use for storage. My larger server can only take 16 disks as is, and the second I go past 16 disks I need to spend serious money upgrading to a 20-bay Norco. And once you get enough data to fill those drives, the next option is to re-buy EVERYTHING (mobo/case/PSU/etc.) to build another server.

The real cost that hurts is not the per-drive cost, it is the per-bay cost. Drives are outright cheap when compared to all the hardware it takes to make that drive work.

Therefore I want as many available bays as possible working towards storage, as that means I can go that much longer before I need to build another server and spend some real money.

The idea of tying up two or three bays with parity drives when I could just use one seems wasteful, especially because this isn't some server that is gonna get me fired if some data gets lost (I just lose some free time to re-rip/download).

The main reasons why RAID servers have more than one drive fail at once are:

-The usage of the server is so intense that when a drive dies the rest are super stressed until it is replaced, because the server can't have downtime

Or

-Because RAID/ZFS requires all the drives to be the same make and size for optimal results, you often buy the drives all at once from a single batch that could have issues


With Unraid I avoid the second scenario by mixing and matching drive sizes and brands. In my big server no two drives are the same.

I avoid the first scenario by taking my server offline when a drive dies, and I let it rebuild the lost disk without any other pressure. I tell the wife "tonight we can't watch XBMC because my server needs repairs, let's go see a movie in the theater instead" and I let Unraid do its rebuild thing in a low-stress environment.

Now obviously I couldn't do this at work where that server must be online for business to happen, but that is why home use can have differing solutions.

No one solution is better than another completely, and there is no "obvious answer" for a media server. If you value your time more than anything and you are only ripping stuff you legitimately own, then maybe ZFS with triple redundancy is perfect. If you value hoarding as much illicit content off Usenet as possible before it gets past the retention mark, and every byte you can free for storage seriously helps with the quest, then Unraid with its one-drive parity makes sense.

Each to their own, and each technology for whatever needs it fits best!

Reply
This thread has become borderline ridiculous. Why can't we agree that unRAID and ZFS are both good storage solutions? People can decide which is best for their needs. This being the xbmc forum and all, most people will be using their NAS for media storage. Like many people in this thread have stated, if you store priceless photos, docs, etc. on your NAS, you should back up over multiple devices.

ZFS seems to be an excellent storage solution, way more than I need for my media collection. But that is my preference. Why not let people make their own decision without shoving your setup in their face? What works for you doesn't necessarily have to work for the next person.

Unraid or an OS with ZFS? Why not try them both and make a decision on what best suits you.

cheers
HTPC 1 : Acer revo R3700 ion2 HTPC 2 :Apple TV2 HTPC 3 : Apple TV2 HTPC4 Acer revo R3700 ion2 Remote : x2 Riimote2
SERVER : 10TB Ubuntu Server 10.04, dual wintv nova hd s2 cards, tvheadend, Newcs, Omnikey reader, White *Sky uk* Card, Mysql Db, Sabnzbdplus, SickBeard, Couchpotato, FlexRaid. :cool:
--------------------------------------------
Reply
froggit Wrote:My system would lose no data but yours loses 2 drives of data. I can see now why you love unRAID Laugh

But if you are in really bad luck and 3 drives fail out of 16, I would lose 3 drives and you would lose them all.

So I guess it boils down to your paranoia level. For reasonably paranoid users (like the majority) unRAID would do just fine, and a very unlucky few, if any, will suffer data loss. For seriously paranoid users like you it is clearly a bad choice, but for insanely paranoid people like me, unRAID is still better for data protection since it minimises the loss in case of a disaster.

Spinning up individual drives is not about the power saving but about decreasing the potential for mechanical failure of a hard drive that spins unnecessarily. And statistically that's a much higher risk than getting bit rot.

One thing is certain: hard drives DO fail and WILL fail. It would be foolish to believe they don't. unRAID offers excellent protection against that, especially in today's TB age, when people don't fully realise how much data they will lose when their "1TB external" gets knocked over.

unRAID is also cheaper. The $60 license cost is easily recovered through its flexibility to use different types and sizes of hard drives.
Reply
markguy Wrote:<snip>

If you want to simply expand a ZFS storage pool, you either:
1. add a new vdev (a group of drives, as many as you like, >1), OR
2. replace the drives in an existing vdev with larger ones
(rough command sketches below)
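(In command terms, roughly, with placeholder device names; a sketch rather than a tested procedure:

# zpool add tank raidz2 disk7 disk8 disk9 disk10
# zpool replace tank disk1 bigger_disk1

The first line grows the pool by a whole new vdev; the second swaps one drive at a time for a larger one. With the second approach the extra space only appears after every drive in the vdev has been replaced and resilvered, and depending on the ZFS version you may also need the pool's autoexpand property set, or an export/import, before the new capacity shows up.)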
Reply
