New Network Attached Storage suggestions
I'm getting to the point where I'd like to prepare myself to be able to expand my current storage array at any point. Using the HP Media Server with WHS plus an extra eSATA enclosure, I'm now maxed out at 8 x 2TB discs, giving me a total of 14.5 TB of storage, though some of that is obviously used for backups as well...

Now I want to move towards a racked system, and I'm wondering if anyone has any good suggestions of where I should be going next... Not too worried about it being a bit more expensive than the current setup; reliability is more important than reduced cost at the moment.

So has anyone made the move to this sort of system, or does anyone have any thoughts on which way I should go? Either way I'd love to hear your thoughts and comments.

Thanks in advance
Are you looking at a server or NAS based storage system?
TugboatBill Wrote:Are you looking at a server or NAS based storage system?

To be honest Bill, I'm kinda open to both options.

Have been looking at the QNAP 8 bay NASes today... and they look like they might be fine, but I'm not sure if I can daisy chain those after I run out of space with 8 bays full of 2TB drives.

The reason I think I might be swinging more towards a NAS is because they come pre-built... but servers are either self-builds, or run out of space at 4 bays plus an extra eSATA enclosure.
A server allows you to run other applications, e.g. torrent clients, rippers, transcoders. A NAS tends to just serve files. The line is a little fuzzy, as some NAS devices allow you to run some applications (typically a torrent client).

Then there's the issue of how you will use it. Storage needs for media (DVD, BR, music, pictures) are different from what you'd need for games or Excel files. With media, the ability to expand and use many drives is more important than speed. Reliability is a key issue for most, as most users have no way to back up a multi-TB array.

There are systems designed specifically for media storage - unRAID/FlexRAID/etc. They have some nice features:
- the ability to expand as needed. Need more storage? Add another drive. It doesn't even have to be the same size/manufacturer/interface type.
- the use of a parity drive. If one drive fails you can rebuild it from the data on the remaining drives. If 2 drives fail, only the data from those drives is lost.
- the ability to spin down drives when not in use. When a read comes in, only the drive that holds the requested data spins up.
- These systems are slower than typical RAID 5/6/10/etc. arrays, but they are plenty fast to serve up several BR rips simultaneously.
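The parity idea described above can be sketched in a few lines of Python. This is a toy model only (XOR parity over equal-sized blocks), not how unRAID actually lays data out on disk; the "drives" here are just lists of byte blocks:

```python
from functools import reduce

# Toy model of a single-parity array: each "drive" is a list of byte blocks.
drives = [
    [b"movie-a", b"movie-b"],
    [b"song-01", b"song-02"],
    [b"photo-x", b"photo-y"],
]

def xor_blocks(blocks):
    """XOR same-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

# The parity drive holds the XOR of every data drive, block by block.
parity = [xor_blocks(row) for row in zip(*drives)]

# Simulate losing drive 1, then rebuild it from parity + the survivors.
survivors = [d for i, d in enumerate(drives) if i != 1]
rebuilt = [xor_blocks(list(row) + [p]) for row, p in zip(zip(*survivors), parity)]

print(rebuilt)  # the lost drive's blocks, recovered
```

The same XOR that produces the parity block also reconstructs any single missing block, which is why one failed drive is fully recoverable. And because each data drive in these systems holds whole files rather than stripes, a second failure costs only the files on the failed drives, not the whole array.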

FlexRAID is an application that runs on an existing server.
unRAID (my fave) runs on its own hardware.

These are usually DIY, but unraid does sell quite a few prebuilt systems.
I'll add another vote for unRAID. Nothing is better for XBMC.
Hi Bill

Thanks again for your reply

I understand more what you mean about the server/nas question now...

You are correct that the fine line that separates the two is now blurred, but in all honesty I only really use my WHS for storage at the moment.

I run a couple of server-type items from it, but they can be moved to another machine as it's only minimal tasks like running a photo server for frames and running the Sonos server (though I think this will work from a NAS anyway).

I currently have an 8 x 2TB disk setup... with offline storage for part of that as well (due to a failed WHS in the past).

I've seen a couple of different options, Thecus, QNAP, but in all of the posts I've searched, no one has really said they do or don't use these systems specifically to feed things like XBMC.

Obviously my main issue is being able to expand at a later date, and that is the main reason I want to move away from the standard WHS setup: I have effectively reached the end of the expandability options open to me with my current hardware.

I was toying with the idea of building a new 20 disk WHS box with the Norco 4220 case, but when I looked into it, I didn't think I would be able to do it right, and even though it would be cheaper or no more expensive in the long run, I do like the idea of buying something out of the box that just works.

Suggestions would be great if you have any as you do sound like you know what you are talking about.

I kinda just wish someone out there could build me a WHS with a case that could take 20 discs... then I'd be done. Although I think there is a market for it, no one seems to be doing it just yet.
If you're not keen on self-builds, I use a Synology NAS for XBMC and it works perfectly fine. The top of the line model, the 1010+, handles up to 20TB with its extension module.

I don't think you can expand it any further than that, though there's nothing stopping you from buying another NAS and adding that to your network; the XBMC library handles multiple network source locations.
The Norco 4220 case is very popular with the unRAID crowd. It supports close to the maximum number of drives unRAID supports.

One thing I would caution you about with large arrays: if you have a drive failure in any type of system, you have to go through a rebuild or parity recalculation. Depending on the system you may not be able to use it during this period, and if you can, it will run significantly slower. Large many-drive arrays can take days to rebuild, and during that time all the drives are getting worked hard. If another one fails, you just lost a bunch of data.

If you're set on a prebuilt solution, Lime-Technology sells an unRAID box set up for 12 drives for $700. You still need to move/buy drives, but you don't need to buy an OS, motherboard, etc.

If you do consider moving your data onto an unRAID system, note that it doesn't use NTFS. This means you'll have to "migrate" your drives & data over. Typically this is done by buying a parity drive that is at least as large as your largest drive, plus one more drive, and setting up the array with those two. Then it's just a process of moving data onto the array from one of your existing drives until it's empty, moving that emptied drive into the array, and repeating until done.
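That drain-and-add loop can be sketched with ordinary directories standing in for mount points. Everything here is hypothetical (names like `old0` and `array0` are made up for the demo); on a real system the moves would be between an NTFS mount and the array's shares, and "reformatting" would be done by unRAID itself:

```python
import shutil, tempfile
from pathlib import Path

# Stand-ins for real mount points (hypothetical paths for the demo).
root = Path(tempfile.mkdtemp())
old_drives = []          # drives still holding NTFS data
array_drives = []        # drives already formatted into the array

# Fake three "old" drives with a file each.
for i in range(3):
    d = root / f"old{i}"
    d.mkdir()
    (d / f"file{i}.mkv").write_text(f"data {i}")
    old_drives.append(d)

# Seed the array with one freshly bought, empty drive.
first = root / "array0"
first.mkdir()
array_drives.append(first)

# Migration loop: drain one old drive into the array, then
# "reformat" the emptied drive and add it to the array.
while old_drives:
    src = old_drives.pop()
    dst = array_drives[-1]
    for f in src.iterdir():
        shutil.move(str(f), str(dst / f.name))
    src.rmdir()          # stands in for reformatting the drive
    src.mkdir()
    array_drives.append(src)

print(sorted(p.name for d in array_drives for p in d.iterdir()))
```

The key point the loop illustrates: you only ever need one spare drive's worth of free space at a time, because each emptied drive becomes the landing zone for the next one.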
Hi again Bill

Your Lime-Technology solution sounds like a great idea... and to be honest I think I would just go with that, BUT I am in the UK... and I don't think many companies here cater for that sort of option, although I could check how much shipping to the UK would be, I guess.
If you want parity, ZFS with OpenSolaris or FreeNAS is the way to go, on any system you feel like building. (I had an 8 drive ZFS array with OpenSolaris but have since scrapped that.) OpenSolaris is a dead project after the takeover by Oracle, so without open-source ZFS updates I wouldn't use it, plus the tools available for recovery aren't great.

If you want something well tested and supported (it's been around forever), use Linux software RAID 5 with mdadm. This can be achieved easily with Openfiler, or there is FreeNAS, which uses a different software RAID from BSD. And you can run other services on these machines.

Right now my setup is an old P4 with 512MB RAM, a PCI SATA card and a gigabit card; plenty quick enough to serve media to XBMC with 3 x 2TB drives, and it's expandable. I'm using CentOS, which is basically the same as Red Hat; you can't get a better free server OS that works great out of the box.

I've partitioned each drive with one 50GB partition and another for the rest of the space: the 50GB partitions form a mirror (RAID 1) array for backing up important data, and the rest is RAID 5. Samba (Windows file sharing) works wonders with the computers in my house mapping the drives. It's automatically set up to scrub the arrays once a week (checking for data inconsistencies) and to email me if there is a SMART error on a disk or an error in an array.

This IMO is the best way to go, because you have the flexibility of adding drives and more SATA ports, you can put it in a case or rack that houses as many drives as the case allows, and being a proper OS you can use it to run any services you want. RAID 5 does have a problem where, if the power goes out during a write, data can get corrupted (I think that's the write hole problem). Anyway, this can simply be fixed with a UPS.
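As a quick sanity check on that partition layout, the usable space works out like this (a back-of-envelope sketch; the 3 x 2TB figures match the setup described above, and real capacities will be a bit lower after filesystem overhead):

```python
def usable_space(n_drives, drive_gb, mirror_gb=50):
    """Usable capacity for the split layout described above:
    a RAID 1 mirror over the small partitions (you get one copy's worth)
    and RAID 5 over the rest (one partition's worth lost to parity)."""
    raid1 = mirror_gb                              # all mirrors hold the same data
    raid5 = (n_drives - 1) * (drive_gb - mirror_gb)
    return raid1, raid5

backup, media = usable_space(3, 2000)
print(backup, media)  # → 50 3900
```

So with three 2TB drives you keep a 50GB mirrored backup area and still get about 3.9TB for media; adding a fourth drive grows the RAID 5 side by another 1950GB.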

Anyway, that's my 2 cents Big Grin unRAID I'm sure would do the job, but it lacks performance, and the parity lives on only one disk, so I'm assuming if that disk dies you would have some problems with the rest of the array? I haven't looked into it properly; I might do that now actually.
ReadyNAS 3200 (or 4200)
12 disk bays, dual redundancy, redundant power/NICs

That's what I would buy if I had a big budget
joel_ezekiel Wrote:unRAID I'm sure would do the job, but it lacks performance, and the parity lives on only one disk, so I'm assuming if that disk dies you would have some problems with the rest of the array? I haven't looked into it properly; I might do that now actually.

If the parity drive fails, you'll have to replace it and rebuild the parity; no data is lost. If you lose 2 data drives, then the data on both of those drives is lost. With RAID 5, 2 bad drives means you restore everything from backup, as all the data on the array is gone.

For most media users RAID 5 is not a good solution. It uses a lot of power (drives spin all the time), expandability usually isn't nearly as flexible as with the parity-drive systems, and its reliability isn't what is needed.
TeknoJnky Wrote:ReadyNAS 3200 (or 4200)
12 disk bays, dual redundancy, redundant power/NICs

That's what I would buy if I had a big budget

ReadyNAS is a good solution. It too uses a parity-type system, so drives can be spun down when not in use, and you can expand it (within the confines of the chassis). It is also more expensive than most of the DIY systems, as well as the Lime-Technology unRAID box.

Oh, and Adaptive Load Balancing doesn't work on the ReadyNAS line (though it's unlikely any media users would use that feature).
OK, you've got me interested, because power usage is a concern for me. 3 WD Green drives isn't a big deal, but I used to run 8 x 1TB drives, and I'm sure eventually I'll have that many 2TB drives as well.

From what I was reading there is no striping; data can be read off each single drive even if the whole array is scrapped. The few concerns I have: the parity disk gets heavy usage, but if it's write once and read many times this shouldn't be an issue. On performance, I'm assuming it should be that of reading directly from one drive? The last one is software: do I have to buy the unRAID software to get full unrestricted usage of it? E.g. Nexenta has a free version, but it's limited to 2TB.

I'm really keen on this though, as I lost my 8 drive array due to not scrubbing before pulling a disk; there happened to be errors on one of the disks I didn't pull, killing the whole ZFS array. So while the new array isn't full, I would like to find the right solution.
The unRAID parity drive is only used when data is written. When you read, only the drive holding the data you're reading is spun up.

IIRC, unRAID uses ReiserFS. If you decide to break up the array, the drives can be read by another system, as long as it can read that file system. Windows doesn't do it natively, but I recall someone posting a link to a driver that allows Windows to do so.

unRAID is free for up to 3 drives. The Plus license is good for 6 drives, and the top-end version is good for 20 drives. IIRC, it is ~$70 for Plus and ~$120 for Pro.

Since it runs on almost any hardware, it is easy to try out. Probably the biggest hurdle is making sure you have a bootable USB drive and that the motherboard allows booting from USB.

I get 20MB/sec+ write speeds. Reads are faster, but I haven't really bothered to test them, as it is plenty fast to feed a BR rip.
