Kodi Community Forum

Full Version: Raid
Pages: 1 2 3 4 5 6 7
Just an update for anyone who was following my progress. I set up a RAID 5 and it was incredibly slow. We are talking 9 hours to copy 1TB of data. So I went with RAID 0 for now with 8TB of space and the data copy is about 3 hours. I figure maybe I'll get a battery backup and a NAS.
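For what it's worth, those copy times work out to roughly the following average rates (a quick sketch, assuming decimal units and the same 1 TB copied in both cases):

```python
def throughput_mb_s(size_tb: float, hours: float) -> float:
    """Average rate in MB/s for copying size_tb terabytes in the given hours (1 TB = 1e6 MB)."""
    return size_tb * 1_000_000 / (hours * 3600)

print(f"RAID 5: {throughput_mb_s(1, 9):.0f} MB/s")  # ~31 MB/s
print(f"RAID 0: {throughput_mb_s(1, 3):.0f} MB/s")  # ~93 MB/s
```

So the "slow" RAID 5 array was averaging about a third of the RAID 0 rate, which matches the later comments about parity writes and missing write cache.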
Please please please don't make a RAID 0 for your movies... If you lose 1 drive, you will lose ALL data...
I had slow speeds at first with RAID 5 too. What you need to enable is the write cache. This will make it about 5x faster.
If you are using normal consumer-level drives (WD Green/Blue/Black), forget about hardware RAID. WD has stripped an essential command (TLER) from their firmware. This command takes care of a read timeout and overrules your RAID card. There are hundreds of reports about this on the net.
What I recommend is that you try the ZFS filesystem. It's very easy to set up; you can have your array ready in a few minutes. The downside is that it only runs on FreeBSD or Ubuntu, not on Windows.
If you have a server for your storage and would like help, contact me... I can give it a try...
RAID 5 is relatively slow writing to the disks because it has to calculate and write the parity information that provides the fault tolerance. However the read performance is pretty good. Just let the copy run overnight.

*DO NOT* enable the controller write caching unless your controller has a battery backup for its cache. Otherwise you run a major risk that a power outage will corrupt the disk.

RAID 0 is fast writing to the disk because it doesn't have to calculate any parity info. However, if you use RAID 0, the failure of any one of the four disks will wipe all your data.
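For the curious, single parity is just a byte-wise XOR across the data disks; that's the extra work RAID 5 does on every write. A toy sketch (real controllers use rotating parity and smarter layouts, and the disk contents here are made up):

```python
from functools import reduce

def parity(stripes: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length stripes: the single-parity calculation."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripes))

# One stripe's worth of data on three hypothetical data disks:
d0, d1, d2 = b"\x0f\xf0", b"\xaa\x55", b"\x01\x02"
p = parity([d0, d1, d2])  # stored on the parity disk

# If one data disk dies, XOR-ing the survivors with the parity rebuilds it:
assert parity([d0, d2, p]) == d1
```

The same XOR both produces the parity and recovers a lost stripe, which is why RAID 5 survives any single-disk failure while RAID 0 survives none.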

JR
Hmm, should I go with what Gollum said? What drives should I be using instead of these consumer-level drives? I dread wiping everything and creating a new array with less space and crappy performance unless I know I have good drives for the task. On a side note, the controller card I am using is an LSI and only does SATA2. My onboard 990FX has SATA3 and does up to RAID5 I believe.

EDIT: Gollum are you saying to have my HTPC run Ubuntu and then a separate box on my network with the storage?
I would always use a hardware RAID controller rather than Gollum's suggestion.

Post the model of the disks and the raid controller and I'll look them up.

JR
Seagate "green" 2TB SATA 3 drives from Newegg and an LSI SAS/SATA 6G RAID controller. It wasn't until after I bought the unit that I found that SATA drives only operate in SATA II mode on it. So I'm gonna guess you're going to recommend different drives and a controller with SATA III? My 990X board has a SATA III RAID controller capable of up to RAID 5. I think I only used the card because I couldn't get my system to boot from my 500GB hybrid SSD/HD drive with the array set up for some reason.
Go RAID 10, assuming you have four or more drives.
jhsrennie Wrote:I would always use a hardware RAID controller rather than Gollum's suggestion.

Post the model of the disks and the raid controller and I'll look them up.

JR

Yes and no, I'd say. When I was in the research and planning process for a media server (with some redundancy) I gave this a good deal of thought. My conclusion was that for a consumer like myself, the choice of hardware RAID came with the risk of ending up with a broken controller/MB and having to source a replacement for /perhaps/ quite a premium price, or lose the array.

Long story short, I chose SW RAID (FreeNAS in my case) since a HW breakdown won't matter (if the disks stay good): just build a new rig, add the disks, plug in the OS USB and off you go (hopefully).

To OP:
No offence, but seriously, take a deep breath...

Then read up a bit about the various RAID levels and the pros and cons that come with them. As it is now you have - based on some premature perception of low write performance - jumped all the way into the other ditch and chosen a setup which is performance only. No redundancy, no "risk management"; it's basically pants down...

You should go somewhere in the direction of RAID 1, 5, 6 and, if building a separate machine, perhaps also look into stuff like unRAID (see forum hardware section).
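To put rough numbers on those levels, here is an approximate usable-capacity comparison for four 2 TB drives as in this thread. These are the standard textbook mirror/parity overheads, before any filesystem or formatting overhead:

```python
# Approximate usable capacity for common RAID levels.
# n = number of drives, d = TB per drive.
def usable_tb(level: str, n: int, d: float) -> float:
    return {
        "RAID0":  n * d,        # stripe across everything, no redundancy
        "RAID1":  d,            # all drives mirror one drive
        "RAID5":  (n - 1) * d,  # one drive's worth of parity
        "RAID6":  (n - 2) * d,  # two drives' worth of parity
        "RAID10": n * d / 2,    # striped mirrored pairs (n even)
    }[level]

# Four 2 TB drives, as discussed in this thread:
for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(f"{level}: {usable_tb(level, 4, 2.0):.0f} TB usable")
```

The ~4 TB RAID 6 figure lines up with the 3.6 TB reported elsewhere in the thread once formatting overhead is taken off: the space wasn't missing, it was parity.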
Thanks for the deep breath part, that's what I'm doing today. I've been RAID 5, 6, and back to 0 all week. The RAID 10 idea is intriguing. Is that the performance of RAID 0 plus the redundancy of RAID 1? I'm at close to an hour to copy an MKV to the shared video folder on my HTPC and I think that is the crappy controller: I'm on SATA II now when I was using the mobo's SATA III before.

I'm going to run another full backup tonight and then figure out what I'm doing before I move more movies.

So are you guys saying that a lot of people have a small relatively low powered computer by the TV and a separate powerful computer with a RAID to host and stream the content?

EDIT: Just to clarify my initial build and current situation. Home built AMD FX 8150 rig with Gigabyte 990FX chipset, 2x 2TB drives in RAID 0 (software), 16GB RAM, DVD drive, 6870X2 video card (had it in a closet), Windows 7 x64. Had great performance, but distorted audio. Changed the video card to an nVidia 430 and the sound problems went away. I was ripping my Blu-rays on my main rig and copying them across the network to the HTPC in about 10-15 minutes each.

Thinking I knew what I was doing, I bought 2 more of the same 2TB drives and tried putting them on the mobo, but couldn't get Windows to boot, so I put in the LSI SAS/SATA controller and set up RAID 6. To my dismay, my space out of 8TB was only 3.6TB and the speed was jokingly slow. Went to RAID 5 and gained some space but still suffered slow speeds. Went back to RAID 0 and speed is fair, but copies across the network are near the 45 minute-1 hour mark now.

I suppose I could connect back to the motherboard and go with Windows 7 software RAID 1, leaving software and Windows on my 500GB hybrid boot drive and my media on the 4TB RAID 1 volume, and back up to an external 4TB drive, if any exist.

EDIT #2: I just had a very bright idea. Since I did my data restore on my main machine, I'm going to keep the data here and just have XBMC look here for media until I get straightened out hardware-wise on my HTPC. Smartest idea all week lol.

EDIT #3: Well, I had my media all still stored on my main PC so I created shares and removed everything from the HTPC. Only problem is it's completely unwatchable. I started a movie and had to wait 20 minutes for it to buffer and then it played about 3 seconds and just started buffering again. Wtf did I do to myself?
I was at RAID 5 or 6, I forget which, but I have read lots and lots of nightmare stories about them. So far RAID 10 makes a lot of sense for me, although you do lose space compared to 5.

Pretty quick though: I get 50MB/s down and 20MB/s up, which is fine for my application.
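At those rates, single large rips are still quick to move. A rough sketch, assuming a hypothetical 20 GB MKV and decimal units:

```python
def copy_minutes(size_gb: float, rate_mb_s: float) -> float:
    """Minutes to move size_gb gigabytes at rate_mb_s (1 GB = 1000 MB)."""
    return size_gb * 1000 / rate_mb_s / 60

print(f"{copy_minutes(20, 50):.1f} min at 50 MB/s")  # 6.7 min
print(f"{copy_minutes(20, 20):.1f} min at 20 MB/s")  # 16.7 min
```

Either direction is a far cry from the 45 minute to 1 hour copies mentioned earlier in the thread.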
Yeah I just put all 4 drives back on the mobo and did RAID10. I'll let you know how it goes.
It sounds as if you're sorted anyway, but "LSI SAS/SATA 6g" isn't enough of a description of the controller because LSI have dozens of controllers that fall into this category.

JR
patseguin Wrote:So are you guys saying that a lot of people have a small relatively low powered computer by the TV and a separate powerful computer with a RAID to host and stream the content?

Yes, although the use of RAID is not the defining part; it's the splitting up of playback and storage. The storage server does not, however, need to be particularly powerful: file sharing and media streaming are pretty much peanuts.

As you can see from my sig, I went the simple and cheap way by building a file server from a scrap Dell computer, which I supplied with 3x 2TB WD Greens. The OS (FreeNAS) loads from a USB stick and, except for a power outage, I am now at some 430 days uptime without incidents. Contrary to other comments, I've rather seen slow reads than writes, but can't tell if that's due to client computer bottlenecks. At any rate, the fastest computer gets write transfers of up to 650 MB/s to the server, whereas read transfers are usually around 250 MB/s. All in all, I achieved my two goals of a quiet (it's fanless) HTPC and a file repository with a decent level of fault tolerance.

This may not be your way to go, but if money, space, network etc. permit, I'd definitely recommend splitting the two roles of playback and storage between two machines, since it allows the HTPC to be small and quiet and the file server/NAS to be larger and noisier, but at the same time expandable etc. The machine you refer to as your main PC could very well be enough for storage and streaming, so then you would only need an HTPC.
All this is getting a bit frustrating. Just for the record, I look after several hundred Dell servers with Perc RAID controllers and four to eight disk (usually Samsung disks) RAID5 or 6 arrays. In fact, as I type this, my own server is contentedly writing to a four disk RAID5 array on a Perc controller.

Anyhow, the performance of even the relatively puny four disk RAID5 on my server is substantially faster than the Gigabit network it's connected to. I get steady 100MB/s speeds copying to and from the server across the network. The big arrays are ridiculously fast. We use them on Hyper-V servers running half a dozen virtual machines and disk speed is rarely an issue.
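For reference, a steady 100 MB/s is essentially a saturated gigabit link: 1 Gbit/s is 125 MB/s raw, and protocol overhead takes a slice. A back-of-the-envelope check (the ~10% overhead figure is an assumption, not a measurement):

```python
# Rough ceiling for file copies over gigabit Ethernet.
raw_mb_s = 1000 / 8          # 1 Gbit/s = 125 MB/s before overhead
overhead = 0.10              # assumed ~10% lost to TCP/IP + SMB framing
effective = raw_mb_s * (1 - overhead)
print(f"~{effective:.1f} MB/s effective")  # 112.5 MB/s
```

In other words, at 100 MB/s the array is no longer the bottleneck; the network is.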

The Perc is a relatively high end controller and would cost you around £250 new, though they're available for around £100 on ebay. My point is that it isn't that hard to get stellar performance from RAID.

JR
jhsrennie Wrote:All this is getting a bit frustrating. Just for the record, I look after several hundred Dell servers with Perc RAID controllers and four to eight disk (usually Samsung disks) RAID5 or 6 arrays. In fact, as I type this, my own server is contentedly writing to a four disk RAID5 array on a Perc controller.

Anyhow, the performance of even the relatively puny four disk RAID5 on my server is substantially faster than the Gigabit network it's connected to. I get steady 100MB/s speeds copying to and from the server across the network. The big arrays are ridiculously fast. We use them on Hyper-V servers running half a dozen virtual machines and disk speed is rarely an issue.

The Perc is a relatively high end controller and would cost you around £250 new, though they're available for around £100 on ebay. My point is that it isn't that hard to get stellar performance from RAID.

JR

Sounds very good, so which part is it that's "getting a bit frustrating"?