Any storage ideas?
#1
I currently have a server in my loft. It runs Debian, headless, and does various tasks for me.

One of its main tasks is being a fileserver: it has an Adaptec 3805 and 8 Western Digital RE3 1TB drives in it, in RAID6. I'm rather paranoid about data loss, so I also back it up every couple of months to cheap 2TB drives.

The server case is more or less full - it could take a few more drives I suppose - and there are no more ports left on the Adaptec card.

I'd like to expand the storage but I can't see any options that build on top of what I've got.... so I thought I'd ask in case anyone had any ideas.

Requirements... use what I have already and add to it, and keep the same or a comparable level of redundancy/safety as I have now.

I don't think it's possible... but I'm open to ideas if anyone has any!
Reply
#2
Adding another raid card is about the only option you have, given your requirements.

Otherwise FlexRAID may be an option, or even unRAID (requires a dedicated motherboard, though you may be able to virtualise it).
Reply
#3
I thought so! Thanks to everybody who had a read.

I think my next best option is to go for software RAID under Debian, using the SATA headers on the motherboard to start off with - there are 4 spare - for 4 x 3TB drives in RAID6 (a rough mdadm sketch is below). I'd migrate everything across to them, get rid of the 1TB drives, and then simply add 3TB drives over time up to a maximum of 12, which would be 30TB. More likely though I'd not exceed 8 drives, and I'd have them running on the Adaptec card.
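
For reference, the mdadm side would be something like the sketch below, I think - just an outline, with /dev/sdb to /dev/sde as placeholder device names (I'd check lsblk for the real ones first):

```
# Create a 4-drive RAID6 array from the new 3TB disks (placeholders: sdb-sde)
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it
mkfs.ext4 /dev/md0

# Record the array so it assembles at boot (Debian keeps the config here)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

Further drives would then get added with mdadm --add and --grow rather than rebuilding the array.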

The problem with that approach is that I either go for very expensive 3TB drives, or I go for cheap 2TB drives and face the same problem again eventually. Alternatively I hold off until bigger drives arrive and go for 3 or 4TB drives once they come down in price.
Reply
#4
Man, I am also paranoid, fingers crossed....
Reply
#5
Do note that you cannot just mix and match software raid and hardware raid, even when you think you're not using hardware raid because you haven't created any arrays.

For software raid there are the following requirements:
- Use pure HBAs, like the SATA controllers on motherboards (which are the best for sw raid), non-raid SATA controllers, or LSI-based raid controllers flashed with the IT (Initiator Target) firmware as opposed to the IR variant.

Ironically, using a raid controller like your Adaptec is considered harmful when using mdadm or ZFS (or most other forms of sw raid). This is because with software raid, the software needs full control over the disks if you want to properly protect your data. With a raid controller that isn't an IT-flashed LSI HBA, the disks are hidden behind the abstraction layer of the raid card. This means the raid card is essentially lying to the OS (regarding sync commands and such) and relying on its own protection instead - and if you don't actually use hardware raid, there is none.

- Disks without TLER/ERC/CCTL. Unlike hw raid, mdadm and ZFS will NOT kick out a disk if it takes longer than, say, 10 seconds to recover from an error. Limiting the recovery time will thus be harmful rather than helpful. Note that, with the exception of newer WD disks (WDxxEARS), you can disable/enable ERC/CCTL using smartmontools or, if you use Hitachi, the Hitachi Feature Tool.

- A UPS is preferred. Since the built-in protection mechanisms in file systems are not as foolproof as a BBU, you need another layer of protection. A UPS would be great: with one you can bridge short power interruptions, or at least safely unmount and power off your server.

And for HW raid you need:
- A hardware controller WITH BBU. If you care about your data, get one immediately if you haven't already. The BBU is the only (albeit an effective) barrier of protection in case of a power failure. Like I said, when using a raid controller it will be lying to your OS and filesystem, rendering the built-in protection mechanisms in those file systems useless. Therefore a BBU is imperative, unless you're willing to disable all write caching (controller and disks) completely; that would be extremely safe, but also VERY slow.

- Disks with TLER/ERC/CCTL. Hw raid will kick out a disk if recovery takes too long - don't let that happen. Note that, with the exception of newer WD disks (WDxxEARS), you can disable/enable ERC/CCTL using smartmontools or, if you use Hitachi, the Hitachi Feature Tool. Your OS will need to support ATA passthrough for your controller, though.

- Disks with the write cache (actually, write buffer) disabled! While the data is queued in the battery-protected memory on your raid controller it's safe, but once it has been handed off to the drive's own cache it's no longer safe, as the drive cache is not battery protected. I think you can do this with smartmontools (see the sketch after the notes below); again, ATA passthrough will be required. Or, if your raid controller can do this for you, that's also fine.

Note 1: for controlling ERC/CCTL on Samsung/Hitachi/Seagate use `smartctl -l scterc,x,y`, with x and y being the max recovery time in deciseconds for read and write, respectively. Values of 0 will disable the feature. See `man smartctl` for more info. This setting is volatile, so put it in your rc.local or something!
Note 2: for controlling TLER on older WD drives, download and use the WDTLER tool. It doesn't work on newer drives, meaning you will have to buy overly expensive RE drives (or just buy another brand, where you can enable or disable it at will).
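
To make Note 1 and the write-cache point concrete, here's a rough sketch - /dev/sdb is a placeholder device name, and for the write cache I'm reaching for hdparm rather than smartmontools, since that's the tool I'm more confident about:

```
# Set ERC/CCTL to 7 seconds (70 deciseconds) for read and write - the usual choice for hw raid
smartctl -l scterc,70,70 /dev/sdb

# ...or disable it for sw raid, so the drive can take as long as it needs to recover
smartctl -l scterc,0,0 /dev/sdb

# Disable the on-disk write cache (newer smartmontools may also do this via -s wcache,off)
hdparm -W0 /dev/sdb
```

Both settings are lost after a power cycle, so re-apply them at boot (rc.local or similar).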

So you can see the requirements are rather conflicting. Of course you could build one array with your Adaptec and one in mdadm, but whether that's convenient or wise...? You'd have to maintain two separate mechanisms and buy both a BBU and a UPS.

Hope this helps!
Reply
#6
As for expanding:

1. If you are short of SATA/SAS ports, just add another card. If you use sw raid, you can just expand your existing arrays (make sure you get a real HBA!) - see the sketch after this list. If you use hw raid you might be able to combine multiple cards if they are of the same brand, type and firmware, but I'm not at all sure; it's very likely they simply won't be aware of each other.


2. If you are short on drive bays, you can get 5-in-3 or 4-in-3 drive bay modules. With those you can put 4 or 5 3.5" drives in 3 5.25" bays. Read this: http://www.wegotserved.com/2009/03/08/ro...e-reviews/

Or, if you've already used up that space too, you can go external. You can do this without having loose HDDs lying everywhere connected to slow interfaces (USB or something). Get something like this (Storage Tower): http://www.addonics.com/products/HDD_MultiBay/ which is basically an empty tower with just a PSU, some fans and a lot of bays.

You can then simply connect the drives to your 'real' server using SATA to eSATA bridges, or better, SATA to Infiniband bridges. The data runs through those bridges and the power comes from the PSU of the external enclosure.

3. If you are short on both, combine 1 and 2 - or conclude that it might just not be worth it and start over with a really big-ass tower and a 16-port raid controller.
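
For point 1, growing an existing mdadm array after adding a disk roughly looks like the sketch below - /dev/md0 and /dev/sdf are placeholders, and the new device count is just an example:

```
# Add the new disk to the array, then reshape to use it as an active member
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=9    # previous member count plus one

# Once the reshape has finished, grow the filesystem into the new space (ext4 shown)
resize2fs /dev/md0
```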

Good luck!
Reply
#7
Well, I'm back looking at this and I'd missed LB06's update - I should have thanked you for that!

At the moment I have a battery backed up Adaptec 3805, which is a full-on hardware RAID card with cache and IOP. I have 8x 1TB WD RE3 (Western Digital RAID Edition 3), which have TLER disabled, and are high spec drives. So I tick all the boxes for hardware RAID.

At the moment the server is in a Lian Li PC-A71FB. I like the case and I'd be very reluctant to go for something else.

At the moment there are 10 drives inside it... the 8x 1TB, an optical which can come out, and a boot drive.

The case has racks for 7 drives in the bottom and 3 in the top. It also has 5 (I think!) 5.25 inch bays. I do have rails to convert a 3.5 inch drive to a 5.25, so without spending money I could have 5 3.5s in the 5.25 inch bays... I realise I could fit 7 in there, but it'd mean spending a bit of money... I could also switch to an SSD for the boot drive, but again that'd cost money.

That means I have room for 15 drives, and I'm using 9.... which means I can add 6 drives.

I've since learned a very important thing, which I tried to explain in another thread. It's a BAD idea to replace all my drives. If I replace all my drives then I'm saying I'll have 8 drives and no more, which means when I next expand I'll have to replace all the drives again... and to make that worth it I'd need to go for 4TB drives (if I went for 2TB now), which means I'd need to wait for them to arrive and then drop down in price... bearing in mind 3TB drives are still very expensive compared to 2TB, that's a long wait.

If instead I add 2TB drives to the pool, and then later retire the 1TB drives for 3TB drives, then I can retire the 2TB drives for 4TB drives etc... and I can do that as and when drives reach the cheap point.

The end result is that I think I need to run lots of drives, in order for this to work - 8x1TB and 6x2TB is one answer - although in future I think 7x1TB and 7x2TB would mean that I could upgrade 7 drives at a time, so I'd probably get 8 2TB drives if I was to go down this route.

Alternatively I could stagger it further and get 3x2TB drives to add to the mix, and then later get 3x3TB drives when 3TB drives come down to 2TB drive prices... from that point on I can then replace some 1TB drives with 4TB drives (or retire all 8 at that point).

The second option is probably preferable, as I'm going to have to move to some kind of storage pool in order to do it, and even if it does have parity I still require a backup... meaning that if I'm going to have more space I'll need to buy more backup drives too.

From what LB06 has said, I'll need to drop (or at least should drop) the Adaptec 3805 if I go for some kind of software RAID/storage pooling. I was intending to put the drives in non-member RAID/single-drive JBOD and leave them at that, thinking the storage pool wouldn't be worried about low level access to the drives. Perhaps I also need to look at adding an HBA - I can remember finding some non-RAID adaptors, so I'll need to have a look around.

Software RAID is out - at least any traditional level of RAID. It's going to have to be some kind of storage pool.... since I use the server for other tasks changing OS isn't going to work - it runs Debian, with no GUI.

Edit: Going forward from this, here appears to be my best solution.

Basically the cheapest thing I can think to do is to buy four 2TB drives... this would be to add 2 to the storage pool, and keep 2 for backups. I should then be able to switch to some kind of storage pooling and restore from backups. It'd move my capacity up from 6TB to 10TB, which is enough to be going on with. As and when that gets full, I just buy pairs of 2TB drives until I reach the limit of what my server can hold... and at that point I start replacing 1TB drives with 3TB drives... so long as I reach that point after 3TB drives become cheap then I'm fine.

I would not be replacing the Adaptec card... I'd just be running the 8 drives in non-member RAID and hoping that doesn't have a nasty effect. The alternative is to go for cheap and nasty SATA controllers and face compatibility issues... I have an ASUS motherboard and I've seen issues myself, and read of lots more, where add-in card boot ROMs get ignored.

I'd also be using drives with TLER disabled... but they are decent drives and I'm not prepared to chuck them away, and I don't believe selling them would fetch enough to replace the capacity with normal 2TB drives.

So my initial expense... is four 2TB drives, which comes to about £200 - plus a lot of time backing up and restoring etc.!
Reply
#8
I'll need to do some research into FlexRAID to see if it's the solution for me... it's encouraging that it works on top of the OS, at the file level, which means there would be less reason to dump the Adaptec controller.

I can't seem to see whether it's easy to admin with just a root account and no GUI on a Debian server. I do see references to the command line, but I've not yet seen one that also mentions Linux. It seems very unlikely that there would be no CLI.

Something I have noticed right away is that I may not need backups... it seems I can assign more than one drive to parity, and if there were a data loss incident it would hit only one drive rather than all the data. That means I could back up only some parts, and back up less often.

I don't see any alternatives to FlexRAID that would permit the Debian server to co-exist, so it seems the best option.

I'm about to move house in a couple of weeks though, so I've got a few weeks to kick the idea about and see if it keeps sounding like the right idea.
Reply
#9
Just for the record, hardware RAID needs drives with TLER/CCTL/ERC enabled. Fortunately for you the WD RAID Edition drives qualify.
Reply
#10
That's what I meant - although not what I said! The RE3 drives come with it as standard, and Western Digital are now forcing people to buy those drives or have RAID arrays with drives that frequently drop out.
Reply
#11
Unless you buy brands that allow you to enable or disable said feature at will.

With hardware raid I would have avoided WD like the plague. I wouldn't want to pay a €90 premium per drive just for one fucking feature that could easily have been made user controllable.
Reply
#12
I certainly regret my purchase, I made it just as the whole home server thing was taking off, if I had waited a while I'd have had more options. I certainly paid through the nose.

It's looking like FlexRAID is the right option for me, and I've made a thread over on their forums for specifics....
Reply
#13
Quartermass Wrote:I certainly regret my purchase, I made it just as the whole home server thing was taking off, if I had waited a while I'd have had more options. I certainly paid through the nose.

It's looking like FlexRAID is the right option for me, and I've made a thread over on their forums for specifics....
Well, if you're going to go with FlexRAID, it's probably also not going to like having TLER drives, and it may not like not having a real HBA. Essentially FlexRAID is a form of software raid. I highly doubt FlexRAID is going to kick out a drive just because error recovery takes too long, so you might just as well let the drive attempt to recover itself, potentially avoiding unnecessarily degrading your array.

Having said that, running sw raid with TLER enabled is not as bad as running hw raid with TLER disabled.
Reply
#14
Indeed... not perfect, but if I was looking for perfect I'd need to spend a fortune... FlexRAID seems geared more towards files than blocks, so I'm less worried about the non-member RAID.

I'll research disabling TLER on the drives at a later date; it won't affect any decisions I make now, even if it turns out not to be possible...

Once I get FlexRAID up and running it'll be a real sigh of relief... from that point on I can get cheap drives and chuck them in over time.
Reply
#15
If you have older WDs you might be able to disable TLER using the official WDTLER tool, btw.
Reply