Large Case Suggestions 20+ drives
#16
drewy Wrote:I'm in the process of "rolling my own". I wanted a case that could handle in excess of 22 drives, and a pre-built one + caddies would have cost me £600+. So I decided to invest a little quality time in the man shed and see what I could come up with.

I don't need hot swap drive bays, so that saves a fair amount of dosh, but it did take me a while to build the drive cages. The case is being built with 2 x 11 3.5" cages and has space for another 10 or so when I have the need to add more.

It's all built from various aluminum stock and hundreds of rivets; it may not end up being very pretty but it should be functional, which is all I need. It should also cope with my space needs for a number of years to come.

To try and keep it as small as possible I'm mounting the 2x11 cages side by side at the front of the case, so it's a little fatter than the usual tower but isn't excessively tall.

I'm very interested in this. Even the Lin cases, after you get the required modules (if you can find them over here), it's over 600 pounds.

I too don't "need" hot swap drive bays, in that I'm hoping my drive failures are infrequent enough that it isn't a major annoyance.

Do you have any photos? How did you address mounting the motherboard?
Reply
#17
Wouldn't it be more cost effective and just plain easier to have two huge servers rather than one gigantically massive server?
Reply
#18
T800 Wrote:Wouldn't it be more cost effective and just plain easier to have two huge servers rather than one gigantically massive server?

You also have options when one server goes down. This is what I do: one server for movies and one for TV.

Reply
#19
T800 Wrote:Wouldn't it be more cost effective and just plain easier to have two huge servers rather than one gigantically massive server?

It might be, I'd have to work that out.

I know that if there were two servers instead of one, then that'd be a minimum of 3 extra hard drives, or 4 extra if you wanted the same protection of 3 parity drives per server. Going with just 3 extra would still allow 2 parity drives on the new box (1 drive for the additional OS and the other 2 as parity).

Then the additional MB, Ram, CPU, PSU, Case.

And the additional cost of running two PSUs. It could still be cheaper running two; that would depend on what the efficiencies are.

Something to consider, certainly.
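That drive-overhead arithmetic can be sketched out in a few lines (a rough illustration only; the 3-parity-drives-per-server and 1-extra-OS-drive figures come from the paragraph above):

```shell
#!/bin/sh
# Illustrative overhead arithmetic: one big server vs two servers,
# keeping the same protection level (3 parity drives per server).
PARITY_PER_SERVER=3
EXTRA_OS_DRIVES=1   # one more OS drive for the second machine

one_server=$PARITY_PER_SERVER
two_servers=$((2 * PARITY_PER_SERVER + EXTRA_OS_DRIVES))
extra=$((two_servers - one_server))

echo "Extra drives needed to run a second server: $extra"
```

Which matches the "4 extra" figure above: 6 parity drives plus an OS drive, versus 3 parity drives on a single box.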
Reply
#20
fional Wrote:It might be, I'd have to work that out.

I know that if there were two servers instead of one, then that'd be a minimum of 3 extra hard drives, or 4 extra if you wanted the same protection of 3 parity drives per server. Going with just 3 extra would still allow 2 parity drives on the new box (1 drive for the additional OS and the other 2 as parity).

Then the additional MB, Ram, CPU, PSU, Case.

And the additional cost of running two PSUs. It could still be cheaper running two; that would depend on what the efficiencies are.

Something to consider, certainly.

Depends on the server. I don't know the complete ins and outs of it all but take unRAID for example:
1 USB flash drive for the OS (not using a SATA port) and a single drive for parity: 1 hard drive of overhead.

You could build one 30TB unRAID server for about £400, minus the drives of course. That's £800 for all UK sourced, easily replaceable and upgradeable parts for 60TB worth of server.
Or even 80TB (40TB x 2) for a bit more.
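As a back-of-the-envelope check on those figures (£400 of UK-sourced parts per 30TB server, drives excluded, as quoted above):

```shell
#!/bin/sh
# Parts cost per TB for two unRAID servers, using the figures from the post
# (drives excluded).
COST_PER_SERVER=400   # GBP, parts only
TB_PER_SERVER=30

total_cost=$((2 * COST_PER_SERVER))
total_tb=$((2 * TB_PER_SERVER))

echo "Two servers: ${total_tb}TB of server for £${total_cost}"
echo "Roughly £$((total_cost / total_tb)) of parts per TB"
```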
Reply
#21
I embarked on this journey in January of this year, did a great deal of research, and ended up with a system running hardware RAID 6 on an Areca 1880ix-16 card and software RAID across two 60GB SSDs, all in a Norco 4220 case. The RAID card has 4 internal SFF-8087 ports (breaking out to 16 drives) and an external SFF-8088 port. I was able to convert the SFF-8088 port to an internal one using a $40 PCI adapter that turns it into SFF-8087, allowing the last 4 drive bays of the Norco 4220 to be utilized. I run all this on an AMD 1055T with a motherboard that has two PCIe 8x slots and a single PCIe 4x slot. The 4x slot is used by a quad-GigE card with its ports bonded together, and the remaining 8x slot is kept for future growth or whatever I see fit. Right now I have only 16 drives in the RAID 6 with room for 4 more; I will utilize a cold spare for max space usage. This all runs on Ubuntu and serves the following purposes:

Virtual Box VM running Windows 7 to serve iTunes Media to Home
sabNZBd
Sickbeard
Couchpotato
Headphones
WebServer
ZoneMinder (Monitors my home Cameras)
My two IP cameras dump H264 video all the time via SMB
SMB Server
AFP Server
TimeMachine Server
MySQL DB for the XBMC environment

With all this said and done here is the following disk layout so far:

Code:
root@beast:~# vgs
  VG             #PV #LV #SN Attr   VSize  VFree
  vg_areca_raid6   1  18   0 wz--n- 25.47t 14.65t
  vg_ssd           1  10   0 wz--n- 54.96g 31.90g
root@beast:~# lvs
  LV          VG             Attr   LSize    Origin Snap%  Move Log Copy%  Convert
  backup      vg_areca_raid6 -wi-ao  150.00g                                      
  cameravideo vg_areca_raid6 -wi-ao 1000.00g                                      
  documents   vg_areca_raid6 -wi-ao    1.00g                                      
  dump        vg_areca_raid6 -wi-ao   10.00g                                      
  ebooks      vg_areca_raid6 -wi-ao    5.00g                                      
  itunes      vg_areca_raid6 -wi-ao  200.00g                                      
  music       vg_areca_raid6 -wi-ao  100.00g                                      
  news        vg_areca_raid6 -wi-ao  250.00g                                      
  pictures    vg_areca_raid6 -wi-ao  100.00g                                      
  scripts     vg_areca_raid6 -wi-ao    5.00g                                      
  software    vg_areca_raid6 -wi-ao  150.00g                                      
  source      vg_areca_raid6 -wi-ao    5.00g                                      
  syslog_ng   vg_areca_raid6 -wi-ao   10.00g                                      
  tm-kitt     vg_areca_raid6 -wi-ao    1.00t                                      
  tm-rebecca  vg_areca_raid6 -wi-ao    1.00t                                      
  torrents    vg_areca_raid6 -wi-ao  200.00g                                      
  videos      vg_areca_raid6 -wi-ao    6.49t                                      
  vms         vg_areca_raid6 -wi-ao  200.00g                                      
  home        vg_ssd         -wi-ao    2.95g                                      
  root        vg_ssd         -wi-ao    1.91g                                      
  swap01      vg_ssd         -wi-ao    1.91g                                      
  swap02      vg_ssd         -wi-ao    1.91g                                      
  swap03      vg_ssd         -wi-ao    1.91g                                      
  swap04      vg_ssd         -wi-ao    1.91g                                      
  swap05      vg_ssd         -wi-ao    1.91g                                      
  tmp         vg_ssd         -wi-ao  976.00m                                      
  usr         vg_ssd         -wi-ao    3.81g                                      
  var         vg_ssd         -wi-ao    3.91g                                      
root@beast:~#
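With ~14.65t still free in `vg_areca_raid6`, adding or growing a volume later is straightforward. A sketch of the relevant LVM commands (the new LV name and the sizes here are made up for illustration, and the resize step assumes an ext4 filesystem):

```shell
# Carve a new logical volume out of the volume group's free space
# (name and size are illustrative, not from the actual layout above).
lvcreate -L 500G -n newshare vg_areca_raid6

# Or grow an existing volume and then resize its filesystem to match
# (resize2fs assumes the LV holds an ext4 filesystem).
lvextend -L +1T /dev/vg_areca_raid6/videos
resize2fs /dev/vg_areca_raid6/videos
```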
As you can see below, I get some decent speeds out of the disk array.

Write:

Code:
root@beast:/storage/video# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
16384000000 bytes (16 GB) copied, 24.688 s, 664 MB/s

real    0m25.093s
user    0m0.220s
sys    0m18.030s
root@beast:/storage/video#

Read:

Code:
root@beast:/storage/video# time sh -c "dd if=ddfile of=/dev/null bs=8k"
2000000+0 records in
2000000+0 records out
16384000000 bytes (16 GB) copied, 15.7293 s, 1.0 GB/s

real    0m15.733s
user    0m0.280s
sys    0m11.440s
root@beast:/storage/video#
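One caveat worth noting with `dd` benchmarks: without a trailing `sync` (which the write test above does include inside `time`) or `oflag=direct`, the page cache can inflate the write figure. The reported numbers are internally consistent; a rough check using the byte count and rounded times from the output above:

```shell
#!/bin/sh
# Sanity-check the dd figures above: bytes copied / elapsed seconds.
BYTES=16384000000   # 16 GB, from the dd output
WRITE_SECS=25       # ~24.7 s, rounded up
READ_SECS=16        # ~15.7 s, rounded up

write_mbps=$((BYTES / WRITE_SECS / 1000000))
read_mbps=$((BYTES / READ_SECS / 1000000))

echo "Write: ~${write_mbps} MB/s"
echo "Read:  ~${read_mbps} MB/s"
```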
HTPC(s): All running LibreELEC
  • AMD 2200G APU on Gigabyte AB350N-Gaming WIFI-CF
  • RPI3 x2 | RPI2 x2
NAS: FreeNAS (Latest Stable) | NFS/CIFS
Reply
#22
I'm hoping to have my case about finished tomorrow. Then I can get busy and throw some hardware into it. Initially an AMD E350 mATX board and 10 disks.
I'll take a few snaps and post them for anyone who may be interested. I intend to run this thing with Ubuntu and Greyhole; that meets my needs better than unRAID.
Reply
#23
drewy Wrote:I'm hoping to have my case about finished tomorrow. Then I can get busy and throw some hardware into it. Initially an AMD E350 mATX board and 10 disks.
I'll take a few snaps and post them for anyone who may be interested. I intend to run this thing with Ubuntu and Greyhole; that meets my needs better than unRAID.

I'm certainly interested in the pics - I was thinking of doing a side-by-side yoke modification or, depending on how the meeting goes this weekend, maybe a custom bend. For me, unRAID doesn't sound as appealing as FlexRAID; I think I'll give that a try as an experiment. Could be fun!
Reply
#24
If anyone is interested, my home-grown case can be seen here. In its current state it can house 22 drives, and I have room to add a further 10 at a later date.
For (what will be) obvious reasons, I'm calling it UGLY!
Reply
#25
drewy Wrote:If anyone is interested, my home-grown case can be seen here. In its current state it can house 22 drives, and I have room to add a further 10 at a later date.
For (what will be) obvious reasons, I'm calling it UGLY!

Actually, that is exactly what I want to build. It's great, because I think I could easily make it fit 25 drives.

How did you come up with the measurements for what goes where? I suppose there's a definite standard, like MicroATX, for where the holes go.

I think that's really amazing; this is exactly what I want to do.

I was measuring out a regular case and trying to "scale" it upwards, but that's a lot more time-consuming than I initially thought it would be.

Very nice!
Reply
#26
fional Wrote:I suppose there's a definite standard, like MicroATX, for where the holes go.

There is:
http://www.formfactors.org/formfactor.asp

You'll find the spec there.
Reply
#27
PatrickVogeli Wrote:There is:
http://www.formfactors.org/formfactor.asp

You'll find the spec there.

That's handy. Wow, this is turning out to be a rather good day. Earlier I was lamenting Big Grin
Reply
#28
I cut the motherboard tray out of another "donor" case. You'll just need to make sure that the one you use has the mounting holes for the form factor of motherboard that you want to use.
Mine came from a full-size ATX case, but since I only wanted to support an mATX board I needed only 4 slots' worth of tray. I actually cut it on the fifth slot to give myself a little elbow room.

I knew I wanted a bare minimum of 22 drives, and the space I had for the server decreed that it had to be short'n'fat as opposed to long and skinny. After a little bit of thought I hit on the idea of mounting the drives in twin side-by-side stacks at the front. Once I had built the drive cages (the real fiddly bit) the rest of the case pretty much fell into place. The only part that was a headache was working out how to "fix" the drive cages in the machine, since I needed them to be removable so I could add/remove drives. The clamp-type arrangement I came up with seems to do the job and fits in with my space constraints.
Reply
#29
drewy Wrote:I cut the motherboard tray out of another "donor" case. You'll just need to make sure that the one you use has the mounting holes for the form factor of motherboard that you want to use.
Mine came from a full-size ATX case, but since I only wanted to support an mATX board I needed only 4 slots' worth of tray. I actually cut it on the fifth slot to give myself a little elbow room.

That's a good idea. And then you just put together the yoke for the drives by measuring the bits out? How long did the drive tray bit take you?
Reply
#30
Pretty much, yes. I used a few old IDE drives as patterns, and the fact that I was using 10x10mm aluminum angle for the rails that the drives sit on naturally gave me approx 10mm spacing between each drive.
There is a fair bit of metal in those drive rails and a hell of a lot of holes! I had some difficulty getting the drive mounting holes to align. With hindsight I think I would have sliced the side off one of the old IDE drives and used that as a pattern. As it was I made a pattern by eye, and some of the holes turned out not to align correctly. Not a big deal; I just opened them up a little to ensure I could screw in all four screws for each drive.

It probably took about a day to make the drive cages: a few hours to cut all the metal, a few more to mark and drill all the holes, and another few to fire all the rivets in.
Reply
