Windows 8 Storage Spaces: strategy, parity & spin-down
#1
Hi,

I'm considering using Windows 8 on my future media server box, but I have some questions about parity, striping, and disk spin-down.
I'll build a four-space configuration (plus an SSD for the OS):
  1. Documents: mirror space (1 TB)
  2. Photos: mirror space (2 TB)
  3. Media: parity space (20 TB)
  4. Programs: no redundancy (for software and game installations)

According to this link, Windows uses 256 MB slabs.
So let's imagine I have a 4-HDD configuration and I want to watch a 1 GB movie (stored on the Media space).
Will all the hard drives containing a slab of that movie be spinning for the whole movie, or only while their part is being read? (HDD1 during the first quarter of the movie, HDD2 for the second one, and so on...)
I'm a little afraid that in a few years, when I'm using a large number of hard drives (let's say > 20), none of them will ever be spun down while any media file is being read! (See the sketch below.)
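To make my question concrete, here is a rough sketch of what I imagine happens, assuming a simple round-robin slab placement (I don't know whether Storage Spaces really allocates slabs this way):

# Rough sketch: how a 1 GB file could map onto 256 MB slabs in a pool,
# assuming a purely hypothetical round-robin placement -- the real
# Storage Spaces allocator may behave differently.

SLAB_SIZE_MB = 256

def disks_touched(file_size_mb, num_disks, start_disk=0):
    """Set of disk indices a sequential read of the file would hit."""
    num_slabs = -(-file_size_mb // SLAB_SIZE_MB)  # ceiling division
    return {(start_disk + i) % num_disks for i in range(num_slabs)}

# A 1 GB (1024 MB) movie is 4 slabs.  On a 4-disk pool a round-robin
# layout touches every disk; on a 20-disk pool it still only touches 4,
# but *which* 4 depends on where the allocator put the slabs.
print(disks_touched(1024, 4))    # {0, 1, 2, 3}
print(disks_touched(1024, 20))   # {0, 1, 2, 3}

So my real worry is whether those slab-holding disks stay spinning for the whole movie, and whether parity makes even more disks spin.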
I like how unRAID (or SnapRAID) gives you drives that remain usable outside the pool, a dedicated parity disk, and good spin-down behaviour.
I can live without most of that, except the spin-down management. (I don't care about write access; only reads matter for my usage.)
But I like the all-in-one solution Windows 8 offers, especially the silent slab correction/regeneration.
So do you think the best approach is Windows 8 parity storage, or SnapRAID for better spin-down management (at the cost of integration)?

Thanks for your help



Reply
#2
I'm a proponent of unRAID... why?
Because it's simple. I don't have to have a degree in RAID levels or hardware architecture to get it up and running.

I love Windows, but I don't have the time, energy, or motivation to read all the docs, and I'd still end up with more questions.
With unRAID, I installed their software to my thumb drive, added a license key, and booted the box.
Sure, I did have to read the materials, but essentially I'm in "set it and forget it" mode now.

Whatever solution you choose, at least you're thinking about redundancy. Too often people do that only after they've had a crash.
Reply
#3
Thanks for your answer.
I used unRAID some years ago and was satisfied with it.
But today I'm building a combined HTPC/DLNA/server/gaming box, so I chose Windows as the operating system.

And yes, I'm thinking about redundancy, no worries! I'm using redundancy, external USB backups, and FTP backup for critical files (photos) Smile
Reply
#4
(2012-08-07, 15:14)mika91 Wrote: Thanks for your answer.
I used unRAID some years ago and was satisfied with it.
But today I'm building a combined HTPC/DLNA/server/gaming box, so I chose Windows as the operating system.

And yes, I'm thinking about redundancy, no worries! I'm using redundancy, external USB backups, and FTP backup for critical files (photos) Smile

Very cool... when you do put that system all together, can you take pictures and post them?
A lot of readers have mentioned that they want an "all-around" machine.
Your experience will help them too.

Reply
#5
Let me pull up this old thread on "Windows 8 Storage Spaces". I am surprised that this has not gained more interest. Is anyone using it?

I know that many people LOVE unRAID, and I don't want to read about why unRAID is better or what it does. It would be great to know what "Windows 8 Storage Spaces" can do and to get answers to the questions raised above.

Thanks in advance!!!
Server: Asus Sabertooth Z77 | Intel Core i5 3.4 GHz | 16 GB DDR3 | 128 GB SSD, 82 TB (9 x 6 TB, 7 x 4 TB)
HTPC 1: Raspberry Pi 2 | HTPC 2: Raspberry Pi 2 | HTPC 3: Raspberry Pi
Reply
#6
(2012-08-07, 15:17)GortWillSaveUs Wrote: Very cool... when you do put that system all together, can you take pictures and post them?
A lot of readers have mentioned that they want an "all-around" machine.
Your experience will help them too.

That's pretty much what I've built. An A6-3500 (soon to be an A8-3870K), running Windows 7, uses DriveBender to turn four HDDs into one 10 TB drive that is shared on the network, runs SickBeard, CouchPotato and SABnzbd, XBMC of course, and I'm starting to get Steam Big Picture going on it.

Though the pictures are less than impressive, as it's installed in a generic black ATX tower. Tongue

Reply
#7
I would recommend you use Flexraid. I have a 10 TB server running on Windows 8 with a 2 TB drive for parity. It's super easy to set up, and when playing one movie only one HDD spins. It's pretty quick too, I haven't had a crash so far, and it's great for SMB sharing.
Reply
#8
Thanks! Are you confirming then that a Win8 "spaces" config will lead to all 16 disks spinning when only watching one movie?
Server: Asus Sabertooth Z77 | Intel Core i5 3.4 GHz | 16 GB DDR3 | 128 GB SSD, 82 TB (9 x 6 TB, 7 x 4 TB)
HTPC 1: Raspberry Pi 2 | HTPC 2: Raspberry Pi 2 | HTPC 3: Raspberry Pi
Reply
#9
I am building a system right now for my RV. I was going to use Windows 8 Storage Spaces to combine my drives into a large pool, then have folders for Movies and TV Shows on the pool which would be synced to my unRAID server when at home so it is always ready to travel.

In my research, it looks like the pool will spin down, but only when nothing on it is in use. Users report that processes like shadow copies and indexing tend to keep it spun up, and from what I've read, once one drive is spun up, they all are. In all the scenarios I found, the users were using either mirroring or parity redundancy for their pool. Since the parity is spread out over every drive in the pool, I guess that explains all the drives having to spin up when one drive is accessed, though it doesn't explain it so well for mirroring. I don't plan on using any redundancy on my pool, so I'm hoping I won't have the same issues. I may be wrong, though, because I couldn't find a definitive answer from MS on it.

If Storage Spaces is a bust, then Flexraid or Drive Bender are the other two options I'm looking at. I'll let you know in the next day or two whether all the drives spin up when watching a movie and whether they actually spin down when not in use.
HTPC 1 - AMD A8-3870K, ASRock A75M, Silverstone ML03B, Kingston HyperX 4GB DDR3 1866, Crucial M4 64GB SSD
HTPC 2 - HP Stream Mini, 6GB Ram
unRAID 6 Server - Intel Celeron G1610, 20TB Storage

Reply
#10
I use DriveBender and I can tell you that it only spins up the drives that hold the data it needs. Since it places episodes on different drives somewhat randomly in an effort to balance data across all the drives, when I'm watching episodes in sequence it'll sometimes pause and you'll hear a different drive spin up, because the next episode is on a drive that was spun down.

So yeah, under normal operation DriveBender lets each drive spin down independently when not in use, as per Windows' power-saving rules.
Reply
#11
Resurrecting this old thread searching for answers.
Can anyone actually confirm whether drives spin down using Windows 8.1 Storage Spaces, and/or whether all drives need to spin when reading one title?
Is the data for one movie spread across the entire drive pool, or placed in its entirety on one drive?
I am at the point of ditching backup and switching to reliable redundancy only, and I'm considering my options.
DriveBender looks interesting but if Windows already has it built in then.........
HOW TO - Kodi 2D - 3D - UHD (4k) HDR Guide Internal & External Players iso menus
DIY HOME THEATER WIND EFFECT

W11 Pro 24H2 MPC-BE\HC madVR KODI 22 GTX960-4GB/RGB 4:4:4/Desktop 60Hz 8bit Video Matched Refresh rates 23,24,50,60Hz 8/10/12bit/Samsung 82" Q90R Denon S720W
Reply
#12
(2015-04-06, 19:52)brazen1 Wrote: DriveBender looks interesting but if Windows already has it built in then.........

Windows already has a media player built in as well, so no reason to use Kodi, right? Wink

Storage Spaces is a typical Windows 1.0 release. Essentially it has bugs that you can either call bugs or, taking the fanboy stance, call "unsupported configuration errors".

Whatever you prefer to call it, there is clear inflexibility in WSS. Good read here http://arstechnica.com/information-techn...-it-works/

While my server runs 8.1 Pro, I chose Flexraid tRAID instead of Storage Spaces. Flexraid has snapshot (RAID-F) and real-time (tRAID) versions; both are very flexible and run on top of Windows with minimal resource requirements.

Neither variant of Flexraid stripes files at all. By that design, reads only require the disk the file is on to spin. There are multiple modes you can use to set up Windows Storage Spaces, and some will not stripe files either. Since Storage Spaces is real time (like Flexraid tRAID), a write will spin all disks. Reads only need one disk (if WSS is using a non-striped parity mode). Flexraid doesn't prevent disk spin-down, and it's very flexible for RAID expansions and contractions, starting with disks containing existing data, etc.
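To illustrate the read-footprint difference, here's a quick sketch (the file placements below are made up, purely to show the idea):

# Quick illustration of which disks must spin to serve a read under
# striped vs. non-striped placement.  The layouts are hypothetical,
# just to show why non-striped parity (Flexraid, or a non-striped WSS
# mode) lets the other disks stay spun down during reads.

# Non-striped: each file lives entirely on one data disk.
non_striped = {"movie_a.mkv": ["disk1"],
               "movie_b.mkv": ["disk2"]}

# Striped: each file is chopped up and spread over several disks.
striped = {"movie_a.mkv": ["disk1", "disk2", "disk3"],
           "movie_b.mkv": ["disk2", "disk3", "disk4"]}

def disks_spun_up(layout, filename):
    """Disks that have to spin to serve a read of this file."""
    return set(layout[filename])

print(disks_spun_up(non_striped, "movie_a.mkv"))  # just disk1
print(disks_spun_up(striped, "movie_a.mkv"))      # disk1, disk2 and disk3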
Reply
#13
Hi Dark_Slayer.

It appears you are familiar with Flexraid. In further research for a solution for myself, I've seen other informative posts of yours on other forums. Thank you for your input here. I have a few questions; perhaps you'd be kind enough to help with them? I have no experience with RAID at all and I'm trying to learn. My goal is to ditch the 1:1 backup I've been doing and trade it for RAID with reliable redundancy. I feel that with 100% redundancy, backup is not needed. http://forum.kodi.tv/showthread.php?tid=223605 While WSS is not actually RAID, it is similar; I'm losing interest in it by the minute and considering Flexraid:

To recover from drive failure you simply replace the drive and WSS takes care of the rest. Very convenient. The cost for that convenience is a 33% reduction in the usable space of every drive in the pool, not including the parity disk, which is itself a complete loss (as expected with any parity drive). So 30 TB of pooled disks gives only 20 TB of usable space. That's a lot of overhead for convenience. How does Flexraid compare to this? If a drive fails, can I just replace the drive and Flexraid takes care of the rest, or do I have to jump through hoops and cross my fingers? If you know, same with DriveBender?
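To put numbers on my (possibly wrong) understanding of that overhead:

# My back-of-the-envelope math on the parity overhead -- I may well have
# this wrong, corrections welcome.

pool_raw_tb = 30                 # total raw capacity of the pooled disks
wss_parity_efficiency = 2 / 3    # with WSS parity, roughly 1/3 goes to parity

usable_tb = pool_raw_tb * wss_parity_efficiency
print(usable_tb)                 # 20.0 -> 30 TB of disks, about 20 TB usable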

It appears there is also a problem with rebalancing. I gather this means that when you want to increase the size of a WSS parity pool, you MUST add drive(s) at least as large as the original pool that was built. I assume this is so the data stays on the original pool while the new, larger pool is created. Once that's done, the data transfers to the newly added drive(s); then the original drives delete their copy of the data and are finally included in the new pool. I cringe thinking I must double my pool every time I intend to increase its size. Again, I'm naïve and I'm a technical idiot, so this is just how my mind perceives it and I could be distorted. How does Flexraid compare to this? If you know, same with DriveBender?

Write speed using parity is 30 MB/s at best. How does Flexraid compare to this? If you know, same with DriveBender?

Flexraid has either tRAID or RAID-F. The only real difference I see is that tRAID parity is real-time and RAID-F parity is done on a schedule. Can I easily synchronize RAID-F (snapshot RAID) manually after editing the pool, instead of waiting for a scheduled sync? I feel this would amount to real-time parity as long as I do it manually every time I add a movie to my collection?

Finally, my drives are currently installed on all 11 of my motherboard's internal ports, with no controller card. I may add additional external USB drives. These ports use different controllers (JMicron, Marvell, Gigabyte, and Intel). Will Flexraid accept this mix and match of SCSI, SATA, USB, SAS and Intel ports? DriveBender appears to.

I'm trying to narrow my choices down to Flexraid or DriveBender, but I don't know what to compare other than what I've asked, since I'm unfamiliar with every aspect of RAID. The most important thing to me is not losing my data. 1:1 backup provided exactly that, but it's time to rely on redundancy now instead. Along with that comes a certain amount of technical difficulty when it's time to call on that redundancy. I want to trust one of these programs before I commit, since once it's started, there's no going back.

Thanks for any help you or another may offer.
HOW TO - Kodi 2D - 3D - UHD (4k) HDR Guide Internal & External Players iso menus
DIY HOME THEATER WIND EFFECT

W11 Pro 24H2 MPC-BE\HC madVR KODI 22 GTX960-4GB/RGB 4:4:4/Desktop 60Hz 8bit Video Matched Refresh rates 23,24,50,60Hz 8/10/12bit/Samsung 82" Q90R Denon S720W
Reply
#14
Let's just say RAID isn't backup, and real backup means three mirrors with one offsite, etc.

That being said, I've got 20+ TB of mixed movies/music/TV series and there is no way I'm ever going to mirror that much data. While it's not a full backup, parity protection gets you a long way. Any single drive can die and its contents won't be lost.

For RAID there are a variety of options. There is true hardware RAID, which typically only supports standard levels like RAID 0, 1, 4, 5, 10. Storage Spaces and Flexraid also support standard RAID levels (this is new for Flexraid; Brahim added the feature something like 5 or 6 months ago). Anything not using a dedicated HW RAID card is not *really* hardware RAID. Motherboards that support RAID levels are still considered "software-assisted RAIDs" (SARAIDs) because the hardware performing the RAID calculations on those drives also runs your OS (it isn't dedicated like a HW card). Then there is pure software RAID like Flexraid, Storage Spaces, and even the basic RAID configurations from Windows Disk Management.

Pure software RAID is what you are already looking at; it just needs a JBOD disk configuration (and it gives the most flexibility if you need to replace a motherboard, change your system, etc.). I've moved Flexraid between motherboards, but I'm not sure how easily WSS configurations can be moved around. The good thing about most software RAIDs (like snapraid, Flexraid, WSS) is that the data disks can typically be read outside of the array they are in, since the data is not striped in snapshot parity modes.

Flexraid's RAID-F and snapraid are snapshot parity protection RAIDs, but their actual RAID "level" is not a specifically defined type like RAID 0, 1, etc. Instead they are both what you'd call "hybrid RAIDs", closest to RAID 4 in their parity calculations.
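The parity math itself is just XOR, by the way. Here's a tiny sketch of why any single data disk can be rebuilt from the others (a simplified byte-level view, not Flexraid's or snapraid's actual on-disk format):

# RAID 4-style parity in miniature: the parity block is the XOR of the
# data blocks, so any one missing block can be rebuilt from the rest.
# Simplified byte-level view -- not Flexraid's or snapraid's real format.

data_disks = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

parity = xor_blocks(data_disks)

# Simulate losing disk 1 and rebuilding it from the parity plus survivors:
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_disks[1]   # the lost block is recovered exactly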

Onto your questions: I don't really recommend snapshot RAID from Flexraid anymore. There are more differences than just real time vs. snapshot.

RAID-F has some quirks:
-Starting the storage pool must wait for the parity to be built (10+ hrs)
-When restoring a failed disk, your data is offline (10+ hrs)
-There is little you can do to *fix* parity mismatches, and they require a parity rebuild (10+ hrs)

All of that amounts to a bunch of time with your data inaccessible

tRAID has improvements in all of these areas:
-You can start your pool (and read data) before the parity is built
-Parity can be built online (force a verify sync)
-Parity can be overwritten when there are small mismatch snafus (a verify sync rather than a whole parity rebuild)
-The array can run in degraded mode after any disk failure
-Data is reconstructed live**

**This live reconstruction part took me a while to understand. When you give tRAID your disks and assign them as a UoR (unit of risk), either for data (D01, D02, etc.) or parity (PPU1, PPU2, etc.), Flexraid's tRAID service takes the drives you see in Windows Disk Management (Disk01, Disk02, etc.) and puts them in a low-level offline mode. Then it creates a "virtual disk" instead, with the name you give it in Flexraid's tRAID web client. Essentially, if you assign 10 disks in tRAID and start the storage pool, Windows Disk Management will show 20 disks... hence the live reconstruction part. Once a disk fails, the data is still available from the parity via the "virtual disk".
--This is a neat feature but hard to fully describe; you almost have to use it to appreciate it
--Basically when the disk fails you get an alert (you can set them up to SMS, email, etc)
--When the disk fails you see no difference in your pool (all data is still accessible)

Basically it runs in degraded mode with all the data present from your parity. While running in this mode your *newly* written data won't be protected, but all existing data is still accessible. I know I've said that three times, but it's such a convenience. Then when you get a replacement for the failed disk, you can use a standard "restore" in tRAID (though you have to run it through the web client). There is also a caveat that you can only "restore" to a disk of the same size as the one that failed.

That makes tRAID sound highly inflexible, but there is a much better way to restore a failed disk in tRAID. What you can do is go to Windows Disk Management and assign a drive letter to the "virtual disk" that Flexraid created. Even after your physical disk fails, the virtual disk is still available. When you give it a drive letter, you can plug in your empty disk and copy all the data off the "failed" (virtual) disk. This way you can add a bigger disk when your old one fails, and you can do all of this with almost zero downtime. After it's done, you just create a new RAID configuration, start the storage pool immediately, and then build new parity online (the old parity will have to be overwritten).

There is always a warning about the risk of running these operations in "online" mode, but I use it exclusively without issue.

It sounds complicated, but there is a free trial period. I recommend that you build an array during the trial and test a disk restore (with the copy-data-from-a-failed-disk method I explained above) to get a feel for it. The hard part isn't automatic, but you can start the copy and recreate the array within about 5 minutes (waiting for them to complete can take a long time, but you don't have to watch it complete Wink )

I'll post my performance settings in tRAID shortly, but with the combination I'm using I actually get 80+ (usually 90+) MB/s writes to the pool. I also use a landing disk. My server has a 256 GB SSD, and I made an empty 128 GB partition to use as a landing disk for writes. You can use a completely separate SSD as well, or no landing disk at all.

With 80+ MB/s write speeds I didn't really *need* a landing disk, but I already had the SSD in the server and it had more than enough free space (I overbought, as I only need about 50 GB, so I still have 70 GB free even with the 128 GB partition used as the landing disk). With a landing disk, all writes are dumped to the SSD and flushed to your array after a delay you specify (I use 15 minutes). My writes approach 300 MB/s this way Wink
Reply
#15
Also, you asked about USB/SATA/SCSI, etc.

I would advise against USB in any configuration; besides, USB drives aren't officially supported in tRAID. There are surely some that will work and some that will not.

If you have a spare PCIe slot (x4 or more), I'd recommend a Dell PERC H200, PERC H310, or IBM M1015. They can be had pretty cheap on eBay, and they are easy to cross-flash to LSI firmware, which is well supported and will simply present your disks to Windows (JBOD). Here is a guide for flashing the H200/H310 cards (which are cheaper right now): https://techmattr.wordpress.com/2013/08/...o-it-mode/

Then connect a couple of these cables http://www.newegg.com/Product/Product.as...6816116097 to get 8 SATA ports from the "LSI" (flashed) card. If you have a Norco or Supermicro case with SFF-8087 backplanes, you can connect 8087 to 8087 instead of using the breakout-to-SATA cables.

USB enclosures can sometimes be had cheap, but they usually suck and the warranty is typically shorter. They are also slower overall in access times, which will result in slower WSS or tRAID writes. If you don't have space in your case for more drives, you can also route the cables (the breakout cables I listed are pretty long) and a single Molex power cable to one of these http://www.newegg.com/Product/Product.as...-_-Product (it can sit outside your case without issue).
Reply
