Let's just get this out of the way: RAID isn't backup, and a real backup means three copies with one offsite, etc.
That being said, I've got 20+ TB of mixed movie/music/TV series and there is no way I'm ever going to mirror that much data. While it's not a full backup, parity protection gets you a long way: any single drive can die and its contents won't be lost.
For RAID there are a variety of options. There is true hardware RAID, which typically only supports standard levels like RAID 0, 1, 4, 5, 10. Storage Spaces and FlexRAID also support standard RAID levels (this is new for FlexRAID; Brahim added the feature something like 5 or 6 months ago). Anything not using a dedicated HW RAID card is not *really* hardware RAID. Motherboards that support RAID levels are still considered "software-assisted RAIDs" or SARAIDs, because the CPU doing the RAID calculations for those drives is the same one running your OS (not dedicated like on a HW card). Then there is pure software RAID like FlexRAID, Storage Spaces, and even the basic RAID configurations in Windows Disk Management.
Pure software RAID is what you are already looking at; it just needs a JBOD disk configuration (and it gives the most flexibility if you need to replace a motherboard, change your system, etc). I've moved FlexRAID between motherboards, but I'm not sure how easily WSS configurations can be moved around. The good thing about most software RAIDs (like SnapRAID, FlexRAID, WSS) is that the data disks can typically be read outside of the array they are in, since the data is not striped in snapshot parity modes.
FlexRAID's RAID-F and SnapRAID are snapshot parity protection RAIDs, but their actual RAID "level" is not a specifically defined type like RAID 0, 1, etc. Instead they are both what you'd call "hybrid RAIDs", and closest to RAID 4 for the parity calculations.
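If it helps to see the idea concretely, here's a minimal sketch of the single-parity concept. This is the generic RAID 4-style XOR scheme, not FlexRAID's or SnapRAID's actual implementation, and the toy "disks" are just made-up byte blocks:

```python
# Toy illustration of single-parity protection: parity is the XOR of the
# data blocks, so any ONE missing block can be rebuilt from the rest + parity.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_disks = [b"AAAA", b"BBBB", b"CCCC"]   # pretend each is a disk's contents
parity = xor_blocks(data_disks)            # what the parity disk would hold

# Disk 1 "dies"; rebuild its contents from the surviving disks plus parity
survivors = [d for i, d in enumerate(data_disks) if i != 1]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_disks[1]            # we get b"BBBB" back
```

That's also why a single parity drive only protects against one failure at a time: the XOR can only solve for one unknown block.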
On to your questions: I don't really recommend FlexRAID's snapshot RAID (RAID-F) anymore. There are more differences between it and tRAID than just real-time vs. snapshot.
RAID-F has some quirks
-Starting the storage pool must wait for the parity to be built (10+ hrs)
-When restoring a failed disk your data is offline (10+ hrs)
-There is little you can do to *fix* parity mismatches, and they require a parity rebuild (10+ hrs)
All of that amounts to a bunch of time with your data inaccessible
tRAID has improvements in all of these areas
-You can start your pool (and read data) before the parity is built
-Parity can be built online (force a verify sync)
-Small mismatch snafus in the parity can be fixed by overwriting it (a verify sync rather than a whole parity rebuild)
-Array can run in degraded mode after any disk failure
-Data is reconstructed live**
**This live reconstruction part took me a while to understand. When you give tRAID your disks and assign each one as a UoR (unit of risk), either for data (D01, D02, etc) or parity (PPU1, PPU2, etc), FlexRAID's tRAID service takes the physical disks as Windows Disk Management sees them (Disk01, Disk02, etc) and puts them in a low-level offline mode. Then it creates a "virtual disk" in their place with the name you gave it in tRAID's web client. Essentially, if you assign 10 disks in tRAID and start the storage pool, Windows Disk Management will show 20 disks (the 10 offline physical disks plus the 10 virtual ones)... hence the live reconstruction part. Once a disk fails, its data is still available from parity through the "virtual disk".
--This is a neat feature but hard to fully describe; you almost have to use it to appreciate it
--Basically when the disk fails you get an alert (you can set them up to SMS, email, etc)
--When the disk fails you see no difference in your pool (all data is still accessible)
Basically it runs in degraded mode with all the data present from your parity. While running in this mode, *newly* written data won't be protected, but all data is still accessible. I know I've said that three times, but it's such a convenience. Then when you get a disk to replace the failed one, you can use a standard "restore" in tRAID (though you have to run it through the web client). There is also a caveat that you can only "restore" to a disk of the same size as the one that failed.
That makes tRAID sound pretty inflexible, but there is a much better way to restore a failed disk in tRAID. What you can do is go to Windows Disk Management and assign a drive letter to the "virtual disk" that FlexRAID created. Even after your physical disk fails, the virtual disk is still available. Once it has a drive letter, you can plug in your empty disk and copy all the data off the "failed" (virtual) disk. This way you can add a bigger disk when your old one fails, plus you can do all of this with almost zero downtime. After it's done, you just create a new RAID configuration, start the storage pool immediately, then create the new parity online (the old parity will have to be overwritten).
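For what it's worth, the copy step itself is just a bulk file copy from one drive letter to another (robocopy, which is built into Windows, would be the usual tool). Here's the same idea as a rough Python sketch, with hypothetical drive letters: V: for the virtual disk of the failed drive, E: for the replacement.

```python
# Rough sketch of the restore-by-copy approach described above.
# V: and E: are assumed letters: V: = the tRAID virtual disk for the failed
# drive (after assigning it a letter in Disk Management), E: = the new,
# empty replacement disk (which can be bigger than the one that failed).
import shutil

SOURCE = "V:\\"   # virtual disk still serving the failed drive's data from parity
DEST = "E:\\"     # replacement disk

# Copies the whole tree, preserving timestamps; dirs_exist_ok lets it be
# pointed at a destination that already has folders on it (Python 3.8+).
shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
print("Copy finished - now recreate the RAID configuration with the new disk")
```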
There is always a warning about the risk of running these operations in "online" mode, but I use it exclusively without issue.
It sounds complicated, but there is a free trial period. I recommend building an array during the trial and testing a disk restore (with the copy-the-data-off-a-failed-disk method I explained above) to get a feel for it. The hard work isn't automatic, but you can kick off the copy and recreate the array within about 5 minutes (waiting for them to complete can take a long time, but you don't have to sit and watch it).
I'll post my performance settings in tRAID shortly, but with the combination I'm using I get 80+ (usually 90+) MB/s writes to the pool. I also use a landing disk. My server has a 256GB SSD, and I made an empty 128GB partition on it to use as a landing disk for writes. You can use a completely separate SSD as well, or no landing disk at all.
With 80+ MB/s write speeds I didn't really *need* a landing disk, but I already had the SSD in the server and it had more than enough free space (I overbought, as I only need about 50GB, so I still have 70GB free even with the 128GB partition being used for the landing disk). With a landing disk, all writes are dumped to the SSD and flushed to your array after a delay you can specify (I use 15 minutes). My writes approach 300MB/s this way.
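To put rough numbers on why the landing disk feels so much faster (back-of-envelope figures using the speeds above, not a benchmark; the 100 GB transfer is just an example size):

```python
# Back-of-envelope: how long a large write takes landing on the SSD vs.
# going straight to the parity-protected pool. Speeds are the rough figures
# mentioned above; the 100 GB transfer size is an arbitrary example.
landing_write_mb_s = 300   # approx. write speed to the SSD landing disk
pool_write_mb_s = 90       # approx. direct write speed to the pool
transfer_gb = 100          # example transfer (a big batch of media files)

for label, speed in [("to landing disk", landing_write_mb_s),
                     ("direct to pool", pool_write_mb_s)]:
    minutes = transfer_gb * 1024 / speed / 60
    print(f"{transfer_gb} GB {label}: ~{minutes:.0f} min at {speed} MB/s")
```

The flush to the array still runs at pool speed, but it happens in the background after the delay, so you're not sitting there waiting on it.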