2012-10-19, 04:15
I plan to add a whole new vdev - another raidz2 array of drives - whenever expansion is required. Will be starting with 10 x 2TB in a raidz2, so ~14 TiB of usable space.
When I get close to needing the expansion I'll likely slowly accumulate whatever X TB drives are the best value at the moment, build a new vdev (another raidz2), and add it to the existing zpool. Alternatively, if I decide to retire the 2TB drives at that point, I'd build a separate zpool, copy everything over, and destroy the old zpool.
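Both expansion routes above can be sketched roughly as follows. This is just an illustration - the pool name `tank`, the new-pool name `tank2`, and the `/dev/sd*` device names are placeholders, not my actual setup:

```shell
# Route 1: grow the existing pool by adding a second raidz2 vdev.
# Note this is permanent - a vdev cannot be removed from a pool once added.
zpool add tank raidz2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo \
    /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt

# Route 2: retire the old drives - build a fresh pool on the new disks,
# replicate everything over with a recursive snapshot, then destroy the old pool.
zpool create tank2 raidz2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo \
    /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F tank2
zpool destroy tank
```

The send/receive step preserves datasets, snapshots, and properties, which is why it's preferable to a plain file copy for the migration route.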
Around Dec 2011 I noticed md5sums for files transferred from the cache drive to the mdadm raid5 array were not matching. Ran a memtest and it lit up red like an Xmas tree - bad RAM. After replacing it everything looked OK. Then one of the drives in the raid5 failed in June 2012. Replaced it and the mdadm raid5 array looked OK, but after a reboot I suddenly got 'bad superblock' on the ext4 and it couldn't be repaired. Lost approx 100 movies, but did have an older backup.
Moral of that story: I believe the bad RAM and/or the failing disk silently corrupted the filesystem, and when the raid array rebuilt it completely hosed it. ZFS would likely have saved my ass in this case - its checksums add an extra layer of protection, and a scrub could probably have detected and repaired some or all of the damage early on. Most RAID solutions can't do that.
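For anyone unfamiliar, the scrub I'm referring to is a two-liner (again, `tank` is a placeholder pool name):

```shell
# Read every allocated block in the pool, verify it against its checksum,
# and repair any bad copies from the pool's redundancy (raidz2 here):
zpool scrub tank

# Watch progress and, afterwards, list any files ZFS could NOT repair:
zpool status -v tank
```

Run regularly (e.g. monthly from cron), this catches silent corruption while the redundancy needed to fix it still exists - exactly the window my mdadm/ext4 setup missed.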
PS: I'm now sufficiently paranoid to keep a backup as well... should be protected against anything except the house burning down - for now at least.