[server pr0n] FreeBSD 9.1 + ZFS build
#1
Hi All,

Posting here some details on my new, long-term server. I haven’t seen too many stock FreeBSD + ZFS server builds on the XBMC forums so this may provide some alternative solutions for those interested.

TLDR version
Thus far it's working great and I'd highly recommend it. I have a very robust 24-bay server using 125 W at idle.
-- I am using a 10x 2TB raidz2 pool of cheap ‘green’ drives - 2 of 10 disks used up for hw redundancy
-- This box will be a glorified NAS; it won't do much else besides possibly sabnzbd, Sickbeard, etc. at some point. The hardware may be overkill, but I am building this to last.
-- Currently getting 85+ MB/s read/write over NFSv4, so I am happy with that...could be better, I know, but it works for me. On to the next project once this has all stabilized and been put into service.
-- Did NOT do any ZFS tuning; kept stock vmem* values, etc.
-- Idle power usage for the FULL system is approx 125 watts: 10 HDDs, 1 SSD, all fans, CPU, mobo and other entrails.
-- I also kept the stock 5 x 80mm fan wall in the Norco - yes it is noisy, but it keeps everything cool and the unit sits in a different room from the HT gear.

[Image]

Intro
Primary goal of this build is to replace my aging - and sometimes ailing - Ubuntu-based RAID5 server. I wanted something that of course offered hardware redundancy, but also a little extra in the way of long-term data protection. For that reason I chose ZFS.

Those not in the know can consult Mr Google, but the main selling point for me is the extra layer of data protection offered by ZFS' checksumming. It helps protect against bad HDDs, bit rot, alien invaders, etc. beyond what a simple parity-based RAID can offer. If something does go sideways there's also the ability to 'scrub' and repair a zpool...lots of features I couldn't find in other solutions.
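
A scrub is a one-liner, too - this is all it takes to have ZFS walk every block and verify/repair it against the checksums (tank0 being my pool name):
Code:
# kick off a scrub of the whole pool
zpool scrub tank0
# check progress / results
zpool status -v tank0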

I'm not starting this thread as a filesystem flame-war, but needless to say the ability to prevent and fix a corrupted chunk of data is appealing to anyone spending a lot of time and money collecting media. ZFS also offers a lot of other fancy features like deduplication, compression, snapshots, send/receive of filesystems, etc., but those are typically not needed for a simple home NAS so I didn't pursue them.

One other thing: I seem to choose to do things the hard way. Meaning although this took much longer to google, collect and implement, I now have a better understanding of how to fix stuff when it breaks instead of relying on a GUI-based installer / management system.

Build Pros
Awesome, well maintained, and lots of cool-sounding ZFS features not offered in any other open-source filesystem. Note that if btrfs was ready to roll I'd be using it, since I wouldn't have to learn about BSD...it's not ready for prime-time yet though. Setting up and maintaining the actual ZFS pools is also super easy once you get to that point.

Build Cons
A bit of a learning curve for a new OS. You also can't easily expand or shrink an existing raidz vdev. A ZFS zpool is made up of one or more vdevs; each vdev can be a simple JBOD, a raidz, raidz2 or raidz3 - think RAID5, RAID6 and a hypothetical triple-parity 'RAID7' respectively. Once a raidz vdev is created you can't add another device to it. You can replace each member 1-by-1 with bigger drives and grow it that way (see the sketch below), or do what I plan to do: simply add another 6- or 10-drive raidz2 vdev when space becomes an issue...it may be 5 or 6 TB drives at that point too, so I may even be able to move all the data over and retire the old HDDs - we'll see. You just need to plan ahead.
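
For the record, both growth paths are simple zpool commands. A rough sketch only - the device names below are made up for illustration, not taken from my pool:
Code:
# path 1: swap members for bigger drives one at a time, letting it resilver in between
zpool replace tank0 da0p1 da12p1      # hypothetical new disk
# ...repeat for every member, then let the vdev grow into the new capacity
zpool set autoexpand=on tank0

# path 2: bolt a second raidz2 vdev onto the pool when space runs out
zpool add tank0 raidz2 /dev/gpt/disk10 /dev/gpt/disk11 /dev/gpt/disk12 \
    /dev/gpt/disk13 /dev/gpt/disk14 /dev/gpt/disk15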

Notes
I had some shaky SAS card driver issues that turned out to be caused by 2 dud HDDs. Once they were replaced I pushed approx 10 TB of data onto the pool, ran multiple scrubs, and it hasn't missed a beat.
I still need to get the APC UPS working with the server for auto-shutdown once the battery hits a certain level. I have the approach sorted out; I just need to implement it.
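
(Roughly what that will look like - a sketch only, since I haven't wired it up yet. The Back-UPS Pro talks over USB and the apcupsd port keeps its config under /usr/local/etc; the thresholds below are placeholders.)
Code:
# /etc/rc.conf
apcupsd_enable="YES"

# /usr/local/etc/apcupsd/apcupsd.conf (relevant bits only)
UPSCABLE usb
UPSTYPE usb
DEVICE
BATTERYLEVEL 20    # start the shutdown when charge drops to 20%
MINUTES 10         # ...or when estimated runtime falls to 10 minutes
TIMEOUT 0          # no fixed on-battery timeout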

I am also, OF COURSE, keeping a backup - an LVM volume on an Ubuntu desktop.

Build Specs
Quote:[cpu] Intel Xeon E3-1220-V2
[mobo] Supermicro X9SCM-F (bios 2.0a)
[ram] 16 GB total, (4x) 4GB Crucial CT51272BA1339 [DDR3 Unbuffered ECC]
[ssd] Crucial M4 64GB (fw 000F) for OS, formatted as UFS2 with TRIM support (TRIM sketch below the spec list)
[case] Norco RPC-4224 (24-bay)
[sas card] (3x) IBM M1015 (IT mode v15)
[mps driver] manually loading LSI's mpslsi driver v15
[hdd] (3x) 2TB Seagate ST2000DL003
(3x) 2TB WD WD20EARS
(2x) 2TB WD WD20EFRX (RED)
(1x) 2TB Hitachi HDS5C3020ALA632
(1x) 2TB Samsung/Seagate ST2000DL004
[os] FreeBSD 9.1-RELEASE amd64
[NFS] v4
[ZFS] v28, dedupe, compression OFF
[firewall] ipfw
[HDD pwr cbl] Antec 77CM Molex Connector With Cable for NeoPower Series
[HDD sas cbl] LSI Multi-Lane Internal SFF-8087 to SFF-8087 SAS Cable 0.6M
[UPS] APC Back-UPS Pro 1000 - using apcupsd for system shutdown
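
Quick note on the SSD/TRIM line above: UFS2 on 9.x can do TRIM, you just have to switch it on. Something like this - a sketch, assuming the SSD shows up as ada0 with the filesystem on ada0p2 (adjust to your layout):
Code:
# enable TRIM when creating the filesystem...
newfs -U -t /dev/ada0p2
# ...or flip it on later on an existing, unmounted UFS2 filesystem
tunefs -t enable /dev/ada0p2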

Benchmarking
I created a 24GB random data file for benchmarking, big enough to blow past the 16GB of RAM so caching doesn't flatter the numbers. I am also aware that dd is not a proper benchmark. I may try bonnie++ later if time permits, but this gives me warm fuzzies for the moment, so OK.
Quote:/dev/urandom → SSD write: 86 MB/s [not really an accurate measure of SSD write speed, but I needed to make the test file somehow]
dd bs=1M count=24000 if=/dev/urandom of=/home/user/testdir/rnd24GB.dd
...
25165824000 bytes transferred in 291.965923 secs (86194388 bytes/sec)

SSD read: 573 MB/s
dd if=/home/user/testdir/rnd24GB.dd of=/dev/null
...
25165824000 bytes transferred in 46.863236 secs (537005680 bytes/sec)

HDD raw write: all 120+ MB/s
-- sent the rnd24GB.dd file to the devices
dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da0
...
14997782528 bytes transferred in 122.560944 secs (122369998 bytes/sec)

dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da1
...
25165824000 bytes transferred in 189.845046 secs (132559814 bytes/sec)
dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da6
...
25165824000 bytes transferred in 137.909168 secs (182481153 bytes/sec)

SSD → ZFS write: 78 MB/s
-- looks like a SATA/source-side bottleneck, since /dev/urandom & NFS as sources give better results; see below
dd if=/home/user/testdir/rnd24GB.dd of=/home/user/ztank0/file1.dd
...
25165824000 bytes transferred in 322.990155 secs (77915143 bytes/sec)

ZFS read: 196 MB/s
dd if=/home/user/ztank0/file1.dd of=/dev/null
...
25165824000 bytes transferred in 128.271675 secs (196191591 bytes/sec)

/dev/urandom → ZFS write: 85 MB/s
dd bs=1M count=24000 if=/dev/urandom of=/home/user/ztank0/file24GBrand.dd
...
25165824000 bytes transferred in 294.469267 secs (85461632 bytes/sec)
dd bs=1M count=24000 if=/dev/urandom of=/home/user/ztank0/file24GBrand2.dd
...
25165824000 bytes transferred in 293.075589 secs (85868032 bytes/sec)

NFSv4 ZFS → client disk write: 76 MB/s
dd if=/mnt/zfs/file24GBrand.dd of=/home/user/local/tt_temp/24GBxfer1.dd
...
25165824000 bytes (25 GB) copied, 329.331 s, 76.4 MB/s

NFSv4 ZFS → client read (stream): 88 MB/s
dd if=/mnt/zfs/file24GBrand.dd of=/dev/null
...
25165824000 bytes (25 GB) copied, 285.951 s, 88.0 MB/s

NFSv4 client write → ZFS: 86 MB/s
dd if=/home/user/local/tt_temp/24GBxfer1.dd of=/mnt/zfs/xferbak24GB.dd
...
25165824000 bytes (25 GB) copied, 291.752 s, 86.3 MB/s
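
For anyone wondering about the NFSv4 plumbing, there isn't much to it - a few rc.conf knobs plus a V4: root in /etc/exports. A rough sketch only; the subnet and export options below are placeholders rather than my exact config:
Code:
# /etc/rc.conf on the server
rpcbind_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
mountd_enable="YES"

# /etc/exports - export the pool's filesystem and make it the NFSv4 root
V4: /home/user/ztank0 -sec=sys -network 192.168.1.0 -mask 255.255.255.0
/home/user/ztank0 -maproot=root -network 192.168.1.0 -mask 255.255.255.0

# on the Linux client
mount -t nfs4 server:/ /mnt/zfs
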
ZFS info
I used a GPT partition per disk + the GNOP trick for each drive, as described here: http://forums.freebsd.org/showpost.php?p...ostcount=6
Code:
# uname -a
FreeBSD e1220.local 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012     [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64

# zfs list tank0
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank0  5.94T  7.65T  5.94T  /home/user/ztank0

# zpool status -v tank0
  pool: tank0
state: ONLINE
  scan: scrub repaired 0 in 10h17m with 0 errors on Sun Jan  6 22:04:42 2013
config:

    NAME        STATE     READ WRITE CKSUM
    tank0       ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        da2p1   ONLINE       0     0     0
        da10p1  ONLINE       0     0     0
        da8p1   ONLINE       0     0     0
        da9p1   ONLINE       0     0     0
        da7p1   ONLINE       0     0     0
        da1p1   ONLINE       0     0     0
        da3p1   ONLINE       0     0     0
        da4p1   ONLINE       0     0     0
        da5p1   ONLINE       0     0     0
        da0p1   ONLINE       0     0     0

errors: No known data errors
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
#2
Thanks for sharing the build. Always nice to see what others are using for a NAS. I personally have a Synology NAS, but all my media is local to the lone HTPC in the house.

What case did you use? The build specs are missing this tidbit.
#3
[case] Norco RPC-4224 (24-bay)

Thanks! Missed that part somehow in the specs.
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
#4
Thanks for adding that. I was wondering which case it was because of the huge backplane. Did you have any issues with the case? The reviews on NewEgg are only so-so.
#5
One port on one of the six backplanes was faulty. I RMA'd it and had a free, new replacement within a couple of weeks. I also ordered a spare for future use.

I would still recommend it. There's nothing really that can touch it at that price-point - $400ish CAD - for a 24-bay solution. A case like this should serve a person well for many years to come.


If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
#6
Great to see another ZFS user on the forums; it does seem to consistently get overlooked by all the unRAID fans.
#7
@PANiCnz - yes, I agree.

Parity-based RAID has served me well over the years, but I was looking for something more powerful to replace it. Even now, with my soon-to-be-decommissioned Ubuntu server, I see the odd SATA error in dmesg and wonder what issues, if any, that's causing with my data.

I don't need to worry about that with ZFS. I had several drives drop in and out while sorting out the initial dud-HDD issues on this unit, and even after all of that, scrubs and md5 checks all turned up 100% solid.

The biggest entry barrier I see to ZFS is picking the OS to run it on. This is my first dip into BSD and I spent lots of time researching it + of course the best compatible hardware.

It's been worth it though and I *hope* to get several years of reliable service out of this new box.

Next box I *may* turn back to Linux if BTRFS ever gets stable and feature-competitive, but full speed ahead with ZFS for the foreseeable future.



Another story for the crickets:

In my current Ubuntu-based server I had bad RAM in Dec 2011 and then a failed HDD in the RAID5 array in June 2012. mdadm happily rebuilt the array once the new drive was added and everything looked fine. Upon the next reboot, however, I hit a complete wall trying to remount the ext4 filesystem on the RAID5 array due to a 'bad superblock'....fsck just completely freaked out too, since apparently ALL of the superblocks were frakked.

What I think happened here is something along the lines of the RAID 'write-hole' issue...the bad RAM wrote a bunch of crap parity stripes in Dec, and when the array rebuilt after the failed drive in June it completely hosed the filesystem.

I should be able to avoid another issue like this with ZFS due to the enhanced data checksums, etc. I also went with the cheapo Xeon + ECC RAM in the new build for extra protection.
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
#8
Nice hardware OP, I've been running this kind of configuration for the last 3 years. I have upgraded from FreeBSD 8.0 > 8-STABLE > 8.1 > back to STABLE > 8.2 > 9.0 and ZFS versions 13 > 14 > 15 > 28, and I'm happy to report that I have never had any issues.

I actually switched mine to a combined setup a couple of years ago when XBMC was first ported to FreeBSD. My FreeBSD media PC runs ZFS, jails, XBMC, Samba, Sickbeard and Sabnzbd. Since it's in the lounge it boots straight into XBMC, and shutdown/reboot is done through XBMC via remote. No keyboard or mouse attached :)
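
(If anyone wants to try the jail route, the rc.conf plumbing on 9.x is only a handful of lines. This is just a sketch - the jail name, path and IP are made up for illustration, not my actual setup:)
Code:
# /etc/rc.conf - one jail named "media" for sabnzbd/sickbeard/samba
jail_enable="YES"
jail_list="media"
jail_media_rootdir="/usr/jails/media"
jail_media_hostname="media.local"
jail_media_interface="em0"
jail_media_ip="192.168.1.60"
jail_media_devfs_enable="YES"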
#9
One thought - the CPU for this looks to be overkill. Have you considered running ESX? I have ESX running on very similar hardware (same case, same SAS cards, same CPU, etc.). Under ESX I have at least 6 VMs, one of those is unRAID with 2x M1015 passed thru to run unRAID and the third M1015 is passed thru for FreeNAS - it could as easily have been some other OS. The unRAID boots from a USB that's passed thru after first booting a mini OS in a VM using a Plop boot to hit the USB. The FreeNAS VM comes off of an SSD and once it's up I have additional storage for ESX VMs via an NFS mount. I have a bit more RAM than you, 32Gig, but am not hurting for memory - your use of ZFS though is likely using it more heavily than any of my VMs so you might want more if you went virtual. I also see about 125W idle, about 200W with everything spinning - good to see our numbers jive!

Just a thought anyway, depending upon your needs and how much this is hitting the CPU you might find that you can get more work out of the hardware than for just storage by going virtual or adding virtualization. I chose unRAID for my own reasons (30TB off of 16ports) but I also have FreeNAS running (it can do ZFS too if I wanted) and may fire up some flavor of x86 Solaris to play with ZFS also.

P.S. It's not that unRAID users ignore ZFS, it's that most of us don't see the data-for-dollar as worthwhile, not to mention the hassle. There are some unRAID users who have ZFS volumes underlying their unRAID, and I may create something similar for a cache drive for my unRAID. Once you virtualize the options open up a great deal, and I, like most unRAID users, haven't lost data so I'm good with it. Note that I've been running unRAID for at LEAST 6 years or so - I was one of the early adopters. <shrug> It's not been overlooked; many of us gave it careful consideration before going the way we did. Virtualizing, especially with nice 24-bay cases, really opens up possibilities too!
Openelec Gotham, MCE remote(s), Intel i3 NUC, DVDs fed from unRAID cataloged by DVD Profiler. HD-DVD encoded with Handbrake to x.264. Yamaha receiver(s)
#10
You don't need virtualisation to make your server do multiple tasks. Whatever functions you're running over 6 VMs can be done on a single FreeBSD install.
#11
@BLKMGK - agreed on the CPU, but it is the cheapest Xeon you can buy and it does ECC. I had planned to carry over my i3-540 to handle the ECC duty, but ended up going with a different Supermicro mobo that uses another socket. Also, I've spent a lot of time and effort getting this build to this point, since it's my first toe in the FreeBSD water. Virtualization might be something to consider down the road, but about the only reason I keep a Windows VM now is for AnyDVD + EMM.

@blueprint - I will definitely build a dedicated rack in the next house. This FreeBSD build could also have been headless beyond the initial setup, since I've done all the config and management over ssh once the USB install key was unplugged.

Thanks to you both for the feedback and comments - enjoy the weekend.
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
#12
Nice looking setup, I don't suspect you'd have any problems going headless.

I've had nothing but good experiences with FreeBSD and ZFS so far, I've had my system for about two years now. It's been running continuously for 348 days in fact.
#13
@Skafte - thanks.

Yes, it's been a learning curve with FreeBSD + all-new hardware, but once I got the original hw bugs worked out...bad drives, suspect cables & backplanes, etc...it's very easy to work with.

If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.
#14
@thethirdnut

I love your build, very similar to mine (same board, E3-1220, USB drive for /, a few less HDDs though :-D )

At the moment I am using Ubuntu 12.04 with zfsonlinux - the best of both worlds, so to speak.

Did you have any issues regarding the 4k block size of your drives? I upgraded from 1TB drives (FreeNAS setup) to 2TB advanced-format drives and it wouldn't add them, so I had to set up a new pool with the "ashift=12" option.

Btw: except for the fact that you can't add just another disk to an existing raidz vdev, ZFS is awesome.
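
(For reference, on zfsonlinux that forced-4k pool creation is just something like the line below - device names made up for illustration:)
Code:
# zfsonlinux: force 4k sectors (ashift=12) at pool creation time
zpool create -o ashift=12 tank raidz2 sdb sdc sdd sde sdf sdg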
#15
@derechteversus - Thanks and I had no issues with that. I set them all up as 4k drives.

I suppose the 1 or 2 of them that do have real 512-byte sectors could have been configured as such, but I said the hell with it, since it seemed overly complicated and potentially performance-crippling to mix 4k and 512-byte drives.

I'm getting good performance and the whole zpool shows up as ashift=12 as well. The procedure I used for the 10-drive pool is below if you're interested:

Code:
i) create a GPT scheme on each drive
gpart create -s gpt da0
...
gpart create -s gpt da9
ii) add a labelled, 4k-aligned freebsd-zfs partition to each
gpart add -t freebsd-zfs -l disk0 -b 2048 -a 4k da0
...
gpart add -t freebsd-zfs -l disk9 -b 2048 -a 4k da9
iii) create gnop shims that report a 4096-byte sector size
gnop create -S 4096 /dev/gpt/disk0
...
gnop create -S 4096 /dev/gpt/disk9
iv) build the pool on the .nop devices (so it gets ashift=12), then export it
zpool create tank0 raidz2 /dev/gpt/disk0.nop /dev/gpt/disk1.nop /dev/gpt/disk2.nop /dev/gpt/disk3.nop /dev/gpt/disk4.nop /dev/gpt/disk5.nop /dev/gpt/disk6.nop /dev/gpt/disk7.nop /dev/gpt/disk8.nop /dev/gpt/disk9.nop
zpool export tank0
v) destroy the gnop shims
gnop destroy /dev/gpt/disk0.nop
...
gnop destroy /dev/gpt/disk9.nop

REBOOT. Afterwards /dev/gpt/ shouldn't show anything like disk0.nop:
ls /dev/gpt/
vi) re-import the pool on the plain GPT labels - it keeps ashift=12
zpool import tank0

...and carry on with life.
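
(Quick sanity check if you want to confirm the pool really came out 4k-aligned - zdb shows the ashift in the cached pool config:)
Code:
# should print "ashift: 12" for the raidz2 vdev
zdb -C tank0 | grep ashift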
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.