[server pr0n] FreeBSD 9.1 + ZFS build
Hi All,

Posting here some details on my new, long-term server. I haven’t seen too many stock FreeBSD + ZFS server builds on the XBMC forums so this may provide some alternative solutions for those interested.

TLDR version
Thus far it's working great and I'd highly recommend it. I have a very robust 24-bay server drawing about 125 W at idle.
-- I am using a 10x 2TB raidz2 pool of cheap 'green' drives - 2 of the 10 disks go to redundancy/parity
-- This box will be a glorified NAS; it won't do much else besides possibly SABnzbd, SickBeard, etc. at some point. The hardware may be overkill, but I am building this to last.
-- I'm currently getting 85+ MB/s read/write over NFSv4, so I am happy with that...it could be better, I know, but it works for me. On to the next project once this has all stabilized and been put into service.
-- Did NOT do any ZFS tuning; kept stock vmem* values, etc.
-- Idle power usage for the FULL system is approx 125 watts: 10 x HDDs, 1 SSD, all fans, CPU, mobo and other entrails.
-- I also kept the stock 5 x 80mm fan wall in the Norco - yes it's noisy, but it keeps everything cool and the unit sits in a different room from the HT.


Intro
Primary goal of this build is to replace my aging - and sometimes ailing - Ubuntu-based, RAID5 server. I wanted to go with something that of course offered hardware redundancy, but also a little extra in the way of long-term data protection. For that reason I chose ZFS.

Those not in the know can consult Mr Google, but the main selling point to me is the extra layer of data protection offered by ZFS' checksum layer. It'll help protect against bad HDDs, bit rot, alien invaders, etc. beyond what a simple parity-based RAID can offer. If something does go sideways there's also the ability to 'scrub' and repair a zpool...lots of features I couldn't find in other solutions.
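If you're curious what that looks like in practice, a scrub is just two commands (a minimal sketch, assuming the pool name tank0 used later in this post):
Code:
zpool scrub tank0       # walk every block in the pool and verify it against its checksum
zpool status -v tank0   # watch progress and see how much, if anything, was repaired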

I'm not starting this thread as a filesystem flame-war, but needless to say the ability to detect and repair a corrupted chunk of data is appealing to anyone spending a lot of time and money collecting media. ZFS also offers a lot of other fancy features like deduplication, compression, snapshots and filesystem send/receive, but those are typically not needed for a simple home-based NAS so I didn't pursue them.

One other thing: I seem to choose to do things the hard way. Although this took much longer to google, collect and implement, I now have a better understanding of how to fix stuff when it breaks instead of relying on a GUI-based installer / management system.

Build Pros
ZFS is awesome, well maintained, and has lots of cool-sounding features not offered in any other open-source filesystem. Note that if btrfs were ready to roll I'd be using that, since I wouldn't have to learn about BSD...it's not ready for prime time yet though. Setting up and maintaining the actual ZFS pools is also super easy once you get to that point.

Build Cons
A bit of a learning curve for a new OS. You also can't easily expand or shrink an existing raidz vdev. A ZFS zpool is made up of one or more vdevs; a vdev can be a simple JBOD, a raidz, raidz2 or raidz3 - think raid5, raid6 and triple parity respectively. Once a raidz vdev is created you can't add another device to it. You can replace each drive one-by-one with bigger drives and grow it that way. What I plan to do is simply add another 6- or 10-disk raidz2 vdev when space becomes an issue...it may be 5 or 6 TB drives at that point too, so I may even be able to move all the data over and retire the old HDDs - we'll see. You just need to plan ahead. Both options are shown below.
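To make the two growth paths concrete, here's a rough sketch (the extra device names are made up for illustration, not from my actual pool):
Code:
# Option 1: swap in bigger drives one at a time; the vdev grows once the last disk is resilvered
zpool set autoexpand=on tank0
zpool replace tank0 da5p1 da12p1   # repeat for every member disk, waiting for each resilver to finish
zpool status tank0                 # confirm "resilvered ... with 0 errors" before the next swap

# Option 2: add a second raidz2 vdev alongside the first
zpool add tank0 raidz2 da12p1 da13p1 da14p1 da15p1 da16p1 da17p1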

Notes
I had some shaky SAS card driver issues that turned out to be caused by 2 x dud HDDs. Once they were replaced I pushed approx 10 TB of data onto the pool, ran multiple scrubs, and it hasn't missed a beat.
I still need to get the APC UPS working with the server for auto-shutdown after the battery hits a certain level. I have this sorted out; I just need to implement it (sketch below).
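For the record, the plan is apcupsd with roughly the following (a sketch only; the thresholds here are placeholders and the paths assume the sysutils/apcupsd port defaults):
Code:
# /etc/rc.conf
apcupsd_enable="YES"

# /usr/local/etc/apcupsd/apcupsd.conf (excerpt)
UPSCABLE usb
UPSTYPE usb
DEVICE              # left blank so the USB UPS is auto-detected
BATTERYLEVEL 20     # shut the server down once the battery falls below 20%
MINUTES 5           # ...or when estimated runtime drops under 5 minutes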

I also OF COURSE am keeping a backup - an LVM volume on an Ubuntu desktop (the copy job is sketched below).
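Nothing fancy on that front - a pass along these lines does the job (a sketch; the mount points are placeholders, not my actual paths):
Code:
# run on the Ubuntu desktop: pull new/changed files from the NFS-mounted pool
# onto the LVM-backed backup filesystem
rsync -av /mnt/zfs/ /mnt/backup_lv/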

Build Specs
Quote:[cpu] Intel Xeon E3-1220-V2
[mobo] Supermicro X9SCM-F (bios 2.0a)
[ram] 16 GB total, (4x) 4GB Crucial CT51272BA1339 [DDR3 Unbuffered ECC]
[ssd] Crucial M4 64GB (fw 000F) for OS formatted as UFS2 with TRIM support
[case] Norco RPC-4224 (24-bay)
[sas card] (3x) IBM M1015 (IT mode v15)
[mps driver] manually loading LSI's mpslsi driver v15
[hdd] (3x) 2TB Seagate ST2000DL003
(3x) 2TB WD WD20EARS
(2x) 2TB WD WD20EFRX (RED)
(1x) 2TB Hitachi HDS5C3020ALA632
(1x) 2TB Samsung/Seagate ST2000DL004
[os] FreeBSD 9.1-RELEASE amd64
[NFS] v4 (server-side setup sketched after this list)
[ZFS] v28, dedupe, compression OFF
[firewall] ipfw
[HDD pwr cbl] Antec 77CM Molex Connector With Cable for NeoPower Series
[HDD sas cbl] LSI Multi-Lane Internal SFF-8087 to SFF-8087 SAS Cable 0.6M
[UPS] APC Back-UPS Pro 1000 - using apcupsd for system shutdown
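For the NFSv4 line above, the FreeBSD server side boils down to roughly this (a sketch; the network range is a placeholder, and the export path is the pool mountpoint shown further down):
Code:
# /etc/rc.conf
rpcbind_enable="YES"
mountd_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"

# /etc/exports
V4: /home/user/ztank0 -sec=sys -network 192.168.1.0 -mask 255.255.255.0
/home/user/ztank0 -maproot=root -network 192.168.1.0 -mask 255.255.255.0

# on the Linux client: the NFSv4 root maps to the V4: path above
mount -t nfs4 servername:/ /mnt/zfs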

Benchmarking
I created a 24 GB random data file for benchmarking, sized larger than the 16 GB of RAM so caching can't skew the numbers. I am also aware that dd is not a rigorous benchmark; I may try bonnie++ later if time permits (see the sketch after the results), but this gives me warm fuzzies for the moment.
Quote:/dev/urandom → SSD write: 86 MB/s [not really an accurate measure of SSD write speed, but I needed to make the test file somehow]
dd bs=1M count=24000 if=/dev/urandom of=/home/user/testdir/rnd24GB.dd
...
25165824000 bytes transferred in 291.965923 secs (86194388 bytes/sec)

SSD read: 573 MB/s
dd if=/home/user/testdir/rnd24GB.dd of=/dev/null
...
25165824000 bytes transferred in 46.863236 secs (537005680 bytes/sec)

HDD raw write: all 120+ MB/s
-- sent the rnd24GB.dd file to the devices
dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da0
...
14997782528 bytes transferred in 122.560944 secs (122369998 bytes/sec)

dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da1
...
25165824000 bytes transferred in 189.845046 secs (132559814 bytes/sec)
dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da6
...
25165824000 bytes transferred in 137.909168 secs (182481153 bytes/sec)

SSD → ZFS write: 78 MB/s
-- looks like a SATA bottleneck, since /dev/urandom and NFS give better results; see below
dd if=/home/user/testdir/rnd24GB.dd of=/home/user/ztank0/file1.dd
...
25165824000 bytes transferred in 322.990155 secs (77915143 bytes/sec)

ZFS read: 196 MB/s
dd if=/home/user/ztank0/file1.dd of=/dev/null
...
25165824000 bytes transferred in 128.271675 secs (196191591 bytes/sec)

/dev/urandom → ZFS write: 85 MB/s
dd bs=1M count=24000 if=/dev/urandom of=/home/user/ztank0/file24GBrand.dd
...
25165824000 bytes transferred in 294.469267 secs (85461632 bytes/sec)
dd bs=1M count=24000 if=/dev/urandom of=/home/user/ztank0/file24GBrand2.dd
...
25165824000 bytes transferred in 293.075589 secs (85868032 bytes/sec)

NFSv4 ZFS → client disk write: 76 MB/s
dd if=/mnt/zfs/file24GBrand.dd of=/home/user/local/tt_temp/24GBxfer1.dd
...
25165824000 bytes (25 GB) copied, 329.331 s, 76.4 MB/s

NFSv4 ZFS → client read (stream): 88 MB/s
dd if=/mnt/zfs/file24GBrand.dd of=/dev/null
...
25165824000 bytes (25 GB) copied, 285.951 s, 88.0 MB/s

NFSv4 client write → ZFS: 86 MB/s
dd if=/home/user/local/tt_temp/24GBxfer1.dd of=/mnt/zfs/xferbak24GB.dd
...
25165824000 bytes (25 GB) copied, 291.752 s, 86.3 MB/s
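If I do get to bonnie++, the run would be something along these lines (a sketch; benchmarks/bonnie++ from ports, with the file size set well above the 16 GB of RAM so caching can't help):
Code:
# -d: test directory on the pool   -s: working-set size (must exceed RAM)
# -n 0: skip the small-file tests  -u: unprivileged user to run as
bonnie++ -d /home/user/ztank0 -s 32g -n 0 -u user
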
ZFS info
I used a GPT partition per disk plus GNOP for each drive, as described here: http://forums.freebsd.org/showpost.php?p...ostcount=6
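From memory, the per-disk prep and pool creation boil down to something like this (a sketch only - see the linked post for the exact steps; da0 stands in for each of the ten disks):
Code:
gpart create -s gpt da0                # fresh GPT on the raw disk (repeat for da0..da9)
gpart add -t freebsd-zfs -a 4k da0     # single ZFS partition, 4k-aligned -> da0p1
gnop create -S 4096 /dev/da0p1         # temporary 4k-sector shim -> /dev/da0p1.nop
zpool create tank0 raidz2 da0p1.nop da1p1.nop da2p1.nop da3p1.nop da4p1.nop da5p1.nop da6p1.nop da7p1.nop da8p1.nop da9p1.nop
zpool export tank0
gnop destroy /dev/da0p1.nop            # drop the shims (repeat for each), then re-import on the plain partitions
zpool import tank0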
Code:
#uname -a
FreeBSD e1220.local 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012     [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64

# zfs list tank0
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank0  5.94T  7.65T  5.94T  /home/user/ztank0

# zpool status -v tank0
  pool: tank0
state: ONLINE
  scan: scrub repaired 0 in 10h17m with 0 errors on Sun Jan  6 22:04:42 2013
config:

    NAME        STATE     READ WRITE CKSUM
    tank0       ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        da2p1   ONLINE       0     0     0
        da10p1  ONLINE       0     0     0
        da8p1   ONLINE       0     0     0
        da9p1   ONLINE       0     0     0
        da7p1   ONLINE       0     0     0
        da1p1   ONLINE       0     0     0
        da3p1   ONLINE       0     0     0
        da4p1   ONLINE       0     0     0
        da5p1   ONLINE       0     0     0
        da0p1   ONLINE       0     0     0

errors: No known data errors
If I helped out pls give me a +

A bunch of XBMC instances, big-ass screen in the basement + a 20TB FreeBSD, ZFS server.