This little server adventure was completed in February of 2015, but I’m recounting it in November. Bear that in mind while reading it.
QNAP NFS/Samba/AFP Server
As mentioned in a previous post, my main login server is joker, which has been a VM on VMware and then KVM for quite a few years. I’ve kept my important files, such as what’s in my home directory, on an NFS server of some variety that entire time as well. I consider my generic servers such as joker to be cattle and not pets. In other words, if they get “sick” somehow, I just nuke ’em instead of trying to repair them. Having config, home, etc. files on a different server makes that possible.
A number of years ago, I managed to convince myself to get a QNAP 4-bay storage appliance. I chose QNAP at the time because it had the better options for remote logins via ssh, and the company Drobo (reasonably popular for some reason) didn’t know its ass from a hole in the ground when it came to network storage. Supposedly they’ve since updated that.
In any event, sticking with my “Batman rogues gallery” naming scheme: welcome, bane, to the fold. At the time I was reasonably satisfied with the strange way QNAP did server administration via a web page, though I would always rather use a UNIX CLI. QNAP really doesn’t want its end users doing that, though, so they make it challenging to get a good sshd installed and running. But it had NFS, SMB, and AFP ready to go out of the box. And with a couple of clicks, I got an internal MySQL server running on it as well.
Time for More Storage
When I set bane up initially, I did it with 4 x 3TB drives in a RAID10 configuration, meaning I had around 6TB of space, give or take. Over the years, I began ripping my DVD and Blu-ray collection and storing the rips in raw, uncompressed formats. That ate up a lot of space, and at some point I decided to move the storage from 4 x 3TB in RAID10 to 4 x 3TB in RAID5. That gave me around 9TB of space, at the cost of some storage performance because, let’s face it: RAID5 sucks. Especially when you’re doing it all in software, as the QNAP does.
Dying Disks
Those 3TB drives I purchased were a mistake. I used to trust Seagate drives with everything, but their SATA drives are just garbage. Through and through. The disks in the QNAP started dropping dead all at once; not literally, but all within a few months’ time (Christmas 2014 – early 2015). I’d have just enough time to get a dead disk replaced before another one would tank. No data was lost, but it was a bit on the stressful side. It was time for new disks, and maybe a new server with more disk bays. I wanted to return to RAID10, and I wanted to start using FreeBSD’s most excellent ZFS to manage it all.
Server Purchase
The kit for the new server included:
- 1 Supermicro MBD-X10SAT-O motherboard
- 1 Intel Core i7-4790 CPU
- 16GB DDR3
- 8 White Label 4TB 7200RPM drives
- 1 Plextor 128GB PCI-E drive
- 1 Supermicro SuperChassis CSE-743T-665B case
I chose the Supermicro board for a couple of reasons. First, it has 2 Intel NICs built in, and I intended to do NIC bonding, or lagg as FreeBSD calls it. Second, it has a whole slew of SATA ports on it. Third, I like Supermicro boards and cases when it comes to servers; I’ve used their stuff before with great success. The case was chosen because it has externally facing, hot-swappable drive trays and a pass-through logic board inside to connect the drives directly to the motherboard. That logic board literally is a pass-through: it does nothing to the disks except provide power and a straight data path, via 8 SATA plugs on its back that give each disk a 1-to-1 connection to the motherboard.
Install
The build was easy: put the CPU and cooler on the motherboard, then the RAM, then pop it all into the server chassis. Wire up the storage mid-plane to the motherboard’s SATA ports, add the PCI-E drive, pop the disks into the trays and slide them in, and power it up. It booted the first time and went right into the BIOS (note: Supermicro still uses a BIOS vs those stupid UEFIs found on most modern-day motherboards!). I told it which device to boot from first (the USB stick with FreeBSD 10.1 on it) and off it went.
I had FreeBSD installed on the PCI-E drive in a matter of minutes. I let it use the default ZFS root layout, figuring I’d customize it later. During the install, I skipped the network configuration because I intended to create a lagg after the fact, and the installer doesn’t have an option for that. So once the machine was running, I sat down at the console and added these lines to /etc/rc.conf:
# Bring up 2xGE ints into an 802.3ad bundle
ifconfig_em0="up"
ifconfig_igb0="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport igb0 192.168.10.3/24"
defaultrouter="192.168.10.254"
ifconfig_lagg0_ipv6="inet6 accept_rtadv"
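For what it’s worth, the new lagg doesn’t require a reboot to come up; re-running the network rc scripts from the console does it. A quick sketch (connectivity drops briefly while the interfaces bounce):
# Re-create cloned interfaces and re-apply the interface configuration
service netif restart
# Re-install the default route afterwards
service routing restart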
This will only work with a managed switch that can do LACP-enabled 802.3ad bundles. The Cisco switch I have in my basement, an SG500-28, is one such switch. Two GigE interfaces needed to be configured for LACP in a Port-Channel, and then the Port-Channel interface added to a VLAN:
interface gigabitethernet1/1/6
 description "bane : em0 : po6"
 channel-group 6 mode auto
!
interface gigabitethernet1/1/18
 description "bane : igb0 : po6"
 channel-group 6 mode auto
!
interface Port-channel6
 description "bane : lagg0 : g1/1/6, g1/1/18"
 switchport mode access
 switchport access vlan 770
And with everything configured properly, the interface came up on bane:
bane# ifconfig lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO>
        ether 0c:c4:7a:45:fd:80
        inet 192.168.10.3 netmask 0xffffff00 broadcast 192.168.10.255
        inet6 fe80::ec4:7aff:fe45:fd80%lagg0 prefixlen 64 scopeid 0x5
        inet6 [redacted] prefixlen 64 autoconf
        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        laggproto lacp lagghash l2,l3,l4
        laggport: igb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
bane# ping 192.168.10.254
PING 192.168.10.254 (192.168.10.254): 56 data bytes
64 bytes from 192.168.10.254: icmp_seq=0 ttl=64 time=0.322 ms
64 bytes from 192.168.10.254: icmp_seq=1 ttl=64 time=0.308 ms
64 bytes from 192.168.10.254: icmp_seq=2 ttl=64 time=0.310 ms
64 bytes from 192.168.10.254: icmp_seq=3 ttl=64 time=0.326 ms
^C
--- 192.168.10.254 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.308/0.317/0.326/0.008 ms
With the networking done, I added a local user, made sure pkg was installed and updated, and installed sudo:
pkg install sudo
Once installed, I added my local user to the sudoers list.
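For the record, “added to the sudoers list” just means the usual wheel-group dance; roughly something like this, with jdoe standing in as a placeholder for my actual username:
# Add the local user to the wheel group (jdoe is a placeholder name)
pw groupmod wheel -m jdoe
# Then use visudo to uncomment the wheel rule in /usr/local/etc/sudoers:
#   %wheel ALL=(ALL) ALL
visudo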
Lock Down SSH
Everyone does this, right? RIGHT?! I hope so. I’ll happily call you an idiot if you don’t. Thankfully, unlike most Linux installations, FreeBSD’s default sshd will not let root log in. Good. But, to make sure you can ssh into the box before you get a chance to fill out your authorized_keys file, FreeBSD’s sshd allows PAM authentication. All well and good during the installation, but no bueno going forward. So the next step was to edit the /etc/ssh/sshd_config file to disable PAM auth:
ChallengeResponseAuthentication no
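That line alone doesn’t take effect until sshd is restarted, and the public key needs to be in authorized_keys before the door closes. Roughly (the key filename here is a placeholder):
# Put my public key in place before disabling password logins
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat mykey.pub >> ~/.ssh/authorized_keys    # mykey.pub is a placeholder
chmod 600 ~/.ssh/authorized_keys
# Restart sshd so it picks up the sshd_config change
service sshd restart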
I kept one window open to the new server while I attempted to ssh into it from another. All good. No more passwords allowed.
Storage and ZFS
The whole point behind this server is storage. After running the gauntlet of returning and swapping a few of the new White Label drives due to bad sectors and other SATA errors on FreeBSD, I finally had 8 working drives: ada[0,1,4-9]. The reason for the jump in numbering is that the PCI-E SSD got assigned to ada2, and an external eSATA RAID box, which I’ll touch upon later, was assigned ada3.
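As an aside, if you’re ever unsure which physical disk landed on which adaN device, camcontrol will tell you; the serial numbers are what you match against the stickers on the drives:
# List every disk the kernel sees, along with its model string
camcontrol devlist
# Pull the serial number for a specific drive, e.g. ada4
camcontrol identify ada4 | grep -i 'serial number'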
Time to turn the 8 drives into a RAID10 array of 16(ish) TB. ZFS made that stupidly easy:
zpool create local mirror ada0 ada1 mirror ada4 ada5 mirror ada6 ada7 mirror ada8 ada9
Done. Instantly: a large filesystem automatically mounted on /local:
bane# zpool status local
  pool: local
 state: ONLINE
  scan: resilvered 409G in 2h52m with 0 errors on Sat Mar 7 23:33:54 2015
config:

        NAME        STATE     READ WRITE CKSUM
        local       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            ada8    ONLINE       0     0     0
            ada9    ONLINE       0     0     0

errors: No known data errors
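A nice side effect of striping across mirrors instead of using raidz: if I ever outgrow this, the pool can be grown by just tacking on another mirror pair. Purely hypothetical, since all eight bays are already full, but it would look like:
# Add a fifth mirror vdev to the existing pool (device names are made up)
zpool add local mirror ada10 ada11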
I created a new ZFS dataset called local/export, which is where I intended to put the NFS, SMB, AFP, etc. shares:
zfs create local/export
A few more for various things, like moving /usr/src and /usr/ports over to the new RAID volume (how those get wired back to their old paths is sketched after the list):
zfs create local/ports
zfs create local/src
zfs create local/export/music
zfs create local/export/movies
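To get the ports and source trees actually living on those datasets, the idea is the same one I used for /usr/local below: copy the contents over, remove the original directory, and symlink it to the new location. A sketch of the idea rather than a transcript:
# Copy the ports tree onto its new dataset, then point /usr/ports at it
tar -C /usr/ports -cf - . | tar -C /local/ports -xpf -
rm -rf /usr/ports && ln -s /local/ports /usr/ports
# Same dance for the source tree
tar -C /usr/src -cf - . | tar -C /local/src -xpf -
rm -rf /usr/src && ln -s /local/src /usr/src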
With the filesystems all created, I then NFS-mounted the old bane machine onto /old-opt and did a simple:
cd /old-opt; tar cf - . | (cd /local/export ; tar xpvf -)
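In hindsight, an rsync-based copy would have been restartable if anything hiccuped mid-transfer; something like this, assuming the rsync package is installed:
# -a preserves permissions/times/links, -H preserves hard links;
# the trailing slashes mean "contents of", not the directories themselves
rsync -aH /old-opt/ /local/export/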
After multiple hours, everything was copied over. I then spent some time moving my ripped CDs into /local/export/music, and the aforementioned movies into /local/export/movies. I also moved everything out of /usr/local over to /local, deleted /usr/local, and soft-linked it to /local. With that:
bane# df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default     100G    2.6G     97G     3%    /
devfs                  1.0K    1.0K      0B   100%    /dev
fdescfs                1.0K    1.0K      0B   100%    /dev/fd
procfs                 4.0K    4.0K      0B   100%    /proc
local                   10T    3.0G     10T     0%    /local
local/export            11T    629G     10T     6%    /local/export
local/export/movies     13T    3.1T     10T    23%    /local/export/movies
local/export/music      10T     14G     10T     0%    /local/export/music
local/ports             10T    1.5G     10T     0%    /local/ports
local/src               10T     48M     10T     0%    /local/src
zroot/tmp               97G    184K     97G     0%    /tmp
zroot/var/crash         97G     96K     97G     0%    /var/crash
zroot/var/log           97G    2.0M     97G     0%    /var/log
zroot/var/mail          97G    120K     97G     0%    /var/mail
zroot/var/tmp           97G     96K     97G     0%    /var/tmp
Backup Directory
I took the questionable 3TB drives out of the QNAP and added them to an external RAID5 eSATA enclosure. Yes, I know: they were questionable. But even so, I figured I was slightly increasing my odds of data recovery by keeping a copy on them. Anyway, as noted above, the eSATA enclosure showed up as ada3. I created a zpool called backups out of it:
zpool create backups ada3
And then created three ZFS datasets on it:
zfs create backups/bane
zfs create backups/windows
zfs create backups/timemachine
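The intent for backups/bane is to hold copies of the main pool, most likely as ZFS snapshots shipped over with send/receive. A minimal sketch of that idea (snapshot names are illustrative, not a finished script):
# Full copy the first time around
zfs snapshot local/export@2015-03-08
zfs send local/export@2015-03-08 | zfs receive backups/bane/export
# Subsequent runs only ship the delta between the last two snapshots
zfs snapshot local/export@2015-03-09
zfs send -i local/export@2015-03-08 local/export@2015-03-09 | zfs receive backups/bane/export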
This post is getting a bit long, so I’ll write up a Part 2 of sorts, explaining how I got NFS, AFP, Samba, MySQL, and other things running on bane.