I Bought a 3D Printer Too!

| Comments

For the past three to four years, Pat and I have been talking about 3D printers. For a long time, we mostly just discussed them and always arrived at reasons why we weren’t buying one… yet. Each time, the tone of the conversations was the same: 3D printers were incredibly neat and opened an entire new realm of possibilities, but we couldn’t quite come up with the justification to make the purchase. Over the years we’ve tossed out quite a few reasons for not being ready to buy a 3D printer, but they all essentially boiled down to these three:

  1. 3D printers are expensive.
  2. We couldn’t think of problems that we could solve with 3D printers.
  3. We utterly lacked the creative skill needed to work with 3D-modeling software.

For the longest time, we used these three reasons as excuses to not buy a 3D printer. But then Pat abandoned our ideology and bought a 3D printer, and not too long afterward he convinced our local makerspace, TheLab.ms, to buy two 3D printers of its own. For a few months, I lived vicariously through Pat’s adventures at home and watched as he helped members at our makerspace start designing and printing their own 3D models.

Every once in a while, I would identify problems that I encountered, and we’d come up with a solution that involved designing and printing something. Most famously, my last couple of DIY NAS server builds featured a 3D-printed bracket to add support to the power supply. A variation of that object was designed for my own NAS to include a couple of brackets to hold a pair of SSDs that I couldn’t quite cram into a tight space. Pat wound up selling those brackets in his Tindie Store to other DIY NAS builders who used my build as their own DIY NAS blueprint.

Each time that I thought of a problem that could be solved with a self-designed and printed object, it became clearer and clearer that my prior reasoning was invalid. It was nice that Pat was willing and able to design objects and then print them to solve my problems, but in observing the process he was going through, I began to realize that I was missing out on some challenging fun that could provide hours of enjoyment.

About a month ago, Pat told me that he was shopping for a new 3D printer because he was considering upgrading his own 3D-printing capabilities. He eventually sent me a link to a printer that he’d seen on Craigslist that he thought was a good deal, but was a bit of a sideways upgrade for him. The price of that printer eliminated my last remaining excuse—I was going to buy a 3D printer.

New vs. Used

Any time I plan to buy something that I consider expensive, I almost always begin my search by looking for a deal on a used one. Since I also had a little bit of insider knowledge (I knew that 3D printing is a bit more difficult than most people assume it is), I felt that I could find a printer that someone had gotten frustrated with and was willing to cut their losses on, hopefully saving me a few bucks in the long run.

The printers at our makerspace, TheLab.ms, are both Flash Forge Creator Pros. They are dual-extruder MakerBot clones with a nice full metal enclosure. Pat has labored for the last year fine-tuning the printers and training the makerspace members interested in their use. Our familiarity with these printers led me to search pretty much exclusively for similar MakerBot clones.

Ultimately, I wound up buying the same used QIDI Tech printer that Pat had found on Craigslist. It’s also a MakerBot Replicator Dual Extruder clone, extremely similar to the Flash Forge Creator Pro printer. I wound up picking up the printer for about $450.

But What About New Printers?

The good news is that new printers are not expensive enough to change my opinion on getting into 3D printing. New versions of the same printer that I bought can be found starting around $650. The question I wound up asking myself was: “What does that extra $200 buy me?” The answer to that question was: all the bonuses that come from a new product, like support and warranties; newer firmware on the printer; and, in the case of my specific printer, a newer generation of hardware for the printer.

I had budgeted around $750 to buy a 3D printer, so the new versions were well within my budget. But I wound up deciding to go with the used printer, forgo the benefits of buying a brand-new product, and use the remaining budget ($300) in order to upgrade the printer hardware further. Specifically, I’m interested in upgrading the build surface to something larger and swapping in improved hot ends for the two extruders.

I think there’s value in spending that extra $200 to buy the brand-new printer; I just happen to value the upgrades a bit more. However, I certainly wouldn’t have any objections if someone had the opposite view—3D printing is complicated enough that there’s a lot of value in being able to get support from the manufacturer.

My First Few 3D Prints

A common suggestion for your first few prints is to print things to supplement the printer itself. Thingiverse is full of objects that people have designed, shared, and tweaked for their own printers. Many of these objects greatly improve the function and usability of 3D printers.

Magnetic Door Latch

The first difference that I noticed between my QIDI Technology Dual Extruder Desktop 3D Printer and the FlashForge Creator Pros that we use at TheLab.ms is that the FlashForge printers’ doors have a magnetic latch to hold the door shut. On the QIDI Tech printer, the door hung loose without any kind of latch and oftentimes swung inside the printer, much to my chagrin. While surfing Thingiverse, I found an object, the QIDI Tech 1 – magnetic doorstop, which I modified to fit my own smaller magnets. My magnetic door latch does a fantastic job of preventing the door from swinging inside the printer, and the neodymium magnets that I used hold the door firmly shut.

Filament-Alignment Bracket

In addition, I decided to add an alignment bracket for the filament to the printer. The bracket restricts much of the travel of the two filaments and acts as a guide for the filament as it goes up through the tubing towards the extruders. It reduces the likelihood of tangled filaments during a print. At TheLab.ms, we had a couple of occasions where the filaments became entangled because of how far they traveled throughout the various print jobs. On at least one occasion the result was a failed print. We haven’t had any similar failures since using the alignment bracket.

Glass Build Surface Retention Clips and Knobs

The best upgrade that I decided to pursue required a pair of objects. Rather than printing to the build surface of the printer, I wanted to be able to print on inexpensive picture-frame glass that I picked up at Lowe’s. The advantage of printing to glass is better adhesion of the filament to the heated surface, especially once aided by some Garnier Fructis Style Full Control Non-Aerosol Hairspray. I printed Pat’s Knobs for M3 Brass Standoffs (for FlashForge Creator Glass Clips) and the FlashForge Creator Pro – Corner Glass Clips +3mm that he designed for use with TheLab.ms’s two printers. The clips and knobs have done an excellent job of holding my glass in place atop the heated build plate.

3D Design: Not Exactly my Strong Suit

My biggest concern in 3D printing was my absolute lack of ability with anything creative. I don’t have an ounce of artistic or creative ability in my body. It’s just not something that I’m skilled at doing. Truly creative people are creating fantastically detailed, amazing 3D models and printing them on a daily basis. Even before I decided to buy the 3D printer, I knew I’d never be able to do that.

Thanks to Thingiverse, that’s a bit of a moot point. For all the objects that I know I’d never be able to model on my own, somebody’s created and shared their 3D model of the same thing. Considering how many objects are available on Thingiverse, it’s very likely that someone else has already designed whatever object or figurine I’m searching for and shared it for printing.

Even better news—I learned that I could actually build 3D models of my own. OpenSCAD calls itself “The programmers’ 3D Modeler.” While I don’t really consider myself much of a computer programmer, OpenSCAD introduces elements of coding and uses that code to render your 3D models. I found using logic, equations, variables, functions, etc. to build an object to be right up my alley.
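
To give a sense of what that looks like, here’s a minimal sketch of the OpenSCAD workflow. The model, its dimensions, and the file names are all made up for illustration:

```shell
# Write a tiny parametric model: every dimension is a variable, not a drawing.
# (Dimensions and file names here are hypothetical.)
cat > bracket.scad <<'EOF'
width = 40; depth = 20; thickness = 3;      // tweak these and re-render
cube([width, depth, thickness]);             // base plate
translate([0, 0, thickness])
    cube([thickness, depth, 15]);            // upright lip
EOF

# If the OpenSCAD CLI is installed, render the model to an STL for slicing:
command -v openscad >/dev/null && openscad -o bracket.stl bracket.scad || true
```

Changing a variable and re-rendering is the whole design loop, which is exactly the appeal for a non-artist like me.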

Magnetic Webcam Mount

I designed a magnetic webcam mount so that I could attach a Logitech C270 near the print surface for the purpose of monitoring my prints and hopefully capturing some time-lapse video. Using the same neodymium magnets that I used in the printer’s door latch, I built a two-piece object whose base attached to the bottom of the frame that the heated build plate was mounted to. The second piece was an arm that fit into that base that the Logitech C270 mounted to just above the build surface. Ultimately, it didn’t work out because the webcam needed to be much further away from the print surface in order to get decent images of the entire build surface, but as far as being able to design an object with a specific purpose in mind, it was a rousing success for my first try.

Bottle-Drying Rack Shelf Support

My second attempt at designing a 3D part to solve a problem was both fruitful and successful. We have a pair of adjustable bottle drying racks that we use to dry out the numerous bottles we’ve been hand-washing daily for our five-month-old son. What we’ve found is that the upper shelf collapses down to the lower shelf under the weight of all the things that we were trying to load on top of it. Rather than load fewer things, I designed a Shelf Support for the Munchkin High-Capacity Drying Rack. The object slides down over the center spindle and holds up the top shelf at exactly the height we wanted.

What’s Next?

I bought a printer, and I’ve even managed to 3D model some of my own designs, so what’s next? LOTS of 3D printing, of course! But don’t let that rather obvious and simple answer distract you from the fact that I managed to save roughly $300 of my budget on the printer. Do I apply that $300 to a different project, like the 2016 EconoNAS, or do I upgrade the 3D printer? Ultimately, I’ll wind up doing both, but I’ll spend that extra $300 upgrading the printer. Here are the upgrades I’m most likely to do:

  1. Upgrade to a current version of the Sailfish Firmware: The firmware that came with my 3D printer is one of the very early MakerBot Creator firmwares. There is a laundry list of new features available in the latest Sailfish firmware that should improve the function of the 3D printer.
  2. Micro Swiss MK10 All-Metal Hotend Kit with .4mm Nozzle: Upgrading the hot ends of the printer should improve the consistency of the prints. The current extruders include some plastic tubing, which results in some variation in the temperature of the filament as it works through the extruder. Worst of all, this plastic tubing tends to get clogged up with filament; I’ve got one extruder which I think is partially clogged with this exact problem. Most importantly, the net effect of the all-metal hot ends is that print speed can be increased. At TheLab.ms, we’ve been able to increase print speed by 50% via this same upgrade.
  3. Removable Heated Build Plate Upgrade: The upgrade that I want the most is to increase the amount of print area inside my printer. The stock build plate on the printer is 9” x 6”. Equivalent printers with larger print surfaces are the ones that tend to become quite expensive. The build plate upgrade measures 11” x 6”. Those added two inches increase the printable volume from 324 cubic inches to 396 cubic inches, a gain of right around 22%.
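
The arithmetic behind that gain, assuming the roughly six-inch build height implied by the 324-cubic-inch stock figure (9 × 6 × 6 = 324), works out like so:

```shell
# Build volume before and after the plate upgrade.
# Height of ~6 inches is an assumption inferred from the quoted stock volume.
old=$((9 * 6 * 6))
new=$((11 * 6 * 6))
gain=$(( (new - old) * 100 / old ))
echo "stock: ${old} cubic inches, upgraded: ${new} cubic inches, gain: ${gain}%"
```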

Between now and when I upgrade, I’m quite content to continue both working on my own 3D Models and printing things that I like off Thingiverse. If you’re interested in what I’ve been up to, feel free to follow me over on Thingiverse. I imagine I’ll be pretty social with the things that I’m printing. For now, I’m going to start wrestling with putting together a few more copies of the Velociraptor Business Card which I printed over the course of last weekend. What about you guys? What kinds of projects would you use a 3D printer for?

Nextion Enhanced HMI Touch Display (NX4024K032) Review

| Comments

Earlier this year, I published a blog reviewing the Nextion HMI Display from ITEAD and I was really excited by the product. So naturally when ITEAD released the next iteration, the Nextion Enhanced HMI Display, I wanted to get my hands on one and think about building a project around it.

I wound up coming up with an idea for a project that would tie a few blog topics together, including some yet-to-be-written blogs about my “new” 3D printer. I put together a very early and rudimentary prototype of this project to help me review the capabilities of the 3.2” Nextion Enhanced HMI Display.

Nextion Enhanced HMI Display

I received the NX4024K032 from ITEAD. Its key features are:

  • A 3.2” TFT display with a resolution of 400 x 240
  • Battery-powered real-time clock (RTC)
  • 16 MB of flash storage
  • 1024 bytes of EEPROM
  • 3584 bytes of RAM

The Nextion Enhanced HMI Displays appear to be similar enough to the earlier Nextion HMI Displays. The resolutions seem to be mostly the same. The most exciting feature that I found on the Nextion Enhanced HMI Display was its 16 MB of flash storage for storing the interfaces that you build inside the Nextion Editor. This is quadruple the amount of flash that was on the earlier model.


Also, there appears to be some sort of additional connector on the Nextion Enhanced HMI Display that wasn’t on the prior models at all. Its pins are labeled Ground, IO_0 through IO_7, and +5V. I’m assuming this is some sort of interface that could potentially be used with other hardware, like the expansion board for the Nextion Enhanced Display — I/O Extended.

Nextion Editor

The Nextion Editor continues to be available as a free download for building the interfaces uploaded to the flash storage on the Nextion Enhanced HMI Display. It’s possible to transfer the interface via serial directly to the device, or via the onboard MicroSD card reader. The Nextion Editor has also apparently experienced some progress and revisions since the end of last year. Please keep in mind that I haven’t been a prodigious user of the editor this past year, but the newer version is much easier to use than I remember the older version being.

Brian’s Server Monitor Project Prototype

As you may know, I’ve blogged about building my own DIY NAS server as well as building my own homelab server. The idea I came up with for my Nextion Enhanced HMI Display project was a simple little server monitor. I decided I wanted to pull together a number of different blogs all into one project: my ESP8266, my DIY NAS, my homelab server, and my 3D Printer (blogs coming soon!). What I decided to do was build a little “server monitor” that sits here on my desktop by my computer whose purpose was to keep an eye on my NAS, my homelab machine, and my website.

For my prototype, I decided that I’d start off simple and develop some code for the ESP8266 that would ping the server. Based on the responses for each server, it’d display a page on the Nextion HMI Enhanced Display that indicates which servers are up and which servers are down. And to cap things off, I’d design and 3D print some sort of case to retain all the hardware and prop up the project.
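
The core logic of that prototype is simple enough to sketch in a few lines of shell. The actual prototype runs as Arduino code on the ESP8266, and these hostnames are placeholders for my real servers:

```shell
# Ping each host once; report UP or DOWN based on the response.
# (Hostnames are placeholders for my NAS, homelab server, and website.)
check_host() {
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
        echo "$1: UP"
    else
        echo "$1: DOWN"
    fi
}

for host in freenas.local homelab.local example.com; do
    check_host "$host"
done
```

On the ESP8266 side, each UP/DOWN result would drive which status page gets pushed to the Nextion display.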

At this point, the prototype is more of a proof of concept than anything else. It’s a long ways off from being a finished product, and there’s a laundry list of features that I’d like to incorporate into it. However, for the sake of demonstrating the 3.2” Nextion Enhanced HMI Display, I mocked up a few screens and loaded them up.

All told, it took me maybe a couple of hours to create the screens and get them loaded onto the Nextion Enhanced HMI Display. And to be honest, most of that time was spent staring at images and getting them resized to all fit the way that I wanted to on the display.


I was pretty excited about the Nextion HMI Displays at the beginning of this year. Nothing about the new Nextion Enhanced HMI Displays has tempered that excitement. The displays are both low-cost and easy to develop solutions on. They are capable enough to run standalone interfaces that you create in the Nextion Editor. But what really has me excited is the ability to incorporate other SoC hardware like the Arduino and Raspberry Pi in order to create more complicated devices.

Regardless of your expertise and interest levels, the Nextion Enhanced HMI Displays offer something for most tinkerers. You could build a little device like a smart picture frame or a touchscreen menu using only the display and the Nextion Editor. Or if you wanted to get more complicated, you could easily add an interface to your Arduino and Raspberry Pi projects.

Working inside the Nextion Editor was pretty simple. It was quite easy to throw together a few screens full of images, buttons, text, gauges, etc. The Nextion Editor “compiled” the entire thing into a single file that I copied to a MicroSD card; I then plunked the card into the Nextion Enhanced NX4024K032 display and powered the unit back up. Once it booted up, it copied down the new file. At that point, all that was left was to power off the display and remove the card. The next time it booted up, it was running the new interface!

I said nice things about the Nextion HMI Display at the beginning of the year and that is also the case for the newest Nextion Enhanced HMI Displays. I’m especially pleased with the progress that ITEAD Studio has made in developing the Nextion Editor, which I found much easier to use this time around. I’m also pretty excited that the new enhanced displays tout more flash storage. I’m pretty intrigued about this new possible input/output interface and hoping I can find some additional documentation or examples of how to put it to use. But above all else, I’m really jazzed at how affordable this product is. The exact unit that I’m reviewing is currently listed for $24.50 on the ITEAD Studio website. The other displays range from 2.4” all the way up to 7.0” and the price range on those products is about $18 to $82. For what you can do, they all seem to be priced very competitively.

I’m also pretty jazzed about my little “server monitor” project that’ll feature this Nextion Enhanced HMI Display — NX4024K032. It’s going to be a fun little project that touches on a few of my blog topics. I’ll be using my 3D printer to design a case to hold one of my ESP8266 boards, perhaps a movement sensor, hopefully an LED, and also this Nextion Enhanced HMI Display. With some luck, I’ll write an Arduino application that can monitor both the web interfaces and the ping responses from my blog out on the Internet, my homelab server, and finally my DIY NAS system. I love when half a dozen or so blog topics all converge into another topic! What kinds of things would you use the Nextion Enhanced HMI Display for?

Building a Homelab Server

| Comments

A few years back, I built my first NAS, and just this past spring, I upgraded my NAS to bring it up to date. In between building those two machines, I started a habit of building a new NAS every 6 months (or so) because I continue to find it an interesting project to keep repeating, and it’s also rewarding to write about.

One of the things I always lamented about my NAS machines is that I wasn’t really thoroughly utilizing them. There’s plenty of free storage space that’s slowly being nibbled away by my backups of my Windows machines, but I don’t really have any dramatic need for storage beyond backups of a few PCs. No staggeringly large collections of media, or games, or anything else that I imagine starts to take up quite a bit of space. In discussing this unused storage space, Pat convinced me that I should get off my butt and build a homelab server like he did ages ago, but in my case leverage my FreeNAS box for storage.

What’s a Homelab Server for, Anyways?

I’m probably not the best guy to ask to define what a homelab server is, but I’ll still take a stab at it. Nearly twenty years ago, I remember being envious of a friend’s home office. He had quite the collection of secondhand computers from his office fulfilling a variety of purposes. He even had all of his networking equipment set up in something very similar to a Lack Rack. What’d he do with these computers? God only knows! If I recall correctly, he was working on numerous different certifications, and he used all of that hardware to practice and prepare for his tests.

Fast-forward to today, and we have the computing power to do all that on a single machine thanks to virtualization, and this purpose is at the core of what a homelab server is. Effectively, what people are doing is using a single machine to emulate all those secondhand servers that my friend had in his spare bedroom.

Technically, my DIY NAS machine could be used as a homelab server; the latest version of FreeNAS is running atop FreeBSD 10, which features the bhyve hypervisor for hosting virtual machines. Right up until I upgraded my NAS this year, I was quite interested in the possibility of running my various virtual machines alongside FreeNAS. Ultimately, Pat wound up convincing me that separate hardware was the better direction for me to go in.

Important Features and Functionality

So, what exactly did I need a homelab server for in the first place? My initial reason is pretty silly—I wanted to show off by using my NAS as the primary storage of other machines! I built a series of three two-node 10GbE networks here at the house which interconnect my primary desktop PC, my NAS, and now my homelab server. Just for the sake of doing it, I’ve wanted to host a machine’s (virtual or otherwise) primary storage on my NAS and get faster performance than a typical platter hard-disk drive. The fact that I can do that affordably at my house is a bit mind-blowing, and I really wanted to see it in action.

On top of that, I had some practical uses that I want to dedicate virtual machines to:

  • Dedicated OctoPrint machine for my “new” 3D printer (a future blog topic)
  • A better test web server for working on my blog
  • A multimedia server that pushes content to my different FireTV and Chromecast devices
  • Home Automation using openHAB

I’m not unfamiliar with virtual machines. I’ve personally tinkered with a number of different virtualization packages over the years: VMware, VirtualBox, Kernel Virtual Machine, etc. And professionally, it’s been over a decade since I worked directly with machines that weren’t being virtualized.

I cobbled together a few key requirements that I wanted my homelab server to have.

  • Free or Open Source: Seems pretty straightforward. Who doesn’t like free things?
  • Manageable via some Web Front-end: FreeNAS spoiled me by mostly making it unnecessary to spend effort at the command-line. I’d really like to be able to manage my Virtual Machines much like my NAS, via some sort of web front end.
  • Enterprise-quality Hardware: I mostly wanted this for bragging rights, but I’d also like the platform to be rock-solid stable.
  • Intelligent Platform Management Interface (IPMI): This goes hand-in-hand with the above requirement, but it’s way more practical. I’ve enjoyed being able to manage my NAS via the IPMI interface on the ASRock C2550D4I motherboard, and I think an IPMI interface is also a must-have for my homelab machine.



For the CPU, I picked out a pair of Intel® Xeon® Processor E5-2670 CPUs (specs). The inspiration for this selection came from an article I’d read recently: Building a 32-Thread Xeon Monster PC for Less Than the Price of a Haswell-E Core i7. In this article, I learned that the market is flooded with inexpensive used Intel® Xeon® Processor E5-2670 CPUs. The premise of the article is that you could build a very robust primary workstation out of the Xeon E5-2670, and after researching the CPU prices on eBay, I knew I’d found the right CPU for my homelab machine—it made “two” (Haha! Dual-CPU pun!) much sense to build a dual-Xeon machine. With 16 cores capable of running up to 32 threads at up to 3.3GHz for around $100, it was an incredible value and perfectly suited for my homelab server. To cool each of the Xeon E5-2670 CPUs, I picked out a Cooler Master Hyper 212 EVO (specs). It’s a CPU cooling solution that I’ve been happily using for quite some time now, and it had my utmost confidence for this build.


The CPU might have been extremely affordable, but dual-CPU motherboards that accept it are still quite expensive. I tinkered around eBay, hoping that I could find a good source for inexpensive motherboards that’d run the CPUs I picked, but I didn’t have much luck. Instead, I opted for a new motherboard. Using the criteria above, I eventually decided on the Supermicro X9DRL-IF (specs). Aside from the dual LGA-2011 sockets and support for my inexpensive Xeon CPUs, I was also pretty excited about the 8 total DIMM slots supporting up to 512GB of memory, numerous PCI-e slots, 10 total SATA ports, and dual onboard Intel Gigabit network interfaces.


Memory wound up being my second-largest expense, coming in just over $200. I wound up picking four Crucial 8GB DDR3-1600 ECC RDIMMs. I’m guessing that 32GB is a pretty good starting point for my adventures with different virtual machines. An additional four slots sit empty on the Supermicro X9DRL-IF motherboard, so adding more RAM in the future would be quite easy. Hopefully some day the market will be flooded with inexpensive DDR3-1600 ECC DIMMs like it was with Xeon E5-2670s. If that happens, I’ll look to push my total amount of RAM towards the maximum supported by the Supermicro X9DRL-IF motherboard and CPUs.


I planned my homelab server, my NAS upgrade, and my inexpensive 10Gb Ethernet network all simultaneously. In addition to the two onboard Intel Gigabit connections on the Supermicro X9DRL-IF, I also wound up buying a dual-port Chelsio S320e (specs) network card. I talk about it in quite a bit more detail in my cost-conscious faster-than-Gigabit network blog, but each of the ports on the card is plugged into my NAS or my primary desktop computer.


The bulk of my storage is ultimately going to come from my FreeNAS machine, but for the sake of simplicity and a bit of a performance boost, I decided to put a pair of Samsung SSD 850 EVO 120GB SSDs (specs) into the machine and placed them in a RAID-1 mirror.

Case, Power Supply, and Adapters

As I have many times when being frugal in the past, I decided to use the NZXT Source 210 (specs) for my case. The Source 210 is getting harder and harder to find at the great prices I’ve grown accustomed to finding it at, but I was able to find it at a reasonable price for this build. It’s inexpensive, well made, fits all of the components, and has lots of empty room for future expansion.

Of all the praises that I heap on the NZXT Source 210, I discovered it had one shortcoming that I didn’t account for—it lacked 2.5” drive mounting solutions. I was briefly tempted to break out my black duct tape and tape my two Samsung SSD 850 EVO 120GB SSDs inside the case, but I eventually decided to just pick up a 2.5” to 3.5” adapter tray that could hold both SSDs instead. Perhaps if I’d been willing to spend a few more dollars on a case, I would have found something that had some built-in 2.5” drive mounts for my SSDs, but I’m still quite happy with the Source 210.

Choosing a power supply was an interesting decision. My gut said I’d need a humongous power supply to power the two Intel® Xeon® Processor E5-2670 CPUs. But at 115W TDP for each CPU and hardly any other components inside the homelab server, I began to reconsider. Based on some guesswork and a little bit of elementary-school-level arithmetic, I was expecting to be using no more than 250-275 watts of power. Ultimately, I wound up deciding that the Antec EarthWatts EA-380D Green (specs) would be able to provide more than enough power for my homelab server.

The one flaw in my selection of the Antec EarthWatts EA-380D Green is that it lacked the dual 8-Pin 12-volt power connectors required by the Supermicro X9DRL-IF motherboard. When shopping for power supplies, I couldn’t find a reasonably priced or reasonably sized power supply which came with two of the 8-pin 12-volt connectors. Instead of paying too much money for a grossly over-sized power supply, I wound up buying a power cable that adapted the 6-pin PCI Express connector to the additional 8-pin connector that I needed. The existence of this cable is ultimately what allowed me to save quite a few dollars on my power supply by going with the Antec EarthWatts EA-380D Green.

Final Parts List

| Component | Part Name | Count | Price |
| --- | --- | --- | --- |
| CPUs | Intel® Xeon® Processor E5-2670 (specs) | 2 | $99.98 |
| Motherboard | Supermicro X9DRL-IF (specs) | 1 | $341.55 |
| Memory | Crucial 8GB DDR3 ECC (specs) | 4 | $211.96 |
| Network Card | Chelsio S320E (specs) | 1 | $29.99 |
| Case | NZXT Source 210 (specs) | 1 | $41.46 |
| OS Drives | Samsung 850 EVO 120GB SSD (specs) | 2 | $135.98 |
| Power Supply | Antec EarthWatts EA-380D Green (specs) | 1 | $43.85 |
| CPU Cooling | Cooler Master Hyper 212 EVO (specs) | 2 | $58.98 |
| GPU to Motherboard Power Adapter Cable | PCI Express 6-pin (male) to EPS ATX 12V 8-pin (4+4-pin) female | 1 | $7.49 |
| SSD Mounting Adapter | 2.5” to 3.5” Drive Adapter | 1 | $3.98 |
| Total: | | | $975.22 |


Operating System

For my homelab machine’s operating system, I wound up choosing the server distribution of Ubuntu 16.04 (aka Xenial Xerus). I chose this version largely because it includes the ZFS file-system among its many features. The inclusion of ZFS interests me because I’d like to start using ZFS snapshots and ZFS Send in order to act as a backup for my NAS. I’m always keeping an eye on hard drive prices, so the next time I see a good deal on some large drives, I may add three or four of them to my homelab server for this purpose.
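
The general shape of that snapshot-and-send backup, sketched with hypothetical pool and dataset names, looks something like this (these commands need live ZFS pools and root privileges on both machines):

```shell
# On the FreeNAS box: take a point-in-time snapshot of a dataset.
zfs snapshot tank/data@2016-08-01

# Send the snapshot to the homelab server, receiving it into a local pool.
zfs send tank/data@2016-08-01 | ssh homelab zfs receive -u backup/nas-data

# Later runs send only the blocks that changed since the previous snapshot:
zfs snapshot tank/data@2016-08-08
zfs send -i tank/data@2016-08-01 tank/data@2016-08-08 \
    | ssh homelab zfs receive -u backup/nas-data
```

The incremental sends are what make this attractive: after the first full copy, each backup only transfers the changes.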

Virtual Machine Management


My experience managing virtual machines is pretty limited. In the past, I’ve used VirtualBox and VMware on Windows machines to host virtual machines, mostly out of curiosity. In my various professional positions, I’ve used plenty of virtual machines, but I’ve never been on the teams that have to support and maintain them.

When it came time to pick what I’d be running on my homelab server, I deferred to Pat’s endless wisdom from his own homelab experience and wound up electing to use KVM (Kernel Virtual Machine). I thoroughly appreciate that it is open source, that it can make use of either the Intel VT or AMD-V CPU instruction sets, and that it’s capable of running both Linux and Windows virtual machines. But ultimately, I wound up picking KVM because I have easy access to plenty of subject-matter expertise—as long as I can bribe him with coffee and/or pizza.
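
For a taste of what spinning up a guest under KVM looks like from the command line, here’s a rough sketch. The VM name, sizes, and ISO file are made up for illustration, and the commands assume a Linux host with the libvirt and virt-install tooling already installed:

```shell
# Check that the CPUs expose hardware virtualization
# (vmx = Intel VT, svm = AMD-V); these flags are hidden inside most VMs.
grep -m1 -oE 'vmx|svm' /proc/cpuinfo || echo "no virtualization flags found"

# Define and boot a new guest (all names and sizes are hypothetical):
virt-install \
    --name test-web \
    --vcpus 2 --memory 4096 \
    --disk size=20 \
    --cdrom ubuntu-16.04-server-amd64.iso \
    --os-variant ubuntu16.04
```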

Virtual Machine Manager

Because I’m enamored with the ability to do almost all of my management of my NAS via the FreeNAS web interface, I was really hoping that I could find something similar to act as a front-end to KVM. My expectation was that I’d be able to complete a significant percentage of the tasks required for managing the virtual machines through a browser on any of my computers. And for anything else, I intend to have a Linux virtual machine running that I can remote into and use Virtual Machine Manager to do anything that I can’t do easily through the web interface.

Ultimately, I wound up deciding to give Kimchi a try. Initially, I was pretty excited, since Kimchi was available within Ubuntu’s Advanced Package Tool. However, what I found for the first time ever was that it didn’t “just work” like every other apt package I’d installed before. In fact, it took Pat and me quite some time to get Kimchi up and running using the apt package. And once it was actually running, we found it to be quite slow. Finally, I was a bit bummed that the version in the apt package (1.5) was decidedly older than what was out on the Kimchi page (2.10) for download. Instead, I wound up following the directions on the Kimchi download page to install it manually, and to my surprise I was able to pull up the Kimchi interface in a browser and do some management of the virtual machines.

I found the Kimchi web interface handy for some basic virtual-machine configuration and remote access to the virtual machines. However, trickier configuration, like passing a USB device (my 3D printer) through to a virtual machine, just couldn’t be done via the Kimchi interface. For that kind of virtual machine management, I’m planning to use something like MobaXterm on my Windows desktop to access an Ubuntu Desktop virtual machine that has virt-manager on it. It’s a tiny bit more complicated than I would’ve liked, but I’m still pretty happy with the amount of functionality that Kimchi provides via its web interface.
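For what it’s worth, this is roughly what virt-manager does behind the scenes: libvirt can pass a host USB device through to a guest with a small XML snippet handed to `virsh attach-device`. The vendor and product IDs below are placeholders (you’d find the real ones with `lsusb`), and the file and guest names are made up for illustration:

```xml
<!-- printer-usb.xml: describes the host USB device to pass through.
     The vendor/product IDs here are examples, not my printer's. -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
```

Attaching it would then look something like `virsh attach-device octoprint-vm printer-usb.xml --persistent`, where `octoprint-vm` is a hypothetical guest name.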


I’m a big fan of DHCP servers, primarily because I’m lazy and dislike manually configuring static IP addresses. I already had to manually configure six different network interfaces in building out my inexpensive 10Gb Ethernet network, and I wasn’t looking forward to continuing to do that for each and every new virtual machine. Setting up a DHCP server to listen on the 10Gbe links to my homelab server would make it a bit easier on me when spinning up new virtual machines.
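I haven’t picked the software yet, but as a sketch of what I have in mind, dnsmasq can serve DHCP leases on just the point-to-point 10Gbe links. The interface names and address ranges below are examples, not my actual configuration:

```
# /etc/dnsmasq.conf: hand out leases only on the 10Gbe interfaces.
# Interface names and subnets below are hypothetical examples.
bind-interfaces
interface=enp4s0                      # 10Gbe link #1
interface=enp4s0d1                    # 10Gbe link #2
dhcp-range=10.0.2.50,10.0.2.150,12h   # pool for link #1's subnet
dhcp-range=10.0.3.50,10.0.3.150,12h   # pool for link #2's subnet
```

dnsmasq answers requests on each interface from the dhcp-range that matches that interface’s subnet, so each 10Gbe link gets its own pool.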


At the beginning of the year, I really wanted a single server at my house to take care of both my NAS and homelab needs. But as I thought about it more, that concept had some constraints I found less than ideal. I’m still very pleased with FreeNAS, but I didn’t want to be constrained to a hypervisor that runs on FreeBSD. Furthermore, I’m a big fan of being able to do maintenance on one set of hardware without simultaneously impacting both my NAS and my hosted virtual machines.

For just under $1,000, I wound up building a homelab server featuring dual Xeon E5-2670 CPUs (2.6GHz, eight cores each), 32GB of RAM, two dedicated 10Gb links (to my NAS and desktop PC), and mirrored SSDs for the host’s operating system. As it stands right now, this machine is probably overkill for what I need. Pat’s inexpensive and low-power homelab machine is probably more in tune with my actual needs, but I relished the chance to build a cost-effective dual-Xeon machine.

What’s Next?

I need to finish putting together my OctoPrint virtual machine and get to work designing and printing things in the third dimension, which is sure to be a source of many upcoming blogs. After the OctoPrint virtual machine is sorted out, I’m going to tackle some sort of media-streaming virtual machine. Further down the road, I’d like to leverage the fact that Ubuntu 16.04 now ships with the ZFS file system; I wouldn’t mind buying a few large HDDs and using my homelab hardware as a destination for snapshots from my NAS. If you had 16 cores at your disposal in a homelab server, what other purposes would you have for it? What great idea am I currently overlooking?

Building a Cost-Conscious, Faster-Than-Gigabit Network

| Comments

When we first moved into my house, my first project was to enlist Pat’s help and wire up nearly every room with CAT5e cable so that I had Gigabit throughout the house. At the time, we were both quite confident that Gigabit exceeded my needs. Then I built my first do-it-yourself NAS, and I remember being a tiny bit disappointed that my new NAS couldn’t fully saturate the Gigabit link to my desktop without opening many, many file copies. I hadn’t yet learned that I was bottlenecked by the NAS’s CPU, the AMD E-350 APU; instead, I began thinking about bottlenecks and quickly concluded that the network was the most probable culprit. After building my first NAS, I began regularly building other DIY NAS machines, and thanks to Moore’s Law, I was building NAS machines capable of saturating a Gigabit link before it even dawned on me that my first NAS’s biggest deficiency had been its CPU. Earlier this year, I upgraded my NAS and, as expected, arrived at the point where my Gigabit network was my actual bottleneck.

Is a faster-than-Gigabit network really necessary?

Calling my Gigabit network a “bottleneck” is accurate but also a bit disingenuous. The term has a negative connotation that implies some sort of deficiency. The Bugatti Veyron is the world’s fastest production car, yet something limits its top speed to 268 miles per hour; nobody in their right mind would describe 268 mph as slow. Likewise, I was perfectly happy with file copies across my network measuring 105+ MB/sec. In the time that I’ve been using my NAS, I’ve moved all of my pictures and video onto it, and I’ve never felt that it lacked the speed to do what I want.

This raises the question: why am I even interested in a faster-than-Gigabit network? For a long time, I’ve wanted some hardware here at the house that can host some virtual machines. I’d like to build out a few little virtual servers for interests that have come up in the past, like media streaming, home automation, and a test server for working on my blog. My original plan was to run those VMs on the same hardware as my NAS, but I ultimately decided that I didn’t want tinkering with my virtual machines to impact the availability of my NAS, especially since I’d started using the NAS as the primary storage for important stuff.

I was lamenting to Pat one day that I had tons of space available on my NAS, but I felt that the 105 MB/sec throughput was not fast enough for being the primary storage of my virtual machines. Furthermore, I didn’t want a bunch of disk activity from my virtual machines to possibly monopolize my network and impact my other uses of the NAS. Pat pointed out that the theoretical limits of a 10Gb network (1250 MB/sec) were well beyond the local max throughput of the ZFS array in my NAS (~580 MB/sec on a sequential read). With a 10Gbe (or faster) network, I’d have enough bandwidth available to use my NAS as the storage for my virtual machines.
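The arithmetic behind those numbers is simple enough to spell out. This little sketch converts link speeds from bits to bytes (decimal units, ignoring protocol overhead, so real-world throughput will always come in lower):

```python
# Convert a network link's speed from megabits/sec to megabytes/sec.
# Decimal units; actual throughput is lower due to protocol overhead.
def link_mb_per_sec(megabits_per_sec: float) -> float:
    return megabits_per_sec / 8  # 8 bits per byte

print(link_mb_per_sec(1_000))   # Gigabit: 125.0 MB/sec theoretical
print(link_mb_per_sec(10_000))  # 10Gbe: 1250.0 MB/sec theoretical
```

The 125 MB/sec Gigabit ceiling lines up with the 105+ MB/sec real-world file copies I was seeing, and 1250 MB/sec comfortably exceeds the ~580 MB/sec my ZFS array can manage locally.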

Consequently, a seed had been sown: a faster-than-Gigabit network at home would enable me to build my homelab server and use my NAS as the primary storage for my virtual machines. I arbitrarily decided that if my NAS could exceed the read and write speeds of an enterprise hard-disk drive, it’d be more than adequate for my purposes.


I immediately set out researching different faster-than-Gigabit networking hardware and quickly reached a conclusion: the majority of this stuff is prohibitively expensive, which makes sense. None of it is really intended for the home office or consumers; it’s intended for connecting much larger networks carrying far more traffic than takes place on my little network at home. All things considered, I think we’re still a long way from seeing people use anything faster than Gigabit in their everyday computing. As a result, the price of the equipment is likely to remain out of the range of your average consumer’s budget.

What I wound up considering and choosing

Right out of the gates, I was thinking about re-cabling my entire house using CAT6 or running a few extra drops of CAT6 to the computers that needed it. But then I researched the price of both network cards and switches that would do 10Gb over twisted pair copper and quickly concluded that I wasn’t ready to spend hundreds, if not thousands, of dollars to supplement or upgrade my existing Gigabit network.

In talking to Pat, I first set off down the path of InfiniBand network hardware. In fact, our ruminating on this topic inspired Pat to build his own faster-than-Gigabit network using InfiniBand. Digging around eBay, there’s no shortage of inexpensive InfiniBand gear; most shocking to me was routinely finding dual-port 40Gb InfiniBand cards under $20! I was very interested in InfiniBand until I did some research on the FreeNAS forums. Apparently, not many people have had luck getting InfiniBand to work with FreeNAS, and my understanding is that InfiniBand’s performance on FreeBSD has also been a bit disappointing. Without rebuilding my NAS to run on another OS (something I strongly considered), InfiniBand was not going to be the best choice for me.

What ultimately proved to be the best value was 10Gb Ethernet over SFP+ Direct Attach Copper (10GBSFP+Cu). SFP+ Direct Attach Copper works for distances up to 10 meters, and my network cupboard is conveniently located on the other side of the wall that my desk currently sits next to. 10-meter cables would easily reach from my desk to the network cupboard. However, running cables up into my network cupboard wound up being unnecessary due to the expense of switches and my desire to be frugal. There just wasn’t going to be room in my budget for a switch that had enough SFP+ ports to build my 10Gbe network.

Because I decided to forgo a switch, each computer I wanted on the 10Gb network would need a dedicated connection to each and every other computer in that network. Thankfully, my 10Gb network is small and only contains 3 computers: my primary desktop PC, my NAS, and my homelab server. Each computer would be connecting to two other computers, so I’d need a total of six 10Gbe network interfaces and 3 SFP+ Direct Attach Copper cables.
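Those counts fall out of the full-mesh math: every pair of machines needs one cable, and every cable consumes a port at both ends. A quick sketch (the $15-per-port and $10-per-cable figures are rough assumptions based on what I paid):

```python
# Parts needed for a switchless full-mesh network of n machines.
def mesh_parts(n: int, port_cost: int = 15, cable_cost: int = 10):
    cables = n * (n - 1) // 2  # one cable per pair of machines
    ports = 2 * cables         # each cable plugs into a port at both ends
    cost = ports * port_cost + cables * cable_cost
    return ports, cables, cost

print(mesh_parts(3))  # (6, 3, 120): my three-machine network
print(mesh_parts(4))  # (12, 6, 240): why a switch starts to look attractive
```

The parts count grows quadratically, which is exactly why going switchless only makes sense for a very small number of machines.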

What I Bought

For my desktop PC, I wound up buying a pair of Mellanox MNPA19-XTR ConnectX-2 NICs for just under $30 on eBay. I chose the Mellanox MNPA19-XTR on the recommendation from a friend who had used them in building his own 10Gbe network and said that they worked well under Windows 10. Throughout the writing of this blog, I routinely found dozens of these cards listed on eBay with many of those listings being under twenty dollars, and I was also able to find the MNPA19-XTR on Amazon at roughly the same price.

I wound up choosing a different network card for my NAS for a couple of reasons. For starters, room is an issue inside the NAS; there’s a bunch of hardware crammed into a tiny space, and because of that, there’s only room in the case for one PCI-e card. Since that one card would need two ports, I couldn’t go with the inexpensive single-port Mellanox MNPA19-XTR ConnectX-2 cards that seem to be abundant on eBay. Additionally, my research (Google-fu) on popular 10Gb SFP+ cards for use in FreeNAS pointed me to a particular family of cards: the Chelsio T3. Other intrepid FreeNAS fans have had good experiences with cards from that family, so I started looking for affordable network cards within it. In particular, I wound up buying a lot of 3 dual-port Chelsio S320E cards for around $90. At the time I bought mine, I could get the lot of three for roughly the same price as buying two individually, and having a spare here at the house without spending any additional money seemed to make sense.

Finally, I sought out the SFP+ cables that I needed to interconnect the three different computers. Both my FreeNAS box and my homelab server are sitting in the same place, so I was able to use a short 1-meter SFP+ cable to connect between them. My desktop computer isn’t that far away but my cable management adds a bit of extra distance, so I picked up a pair of 3-meter SFP+ cables to connect my desktop to the FreeNAS machine and to the homelab server. Both lengths of cable, one and three meters, seem to be priced regularly at around $10 on eBay.

In total, I spent about $120 to connect my three computers: $90 on network cards ($15 each for the two Mellanox MNPA19-XTR ConnectX-2s and $30 each for the two Chelsio S320Es) and $30 on the SFP+ cables needed to connect the computers together. This is hundreds of dollars cheaper than if I had gone with CAT6 unshielded twisted pair; by my calculations, I would’ve spent anywhere from $750 to $1,300 more building out a comparable CAT6 10Gbe network.

Assembly and Configuration

Because I’d decided to go without a switch and interconnect each of the three machines with 10Gb SFP+ cables, I needed to be what I consider a bit crafty. Saving hundreds to thousands of dollars still had an opportunity cost associated with it. I’m a network neophyte, and what I had to do completely blew my simple little mind, even though it wound up being a relatively simple task.

My first challenge was that each cable had to plug into the appropriate 10Gbe network interface on each machine; for each end of every cable, there was exactly one correct network interface out of the six. I solved this problem with my label machine. I labeled each network interface on each of the computers and then labeled both ends of each cable, identifying the machine name and the interface it needed to be plugged in to.

In configuring the 10Gb links, it was only important to me that each machine could talk to the other two machines over a dedicated 10Gb link. Each of those machines already had existing connectivity to my Gigabit network that went out to the Internet via our FiOS service. Each time Pat made suggestions on how this would work, I scratched my head and stared at him in a quizzical fashion. I am not ashamed to admit that I didn’t have enough of a background in networking to comprehend what Pat was describing. He patiently described the same thing over and over while I continued to stare at him blankly and ask ridiculously stupid questions. As he usually does when I’m not following along, Pat drew a picture on his huge DIY whiteboards, snapped a photo of it, and sent it to me. As the light-bulb above my head began to brighten from “off” to “dim”, I crudely edited that photo to come up with this:

Essentially, each of the 3 different 10Gb links would be its own separate network. There’d be no connectivity back to the DHCP server on my FiOS router, so I’d have to assign each of the network cards an IP address manually. I opted to be lazy and carved all of my home networking out of a single private address range: one Class C-sized subnet for my gigabit and WiFi network, and an additional unique Class C-sized subnet for each of my three 10Gbe links. And because I’m lazy, hate memorizing IP addresses, and didn’t want to come up with unique names for each of the three machines’ numerous network interfaces, I edited the hosts file on each machine so that the other servers’ names resolved to the appropriate IP addresses of their 10Gb interfaces.
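As a concrete sketch of that hosts-file trick (with made-up addresses, not my real assignments), the desktop’s hosts file just points each machine name at the far end of the appropriate point-to-point link:

```
# /etc/hosts on the desktop (on Windows it lives under
# C:\Windows\System32\drivers\etc\hosts); addresses are examples only.
10.0.2.2    nas       # the NAS's interface on the desktop-to-NAS link
10.0.3.2    homelab   # the homelab's interface on the desktop-to-homelab link
```

Each machine gets its own version of this file, naming the other two machines by the addresses on its own point-to-point links, so traffic to those names always rides the 10Gb path.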

At the end of my efforts, I put together this basic diagram outlining my entire network here at home:


The entire impetus for this project was to see my NAS outperform a server-grade (15,000 RPM) hard-disk drive over the network while using Samba. In a recent article on Tom’s Hardware benchmarking various enterprise hard-disk drives, the highest average sequential read speed for any of the HDDs was 223.4 MB/sec. That number was attained by a relatively small hard drive, only 600GB, which isn’t surprising, since hard-drive speeds are impacted by platter size and smaller drives tend to have smaller platters. Nonetheless, I set 223.4 MB/sec as my goal.

First off, I wanted to see some raw throughput numbers for the network itself. Because FreeNAS includes iperf, I grabbed the Windows binaries for the matching iperf version (2.08b), fired up the iperf server on my NAS, and tinkered with the client from my desktop. In a 2-minute span, iperf was able to push 74.5 gigabytes across my network, which measured in at 5.34 Gb/sec, or roughly 53% of the link’s theoretical throughput.
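That figure checks out with a bit of back-of-the-envelope math, assuming iperf’s reported gigabytes are binary (2^30 bytes each); it lands within a rounding error of what iperf reported:

```python
# Sanity-check the iperf result: 74.5 GBytes moved in a 2-minute run,
# assuming iperf reports binary gigabytes (2**30 bytes each).
transferred_bits = 74.5 * 2**30 * 8
throughput_gbps = transferred_bits / 120 / 1e9  # decimal Gb/sec
print(round(throughput_gbps, 2))          # 5.33
print(round(throughput_gbps / 10 * 100))  # 53 (% of a 10Gb link)
```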

Having a crude understanding of how iperf worked, I wanted to see the 10Gbe link saturated. I wound up launching numerous command windows and running iperf concurrently in each, something I later learned I could’ve easily done from a single command line had I bothered to do a little more reading. I lost count of the exact number of iperf sessions I had running at once, but with somewhere around 8 to 10 simultaneous iperf tests, I was seeing 95-98% utilization on the appropriate Mellanox MNPA19-XTR ConnectX-2 network interface on my desktop computer. I must admit that seeing it hit 9.6 Gbps was pretty exciting, and I started to look forward to my next steps.

Nearly full utilization via iperf was great, but it’s nowhere near a real-world test. The hardware in my NAS is very similar to the FreeNAS Mini, so out of curiosity, I dug into quite a few reviews of the FreeNAS Mini to compare its Samba performance to my own. Surprisingly, I found that their results were quite a bit faster than mine (250 MB/sec versus 70 MB/sec), which led me to discover that there are some issues with how I’ve been benchmarking my NAS performance to date, a topic I’m sure to tackle in a future blog so that I can remember how to test it better.

To start, I used IOMeter to try to capture the fastest possible throughput; this is the equivalent of running downhill with a brisk wind behind you. I performed a sequential read test using a block size of 512KB. In that dream scenario, I was able to sustain 300MB/sec for the entire duration of the IOMeter test. I was really excited about this result, as it surpassed my original goal by 34%.

Sequential reads are a great way to find the maximum throughput of a drive, but like most benchmarks, they’re not much of a real-world test. Because my NAS surpassed my original goal by such a large margin, I became hopeful that it would beat that throughput in both directions: reading a file from my NAS and writing a file to it. For my test, I used an Ubuntu ISO as my test file and started off by moving it from my ISOs folder (on my NAS) to a temporary folder on my desktop. According to the Windows file-copy dialog, the speed of the copy ranged between 260MB/sec and 294MB/sec. Afterwards, I moved that file back from my desktop’s temporary folder into the ISOs folder on my NAS; in these file copies, I saw speeds between 220MB/sec and 260MB/sec.

In an actual real-world scenario, the NAS outperformed the enterprise HDD in both read operations as well as write operations, which was a pleasant surprise. Before the test, I would’ve guessed that the write speed would’ve been a bit slower, since there’s more work for the NAS to do on a write.


I’m having a hard time deciding what I’m more excited about: the fact that I was able to build this 10Gb Ethernet network between 3 computers for roughly $120, or the fact that my NAS now outperforms a 15,000 RPM drive over a Samba file share. Now that it’s all said and done, I think it’s the fact that the throughput to my NAS across my network is fast enough to beat an enterprise hard-disk drive. In the near term, this means that I can confidently use my NAS as the primary storage for the virtual machines that I’ll be hosting on my homelab machine. Furthermore, it also means that I could mount an iSCSI drive on one of my desktop computers and it’d work as a more-than-adequate replacement for local storage, which is an interesting alternative in the event of a catastrophic failure on one of our computers if we can’t wait for replacement hardware to show up.

But don’t let my preference diminish the other startling discovery from this little project. I think what might be even more exciting to the general public is that a 10Gb Ethernet network connecting two computers can be built for around $40. In my case, it cost an additional $80 to add a third computer. A fourth computer would be even more expensive (12 total network interfaces and 6 total cables), so at that point it probably starts to make more sense to consider getting a switch.

When it was all said and done, I was pretty pleased with myself. I was able to easily exceed my performance goals, and the icing on the cake is that it only cost me about $120 in order to build 10Gb Ethernet links between each of the most important machines in my household.

Nitrogenated Cold-Brew Coffee

| Comments

The first time I attended TheLab.ms’s monthly home brewing group, I just observed and sampled the prior month’s creations; from that point on, I was hooked. Based on the group’s suggestions, I decided to build a keezer to serve my beer from and a fermentation refrigerator, aka “The Brewterus.” Among my criteria for the keezer was the ability to use both carbon dioxide and nitrogen to serve beers. Most beers are carbonated, but a few (particularly Guinness) are nitrogenated. Nitrogenated beers tend to have what is described as a creamier, smoother feeling in your mouth as well as a less bitter taste, since carbon dioxide is acidic.

Because I planned to serve nitrogenated brews from time to time, Pat suggested that when I don’t have a home-brewed nitrogen beer around, I should consider nitrogenating a cold-brew coffee and serve it on tap. As an experiment, we brewed a small one-gallon batch of cold-brew coffee and tried it out of the keezer and it was delicious! In fact, it was so delicious that I further modified the keezer so that I could add a dedicated cold-brew coffee tap.

What is Cold-Brew Coffee?

Essentially, cold-brew coffee is coffee brewed using water that’s at room temperature or cooler over a longer period of time, usually at least 12 hours. What’s the big deal in that? To me, the biggest difference is that cold-brew coffee is less acidic than traditional coffee; I personally find it quite a bit easier and more enjoyable to drink. Without pretending to have a doctorate in food chemistry, it appears that coffee’s fatty acids are much more water-soluble at higher temperatures.

Cold-brew coffee should not be confused with iced coffee. Iced coffee is brewed hot and then poured over ice to crash-cool it. Depending on the amount of coffee brewed and the amount of ice in the cup, this could also result in a drink that’s a bit watered down. But the same acidic taste that hot coffee has would also be present in iced coffee.

Beans from Craft Coffee

Pat is my local coffee expert, and a few years ago for Christmas, we bought him a subscription to Craft Coffee. Not really knowing anything about coffee, we were a bit concerned that the gift would miss its mark, but we’ve been pleasantly surprised to find that Pat’s continued his coffee subscription all this time. The beauty of Craft Coffee is that you answer a questionnaire about what kinds of coffee you like to drink and their properties, and then choose from a variety of options, which include shipping you small bags of different coffees that align to your preferences monthly (or on some other schedule of your choosing). In Pat’s various blogs about coffee, he’s always spoken highly of the coffees he has received as a result of his Craft Coffee subscription.

Based on their options, I wound up going with the Single Origin – Roaster’s Choice coffee. The advantage of a single-origin coffee is that all of the beans come from the same source instead of being a blend of different beans selected by the roaster. It’s my understanding that the geographic subtleties of a particular coffee bean are more pronounced with single-origin coffees. Single-origin beans also tend to be roasted lightly, which suits a personal preference of mine.

Our first shipment arrived on a Friday; in the box we found 72 ounces of coffee divvied up into six twelve-ounce bags. Opening the box set free quite a bit of coffee-laced aroma, filling our kitchen with its pleasant smell. The Craft Coffee bags have a small hole that allows you to smell the coffee after a gentle squeeze on the bag. I smelled the bag first and tried to pick out the different subtle scents I could identify. I’ve always been a sucker for the way coffee smells, but this was quite a bit better. Firstly, it smelled quite fresh, which shouldn’t have been surprising, as I’ve probably almost always had stale coffee. The coffee also smelled a bit sweet with an undertone of something tangy. I couldn’t quite put my finger on what the scents reminded me of, but it definitely smelled fruity and quite citrus-like.

Craft Coffee, Brooklyn, NY
Producer: Bebes washing station
Origin: Obura Wanonara, Papua New Guinea
Variety: Typica, Bourbon, Caturra
Elevation: 1,500-1,700 meters above sea level
Tasting notes: Sweet, fruited, and floral with notes of apricot, allspice, green tea, mild currant, and lemon curd with grapefruit-like acidity.

Want to give Craft Coffee a try? I certainly recommend it! Using the code ‘brian1544’ will get you 15% off of your order! Even better? It might even help supplement my own cold-brew coffee addiction!

Materials Used

  1. 52 ounces of Craft Coffee
  2. 6 gallons of Crystal Geyser spring water
  3. 6-gallon Glass Carboy
  4. Cornelius Keg
  5. Auto-Siphon
  6. Cheesecloth
  7. 3-piece airlock


Ultimately, we decided to use 52 ounces of the coffee with 5 gallons of water. Because I’m impatient and didn’t want to spend the afternoon dispensing water from our refrigerator, I went ahead and bought 6 gallons of Crystal Geyser spring water, which was on sale at our local grocery store for $0.89 a gallon. Spring water seems to be the superior choice for coffee brewing due to its mineral content.

First, we dumped all of the coffee grounds into the glass carboy, filled it up with 4 gallons of the spring water, and capped the carboy off with a three-piece airlock, although I think the airlock was probably overkill on our part; most cold-brew coffee recipes simply call for covering the concoction while it rests. I hoisted the carboy into the Brewterus, which I had set at 52 degrees Fahrenheit for the final stages of fermentation of Das DoppelGanger, my most recent home-brewed beer. My understanding is that cold-brewing happens at any temperature well below the 205 degrees Fahrenheit that’s ideal for hot brewing. Most cold-brew recipes indicate that room temperature is satisfactory, which led me to believe that the 52 degrees in the Brewterus would be quite fine.

Roughly a day and a half later, I used my auto-siphon to begin transferring the cold-brew syrup into the Cornelius keg, using the cheesecloth to strain out any coffee grounds that got sucked up by the siphon. I was a bit surprised that I was only able to siphon 3 gallons’ worth of cold-brew coffee syrup out of the carboy. I was prepared for the grounds to retain a large amount of water for good, but I was a bit startled that those 52 ounces of coffee grounds wound up retaining a quarter of the water we added to the carboy.

This is where I worried that I’d made a pretty sizable mistake. Rather than taste the syrup and then dilute it down to my preference, I simply emptied my two remaining gallons of spring water into the keg. It wasn’t until just after the water drained from the last bottle that I thought to myself: I wonder if that’s too much water to add? My concern was that I’d overly diluted my cold-brew syrup with the spring water. In the future, I plan to taste-test more frequently as I add water to the syrup.

During the cold-brew process, Pat had used his French press to brew us a couple cups of the month’s Craft Coffee. Prior to nitrogenating the cold-brew coffee syrup, I used a ladle to scoop up a glass of the cold-brew coffee. In my clear glass, the cold brew appeared a bit more opaque than what had come out of the French press. Drinking the two, I found that they tasted quite different, but that difference could be expected given the difference in brewing methods. I decided to go ahead and hook the keg up to the nitrogen gas and do a taste test again in a few days.

After giving Pat a sneak preview a day or two later, most of my fears were assuaged when he said that he found the cold-brew coffee to be every bit as drinkable as the cups of French press coffee we’d drunk while preparing the cold-brew concoction. This is especially exciting because Pat hadn’t been very keen on either of our earlier cold-brew experiments.

(Photo gallery: the cold-brew ingredients and supplies, the coffee grounds being poured into the carboy, and the coffee and water after being added to the carboy.)

First Impression

Why wait until later for that final taste test? It takes a while for the pressurized nitrogen gas to be absorbed into the contents of the keg. Normally for my beers, I crank up the pressure and wait a couple days, but that has typically involved carbon dioxide, which is much more soluble in liquids than nitrogen. I keep my nitrogen at a much higher pressure (~50 psi) in part to account for that lower solubility and to increase the amount of gas in the coffee when dispensing. At any rate, it takes a few days under pressure for the nitrogen to infiltrate the coffee and create that awesome cascading effect and wonderful mouthfeel.

My first conclusion? This coffee from Craft Coffee is every bit as delicious as Pat told me it’d be and as he’s been writing about in his blogs. The entire time I’ve been considering cold-brewing coffee and serving it out of my keezer, Pat’s been encouraging me to get my own subscription to Craft Coffee, and boy am I glad for that recommendation! I’ve tried this month’s coffee through a plain old drip coffee pot, brewed via a French press, and in cold-brew form. In every single form, no matter how badly I might’ve accidentally made it, I’ve enjoyed the coffee. I’m not sure how quickly I can drink five gallons of cold-brew coffee, but once it’s gone I’ll certainly be excited for whatever Craft Coffee sends my way next. My favorite feature of the Craft Coffee subscription is the variety of beans they’re capable of sending out and that every month will be different. Want to give Craft Coffee a shot? Use my code brian1544 and get 15% off!

My most important conclusion from this first impression? Cold-brew coffee is tasty and different! Because of the colder brew temperature, the final product is very different from either hot coffee or iced coffee; it’s quite a bit smoother and tastes less bitter and acidic. Brewing a gallon of your own cold-brew coffee would be pretty easy: buy a gallon of spring water and pour off some room for the grounds (save the poured-off water). Then put 10.4 ounces of coarsely ground coffee beans into your gallon of water and fill it back up to the top. Let the grounds and water sit for 24 to 36 hours in the fridge. Finally, use some cheesecloth and another pitcher, and carefully pour your cold-brew syrup out of the container through the cheesecloth to filter out the grounds. Get as much syrup out of the gallon as possible, then taste your brew and add additional water in case it is too strong. Voila! Your own concoction of cold-brew coffee! It should keep in your fridge for roughly two weeks without problems.
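That one-gallon recipe is just my keg-sized batch scaled down. A tiny helper makes the ratio explicit, assuming the 52 ounces of grounds per 5 gallons of water that I used above:

```python
# Cold-brew grounds for a given amount of water, using this batch's
# ratio of 52 oz of coarsely ground coffee per 5 gallons of water.
def grounds_oz(gallons: float, oz_per_gallon: float = 52 / 5) -> float:
    return round(gallons * oz_per_gallon, 1)

print(grounds_oz(1))  # 10.4 oz for a one-gallon test batch
print(grounds_oz(5))  # 52.0 oz for the full keg-sized batch
```

Keep in mind the grounds hold onto a good chunk of the water, so plan on yielding noticeably less syrup than the water you start with.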

Final Thoughts

In addition to everything I said above, nitrogenating the cold-brew coffee puts the whole thing over the top; it was enjoyable by itself, but once it finished being nitrogenated, it became delicious! Watching the nitrogen cascade up the glass to build the frothy head is mesmerizing. And that head, along with the nitrogen that’s infiltrated the cold brew, creates a very cream-like texture and mouthfeel that’s quite similar to the crema formed by milk in an espresso. It’s pretty awesome that it takes me about 20 seconds in the morning to pour myself a cold-brew coffee before I begin my adventures.

Depending on how quickly we can drink the cold-brew coffee, I expect to turn this into a running series of blogs. For each new coffee that Craft Coffee sends me, I intend to whip up a keg of cold brew coffee out of what they provide. Considering that the warmest months are sneaking up on us, it’ll be a nice treat to have on hand!

My 2016 DIY NAS Upgrade

| Comments

I spend a good chunk of every year researching, building, and writing about different NAS builds. While I’m doing this work, every now and then I get bitten by a temporary onset of jealousy and selfishness. Each of these NAS builds has been incrementally better than my own DIY NAS machine, and each time the urge to keep the new NAS for myself has grown stronger!

Shortly after publishing the 2015 EconoNAS, I decided that the upcoming DIY NAS: 2016 Edition would serve a bit as a prototype for my own NAS upgrade. During the process of building and writing about the DIY NAS: 2016 Edition, I wound up learning a few lessons and made a few tweaks to suit my own needs a bit better.

What’s the same?

Case and Power Supply

I stayed with the U-NAS NSC-800 (specs). I absolutely love the features of this case, most of all its eight removable drive bays and its incredibly small footprint. But as much as I love this case, I hated working inside it, especially finally getting the motherboard mounted. Check out my timelapse video of assembling the DIY NAS: 2016 Edition into the same case to get an idea of how much fun I had. If you’re building a DIY NAS and you’re tight on space, the U-NAS NSC-800 is worth both its price and the effort of cramming everything inside it!

Along with the case, I also stuck with the Athena Power AP-U1ATX30A (specs) to provide the power. It was essentially the best deal on a 1U power supply that I could find, and that didn’t change in the weeks between ordering components for the two different NAS builds. I initially intended to use Pat’s Spacer Bracket for a 1U Power Supply to provide a bit of (unnecessary?) support to the backside of the power supply, but I wound up needing that object redesigned with new features to help solve a challenge unique to my own requirements. More on that challenge below!

Storage Drives

Ultimately, my hard-drive configuration wound up the same as the DIY NAS: 2016 Edition’s, but that is purely coincidence. A few years ago I bought new hard drives and an additional SATA controller card and rebuilt my ZFS zpool to hold seven 2TB hard drives in a RAIDZ2 configuration. In the last four years, I’ve had three drives fail and be replaced with 4TB drives. For my upgrade, I bought replacements for each of the four remaining 2TB hard drives: a pair of Western Digital Red 4TB NAS hard drives (specs) and a pair of HGST Deskstar NAS 4TB hard drives (specs).
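For anyone curious how that shakes out capacity-wise: in RAIDZ2, two drives’ worth of space goes to parity. A rough sketch of the arithmetic (ignoring ZFS metadata overhead and the advertised-vs-formatted size gap):

```python
# Rough usable capacity of a RAIDZ2 vdev: two drives' worth of space
# is consumed by parity, and the remainder holds data. This ignores
# ZFS metadata overhead and advertised-vs-formatted size differences.

def raidz2_usable_tb(drive_count, drive_size_tb):
    if drive_count <= 2:
        raise ValueError("RAIDZ2 needs more drives than its two parity drives")
    return (drive_count - 2) * drive_size_tb

print(raidz2_usable_tb(7, 2))  # the old pool: seven 2TB drives  -> 10
print(raidz2_usable_tb(7, 4))  # after the upgrade: seven 4TB drives -> 20
```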

ZIL and L2ARC Cache Drives

Speaking of storage devices, I ultimately decided to stick with a pair of Samsung 850 EVO 120GB SSDs and use them as ZIL and L2ARC cache devices. Those of you who read the DIY NAS: 2016 Edition may recall I was a bit disappointed with the performance of the NAS with the ZIL and L2ARC cache devices compared to without. Ultimately, I decided that my usage of the NAS at the time didn’t really line up with the benefits that the ZIL and L2ARC provide. It’s also possible that my gigabit network is the primary bottleneck. If you’ve been keeping up with me on Twitter, then you’ve probably observed that I plan to be using my NAS a bit differently in the upcoming months.

What’s Different?

FreeNAS Flash Drive

The first difference between my NAS and the DIY NAS: 2016 Edition is how I handled the FreeNAS OS drive. As I have for almost every NAS build, I stuck with the low-profile 16GB SanDisk Cruzer Fit USB flash drive (specs). But for my own NAS, I added a second flash drive to mirror the OS onto. The SanDisk Cruzer Fit flash drives are inexpensive enough that I’ve slowly acquired quite a collection of them, so it made sense to use one of those extras to add a little additional redundancy to my own NAS.


Memory

Much like the flash drive, I’m still using the same RAM, but instead of just one 16GB kit (2x8GB) of Unbuffered DDR3 PC3-12800 (specs), I opted for two in order to bring the total amount of RAM up to 32GB. Among the things I learned while studying the ZIL and L2ARC is that I would’ve seen more performance benefit had I spent those same dollars on more RAM instead of cache devices. For this build, I toyed with 16GB sticks and even potentially 64GB of RAM, but the cost of the suggested 16GB DIMMs (over $300!) made it far more pragmatic to buy 32GB (4x8GB) of RAM and use the ZIL/L2ARC SSDs to supplement performance.
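The RAM-versus-L2ARC tradeoff comes down to the fact that every block cached in L2ARC needs a small header kept in ARC, which lives in RAM. A back-of-the-envelope sketch; the header and record sizes here are illustrative assumptions, and the real numbers vary by workload and ZFS version:

```python
# Back-of-the-envelope estimate of ARC (RAM) consumed by L2ARC headers.
# The ~70-byte header and 64 KiB average record size are assumptions
# for illustration only; actual values depend on workload and ZFS version.

HEADER_BYTES = 70              # assumed ARC header per cached L2ARC block
AVG_RECORD_BYTES = 64 * 1024   # assumed average cached record size

def l2arc_ram_overhead_mb(l2arc_gb):
    """Approximate MiB of ARC spent tracking a given GiB of L2ARC."""
    blocks = (l2arc_gb * 1024**3) / AVG_RECORD_BYTES
    return blocks * HEADER_BYTES / 1024**2

# Two 120GB SSDs' worth of L2ARC:
print(round(l2arc_ram_overhead_mb(240), 1))  # -> 262.5
```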

CPU and Motherboard

For my own NAS upgrade, I wound up going back to the motherboard from the DIY NAS: 2015 Edition, the ASRock C2550D4I (specs), which is essentially the quad-core little brother of the ASRock C2750D4I that was used in the DIY NAS: 2016 Edition. Originally I had picked the ASRock C2750D4I because I wanted to use those additional four CPU cores to add a bit more functionality to the machine beyond storage. I was hoping that the extra CPU power would enable me to use the NAS to house a few virtual machines.

But then I re-re-re-read Pat’s Homelab Server build blog and rethought my approach. I wound up deciding that an additional machine to host my virtual machines made a bit more sense, hopefully something that I could build with considerable performance for a reasonable price. I hadn’t planned on building that machine until much later this year, but then this article about an affordable dual-Xeon machine got my attention. I finished ordering parts for my own homelab server as I worked on this blog.

I eventually decided that I could go with the ASRock C2550D4I in order to save some money. At the time of purchase, the ASRock C2550D4I was $150 less than the ASRock C2750D4I (specs). I used that money in part to increase the amount of RAM to 32GB and set what little was remaining aside for the parts needed for my homelab server buildout.

Intel recently (2/9/17) disclosed a hardware flaw in the Avoton C2000 family of CPUs as part of their Q4 2016 earnings call. The flaw is going to require a change in how the Avoton C2000 CPUs are manufactured and probably explains the recent chatter about people having to RMA their C2000-based motherboards. I posted an update in the DIY NAS: 2016 Edition with my thoughts on the matter.


10Gbe Networking

The process of building, using, and testing the DIY NAS: 2016 Edition led me to feel that my gigabit network had potentially become a limiting factor. On top of that, I am also planning on using my NAS for the storage of virtual machines hosted on my homelab machine. Because of this, I decided to build a small 10Gbe SFP+ network between my primary desktop, my NAS, and my homelab server by using either dual-port or multiple NICs and interconnecting the machines with twinaxial copper cable. My little 10Gbe network, and how it blew my network-neophyte mind, is a topic for its own blog. Due to the expense of 10Gbe network gear, I wound up trawling eBay for used NICs and found that dual-port Chelsio S320e (specs) network cards could be had relatively inexpensively; I bought a lot of three cards for $90.

Power Supply Bracket

Unfortunately, the footprint of that inexpensive dual-port 10Gbe network card was pretty large, large enough that the backside of the network card was bumping into the stack of two Samsung 850 EVO 120GB SSDs mounted in the U-NAS NSC-800. The default mounting method of these SSDs in the NSC-800 wound up preventing me from adding the Chelsio S320e NIC. I wrestled with the case for a few hours trying to find alternative ways of mounting the SSDs to make room, but the NSC-800 is a challenge in this regard since there’s not a whole lot of space to work with.

Ultimately I concluded that I could mount the SSDs and install the NIC in roughly the same spot, but not by using the mounting hardware that came with the NSC-800. Essentially, I decided that the best solution was to make a sandwich out of the NIC, mounting one SSD below it and another above it, but the stock mounting hardware was insufficient for that goal. In the process of listening to me complain, Pat had a brainstorm—modify the power supply bracket used in the DIY NAS: 2016 Edition by adding some sleeves that the SSDs would squeeze into to be held in place.

If you have access to a 3D printer then you can download and print Pat’s Spacer Bracket for a 1U Power Supply yourself from Thingiverse. Don’t have access to a 3D printer? No problem! Pat’s got the Spacer Bracket for a 1U Power Supply listed on the Patshead.com Store on Tindie.

FreeNAS Configuration

Since I imported my previous configuration, my setup should’ve been identical before and after the upgrade, and it’s roughly the same configuration I would’ve used with the DIY NAS: 2016 Edition. However, after a disappointing initial run of benchmarks, I decided to give the FreeNAS Autotune feature a try. Here’s what the FreeNAS documentation says: “FreeNAS® provides an autotune script which attempts to optimize the system depending upon the hardware which is installed.” Because the hardware had changed significantly, I thought it was a good idea to go ahead and enable this feature. As a result, FreeNAS created a few tunables:

I won’t pretend to have expertise in all of the tweaks that Autotune made on my behalf, but I suspect that a few Google searches would give me a decent idea of why each change was made and how it benefits performance.

Parts List

Component      Part Name                                        Count
Motherboard    ASRock C2550D4I (specs)                          1
Memory         Crucial 16GB Kit (8GBx2) DDR3 ECC (specs)        2
Case           U-NAS NSC-800 Server Chassis (specs)             1
Power Supply   Athena Power AP-U1ATX30A (specs)                 1
SATA Cables    Monoprice 18-Inch SATA III 6.0 Gbps (Pkg of 5)   2
OS Drive       SanDisk Cruzer 16GB USB Flash Drive (specs)      2
Cache Drives   Samsung 850 EVO 120GB SSD (specs)                2
Storage HDDs   Various 4TB HDD Models                           7

(Photo gallery: burning in the CPU, motherboard, and RAM before assembly; SATA cable labeling and management; SSDs mounted in the stock location; experimenting with alternate SSD mounting locations; SSDs mounted in Pat’s 3D-printed bracket; and the finished NAS mounted in Brian’s media cart.)

How Does it Measure up to the DIY NAS: 2016 Edition?

Out of curiosity, I executed the same IOMeter tests as I did in the DIY NAS: 2016 Edition to see exactly how my own NAS measured up performance-wise, and I also wanted to see the impact of the Autotune.



Overall, I had been expecting that my own NAS would be pretty comparable to the DIY NAS: 2016 Edition, and for the most part, I was right. Surprisingly, my NAS outperformed the DIY NAS: 2016 Edition in sequential writes by a good margin in both IOPS and MB/sec. However, for my uses, sequential writes (or reads) aren’t really a very real-world test. IOMeter’s “All Tests” mimics my real-world usage much better than the sequential read or sequential write tests. Within the “All Tests,” my NAS benchmarked at about 87% of what the DIY NAS: 2016 Edition scored. I was hoping to be within 10%, but I was close enough that I am pleased with the outcome, especially once you factor in the additional money I was able to save by going with the ASRock C2550D4I.

What’s Next?

My ultimate goal for the upgrade to my FreeNAS machine is to create a box capable of serving as the disk storage for my yet-to-be-built homelab machine. As far as I’m concerned, I’m pretty certain that my upgraded NAS is up to that task. But I’ve got a couple projects to finish first: building out my poor man’s 10Gbe network and assembling my homelab server.

I’m pretty happy with both the performance of my NAS after all of its upgrades and its cost. In comparison to my prior NAS, its performance is light years ahead of where I was prior to the upgrade. Depending on the test, IOPS and MB/sec for the benchmarks I performed ranged from 60% better to 4500% better. And while its performance lagged behind the DIY NAS: 2016 Edition, it was only by a half-step, and it even managed to outperform the DIY NAS: 2016 Edition in one test.

Hopefully, it’ll be at least another 4 years before I’m upgrading components again except for replacing/upgrading any hard-disk drives which manage to fail between now and the next major upgrade!

Mirroring the FreeNAS USB Boot Device

| Comments

One of the things that I like best about FreeNAS is the fact that you have the option to run it off an inexpensive USB flash drive; in fact, that seems to be the preferred option and is the most encouraged by the FreeNAS community. Consequently, that means you have an additional SATA port available for fulfilling the primary function of your NAS—additional storage. Almost as beneficial is the fact that USB drives are quite inexpensive. However, it’s not been unusual for me to receive some incredulous comments, questions, and other reactions when I explain that I entrust my data to an operating system which is hosted on a USB flash drive.

Usually, after listing out the benefits of having the OS on a USB flash drive, most people will come around and appreciate those same benefits. However, a minority of those people are a bit more skeptical, citing reasons like bad experiences with faulty USB drives in the past or simply not believing that a USB drive can be counted on to host any kind of operating system.

Typically, what I’ve told the remaining skeptics was that losing your OS drive just isn’t that big of a deal in FreeNAS. In the event that the USB flash drive died, it’d be pretty easy to recover. First you’d need a bootable copy of the FreeNAS installation ISO, a replacement USB flash drive, and a few minutes of your time. FreeNAS would get installed on the new USB drive, then the existing zpool could be imported from the data drives, and finally the system configuration database could be restored from a daily backup that FreeNAS does automatically each morning. As part of an upgrade to my own NAS (a future blog topic), I went through these same exact steps just to see how long it’d take and how difficult it was. From start to finish, it took me about 30 minutes and it was not complicated at all.

Personally, I think 30 minutes of downtime is more than acceptable for the overwhelming majority of builders of DIY NAS machines, but that’s just my opinion. I certainly wouldn’t blame someone for saying that it isn’t acceptable for their own NAS. Thankfully, for people with standards a little bit higher than mine, FreeNAS will make a mirror out of your USB boot device. Even better? It’s really simple to set up. FreeNAS even wrote the exact steps in their user documentation (5.3.1. Mirroring the Boot Device):

How to Mirror the FreeNAS Boot Device

  1. Open your FreeNAS UI in a browser.
  2. From the System tab, select Boot.
  3. Click the Status button.
  4. Select either freenas-boot or stripe.
  5. Click the Attach button.
  6. Select the appropriate device from the Member Disk drop-down and click Attach Disk.

From this point, the freenas-boot zpool will be converted into a mirror (from a stripe) and the new device will be added to that zpool. Once that completes, ZFS will begin resilvering, duplicating the data from your existing USB flash drive to the new one. Because the zpool is resilvering, you will get a system alert saying that freenas-boot is degraded. However, this is temporary and clears up once the resilver is complete. On my machine, that took just a few minutes.
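If you’d rather watch the resilver from a shell than wait for the alert to clear, `zpool status freenas-boot` reports a completion percentage. Here is a small sketch that pulls that number out of the command’s output; the sample text and the regex reflect my assumptions about the format, which varies between ZFS versions:

```python
import re

# Extract the resilver completion percentage from `zpool status` output.
# The sample below is illustrative; real output differs between ZFS versions.
SAMPLE = """\
  pool: freenas-boot
 state: DEGRADED
  scan: resilver in progress since Sun Mar 13 09:12:41 2016
    1.02G scanned out of 1.91G at 12.4M/s, 0h1m to go
    1.02G resilvered, 53.40% done
"""

def resilver_percent(status_text):
    """Return the resilver percentage, or None if no resilver is reported."""
    match = re.search(r"([\d.]+)% done", status_text)
    return float(match.group(1)) if match else None

print(resilver_percent(SAMPLE))  # -> 53.4
```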

You can create this mirror from the get-go during the installation too. All that you have to do during the installation is to have your two USB drives connected and then to select them both as targets for the installation. The FreeNAS installer will then create your mirrored boot devices as part of its initial setup.

(Screenshots: the FreeNAS System tab, boot device info and status, the status with the added mirror, and the freenas-boot zpool resilvering and completing.)


The FreeNAS user documentation features this suggestion very prominently:

Note: When adding another boot device, it must be the same size (or larger) as the existing boot device. Different models of USB devices which advertise the same size may not necessarily be the same size. For this reason, it is recommended to use the same model of USB drive.

This warning neither surprised nor worried me. I’ve been using the SanDisk Cruzer Fit line of USB drives for years. In fact, before building the DIY NAS: 2016 Edition, I even bought a handful of them just to have a few extras around the house. When I decided to add a mirrored USB flash drive to my own NAS, I decided I’d buy a couple more. I had enough USB flash drives from the same manufacturer and of the same model that I didn’t think anything of this notice when I made my first attempt. Imagine my surprise when this error message was the result: Error: Failed to attach disk: cannot attach da1p2 to gptid/b2be8286-f11e-a058-00074306bdff: device is too small

Apparently, there have been variations to the 16GB SanDisk Cruzer Fit over time. The drives that I had purchased previously were ever-so-slightly bigger than the ones I bought just this week. How could I work around this? I had a couple options:

  1. Manually back up the system configuration and reinstall FreeNAS while choosing to specify both USB devices. As a result, FreeNAS would size the mirror to the smaller of the two USB drives. Then boot from that new mirrored installation and restore the system configuration.
  2. Dig through my collection of 16GB SanDisk Cruzer Fit drives and try them one by one while hoping that at least one of them is the same size or bigger than the one in my own NAS.

Thankfully, after trying three or four different 16GB flash drives, I found one that was the same size or larger.
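The underlying rule is simple: ZFS will only attach a mirror member that is at least as large as the existing device, down to the byte. A tiny sketch of that pre-check (the byte counts below are made up for illustration; on FreeBSD, `diskinfo` reports the real media size):

```python
# Check whether a replacement USB stick is big enough to attach as a
# mirror of the existing boot device. The byte counts are illustrative;
# "16GB" sticks from different production runs genuinely differ in size.

def can_attach(existing_bytes, candidate_bytes):
    """ZFS refuses to attach a device smaller than the existing one."""
    return candidate_bytes >= existing_bytes

OLD_CRUZER = 16_008_609_792   # assumed size of the original stick
NEW_CRUZER = 15_938_355_200   # assumed size of a slightly smaller newer run

print(can_attach(OLD_CRUZER, NEW_CRUZER))  # -> False: "device is too small"
print(can_attach(OLD_CRUZER, OLD_CRUZER))  # -> True
```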

Final Thoughts

Assuming you’re a bit more meticulous than I have been, you may want some sort of redundancy for your FreeNAS boot device. It’s wonderfully simple to do as part of the initial installation; just insert your two USB flash drives and select them both as destinations for the installation. If you miss it during the initial setup, it’s almost as easy to do through the FreeNAS user interface, as outlined in the user documentation on mirroring the boot device. About the only wrinkle is that when doing it after the fact, you need to be careful that the new device is the same size as or larger than your existing boot device. The complicated part is that you can’t necessarily count on two USB drives being the same size, even if they are the same model!

What do you think? Have any of you been holding off because you don’t have much faith in USB flash drives? Does the FreeNAS feature to easily mirror multiple flash drives help with your concerns at all?

I (grudgingly) Realized that I Wanted a Smartwatch

| Comments

Update (12/9/16): In a recent announcement, Pebble announced that they were shutting their doors and selling off their intellectual property to Fitbit. As such, I probably need to retract any nice things that I said about Pebble’s products down below. My new recommendation to everyone is: don’t buy Pebble smartwatches. The watch might work for now but nobody’s going to honor any kind of warranty, provide any support, or further the platform. I’m sure retailers are going to purge their inventories at rock-bottom pricing, but considering what Pebble’s said lies in store for their products it seems foolhardy to buy at any price. You’ve been warned—you’re almost certain to get far less than what you paid for.

Moreover, don’t buy anything from Fitbit either. While I commend their business acumen in acquiring the intellectual property but none of Pebble’s debt or obligations (for example: supporting the existing users), I think it’s a crummy move on their part to turn their backs on all of the existing Pebble users. I already had a frustrating experience with the Fitbit Force when Fitbit “voluntarily” recalled the Force before fulfilling a pending order that I had waited quite some time for. I hope they do amazing things with the pieces of Pebble that they acquired, but I’ll never buy any of their products again after they disappointed me both directly and indirectly.

I’ll enjoy my Pebble Time Steel as long as I can but it will quit working at some point. When that happens, will I replace it with another smartwatch? I’m not so certain.

For the longest time, the entire smartwatch craze befuddled me. I spent the last twenty years or so being very anti-watch. I spent the ’90s and the decade after wishing that my mobile phone would shrink down to a small enough size that it’d easily double as a pocket watch while liberating my wrist. In fact, when I eventually replaced my watch with my Nokia 8260, I was quite prideful in my ability to predict the future. For the next fifteen years, I scoffed at the notion of needing a watch at any point in the future.

Then a couple weekends ago I was at the hospital, precariously holding my newborn son, when my phone chirped at me as a text message came in, then a few moments later a phone call came in, and then immediately after that another phone call, ultimately all of this followed by a voice-mail notification! My brother, Jeff, was trying to get in touch with me in order to find out where he needed to go in order to come see his nephew for the first time (and to also bring the delicious pork he’d smoked.) But I was both unable and unwilling to reach into my pocket and answer his call. As my Nexus 6 rang and vibrated in vain from the depths of my pocket, I asked myself, “Oh crap. Am I going to need to get a smartwatch now?”

To be fair, I’ve been wearing something on my wrist for a couple years now. I have had a Fitbit Flex for quite some time now, so it’s not like my wrist has been completely naked since banishing watches sometime near the beginning of this millennium. But it’s still a pretty surprising 180-degree reversal on my part, especially when you consider my stubborn nature. I grudgingly resigned myself to the fact that I’d be shopping for a smartwatch in the near future and began to think about the features that I wanted to see in my smartwatch.

Smartwatch Requirements

  1. Battery Life: It seems these days I’m always in search of a charger for some piece of electronics that I’m carrying around. I’d really like to see at least 3 days’ worth of battery life and I’d be willing to pay more or sacrifice other features for a longer battery life.
  2. Fitness Tracking: I’m not an especially active guy, but I like the data that I get to see from my Fitbit Flex, especially its ability to keep count of steps and sleep tracking. In a perfect world, I wouldn’t have to move off from Fitbit as my fitness platform of choice.
  3. Mobile Platform Independent: I’m pretty much a devoted Android guy, but there’s a remote possibility that someday that might change. I’d prefer not to be shackled to any particular mobile operating system just because I happen to own one device from their ecosystem. There’s nothing special about a smartwatch’s functionality that would prevent it from working in numerous environments. If a manufacturer disagrees and sees the smartwatch as an opportunity to further their grip on my household, they’re going to be disappointed.
  4. Color Display: Even though it may consume more battery power than a black-and-white display, I’d still prefer a color display on my watch. My days of a monochromatic watch experience ended with whatever watch I was wearing at the end of the last millennium.
  5. Reasonably Priced: I wasn’t quite sure what dollar figure to place on this, but the smaller the amount the better. The rate at which mobile electronics become obsolete is way too high for me to spend much money on them. For the purposes of my shopping, I set my limit at around $300. I’d consider watches over that price, but they’d really need to blow my socks off.
  6. Chronometer: Oh yeah, it’s a watch—might as well make sure it can perform its primary function.

Determining my requirements didn’t really help me pick a watch at all, but it certainly did help eliminate the Apple Watch. The Apple Watch’s battery would need to be charged at least daily, it requires an iPhone to work, and its cost starts above what I consider to be reasonable. Even if I had an iPhone, I would still be inclined to buy a different smartwatch than what Apple’s currently offering.

The Contenders

There are quite a few choices on the smartwatch market, which was a bit surprising. In fact, there are so many out there that I’m relatively certain I’ve overlooked quite a few products that might fit my needs. Ultimately, I narrowed down the list to the following watches.

Among my criteria, my battery requirement eliminated a number of watches. The Huawei Watch (1 to 2 days), LG Watch Urbane Wearable Smart Watch (1 to 2 days), Motorola Moto 360 (~1 day), and Fossil Men’s FTW2001 (1 to 1.5 days) each failed to meet my minimum of 3 days’ use on a single charge. Furthermore, I was a bit disappointed to find out that each of these watches requires that the screen go to sleep in order to reach those “maximum” charge times. Considering the size of the batteries and displays these watches have, this isn’t a surprising factoid, but it doesn’t stop it from being a disappointing one. I expected that the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch had a similar battery limitation, but it does have a tremendous advantage—price! At around ten bucks, you could buy one for every day of the month before you got to the price of the Huawei, LG, Fossil, or Motorola watches.

The Pebble Time Steel and Pebble Time both meet my battery criteria thanks to their e-paper displays. The best part about an e-paper display is that it only requires power to update, so not only does it use a fraction of the power of other smartwatches’ displays, but it also means things like the time can be presented on the display and remain there without consuming any power until they need an update. I’ve owned a Kindle Paperwhite e-reader for a while and enjoyed using it quite a bit, which gives me a measure of confidence in e-paper displays.

The Decision

Who says you can’t have your cake and eat it too? The Pebble Time, Pebble Time Steel, and Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch all met most, if not all, of my criteria. Both of Pebble’s offerings actually met all of my criteria. By the time I was done shopping, I had made up my mind to buy the Pebble Time Steel, mostly due to its larger battery. But at only $10, it seemed like a no-brainer to also buy the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch!

Both the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch and the Pebble Time Steel showed up on the same day, so what did I do? Put them both on, naturally! I actually expected that this would cause problems, but I wound up being pleasantly surprised to see that notifications were getting sent to both of my watches. It wound up being a bit difficult to use either watch with both on my left wrist, so I wore one on each wrist. Thank goodness I’ve been housebound with fatherly duties, as I looked like an even bigger dork than usual!


The Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch really surprised me, considering its price of around $10. Because of the price, I had pretty low expectations. However, the smartwatch was quite capable and exceeded them. It instructed me to download an app, BT Notification, to manage which notifications would get passed on to the watch. One feature present on the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch but missing on the Pebble Time Steel is Bluetooth headset functionality. I was able to successfully call Pat and leave him a voice-mail despite his well-stated position on voicemail. Speaking of Pat, when I gave him the smartwatch to play around with, he discovered a feature that I overlooked—the smartwatch can also access your phone’s camera remotely. We couldn’t think of many uses for having access to a remote camera on our wrists, but Pat pointed out that it’d come in handy if you had to see behind something that you couldn’t quite fit your head behind. As expected, you’re able to control (and listen to) your phone’s music, place a call to someone from your phone’s contacts, and read your text messages. On top of that, the smartwatch also contains a bushel of other miscellaneous built-in apps, including a calculator, stopwatch, alarm clock, pedometer, calendar (separate from your phone’s calendar app and data), sleep tracker, and a couple others.

I wound up not caring much for the interface of this smartwatch—the touchscreen is just a bit too difficult to use precisely, and the design of the user interface is both basic and lacking. The act of acknowledging and dismissing a notification on the smartwatch was difficult enough that I’d probably prefer doing it from my Nexus 6 instead. I also found that the smartwatch’s configuration options left quite a bit to be desired. You can change the notification and ring tones, but the choices are all pretty crummy and there are only 2-3 for each. The battery life is also pretty lacking—I charged the Fixing_DIY watch immediately, and within a few hours of heavy use it needed another charge, which was a letdown. The poor battery life gave me doubts about whether it could survive an entire day. The watch was also a bit bigger than I would’ve liked, and noticeably bigger in all three dimensions than the Pebble Time Steel.

There were quite a few things about the smartwatch that I liked, especially the price and its handling of the phone’s notifications, but there were also things that I disliked: the touchscreen, the size, the user interface, and the battery life. All that being said, I think the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch is a great value at $10. It ticks off quite a few of my must-have features for a smartwatch and does it all for less than the price of a movie ticket. I think this smartwatch would be an excellent investment for anyone who isn’t quite sure they want a smartwatch and isn’t willing to spend hundreds of dollars just to satisfy their curiosity.

(Photos of the Fixing_DIY watch: head-on with the display active, on each side, connected to the charging cable, and at shallower viewing angles.)

Pebble Time Steel

At the price of roughly 19 Fixing_DIY watches, I had pretty lofty expectations for the Pebble Time Steel, although in its defense its price was quite a bit more reasonable than the offerings from Apple, Samsung, and Motorola. The Pebble Time Steel surprised me in that it was quite a bit smaller than I expected. A friend of mine has the original Pebble Watch, and I was surprised to find that the Pebble Time Steel is a bit smaller than her Pebble Watch. In fact, I’d wager that the Pebble Time Steel took up just about as much of my wrist as my beloved calculator watch from the ’80s, though my wrists are a bit bigger now than they were back then.

So what does the extra $180 get you when comparing the Pebble Time Steel to the Fixin_DIY watch? Quite a bit! First and foremost is battery life. I didn't charge the Pebble Time Steel when I first received it, and on its initial charge, under pretty heavy use, the battery lasted five days. And thanks to the properties of the e-paper display, the watch is always on; it was interesting to me how frustrated I got with having to hit a button on the Fixin_DIY watch just to wake up the display and see the time. Another exciting feature of the Pebble Time Steel is its plethora of apps and watchfaces. I definitely have a desire to display data from my Continuous Glucose Monitoring system as well as some of the data from my web-analytics platform, Piwik. While I couldn't find exactly what I was looking for in Pebble's app store, after looking at some of Pebble's development material, I'm reasonably confident that I can build it myself. Lastly, the Pebble Time Steel is water resistant to 30 meters, with some limitations, which means all of my watery day-to-day activities (showering, washing dishes, getting peed on by the baby) aren't likely to cause any ill effects.

Photo gallery: a dead-on shot with the display active; lying on its side (left side up and right side up); the watch and its magnetic charging cable; connected to the charging cable; and a shallower viewing angle.


I'm spending quite a bit more time these days with my hands full, completely unable to pull my smartphone from my pocket. I thought a smartwatch would help out in those situations, and I was mostly correct. But things I assumed I could do one-handed actually require two hands: one hand is tied up wearing the watch, while the other makes selections via the touchscreen or buttons, which I found a bit disappointing. On the flip side, I was already wearing something on my wrist, and I'd caught myself wishing a few times that it had a watch face and that I could somehow use it with my smartphone.

I’m actually pretty pleased that I bought the smartwatch, but a tiny bit disappointed it didn’t solve the exact problem that I purchased it for. I’m excited because it’s a fun little gadget that I get to tinker around with. The Pebble Time Steel wound up meeting all of my smartwatch criteria, and I truly am appreciating that all of the notifications I care for are getting forwarded to my watch. In fact, I’m tempted to mute the notification tone and vibration on my Nexus 6 as a result of buying a smartwatch.

It may not have been the perfect solution to the problem I was hoping it would solve, but overall I’m pleased with owning a smartwatch. There are a number of things that I wouldn’t mind incorporating into my smartwatch: keeping track of my traffic on my blog, keeping track of my blood-sugar data from my continuous glucose meter, and incorporating the watch into my own home automation. If I can accomplish those tasks then the smartwatch will wind up being a very useful addition to my arsenal of gadgets. Otherwise? Then it’s an expensive toy, but not the kind of toy I expect I’ll outgrow too soon.

How about you guys? What purposes do you have for smartwatches that I’m overlooking? And on the flip side, what concerns do you have that might be stopping you from seriously considering a smartwatch?

Home Brew: Das DoppelGanger

| Comments

As I’ve mentioned in past blogs, one of the big reasons I decided to join our local makerspace, TheLab.ms, was their Brew of the Month program. I had always been interested in the prospect of brewing my own beer, but it took a group of other enthusiasts to finally act on my curiosity.

The last Brew of the Month that I attended was at the end of February, when we brewed TheLab DoppelGanger. The DoppelGanger is a doppelbock imitating a chocolate stout. Of the beers that I've participated in brewing, this was by far the most complicated. As I understand it, it's the first time that TheLab's brewers have attempted a beer that included a triple infusion mash. Our brewmaster, Richard, warned us that we had a long night ahead of us when he shared the details of the month's brew at TheLab's monthly members' meeting. Richard wasn't exaggerating; the night we brewed, I didn't make it home until well after two a.m.—much to the chagrin of my dogs, Crockett and Zoe. On top of the complicated brewing process, it was going to be a doppelbock, which meant it would ferment in the Brewterus for twice the normal time: two months before we could all enjoy the fruits of our labor.

Over Easter weekend I kegged the DoppelGanger, and because I’m married to a German, I began referring to it as “Das DoppelGanger.” The smell of the chocolate malt really stood out as I transferred the beer from my carboy into the Cornelius keg. A pleasant chocolatey-beer aroma permeated the room where my keezer is located. If not for the power of Pine-Sol, that wondrous smell would’ve quickly enveloped the entire house due to the enormous mess I made while putting the brew into the keg.

Of the few beers I’ve brewed, I had the hardest time with Das DoppelGanger. The fermentation was a bit more complicated and wasn’t the set-it-and-forget-it that I’d done with previous beers. As a result of some of the excitement around our newborn son’s arrival, I wound up not exactly adhering to the recipe. Richard assured me that I was fine long before I kegged the beer, but I had that inkling of a doubt.

I really wish that I had a sophisticated palate and an armory of descriptive adjectives, but sadly I think I lack some of the tools and definitely the experience to effectively describe what I'm tasting. First, the DoppelGanger is quite a dark beer, reminding me quite a bit of a cup of coffee. I believe this is primarily thanks to the combination of the Munich malt, Carafa III, and chocolate malt. Because it's a doppelbock, it's got a pretty considerable heartiness to it and a higher alcohol content than the beers I most typically drink.

Photo gallery: prepping the carboy and keg for transfer; the DoppelGanger, dark and quite coffee-like in appearance (even darker with the camera flash active); slowly being siphoned from the carboy into the keg; updating the label on the beer tap handle; the DoppelGanger handle installed and ready to go; and the first poured glass of the DoppelGanger.

What did I think?

Historically, I’ve mostly enjoyed the lighter side of beers. I’ve always had a real bias towards beers that are crisp and smooth. The darker, heartier beers never really captured my fancy. That being said, I’m slowly coming around to the dark side. A few helpful bartenders have helped me identify and like quite a few darker beers.

If the DoppelGanger were on tap at one of my favorite watering holes, I'd definitely order a glass or two. That being said, I'd probably prefer to do it on an empty stomach—it's quite a filling beer. It's also really quite smooth and enjoyable. I worked on this blog while sipping my very first glass of the DoppelGanger, finishing both the glass and the first draft of the blog almost simultaneously. I'm going to enjoy drinking (and sharing… maybe) all five gallons of the DoppelGanger currently on tap in my keezer!

DIY NAS: 2016 Edition

| Comments

Update (3/20/2017): I recently published the DIY NAS: 2017 Edition which features an Intel Xeon-D 1541 CPU, 64 GB of RAM, 40TB (5x8 TB) of gross storage, and more! The DIY NAS: 2016 Edition is still a fine platform to build your own do-it-yourself NAS around, but if you’re more interested in what my latest build is, I suggest that you go and check out this newest build instead. Unless a major development occurs, this’ll be the last update that I make to this particular blog.

A few years ago, I asked myself, “Can I build my own DIY NAS?” And ever since then, I’ve been answering that question in the form of a couple different build blogs each year. Each build has a bit of a theme: how I would rebuild my own NAS and what parts I’d select for a more economical build. For 2016, I’m varying from that theme ever so slightly. The DIY NAS: 2016 Edition was specifically written with my own NAS in mind.

In the past 4 years, I've added additional drives to my NAS and replaced a couple of failed drives. Today there are 7 HDDs in my NAS: 3x4TB and 4x2TB drives. But I've also had some odd communication errors writing to my HDDs, and after replacing all the SATA cables, I've become convinced that the drive cage in my Lian Li PC-Q25B is the root cause. That realization was enough reason to go ahead and upgrade my own NAS; it just didn't make any sense to me to take a 4+-year-old motherboard and put it into a brand-new case! I decided that the DIY NAS: 2016 Edition would be an ideal sandbox for figuring out exactly which hardware I'd wind up buying for my own upgrade.

Unfortunately, my appendix had other ideas—right when I was ready to put all the hardware together it became inflamed and required a trip to the emergency room, and ultimately to the operating room. Instead of spending the holidays working on this NAS blog, I wound up being busy getting better. Curse you, vestigial organs!

With all of that behind me, what exactly did I have in mind for my NAS upgrade? My biggest motivating factor was the incorporation of bhyve into FreeBSD 10, which, once it makes its way into a future version of FreeNAS, would allow virtual machines to be hosted on my NAS.

CPU & Motherboard

In my NAS-building experience the selection of the motherboard is the most important and therefore most time-consuming decision made when planning your NAS build. I have a set of criteria that’s incredibly important to me that I work from for each build:

  1. Small form factor: Real estate in our home office is very valuable for two reasons: it’s difficult to find and it’s full of important devices. Because of these factors, I like picking diminutive motherboards that don’t require full-sized computer cases. This usually narrows my search down to browsing through the various available Mini-ITX or Micro ATX motherboards.
  2. Low-power CPU support: Because I leave my NAS running 24/7, the cost savings of a power-sipping CPU justify the premium that gets charged for low-power CPUs. Over the life of the device, the low-power CPU will more than pay for its price premium.
  3. 6 or more SATA Ports: 6 SATA ports are enough to build out a pretty decently sized array while also including a couple drives’ worth of parity for the sake of fault tolerance.
  4. Onboard Gigabit: This is mostly because I wired up my house with CAT5e and wanted to make sure I could make use of it. But because transfer speeds to your NAS are going to depend on the speed of the network interface, it makes sense to ensure that the fastest possible interface is included on the motherboard. Because Mini-ITX motherboards usually have only one PCI-e slot, I like to keep it free for a future SATA controller card rather than occupy it with a network card, which is why I prefer the network controller to be built into the motherboard.
  5. Integrated and Passively Cooled CPU: There’s no real requirement here that the CPU is integrated, but I’d rather have a motherboard with an integrated CPU just because I’m a bit lazy and appreciate the simplified installation. But what’s really important here is that the CPU can be passively cooled without an added fan. I’m not a big fan of sitting in a room of noisy computers.
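The low-power payback argument in item 2 is easy to sanity-check with rough numbers. In this little sketch, every figure (the $50 premium, the 25W of savings at the wall, the $0.12/kWh electricity rate) is an illustrative assumption, not a measurement from any of these builds:

```python
# Rough break-even estimate for a low-power CPU's price premium.
# All numbers below are illustrative assumptions, not measurements.

def payback_years(premium_usd, watts_saved, usd_per_kwh):
    """Years of 24/7 operation for the energy savings to cover the premium."""
    kwh_saved_per_year = watts_saved * 24 * 365 / 1000
    return premium_usd / (kwh_saved_per_year * usd_per_kwh)

# e.g. a $50 premium, 25W saved at the wall, $0.12/kWh:
years = payback_years(50, 25, 0.12)
print(f"{years:.1f} years")  # prints "1.9 years"
```

Under those assumed numbers, the premium pays for itself in under two years of always-on operation, well within the service life of a NAS.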

In my research for the DIY NAS: 2015 Edition, I discovered the ASRock C2550D4I, a motherboard that seemed to be designed with a DIY NAS server in mind. To this day, I'm still impressed with its size, its fanless design, and the number of SATA devices it can support. For the 2016 DIY NAS, I was quite tempted to stick with it for a second year in a row. However, because I also have a goal of running a small virtual machine or two on my own NAS, I decided to upgrade to the C2550D4I's big brother, the ASRock C2750D4I (specs). The two motherboards are virtually identical, with the ASRock C2750D4I's CPU featuring an additional 4 cores, which should come in handy considering my virtual machine aspirations. The extra CPU horsepower carries a hefty premium of an additional $100, which is why I think the ASRock C2550D4I is still a fantastic alternative. Both of these motherboards fit all of my ideal NAS-building criteria.

Running Total: $418.04

Update (2/9/17): Atom C2000-family Design Flaw

A couple different readers alerted me to a story that came out of Intel's Q4 2016 earnings call. Apparently there is a flaw in the Intel Atom C2000 family which requires a hardware fix. This almost undoubtedly means replacing the motherboard to obtain that fix, assuming that ASRock manufactures repaired motherboards. The end result of the flaw is that the system becomes unbootable. This is a failure which I personally have experienced once already, and one of the past #FreeNASGiveaway winners also experienced a very similar issue that resulted in submitting the motherboards to the ASRock RMA process. This is pretty bad news for what's been hands-down my favorite CPU to build DIY NAS machines around.

What does this mean for DIY NAS builders? Buyer beware! I for one still love the Avoton C2550 and C2750 motherboards that I've picked. I've had to RMA my NAS's ASRock C2550D4I motherboard once already, and it's a bit disappointing that another motherboard RMA probably awaits my own NAS. But I'm not going to rush out and replace the motherboard with something else. My hope is that ASRock produces new boards with the necessary fix and begins to use them in its RMA process. I've contacted ASRock's support 2-3 times and have always had positive experiences working with them. While my NAS is an important piece of hardware in my house, I can cope with an occasional stretch of downtime while I await the motherboard's RMA.


Memory

Because the motherboard supports it and because it is the better option, I chose to buy ECC RAM, despite my confidence in using Non-ECC RAM for my DIY NAS builds. FreeNAS suggests around 1GB of RAM per 1TB of raw storage, but I haven't personally run into any issues building machines that fall short of that rule of thumb. For this NAS, I decided to go with a 16GB kit (2x8GB) of Unbuffered DDR3 PC3-12800 (specs).

Running Total: $505.03

Case, Power Supply, and Cables

The case is your second most important item when it comes to building a DIY NAS. I typically wind up spending almost as much time looking at different cases as I do motherboards. Mostly, you want to pick a case that’s going to fit the maximum number of drives you can project your NAS containing. Even if you wind up building a smaller NAS (2-4 HDDs total) I suggest that you pick a case that can hold up to 6-8 HDDs. That way, if you wanted to add storage quickly and easily, you have a few empty hard drive bays to work with.

Last year’s case was the Silverstone Tek DS380B and when I was building it, I was envious of the removable drive bays in the case. I think that easy access to the NAS’s hard-disk drives is a very luxurious perk. I’ve been very happy with my Lian Li PC-Q25B but I’d be lying if I said I wasn’t tempted last year to buy that case and use it in my NAS. I was bound and determined to buy another Silverstone Tek DS380B for this year’s NAS (as well as for my upgrade) but then somebody commented on Google+ asking me about the U-NAS cases.

Specifically, I was asked about the U-NAS NSC-400, which I think is a little small. But I was intrigued; if a bigger version of that same case existed, it'd be a very tempting alternative to my prior favorite cases. As I'd hoped, an 8-drive version did exist: the U-NAS NSC-800 (specs). U-NAS built a great case for its own NAS devices and then wisely decided to sell the same case to others who wanted to build their own DIY NAS. Its most important feature is room for 8 HDDs in removable, hot-swappable drive trays. In addition, it has room for a couple of 2.5" hard drives. It seemed extremely compact at 316mm x 254mm x 180mm, and it claimed "Ultra Quiet Operation."

Reading through the specifications, I was pretty excited and hardly skeptical, except about that last item. In addition to claiming ultra-quiet operation, the specifications noted that you need a power supply designed for use in a 1U server rack. For those of you who've never been in a data center or in the vicinity of a running 1U server, "quiet" is the last word you'd use to describe its operation. Every time I've heard a rack-mount server running, it's sounded a bit like a 747 taxiing for takeoff.

All that being considered, I was hopeful that I could find a 1U power supply that was on the quiet side, hopefully no louder than the number of drives spinning up in the case. I’d actually picked an entirely different power supply, which is what you’ll see in all of the parts photos, but I found out that my original choice wouldn’t work. I instead picked out an Athena Power AP-U1ATX30A (specs) to go in the case.

Learning from one of my past mistakes, I assumed that I wouldn’t have anywhere near the SATA cables I’d ultimately need, so I decided to pick up two packs of (5) 18” SATA 3.0 cables. In my NAS-building experience I’ve found that even though the motherboards are designed to support a large number of drives, the manufacturers are keeping their costs low and only including 1-2 total SATA Cables. My suggestion to other DIY NAS builders is to make sure you have more SATA cables than you actually need.

Running Total: $761.15


FreeNAS Flash Drive

What's impressed me most these past few years of building NAS machines is that there's really only one component which hasn't changed from year to year: the USB drive responsible for running FreeNAS. I continue to recommend the SanDisk Cruzer Fit USB drives (specs). The FreeNAS hardware requirements say you need a drive that's at least 8GB, and their suggested size is 16GB, which is what I picked for this NAS. I'm a big fan of this USB drive because of its low profile: it fits in the USB ports on both the front and the back of the case and doesn't protrude excessively from where it's inserted. I think it's ideally suited for the back of the case. Because I continue to have good luck with these drives, I'm pretty certain I'll be using them again in future builds.

Cache SSDs

I've been teasing a few surprises on Twitter, Facebook, and Google+ pretty frequently, and this is the first of those surprises. For my own NAS upgrade, I wanted to implement both a read cache and a write cache to sit in front of the HDDs. To accomplish that, I picked out a pair of Samsung 850 EVO 120GB SSDs (specs). Everything that I've read about the Samsung 850 EVO indicates that it performs pretty well and, more importantly, is pretty durable. I picked a pair of SSDs because it's imperative that your write cache be redundant. To achieve that, I'll create a partition on each of the SSDs and then mirror those two partitions. The rest (or the appropriate remaining amount) of each SSD will be used to create a striped read cache.
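The capacity math of that layout is simple: a mirrored write cache only yields the capacity of one partition, while striped read-cache partitions add together. In this sketch the 16GB write-cache partition size is purely an assumed number for illustration, not necessarily the size I carved out:

```python
# Sketch of the cache layout described above: each SSD gets a small
# write-cache partition (mirrored across both SSDs) and the remainder
# of each SSD joins a striped read cache.

SSD_GB = 120              # capacity of each Samsung 850 EVO
WRITE_PARTITION_GB = 16   # assumed partition size, for illustration only

# A mirror's usable capacity is that of a single member partition.
write_cache_gb = WRITE_PARTITION_GB

# A stripe's usable capacity is the sum of its member partitions.
read_cache_gb = 2 * (SSD_GB - WRITE_PARTITION_GB)

print(write_cache_gb, read_cache_gb)  # prints "16 208"
```

The design choice falls out of the arithmetic: redundancy is cheap for the small write cache, while the bulk of both SSDs still goes toward read caching, where losing a device only costs you cached data.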

NAS Hard Disk Drives

The hard-disk drives that you wind up using for storage in your NAS should always account for most of your expense. If your HDDs don’t account for at least 50% of your total expenditures then you’re probably spending too much money on the wrong components! In building various NAS machines over the years, I’ve come to believe that it’s quite a bit better to buy more drives instead of buying bigger drives. The tempting advantage of buying bigger drives is that they’re almost always more cost efficient; the larger the drive, the better the dollars-to-gigabytes ratio is.

If you were buying one hard drive for your new desktop computer, I'd tell you to buy the biggest drive you can afford and to make sure you back up all of your critical data. But in this case you're not buying just one drive, you're buying several, so the same advice doesn't work out nearly as well. Let's consider a few theoretical arrays, each with 24TB of total raw storage, built from 6TB (4 HDDs), 4TB (6 HDDs), or 2TB (12 HDDs) drives, at two different levels of redundancy: one redundant HDD and two redundant HDDs:

Size   Quantity   Raw Storage   Usable w/ 1 Redundant HDD   Usable w/ 2 Redundant HDDs
6TB    4          24TB          18TB                        12TB
4TB    6          24TB          20TB                        16TB
2TB*   12         24TB          22TB                        20TB

* Note: This is just an example; I'm not suggesting a 12x2TB array is the optimal configuration.

I think what's most important here is the "I" in RAID: Redundant Array of "Inexpensive" Disks. The greater the number of HDDs you can squeeze into your budget, the more configuration options you're going to have. And the more configuration options you have, the bigger and/or more fault tolerant your array can wind up being, which is a very good thing!
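The arithmetic behind the table above works out like this (a deliberate simplification that counts whole redundant drives and ignores filesystem overhead):

```python
# Usable storage = raw capacity minus the drives reserved for redundancy.
# This deliberately ignores filesystem/partition overhead.

def usable_tb(drive_tb, count, redundant):
    """Usable TB for an array of `count` drives with `redundant` drives of parity."""
    return drive_tb * (count - redundant)

# (drive size in TB, drive count) for the three 24TB-raw configurations:
configs = [(6, 4), (4, 6), (2, 12)]
for size, count in configs:
    raw = size * count
    print(size, count, raw, usable_tb(size, count, 1), usable_tb(size, count, 2))
# prints:
# 6 4 24 18 12
# 4 6 24 20 16
# 2 12 24 22 20
```

Same raw storage in every case, yet the 12-drive configuration keeps 20TB usable even with two drives of redundancy, which is the whole argument for more, smaller drives.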

Looking at hard-drive prices right now, I think the 4TB drive is definitely still the best bang for your buck. I was pretty tempted by the 6TB drive prices, but they’re still a bit too expensive to compete with the 4TB drives. However, the way things are looking, I’d be surprised if I wasn’t incorporating 6TB drives into next year’s DIY NAS blogs. Here are the two drives that I wound up going with:

2016 NAS HDDs
Western Digital Red (WD40EFRX)      4 TB
Seagate NAS HDD (ST4000VN000)       4 TB

I typically wind up picking drives from 2-3 different manufacturers for a couple reasons:

  1. Avoid Bad Batches: When buying drives in bulk from the same vendor, you’re extremely likely (but not guaranteed) to get drives that were all manufactured in the same batch. Typically defects in hard-drive manufacturing result in the same issue across the same batch. So if you had 7 disks in your NAS that you bought all from the same vendor at the same time and those drives came from a bad batch, you might see similar issues start popping up at the same time on each of your drives.
  2. It Enables Me to Buy Cheap Drives: There are inexpensive HDDs out there that are quite a good deal compared to their contemporaries. You may think the price is "too good to be true," but this is a good way to save quite a few dollars and count on the redundancy within your array to protect you just in case it actually is too good to be true. This usually applies more to the EconoNAS builds that I do, but it's still a great way to trim some of the price off your own DIY NAS build. That being said, be careful: sometimes you get what you pay for!

And the finishing flourish on this year's NAS was an additional 3D-printed piece: a "case badge" that we designed and printed on the 3D printers at TheLab.ms. I liked the final product so much that I printed a handful more. I've got enough for at least a couple more years' worth of NAS giveaways.

The DIY NAS: 2016 Edition nearly broke the bank. I wound up spending nearly $2,000 in total, almost as much as I spent building my latest gaming rig. In the future, I'd prefer not to get anywhere near this price point for a NAS build. However, because I'm intending to upgrade my existing NAS, I won't be spending that entire amount all at once. In fact, when I do decide to upgrade, I'll probably do it gradually over a few months: first slowly upgrading the remaining HDDs from 2TB to 4TB and then upgrading the remaining components. That being said, the 2016 NAS is a fantastic little machine which packs quite a punch. Here's a breakdown of all the parts and their costs:

Final Parts List

Component       Part Name                                         Count   Cost
Motherboard     ASRock C2750D4I (specs)                           1       $418.04
Memory          Crucial 16GB Kit (8GBx2) DDR3 ECC (specs)         1       $86.99
Case            U-NAS NSC-800 Server Chassis (specs)              1       $199.00
Power Supply    Athena Power AP-U1ATX30A (specs)                  1       $43.14
SATA Cables     Monoprice 18-Inch SATA III 6.0 Gbps (Pkg of 5)    2       $6.99
OS Drive        SanDisk Cruzer 16GB USB Flash Drive (specs)       1       $7.31
Cache Drives    Samsung 850 EVO 120GB SSD (specs)                 2       $67.99
Storage HDD 1   WD Red 4TB NAS - WD40EFRX (specs)                 3       $149.49
Storage HDD 2   Seagate NAS HDD 4TB (ST4000VN000) (specs)         4       $139.00
TOTAL: $1,894.93

Photo gallery: the U-NAS NSC-800 (drive sleds, miscellaneous parts, drive cage innards, inside top with PSU and SSDs, backside, and right side); a few of the many SATA cables; the ASRock C2750D4I; the Samsung 850 EVO 120GB SSDs; 16GB of Crucial ECC DDR3 RAM; the Seagate NAS 4TB HDDs; the Western Digital Red 4TB HDDs; and almost all of the NAS parts (ignore the red cables and PSU!).

Hardware Assembly, Configuration, and Burn-In


No matter how much research I do, there are always one or two things that I goof up, and the DIY NAS: 2016 Edition is definitely no exception. First of all, I had a small power supply that I'd tried to use in a previous NAS build (coincidentally, a goof-up from an even earlier NAS build) that appeared to be the size and shape of a 1U power supply, but apparently I was mistaken. When I first attempted to fit it inside the U-NAS NSC-800, it wasn't even close; it was too skinny to line up with the screw holes on the back of the case. I was tempted to see what kind of creative solution I could come up with to use that power supply and the case together, but I'd prefer that everybody be able to build the exact same thing I did by ordering parts from their favorite vendors. I wound up ordering a real 1U server power supply instead.

But when the new power supply showed up I was aghast to discover it was too short! There are two posts towards the front of the case that I assume are intended for the power supply to sit on. Because the power supply wasn’t long enough to reach those posts, it essentially was “floating” in midair parallel to the top of the drive cage. I was tempted to order another, longer power supply but I thought that was stupid. I was pretty confident that there would not be many ill effects of the power supply hanging in midair like it was. However, I did think of one worrisome scenario—shipping. Since I plan to be shipping this NAS to a lucky winner in a month or so, I knew that the NAS would get jostled around quite a bit between here and there. I was worried that it might not survive the trip.

So I called on Pat and his seemingly infinite 3D-printing and modeling expertise. I asked Pat how hard it'd be to design some sort of spacer to slide around the power supply and provide the missing vertical support to its other side. Pat laughed at me like an all-knowing father laughs at his young child, grabbed my caliper to take some measurements, and by the next day had designed this: a Spacer Bracket for a 1U Power Supply, which we subsequently printed on one of the 3D printers during our next trip to TheLab.ms, a Plano-area makerspace.

Don’t have access to a 3D printer? No problem! I talked Pat into listing the bracket on Tindie so that people who want to follow this build out 100% had the option of having that same bracket. Check out PSU Bracket for U-NAS NSC-800 NAS Server Chassis on Tindie today!

I don't think that this spacer is required at all, so there's no need to start searching wildly for a 3D printer you can borrow or to join a makerspace like TheLab.ms (although I'd highly recommend joining a makerspace!) just to print it. It might come in handy if you plan to move your NAS around frequently; if the NAS isn't going to be moved often, simply removing the power supply before a move is probably the better option. I would've removed the power supply prior to shipping it, but I didn't want the lucky winner to have to reassemble the NAS before being able to use it.

The next goof-up was my worst of all; at least it was for this NAS. The U-NAS NSC-800 came with its drive cage already cabled up with something I'd never seen before: SAS/SATA cables, with all 4 cables consolidating into one big connector that I learned was a Mini-SAS connector. Being the neophyte that I am, I simply assumed that I'd need a "reverse breakout" cable to hook into that Mini-SAS connector and then plug the SATA ends into the motherboard, and I was wrong, oh so very wrong. In order to use the cables that came inside the U-NAS NSC-800, there would need to be some sort of SAS controller for them to plug into. What I had to do instead was remove the back of the case to access the drive cage's cabling, remove the existing cables, and replace them with the glut of SATA cables I've been maintaining since running short while building a previous NAS.

I had hoped that overcoming my own knuckleheadedness (is that even a word?) would be my only obstacle in assembling the case; however, there was one remaining obstacle: space. The U-NAS NSC-800 gives you very little room to work with. Once I took the cover off the case, I knew I was going to hate working inside it, and boy was I right! The motherboard mounts on the left side of the case to the inside of the case's frame, and it mounts rather unconventionally. There are four standoffs which line up with the Mini-ITX mounting points; however, you screw into them from different directions on the different sides of the case. At the top of the case, you screw down into the motherboard and standoff; at the bottom of the case, you screw into the motherboard from the reverse side. I can honestly say I've never installed a motherboard quite like that, or even seen one mounted like that. The other peculiar part of this install is that a thin plastic sheet, a little bit bigger than the motherboard, is included with the case. The motherboard sits atop that sheet; I assume this is to protect the circuitry on the bottom of the motherboard from accidentally shorting out against the case.

I have two pieces of advice for anyone who wants to build a similar machine around the U-NAS NSC-800:

  1. Do as much testing of components as you can outside of the case.
  2. Hook up everything on the motherboard before installing it.

When you consider everything the motherboard hooks into, especially the 10 SATA cables and the ATX power cable, the motherboard gets pretty tricky to move around inside the case. This is exactly why I prefer to mount the motherboard first and then hook up the cables, but that's impossible with this case. To help illustrate some of my difficulties and challenges assembling this computer, I decided to record it all on video and share it on YouTube:

As you can see from the video, there were some points that I absolutely hated working in this case. Take, for instance, the number of times I installed and removed the SSDs, or the times I struggled putting the case’s cover back on. And the kitchen was definitely rated NC-17 as I carefully maneuvered and worked on installing the motherboard. But that being said, I was pretty excited when I slipped the cover on, booted it up for the first time and saw that all of the RAM, the two SSDs, and all seven of the HDDs were recognized. All I needed at this point was a tiny bit more good luck to survive the burn-in test and I’d have the most difficult part of the build behind me. I may have hated working in the case, but I loved the final product quite a bit more!

Hardware Configuration

This year’s hardware configuration was pretty much the same as last year’s, considering the similarity between the two motherboards. The ASRock C2750D4I features a pretty straightforward BIOS. And I was already expecting the only curve ball: because of the number of SATA controllers in there (Intel and Marvell), it’s a bit overwhelming looking at all the different SATA options. That being said, I validated the same items and made effectively the same changes in the BIOS as I did last year:

  1. Enabled S.M.A.R.T. for the hard-disk drives.
  2. Quadruple-checked that ECC was enabled and that the installed RAM was detected as ECC.
  3. Configured the Boot Options so that the USB is the first device it would attempt to boot from.
  4. Set the Primary Graphics Adapter to Onboard.

Burn-In

The weekend I finished putting the hardware together, I began to put it through its paces. There weren’t a whole lot of different things to stress test, because there are essentially three components to the machine: the motherboard, the RAM, and the disk drives. To test the sticks of RAM, I put Memtest86 on a spare USB flash drive and booted the machine off of it. Using the default values, I let Memtest86 run overnight. I checked the machine in the morning, ensuring that it had completed at least three full passes, which gave me confidence in the quality of the memory.

After the successful memory tests, I booted off a different flash drive with Stresslinux on it and ran the same stress tests using two different durations: a two-hour run and an eight-hour run. For those of you interested in the exact parameters, aside from the duration I didn’t stray far from what the stress man page offers as an example.
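For reference, the example invocation from the stress man page looks like the following; the only thing I changed was the duration, and the two-hour timeout shown here is just illustrative:

```shell
# Spin up 8 CPU workers, 4 I/O workers, and 2 workers that each
# repeatedly allocate and free 128MB of memory; stop after two hours.
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 2h
```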

Had I run into problems during the two-hour test, I would have had reason to log on to a new console and monitor some of the various system temperatures. But since the two-hour test went through without a hitch, I was confident I’d see the same results at the end of the eight-hour test. Sure enough, the machine passed both tests with flying colors.

FreeNAS Configuration

You’d think I’d have this memorized, having done it twice a year for at least a couple of years, but that’s not the case. I typically wind up referring back to my own blog posts to make sure I remember how I set things up in the prior year’s DIY NAS machines. I suspect some of this is because new FreeNAS releases between builds move things around in the user interface a bit, but primarily it’s straightforward and easy enough that I’ve never had enough difficulty to justify etching the steps into memory.

However, in this build I’m experimenting with a new feature: cache SSDs. So I thought I’d break the configuration up into the “typical” steps and the new steps I had to go through in order to use the SSDs as a read and write cache.

Typical Configuration

Upon the initial boot, you’re asked to update the root user’s password. Once you’ve done that, you’re free to log in to the FreeNAS web interface, which is where all of my typical configuration is done. The newer versions of FreeNAS kick off a setup “wizard,” and being the arrogant techno-blogger that I am, I exited right out of it and began configuring things manually. The first two items I updated were the hostname and the time zone.

Moving on, I set up users and groups. First, I created a user whose credentials match the ones I use locally on my desktop (and on my other computers, in case they’re needed). After that, I created a group named shareusers and added my new user account to that group.

Having created the users, I moved on to creating the FreeNAS volume (zpool): I added all seven of the 4TB hard-disk drives to a single array. I picked RAIDZ2 as my RAID level, which tolerates the failure of up to two of the array’s hard-disk drives. Once the FreeNAS volume was created, I added a FreeNAS dataset to the volume. I named the dataset “data” and then adjusted the permissions so that the Owner (group) of that dataset was the shareusers group I created earlier.
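I did all of this through the FreeNAS web interface, but for the curious, a rough command-line sketch of what the GUI builds looks something like this (the pool and dataset names match my setup; the device names are just examples):

```shell
# Create a single RAIDZ2 vdev across all seven 4TB drives
# (tolerates up to two drive failures), then add the dataset.
zpool create vol1 raidz2 da0 da1 da2 da3 da4 da5 da6
zfs create vol1/data

# Verify the pool's layout and health.
zpool status vol1
```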

Next up, I enabled both the S.M.A.R.T. service and the CIFS service, for hard-drive monitoring and file sharing with Windows computers respectively. I configured the S.M.A.R.T. service by providing it an email address to send reports to. With S.M.A.R.T. configured, I turned my attention to the CIFS service: I updated the NetBIOS name, Workgroup, and Description to what was appropriate for my home network, then created a new CIFS share of the “data” dataset (/mnt/vol1/data). Finally, I used Windows File Explorer on my desktop to browse to the new share and make sure I could read, write, and delete files in it.
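I validated the share from Windows, but as an aside, a quick way to sanity-check a CIFS share from a Linux or BSD machine is smbclient; the hostname and username below are just placeholders for my setup:

```shell
# List the contents of the "data" share; smbclient prompts for the password.
smbclient //freenas/data -U brian -c 'ls'
```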

(Screenshot gallery: exiting the setup wizard; updating the hostname and time zone; creating my user, the shareusers group, and the group membership; FreeNAS creating the 7x4TB RAIDZ2 volume and the “data” dataset; setting the dataset’s permissions for the share user group; enabling and configuring the S.M.A.R.T. and CIFS services; creating and validating the CIFS share; and FreeNAS Autotune.)

But wait, there’s more! Because I’m basing this build off of what I’m likely to upgrade in my own NAS too, it simply wasn’t good enough that the CPU, RAM, storage capacity, and network were all substantial upgrades. I really wanted to knock this one out of the park by adding some SSDs for use as a read and write cache.

Read and Write Caching

Among the things I’ve been curious about is adding some sort of cache to sit in front of my hard drives, mostly because in theory it should be much faster, and it seemed like something neat to play with. Smaller-sized SSDs have become relatively inexpensive, so it seemed worthwhile to see if they’d boost the throughput of the NAS. The other (and primary) reason I was interested in the read/write caches was my eventual plan to use the NAS for some virtualization. Ramping up the speed of local file operations would pay dividends when I started hosting virtual machines on my FreeNAS machines.

In my research, I found the steps I needed to follow already laid out for me in this excellent blog post: Using one pair of SSDs for both ZIL and L2ARC in FreeNAS. For my build, I picked out two Samsung 850 EVO 120GB SSDs to house both the write cache (ZIL) and the read cache (L2ARC). Ultimately, what I wound up doing was creating a 30GB partition on each of the SSDs and then mirroring those two partitions together for the write cache; mirroring the two partitions is critical to the data integrity of the writes. The remaining 90GB of space on each drive went into a striped array for the read cache.
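Following that blog post’s approach, the partitioning and cache setup boil down to something like the sketch below (the device names are examples of how the SSDs might appear; on other hardware they could differ):

```shell
# Lay down GPT partition tables on both SSDs.
gpart create -s gpt ada0
gpart create -s gpt ada1

# First partition on each: 30GB for the write cache (ZIL).
gpart add -t freebsd-zfs -s 30G ada0
gpart add -t freebsd-zfs -s 30G ada1

# Second partition on each: the remaining ~90GB for the read cache.
gpart add -t freebsd-zfs ada0
gpart add -t freebsd-zfs ada1

# ZIL: mirror the two 30GB partitions -- critical, since losing
# unflushed writes on a dying SSD would compromise data integrity.
zpool add vol1 log mirror ada0p1 ada1p1

# L2ARC: stripe the two ~90GB partitions -- a failed cache device
# is harmless, so no redundancy is needed here.
zpool add vol1 cache ada0p2 ada1p2
```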


Power Consumption

I hooked the DIY NAS: 2016 Edition up to my Kill-a-Watt and monitored how many watts it used. As it was booting, the highest it hit was 126 watts. I left the NAS plugged into the Kill-a-Watt for the duration of the benchmarking; during the most intensive write tests, the highest wattage I observed was 95 to 97 watts, and while the machine was idle, it settled down to around 70 watts. Altogether, I left the NAS running on the Kill-a-Watt for 3 days, 2 hours, and 45 minutes, during which it used 6.53 kWh.
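As a sanity check on those numbers, dividing the total energy used by the time on the meter gives the average draw, which lands right between the idle and load figures above:

```shell
# 3 days, 2 hours, 45 minutes = 74.75 hours on the Kill-a-Watt.
# 6.53 kWh over that span works out to roughly 87 watts average.
awk 'BEGIN { hours = 3*24 + 2 + 45/60; printf "%.1f watts\n", 6530 / hours }'
```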

Benchmarks

To benchmark the DIY NAS: 2016 Edition, I used IOMeter and a somewhat scientific (me with a stopwatch) measurement of some file copies from my computer to the NAS. I did the Windows file copy test because it’s a pretty decent approximation of a real-world workload. As a baseline, I first benchmarked my own NAS from 2012, then ran all the same tests on the DIY NAS: 2016 Edition. I kind of expected it, but my little NAS got trounced! Not that I’m making excuses, but it had a bit of a handicap: I continued using my NAS as I normally do, so our regular day-to-day use might have hindered it a little. Still, I highly doubt that’s why it got so badly demolished in the benchmarks.

Here are the tests I performed and how the DIY NAS: 2016 Edition fared in each test:

IOMeter

  • 12 workers, 4K, 100% Read, 0% Random: 17349.16 IOPS and 67.77 MB/sec
  • 12 workers, 4K, 0% Read, 0% Random: 12898.3 IOPS and 50.8 MB/sec
  • 12 workers, All Tests: 9501.66 IOPS and 121.81 MB/sec

Timed Windows File Copy

  • One 40GB file (40GB total) both to and from the NAS:
    • To: 7:15.66
    • From: 12:06.32
  • 31,250 128KB files (~4GB total) both to and from the NAS:
    • To: 28:04.08
    • From: 12:09.82
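To make the stopwatch times easier to compare against the IOMeter throughput numbers, the 40GB single-file copy converts to MB/sec like so (treating 40GB as 40,960 MB):

```shell
# To the NAS: 7:15.66 = 435.66 seconds; from it: 12:06.32 = 726.32 seconds.
awk 'BEGIN {
  printf "to:   %.1f MB/sec\n", 40960 / (7*60 + 15.66)
  printf "from: %.1f MB/sec\n", 40960 / (12*60 + 6.32)
}'
```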

As an aside, I also grabbed the same benchmarks for the DIY NAS: 2016 Edition before I added the ZIL and L2ARC, because I was curious what kind of performance bump I might see. Suffice it to say, I did not see a performance boost from the ZIL and L2ARC when running the same tests. I’ll be digging into those benchmarks between now and the end of the giveaway, and I may use that data for a future blog. But for the time being, I’m chalking this up to the fact that neither my home network nor my usage (either typical day-to-day use or my benchmarks) taxes the NAS enough to see the benefits of using SSDs for a read/write cache.

Final Thoughts

First and foremost, I spent a ton of money. I honestly had a real hard time pulling the trigger on all of the parts when I saw how much they’d add up to. Spending this much money on a NAS puts you in the neighborhood of many of the commercial NAS machines from QNAP, Synology, iXsystems, etc. I’m still quite confident that the specifications and features of the DIY NAS compare favorably to those other products, but the sticker price makes it much less of a no-brainer than it has been in years past. That being said, my objective was actually to upgrade my own NAS, which is showing a bit of age. Because I won’t need to replace quite a few of the hard drives, the price tag becomes quite a bit easier for me to swallow. Don’t want to spend nearly $2,000 building your own DIY NAS? I don’t blame you! Make sure you check out the DIY NAS: 2016 EconoNAS blog too; it’s a comparable build that makes a few compromises in total storage and footprint, but at a fraction of the price.

The most disappointing part of the build wound up being the pair of Samsung 850 EVO 120GB SSDs used as the ZIL and L2ARC. This was a feature I was pretty excited to add to the NAS; in theory, it seemed like a great way to accelerate its performance. But ultimately, I believe my network and my usage simply don’t justify the addition of these two caches. Additionally, the machine isn’t exactly whisper quiet like I’d prefer it to be. The one drawback of drive sleds is that there’s little to no material around them to dampen the sound of the spinning drives, so the noise of the seven spinning HDDs escapes the front of the case and accounts for a bit of hum. But the ability to access the drives and swap them out without opening the case is a nice feature, and it makes living with that extra noise a fair trade-off.

My favorite part of the DIY NAS: 2016 Edition almost wound up being my least favorite as well. I have a strong dislike for small cases, and the U-NAS NSC-800 is certainly a very small case; I can’t imagine cramming more components into a smaller space than what’s in the NSC-800. That being said, I do actually love how small the case is, even after working inside it for what seemed like an eternity. I also really like the quality of the drive sleds; they remind me of the drive sleds found in rack-mount servers, whereas my experience with prior cases has been that removable drive sleds usually wind up feeling pretty chintzy and cheap. Even though I hate working inside a small case, the finished product was worth it to me. Of all the components I used for the DIY NAS: 2016 Edition, this one is the most likely to wind up being part of my own eventual NAS upgrade.

If you’re balking at spending almost $2,000 building your own NAS, I don’t blame you! Here are a few ways to trim the price:

  1. Go with the ASRock C2550D4I (~$100 cheaper)
  2. Ditch the SSDs for the ZIL / L2ARC (~$140 cheaper)
  3. Different hard-drive configuration(s) (Varies)

Altogether, I’m pretty pleased with this machine even if it’s way beyond what my own usage seems to require. When it comes time to upgrade my own NAS, these parts are going to get heavy consideration, and I wouldn’t be surprised at all to find that the same case, motherboard, and RAM all wind up in my own NAS by the end of the year.


Update (2/22/16): Congratulations to Dusten Snodgrass of Google+ for winning the DIY NAS: 2016 Edition #FreeNASGiveaway! This year’s #FreeNASGiveaway was by far the most successful, pretty much guaranteeing that I’ll continue this tradition in a few months when I build the 2016 EconoNAS, and on into the future. There were over 1,300 entries to the giveaway in roughly three weeks! At times (especially when it was posted to /r/plex) it was nearly overwhelming to keep track of, which may lead to a tweak or two in future giveaways. Thanks, everyone, for making the #FreeNASGiveaway a success. I look forward to the next one!

Like with the DIY NAS: 2014 EconoNAS, the DIY NAS: 2015 Edition, and the DIY NAS: 2015 EconoNAS, I’ll be giving the DIY NAS: 2016 Edition away to a lucky reader. The giveaway works like this:

  1. You follow me on Twitter, the blog’s Facebook page, and the blog’s Google+ page.
  2. You retweet or share the promotional posts from these social networks (links below) with your own friends and followers. (Note: Make sure that your share is public, otherwise I won’t be able to see it and give you credit!)
  3. Your name gets entered up to three times (once per social network) in a drawing.
  4. After a month or so, I’ll pick a winner at random and announce it here.

Here are links to the best posts to promote on each social network:

If you have any questions, please read the #FreeNASGiveaway rules page, where I explain things in a bit more detail. Please keep in mind that it’s more about the “spirit” of these rules than the letter of the law. If you go to the trouble of helping promote my blog, I’ll do whatever I can to make sure you get an entry into the giveaway, and the best way to ensure your entry is to follow the steps above.