DIY NAS: EconoNAS 2016


Giveaway Update (11/21/16): If you can forgive a Thanksgiving pun, it looks like @chrisgonyea has a little bit more to be thankful for in 2016, thanks to winning this edition of the #FreeNASGiveaway! Chris won in an unconventional fashion: the original winner failed to respond to my numerous attempts to contact them after the first drawing. Undaunted and determined to give this EconoNAS away to a reader, I picked another number from the hat of 450-something #FreeNASGiveaway entrants, pulled Chris’ number, and thankfully had no issues at all contacting him! I appreciate everyone’s interest and participation; while you may not have won this particular giveaway, your participation ensures there’ll be an EconoNAS giveaway in 2017 too!

Have a happy Thanksgiving, everybody!

Quite a few years ago, I decided I wanted to build my own DIY NAS, primarily for the purpose of backing up my Windows PCs. But Google let me down—I wasn’t able to find a good build blog to get me started. So I decided to set out and build my own NAS and blog about it along the way. Much to my surprise, I quickly found there were a number of other people asking Google the same kinds of questions, and my DIY NAS category of blogs has seen the bulk of my traffic over the years.

In an attempt to defend my turf at (or near) the top of the Google search results related to building your own DIY NAS, I’ve been publishing new NAS builds every year. As I’ve gotten more interested in one-upping myself, I quickly found that I was suggesting parts that far exceeded the budget of what my original NAS wound up being. Consequently, I’ve been publishing two very different NAS builds every year: a large, powerful, and expensive DIY NAS, and something more budget-friendly, which I coined “the EconoNAS.” Of the two, the EconoNAS most closely resembles my very first DIY NAS: a machine built from inexpensive parts in an effort to add as much redundant storage as my limited budget could offer. Each year, I do my best to set a budget of around $500 and then ultimately go over it. The 2015 EconoNAS missed that mark quite badly, so this year I doubled down and tried really hard both to exceed the specifications of last year’s version and to bring the price down considerably closer to my goal.

CPU & Motherboard

As you might expect, the component that I find to be the most important is the motherboard. Ideally, it’d be inexpensive (under $100), come in a small form factor, have 6 or more SATA ports, include an onboard Gigabit network controller, and have some sort of onboard video. Most years, I wind up having to compromise on some of these criteria. Typically the compromise has been on the size of the motherboard: the smaller the motherboard, the more expensive it tends to be, especially when it has a sufficient number of SATA ports to be used in a NAS.

You can imagine my delight when I found that the ASUS B150M-K D3 (specs) was in my price range. This smallish MicroATX motherboard supports CPUs from the Intel Skylake family and features the Intel B150 chipset. The B150M-K D3 has six SATA III (6.0 Gbps) ports, and if additional storage were needed, the board includes one PCIe (x16) expansion slot and a pair of PCIe (x1) expansion slots. To cap it off, the motherboard also features an onboard Realtek RTL8111H Gigabit LAN controller. Normally when shopping for DIY NAS components, I agonize over the motherboard and pore over options for what seems like an eternity, but when I saw this board’s list of features and its price tag, I immediately purchased it.

My budget ultimately made my CPU choice for me. I picked the Intel Celeron G3920 CPU (specs) largely because it was the least expensive CPU I could find that was supported by the ASUS B150M-K D3. While the G3920 might not have the performance and sex appeal of its bigger siblings, it is a very capable CPU. Last year’s EconoNAS featured the Intel Pentium G3220, and in comparison the G3920 scores quite a bit higher on the PassMark benchmarks. The icing on the cake is that the G3920 is also more power efficient. More computing ability at lower power consumption is a significant upgrade over the 2015 EconoNAS.

RAM

Because I intend to use FreeNAS, the most controversial part of this build will be the RAM. The controversy being that I’m an advocate of using non-ECC RAM with FreeNAS/ZFS, especially on cost-conscious builds like this EconoNAS. Many people, especially a vocal majority of the FreeNAS forum, don’t agree with this sentiment and think that ECC RAM is an absolute requirement for use with ZFS. Considering that cost is a driving factor in the EconoNAS, non-ECC RAM is an ideal option. Furthermore, my selection of the ASUS B150M-K D3 motherboard eliminated ECC RAM from contention. All that being said, RAM is important, especially with the ZFS file system. The 2015 EconoNAS featured 8GB of RAM, so for this year’s build I decided to up it to 16GB by purchasing the Crucial Ballistix Sport 16GB Kit. The kit features two 8GB DDR3 DIMMs running at a 1600MHz clock speed. Doubling the amount of RAM found in 2015’s EconoNAS is a very nice upgrade for the current build.

Update (10/15/2016): If it sounds too good to be true, it usually is. The motherboard mentioned below does indeed support (I use this term very loosely) ECC RAM, but only if you’re willing to run it as non-ECC RAM. In other words, the RAM fits, the machine will run, but it’ll never do any kind of error checking and correction—you’ll never get the benefit of the ECC feature.

But What if You Want ECC RAM? It’s going to cost you!

Typically, buying ECC RAM meant buying a whole different grade of motherboard to support it, and “economical” was not a word you’d use to describe the prices of those motherboards. However, thanks to [@comfreak][comfreak] from Twitter, I learned that’s not the case with the Skylake generation of Intel CPUs. Buying an MSI B150M Pro-VDH (specs) motherboard and a pair of Kingston Technology 8GB DDR4-2133MHz Unbuffered ECC DIMMs (KTH-PL421E/8G) would cost roughly an additional $25 (4-5%). Being able to add ECC RAM to this build for an additional $25 is a reasonable value, and I certainly wouldn’t fault anyone for choosing that route. Having learned this, I’d be tempted to go the ECC route, but I still think I would end up choosing my non-ECC approach for this EconoNAS build, and most likely for others like it in the future.

Case, Power Supply, and Cables

For my regular DIY NAS builds, I spare no expense on the cases and typically purchase the best NAS case that I can find: something small, compact, loaded with easy-access drive bays (preferably hot-swappable), and ultimately rather expensive. The EconoNAS budget doesn’t allow for such an extravagance. Regardless, I’m pretty excited about the case I chose. The Cooler Master Elite 342 (specs) is a MicroATX mini tower that includes a 400-watt power supply. The included power supply made the case an absolute bargain at around $55. Out of the box, there are enough drive bays to fit six 3.5” drives (five internal and one external), and depending on what 5.25”-to-3.5” adapter you buy, there’s room for 2-3 more drives in the two external 5.25” bays. In terms of drive capacity, the Cooler Master Elite 342 meets or exceeds my favorite DIY NAS case so far, the U-NAS NSC-800. My favorite unexpected feature of this case is the removable drive “cage” (more like a bracket), which contains four of the internal 3.5” drive bays.

The ASUS B150M-K D3 may have six SATA ports, but as is standard these days, it only includes 2 SATA cables. What’s even worse is that one of those cables has the aggravating 90-degree bend that I absolutely hate. I had to dig into my surplus SATA cables to hook up all the drives; if you don’t have any extras of your own, a pack of 5 Mudder 18” SATA III cables is probably a good idea. The included power supply has 4 SATA power connectors, necessitating an additional 1 or 2 adapter cables to split a standard Molex connector into two SATA power connectors, which my supply of excess parts was also able to provide.

Storage

FreeNAS Flash Drive

Of all the parts and pieces in my DIY NAS builds, this is where there’s been the least amount of variation. I have been extremely loyal to the SanDisk Cruzer Fit, using the 8GB and 16GB versions in every single one of my DIY NAS builds except my very first one. If you’re doing your shopping on Amazon, the 16GB version is currently one of their “Add-On” items that you can get added to a qualified order for free. From a budget perspective, it’s still perfect. For this year’s EconoNAS, I ultimately went with the Cruzer Fit’s bigger brother, the SanDisk Ultra Fit 16GB.

Why the change? It’s priced competitively, at about $0.50 more than the Cruzer Fit, and it’s USB 3.0. For a long time, FreeNAS lacked USB 3.0 support, and ever since it was added I have pondered upgrading to a USB 3.0 flash drive. That being said, I doubt USB 3.0’s faster throughput will have much (if any) impact on the day-to-day operations of the NAS itself. Ultimately, the upgrade to the SanDisk Ultra Fit 16GB has more to do with the eventuality that I just won’t be able to find the prior generation at competitive prices.

NAS Hard Disk Drives

Ahhh, the meat and potatoes of every single NAS build. When building a budget-based NAS, my recommendation is to buy as many small drives as your budget allows—that is assuming that their price per terabyte is at least in the same neighborhood as larger drives. Bigger drives almost always have a cheaper price per terabyte, but one detriment of bigger drives is the net storage lost due to your redundancy requirements. If you’re trying to build an economical NAS, instead of using the raw storage to calculate your price per terabyte, add everything up together, factor in storage used for redundancy, and figure out your price per terabyte on the overall net storage. Here’s an example, using some Western Digital Red Hard Drives of varying size to build a 12TB NAS with 2 drives worth of redundancy:

| HDD        | HDDs Needed for 12TB | Price Per HDD | Price Per HDD TB | Total Cost for HDDs | Net Storage | Price per Net TB |
|------------|----------------------|---------------|------------------|---------------------|-------------|------------------|
| WD Red 1TB | 12                   | $60.99        | $60.99           | $731.88             | 10 TB       | $73.19           |
| WD Red 2TB | 6                    | $89.99        | $45.00           | $539.94             | 8 TB        | $67.49           |
| WD Red 3TB | 4                    | $109.00       | $36.33           | $436.00             | 6 TB        | $72.67           |
| WD Red 4TB | 3                    | $147.83       | $36.96           | $443.49             | 4 TB        | $110.87          |

Across the Western Digital Red Hard Drives, when building a 12TB NAS, you’re going to get the most net storage using 1TB HDDs, but get the second-worst net price per terabyte due to the fact that 1TB drives have gotten so expensive. Old drives get expensive when they’re scarce because people still need them for their like-for-like replacements. Of all the drives, the 3TB drive has the best price per terabyte, but that doesn’t carry through to the best price per net terabyte across all the drives due to using 6TB for the redundancy. In the end, the 2TB drive winds up being the best deal despite having almost the worst price per terabyte for each drive. When building an economical NAS, use your budget, redundancy requirements, and capacity requirements to calculate out the net price per terabyte of all options. Then pick the configuration which meets your needs the best.
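That math is easy to script. Here’s a minimal Python sketch of the net-price-per-terabyte calculation; the drive prices are the ones from the table above and will obviously drift over time:

```python
import math

def net_price_per_tb(target_tb, drive_tb, drive_price, redundant_drives=2):
    """Cost metrics for reaching target_tb of raw storage while
    reserving redundant_drives' worth of capacity for redundancy."""
    drives = math.ceil(target_tb / drive_tb)
    total_cost = drives * drive_price
    net_tb = (drives - redundant_drives) * drive_tb
    return drives, total_cost, net_tb, total_cost / net_tb

# WD Red prices from the table above
for size, price in [(1, 60.99), (2, 89.99), (3, 109.00), (4, 147.83)]:
    n, total, net, per_tb = net_price_per_tb(12, size, price)
    print(f"{size}TB x {n}: ${total:.2f} total, {net:g} TB net, ${per_tb:.2f}/net TB")
```

Running it reproduces the table: the 2TB drives win on net price per terabyte even though their raw price per terabyte is nearly the worst of the bunch.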

An added benefit of building the biggest array possible out of smaller drives is that it’s a simpler upgrade path when using FreeNAS—especially for small NAS builds like the ones I do. For the DIYer, adding a drive to an existing zpool is not impossible, but it’s very difficult and it takes planning in advance. For a NAS of this size, it is much easier to simply swap out each drive with bigger ones as they fail or go on really good sales; once all of the drives have been upgraded, ZFS will automatically use as much of the added space as is available on each of the drives.

I debated back and forth between 2TB and 3TB HDDs for quite some time and ultimately arrived at the decision to continue using 2TB hard drives for this year’s EconoNAS. I found a good deal on the HGST Deskstar 2TB hard drive at $48.50; I generally budget around $60 per drive for the EconoNAS. Because the 2015 EconoNAS featured five 2TB drives, I decided to surpass it by adding a sixth drive to this year’s EconoNAS. Had I found a motherboard with capacity for a 7th or 8th SATA drive, I would’ve been tempted to add an additional drive or two. Typically in my DIY NAS builds, I like to avoid buying all of the same model of drive, especially all from the same vendor. I do that to avoid issues with a particular model of hard drive, or even a bad batch of hard drives. However, Backblaze’s ongoing hard drive reliability reports indicate that this particular drive has a very low failure rate: 1.57% of the 4,264 drives failed over 3 years. That low failure rate emboldened me to capitalize on the inexpensive 2TB HGST Deskstars.

(Photo gallery: all of the boxed parts; ASUS B150M-K D3 motherboard; Intel Celeron G3920 2.90GHz; Crucial Ballistix Sport 16GB DDR3-1600 kit; Cooler Master Elite 342 case and accessories; inside the case; 6 x HGST Deskstar 3.5-inch 2TB 7200RPM; SanDisk Ultra Fit 16GB; motherboard, CPU, heatsink/fan, and RAM; all parts ready for assembly)

Final Parts List

| Component             | Part Name                                                | Count | Cost    |
|-----------------------|----------------------------------------------------------|-------|---------|
| Motherboard           | ASUS B150M-K D3 (specs)                                  | 1     | $87.99  |
| CPU                   | Intel Celeron G3920 (specs)                              | 1     | $73.05  |
| Memory                | Crucial Ballistix Sport 16GB Kit DDR3 PC3-12800 (specs)  | 1     | $74.99  |
| Case and Power Supply | Cooler Master Elite 342 (specs)                          | 1     | $63.65  |
| SATA Cables           | Mudder 18-Inch SATA III Cable (Pkg of 5)                 | 1     | $7.99   |
| Power Splitter        | LP4 to 2x SATA Power Y-Cable                             | 1     | $5.14   |
| OS Drive              | SanDisk Ultra Fit 16GB (specs)                           | 1     | $7.45   |
| Storage HDD           | HGST Deskstar 2TB 7200RPM HDD (0F10311) (specs)          | 6     | $49.95  |
| TOTAL:                |                                                          |       | $619.96 |

Hardware Assembly, Configuration, and Burn-In

Assembly

If you’ve seen the time-lapse video of me putting together the DIY NAS: 2016 Edition, then you know what a challenge that machine was to assemble. The U-NAS NSC-800 fits a ton of features into a very small case, which was a pain to work inside. Comparatively, assembling this year’s EconoNAS was a breeze. Even though MicroATX is considered a smaller form factor, working inside the Cooler Master Elite 342 was much roomier than in the other two machines that I put together this year.

Even though I didn’t have to work too hard, I did run into a couple of wrinkles. Firstly, half of the brass standoffs included with the case were threaded for wider screws than are used for mounting motherboards. The included screws (and screws from my excess-parts stash) wouldn’t bite and would pull right back out of the standoffs. Thankfully, I had a number of extras that I was able to raid to replace the defective standoffs. The second wrinkle was the thumbscrew provided to help mount the drive “cage” inside the case. At the bottom of the case, the drive cage had 4 standard case screws fastening it to the case floor, and at the top of the cage was a single thumbscrew attaching it to the part of the case that holds the sixth internal 3.5” drive bay and the one external 3.5” drive bay. The thumbscrew provided was just a bit too tall, and I wound up having clearance issues trying to get a hard drive installed in there. It’s possible that I wouldn’t have had these clearance issues if I’d installed the drives before the motherboard, but I’m skeptical. Thankfully, I was able to use one of the extra case screws and a small stubby screwdriver to replace the problematic thumbscrew.

Lastly, I discovered that the 18-inch SATA cables were longer than I needed; 12-inch cables would’ve probably been good enough. As a result, there was quite a bit of excess slack in the cables to manage. Back when I worked on or fixed friends’ computers more often, I hated finding lots of zip ties inside computers. Much to my chagrin, I used nearly all of the zip ties that came with the Cooler Master Elite 342 to bundle up the extra slack in the SATA cables.


All things considered, it was an incredibly simple assembly, especially when you compare it with what I went through when I assembled both the DIY NAS: 2016 Edition and my own NAS this year. I was actually a bit disappointed that it worked out so well: I was hoping to have to design an object to print with my 3D printer to include as a component in this year’s EconoNAS.

Hardware Configuration

Back when I built my first NAS, there were all sorts of machinations you had to go through in the BIOS in order to get it working just right (or at least it felt that way). Now? It’s just a matter of making sure that USB devices are the only devices the machine is allowed to boot from. While I was tinkering around in the BIOS and looking at the motherboard’s support page, I learned I was on the original BIOS and that there had been a few stability and performance updates in subsequent BIOS releases. So, even though I didn’t strictly need to, I went ahead and updated the ASUS B150M-K D3 to the latest available BIOS.

Burn-In

I typically burn-in my NAS focusing primarily on the motherboard, CPU, and RAM. Of all the components that go into a NAS, these are the most difficult to get replaced, so they get the bulk of my attention. If I have a bad motherboard, CPU, or RAM, then I want to know about it right away, not down the road.

Quite a few people have asked in the past why I don’t do any kind of burn-in on the drives, but I’m not too concerned about bad drives for a few reasons. Firstly, the Backblaze drive reliability reports typically leave me pretty confident in whichever drives I’ve selected for the NAS. Secondly, the hard drives are the only components that have some redundancy. Thirdly, the hard drives are much, much easier to replace. For these reasons, I typically choose not to do any kind of burn-in on the HDDs.

For burning in the memory, I run Memtest86+. If there are no errors found after three passes, then you’re typically in good shape. But usually in my tests, I’ve gone way, way past 3 passes. That’s usually because I get busy working on the blog while I do the various burn-in tests. Between blogging, my day job, and sleep, I’ve been known to let Memtest86+ run continuously for several days! But those first 3 passes are the only ones I ever care about.


I’ll also use some of the load tests found on StressLinux, the Ultimate Boot CD, and Hiren’s BootCD. In particular, I want to put the system under heavy load for a few minutes while keeping an eye on temperatures and such. If everything goes well, I repeat the test and leave it running for around an hour, and finally run a third test for a duration of a few hours (3-4). Assuming there are no random lockups or reboots during any of those tests, I consider the hardware sufficiently burned in.
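Those bootable CDs bundle purpose-built stress tools, but if you just want to peg every CPU core for a fixed duration while you watch temperatures, a crude load generator is only a few lines of Python. This is a rough sketch of the idea, not a substitute for dedicated tools like stress-ng or Prime95:

```python
import multiprocessing
import os
import time

def _spin(deadline: float) -> None:
    # Busy-loop until the deadline to keep one core fully loaded
    while time.time() < deadline:
        pass

def stress_cpu(seconds: float) -> None:
    """Run one busy-looping worker per CPU core for `seconds`."""
    deadline = time.time() + seconds
    workers = [multiprocessing.Process(target=_spin, args=(deadline,))
               for _ in range(os.cpu_count() or 1)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

if __name__ == "__main__":
    stress_cpu(2)  # short demo; use minutes or hours for a real burn-in pass
```

For the test cycle described above, you’d call `stress_cpu` with a few minutes the first time, then roughly an hour, then a few hours.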

FreeNAS Configuration

In setting up FreeNAS for the purposes of these blogs, I usually take a pretty short path toward getting it functional. On your first login, you’re asked to set the root account’s password and are then put into the Initial Wizard, which is quite handy and will help you set things up from scratch, but I always exit out of it and manually set up everything I need.

When manually setting everything up, I first update the hostname (EconoNAS) and domain (lan) to match the rest of my computers. After doing this, I like to reboot and then log back in using the new hostname to make sure it worked. Then I enable the services I’m going to need: CIFS (for Windows file sharing), SSH (for remote access) and S.M.A.R.T. (for drive monitoring). Then I work through each service and configure them:

  • CIFS: I update the NetBIOS Name, Workgroup, and Description to match what I picked for the hostname and domain name.
  • SSH: I use the suggested default settings.
  • S.M.A.R.T.: I update the Email to report field and set it to my email address.

Using the Volume Manager, I then create the volume (ZFS pool) named Storage. I added all 6 of the 2TB HGST drives to the pool and picked RaidZ2 (the ZFS equivalent of RAID 6), which provides two drives’ worth of redundancy. Once I’ve created the volume, I add a dataset to it, name that dataset share, and accept the remaining default values. Then I set permissions on the dataset, changing the owner to the user nobody (more on this below) and making sure that the owner has Read/Write/Execute permissions on the dataset.
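The usable space from that layout is easy to estimate: RAIDZ2 reserves two drives’ worth of capacity for parity, so six 2TB drives net roughly 8TB before ZFS metadata overhead. A quick sanity-check function (my own sketch, not anything FreeNAS provides):

```python
def raidz2_usable_tb(num_drives: int, drive_tb: float) -> float:
    """Approximate usable capacity of a RAIDZ2 vdev.

    Ignores ZFS metadata and slop-space overhead, so real pools
    report slightly less than this.
    """
    if num_drives < 4:
        raise ValueError("RAIDZ2 requires at least 4 drives")
    return (num_drives - 2) * drive_tb

print(raidz2_usable_tb(6, 2.0))  # six 2TB HGST drives -> 8.0 TB usable
```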

At that point, I drill into sharing and create a CIFS share pointed at the new dataset that I just created. For this year’s EconoNAS, I set up the permissions to be wide open by allowing guest access. No password is then required to access the share, and the privileges of the guest account (“nobody” from above) are used when accessing the share. At this point, I pull the share up from another machine and ensure I’m able to read, write, and delete files on the share.


(Screenshots: initial login; exiting the Initial Wizard; updating hostname and domain; enabling services; configuring the CIFS, SSH, and S.M.A.R.T. services; creating the FreeNAS volume and dataset; setting dataset permissions; creating and testing the CIFS share)

Please keep in mind this is a very basic and very wide-open setup for the purposes of keeping things brief in this blog. I have a list of a few other tips that you might want to delve deeper into if you’re following along:

  1. Create users whose credentials match the credentials used on your network’s PCs. Tighten down the share(s) so that only those users have access.
  2. Set up a monthly scrub of the volume (aka ZFS pool).
  3. Set up periodic S.M.A.R.T. tests of the hard drives (both long and short tests).
  4. Others: leave your tips in the comments below!

Benchmarks

Power Consumption

One of the sneaky costs of a NAS is power consumption, so when building a NAS, I’ll typically have it plugged into a Kill-a-Watt to see how much power it is consuming at any given moment. Usually, I use the numbers to come up with a best-case and worst-case scenario for power consumption, then use my most recent power bill to try and figure out my monthly costs to keep it running. I tend to take a look at the power being consumed at first boot, when the machine is in an idle state, during my CPU burn-in tests, and lastly during a write speed throughput test.

| Boot      | Idle       | CPU Burn-In | Disk Throughput |
|-----------|------------|-------------|-----------------|
| 175 watts | 73.9 watts | 96.6 watts  | 85.2 watts      |
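Turning those Kill-a-Watt readings into a monthly dollar figure is straightforward. Here’s a sketch, assuming a hypothetical $0.12/kWh rate; pull the real rate off your own power bill:

```python
def monthly_cost(avg_watts: float, dollars_per_kwh: float = 0.12,
                 hours: float = 24 * 30) -> float:
    """Estimate the cost of running the NAS continuously for a month."""
    kwh = avg_watts / 1000.0 * hours
    return kwh * dollars_per_kwh

# Best case (idle) vs. worst case (sustained CPU load) from the table above
print(f"idle:    ${monthly_cost(73.9):.2f}/month")
print(f"burn-in: ${monthly_cost(96.6):.2f}/month")
```

At that assumed rate, the EconoNAS lands somewhere between roughly $6.38 and $8.35 a month.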

Throughput

For the DIY NAS builder, the most likely bottleneck for you to hit is the speed of your network. In building the EconoNAS, my goal is to hit that bottleneck. I won’t begin to predict what the most common network speed is for DIY NAS builders, but I’m going to guess it’s Gigabit. My preferred throughput-testing tool is IOMeter. I was able to saturate my desktop computer’s Gigabit interface easily with a sequential read test. And to my surprise, a sequential write test was in the same ballpark but a few MB/sec slower. I’m rather pleased that the EconoNAS can pretty much monopolize a Gigabit network connection in both read tests and write tests.

(Screenshots: sequential write throughput and results; sequential read throughput and results)

Conclusion

When last year’s EconoNAS was first published, the price tag was roughly $675. My biggest regret in last year’s EconoNAS was missing my budget so badly: it was 35% over budget. I’m really excited to say this is a regret that I’ve rectified this year. At around $620, I’ve exceeded the budgetary goal by roughly 24%, which is way more worthy of the EconoNAS label than last year’s attempt. If you were to buy and build the 2015 EconoNAS right now using current prices, it’d still cost you in the neighborhood of $530. Building the 2016 EconoNAS costs an additional $90 but gets you the following upgrades:

  • A more powerful CPU
  • Twice the RAM
  • Improved power efficiency
  • An additional 2TB of storage

All of that for an additional $90? I’m sold! The extra 2TB of HDD space is $50 by itself. It’s a bit unfair comparing last year’s NAS against this year’s NAS, but that’s not even the most outrageous comparison that I was able to come up with. I took the key attributes of a NAS machine (number of available drive bays, CPU, RAM, and network interface speed) and did some searching online to compare the 2016 EconoNAS with other popular NAS solutions and even compared it to the DIY NAS: 2016 Edition. Here’s what I found:

| NAS                  | Price   | # of Bays | CPU                 | Passmark Score | RAM (GB) | Network   |
|----------------------|---------|-----------|---------------------|----------------|----------|-----------|
| 2016 EconoNAS        | $264.81 | 6         | Intel Celeron G3920 | 3760           | 16       | 1xGigabit |
| Seagate WSS STEE100  | $449.99 | 6         | ???                 | ???            | ???      | 1xGigabit |
| NETGEAR ReadyNAS 316 | $599.00 | 6         | Intel Atom D2700    | 844            | 2        | 1xGigabit |
| Synology DS1515+     | $699.00 | 5         | Intel Atom C2538    | ~2329          | 2        | 4xGigabit |
| Brian’s 2016 DIY NAS | $768.46 | 8         | Intel Atom C2750    | 3831           | 16       | 2xGigabit |
| QNAP TS653A          | $939.00 | 6         | Intel Celeron N3150 | 1706           | 8        | 4xGigabit |

In comparing the most important features of each NAS, it’s my opinion that the 2016 EconoNAS is a tremendous value. It compares favorably to every single one of the off-the-shelf NAS systems, and I picked the ones that were the most price-competitive. In my opinion, the 2016 EconoNAS even compares favorably to the two other NAS machines that I built this year: the DIY NAS: 2016 Edition and my own NAS upgrade.

That being said, these other NAS systems do have their own unique advantages: they’re all smaller, they all have nice purpose-designed NAS cases with easy access to the hard drives, they almost all have CPUs which are more power-efficient, and for the most part they all have support teams standing behind them. These features carry a pretty hefty price tag, but I wouldn’t fault anyone for thinking that they were a better option. If you’re willing to put it together and support it yourself, there are considerable savings to be had in building your own DIY NAS. If you can live without the really nice NAS cases and easy drive access, you can build the EconoNAS and get even more considerable savings!

Giveaway

Like with the DIY NAS: 2014 EconoNAS, the DIY NAS: 2015 Edition, the DIY NAS: 2015 EconoNAS, and the DIY NAS: 2016 Edition, I will be giving the DIY NAS: 2016 EconoNAS away to a lucky reader. Here’s how this giveaway works:

  1. You follow my blog and myself on Twitter, the blog’s Facebook page, and the blog’s Google+ page.
  2. You retweet or share the promotional posts from these social networks (links below) with your own friends and followers. (Note: Make sure that your share is public, otherwise I won’t be able to see it and give you credit!)
  3. Your name gets entered up to three times (once per social network) in a drawing.
  4. After a month or so, I’ll pick a winner at random and announce it.

Here’s a link to the best posts to promote for each social network:

If there are any questions, please go read the #FreeNASGiveaway rules page, where I explain things in additional detail. Please keep in mind, it’s more about the “spirit” of these rules than the letter of the law. If you go to the trouble of helping promote my blog, I’ll do whatever I can to make sure you get an entry into the giveaway. But the best way to make sure you get your entry is to follow the steps above.

I Bought a 3D Printer Too!


For the past three to four years, Pat and I have been talking about 3D printers. For a long time, we mostly just discussed them and eventually arrived at reasons why we weren’t buying one… yet. Each time, the tone of the conversations was the same: 3D printers were incredibly neat and opened up an entire new realm of possibilities, but we couldn’t quite come up with the justification to make the purchase. Over the years we tossed out quite a few reasons for not being ready to buy a 3D printer, but they all essentially boiled down to these three:

  1. 3D printers are expensive.
  2. We couldn’t think of problems that we could solve with 3D printers.
  3. We utterly lacked the creative skill needed to work with 3D-modeling software.

For the longest time, we used these three reasons as excuses not to buy a 3D printer. But then Pat abandoned our ideology and bought a 3D printer, and not too long afterward he convinced our local makerspace, TheLab.ms, to buy two of its own 3D printers. For a few months, I lived vicariously through Pat’s adventures at home and watched as he helped members at our makerspace start designing and printing their own 3D models.

Every once in a while, I would identify a problem I’d encountered, and we’d come up with a solution that involved designing and printing something. Most famously, my last couple of DIY NAS server builds featured a 3D-printed bracket to add support to the power supply. A variation of that object was designed for my own NAS, adding a couple of brackets to hold a pair of SSDs that I couldn’t quite cram into a tight space. Pat wound up selling those brackets in his Tindie store to other DIY NAS builders who used my build as their own DIY NAS blueprint.

Each time that I thought of a problem that could be solved with a self-designed and printed object, it became clearer and clearer that my prior reasoning was invalid. It was nice that Pat was willing and able to design objects and then print them to solve my problems, but in observing the process he was going through, I began to realize that I was missing out on some challenging fun that could provide hours of enjoyment.

About a month ago, Pat told me that he was shopping for a new 3D printer because he was considering upgrading his own 3D-printing capabilities. He eventually sent me a link to a printer that he’d seen on Craigslist that he thought was a good deal, but was a bit of a sideways upgrade for him. The price of that printer had eliminated my last remaining excuse—I was going to buy a 3D Printer.

New vs. Used

Any time I plan to buy something that I consider expensive, I almost always begin my search looking for a deal on a used one. Since I also had a little bit of insider knowledge and knew that 3D printing is a bit more difficult than most people assume it is, I felt that I could find a printer that someone perhaps got frustrated with and was willing to cut their losses and hopefully save me a few bucks in the long run.

The printers at our makerspace, TheLab.ms, are both FlashForge Creator Pros. They are dual-extruder MakerBot clones with a nice full-metal enclosure. Pat has labored for the last year fine-tuning the printers and training the makerspace’s members who are interested in using them. Our familiarity with these printers led me to search pretty much exclusively for similar MakerBot clones.

Ultimately, I wound up buying the same used QIDI Tech printer that Pat had found on Craigslist. It’s also a MakerBot Replicator Dual Extruder clone, extremely similar to the FlashForge Creator Pro. I picked up the printer for about $450.


But What About New Printers?

The good news is that new printers are not expensive enough to change my opinion on getting into 3D printing. New versions of the same printer that I bought can be found starting around $650. The question I wound up asking myself was: “What does that extra $200 buy me?” The answer to that question was: all the bonuses that come from a new product, like support and warranties; newer firmware on the printer; and, in the case of my specific printer, a newer generation of hardware for the printer.

I had budgeted around $750 to buy a 3D printer, so the new versions were well within my budget. But I wound up deciding to go with the used printer, forgo the benefits of buying a brand-new product, and use the remaining budget ($300) in order to upgrade the printer hardware further. Specifically, I’m interested in upgrading the build surface to something larger and swapping in improved hot ends for the two extruders.

I think there’s value in spending that extra $200 to buy the brand-new printer; I just happen to value the upgrades a bit more. However, I certainly wouldn’t have any objections if someone had the opposite view—3D printing is complicated enough that there’s a lot of value in being able to get support from the manufacturer.

My First Few 3D Prints

A common suggestion for your first few prints is to print things that supplement the printer itself. Thingiverse is full of objects that people have designed, shared, and tweaked for their own printers. Many of these objects greatly improve the function and usability of the 3D printers.

Magnetic Door Latch

The first difference that I noticed between my QIDI Technology Dual Extruder Desktop 3D Printer and the FlashForge Creator Pros that we use at TheLab.ms is that the FlashForge printers’ doors have a magnetic latch to hold the door shut. On the QIDI Tech printer, the door hung loose without any kind of latch and oftentimes swung inside the printer, much to my chagrin. While surfing Thingiverse, I found an object, the QIDI Tech 1 – magnetic doorstop, which I modified to fit my own smaller magnets. My magnetic door latch does a fantastic job of preventing the door from swinging inside the printer, and the neodymium magnets that I used hold the door firmly shut.


Filament-Alignment Bracket

In addition, I decided to add an alignment bracket for the filament to the printer. The bracket restricts much of the two filaments’ travel and acts as a guide for the filament as it goes up through the tubing towards the extruders, which reduces the likelihood of tangled filaments during a print. At TheLab.ms, we had a couple of occasions where the filaments became entangled because of how far they traveled over the course of various print jobs. On at least one occasion the result was a failed print. We haven’t had any similar failures since using the alignment bracket.


Glass Build Surface Retention Clips and Knobs

The best upgrade that I decided to pursue required a pair of objects. Rather than printing directly to the printer’s build surface, I wanted to be able to print on inexpensive picture-frame glass that I picked up at Lowe’s. The advantage of printing on glass is better adhesion of the filament to the heated surface, especially once aided by some Garnier Fructis Style Full Control Non-Aerosol Hairspray. I printed Pat’s Knobs for M3 Brass Standoffs (for FlashForge Creator Glass Clips) and the FlashForge Creator Pro – Corner Glass Clips +3mm that he designed for use with TheLab.ms’s two printers. The clips and knobs have done an excellent job of holding my glass in place atop the heated build plate.


3D Design: Not Exactly my Strong Suit

My biggest concern about 3D printing was my absolute lack of ability with anything creative. I don’t have an ounce of artistic or creative ability in my body. It’s just not something that I’m skilled at doing. Truly creative people are creating fantastically detailed, amazing 3D models and printing them on a daily basis. Before I decided to buy the 3D printer, I knew I’d never be able to do that.

Thanks to Thingiverse, that’s a bit of a moot point. For all the objects that I know I’d never be able to model on my own, somebody’s created and shared a 3D model of the same thing. Considering how many objects are available on Thingiverse, it’s very likely that someone else has already designed whatever object or figurine I’m searching for.

Even better news—I learned that I could actually build 3D models of my own. OpenSCAD calls itself “The programmers’ 3D Modeler.” While I don’t really consider myself much of a computer programmer, OpenSCAD introduces elements of coding and uses that code to render your 3D models. I found that using logic, equations, variables, functions, etc. to build an object was right up my alley.
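To give you a taste of what that code-driven modeling looks like, here’s a minimal OpenSCAD sketch along the lines of the objects I’ve been building. The dimensions are made up for illustration; you’d adjust them to suit your own magnets:

```openscad
// Hypothetical dimensions in millimeters -- adjust to your own magnets.
magnet_diameter = 10;
magnet_depth    = 3;
wall            = 2;

// A small cylindrical cup that a disc magnet press-fits into.
module magnet_cup() {
    difference() {
        // Outer body of the cup.
        cylinder(d = magnet_diameter + wall * 2,
                 h = magnet_depth + wall, $fn = 64);
        // Subtract the pocket the magnet sits in.
        translate([0, 0, wall])
            cylinder(d = magnet_diameter, h = magnet_depth + 1, $fn = 64);
    }
}

magnet_cup();
```

Because everything is driven by those variables up top, swapping in a different magnet size is a one-line change and the model re-renders itself—exactly the kind of workflow that appeals to a non-artist like me.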

Magnetic Webcam Mount

I designed a magnetic webcam mount so that I could attach a Logitech C270 near the print surface for the purpose of monitoring my prints and hopefully capturing some time-lapse video. Using the same neodymium magnets that I used in the printer’s door latch, I built a two-piece object whose base attached to the bottom of the frame that the heated build plate was mounted to. The second piece was an arm that fit into that base that the Logitech C270 mounted to just above the build surface. Ultimately, it didn’t work out because the webcam needed to be much further away from the print surface in order to get decent images of the entire build surface, but as far as being able to design an object with a specific purpose in mind, it was a rousing success for my first try.


Bottle-Drying Rack Shelf Support

My second attempt at designing a 3D part to solve a problem was a success. We have a pair of adjustable bottle drying racks that we use to dry out the numerous bottles we’ve been hand-washing daily for our five-month-old son. What we found is that the upper shelf collapses down to the lower shelf under the weight of all the things that we were trying to load on top of it. Rather than load fewer things, I designed a Shelf Support for the Munchkin High-Capacity Drying Rack. The object slides down over the center spindle and holds up the top shelf at exactly the height we wanted.


What’s Next?

I bought a printer and I’ve even managed to 3D model some of my own designs, so what’s next? LOTS of 3D printing, of course! But don’t let that rather obvious and simple answer distract you from the fact that I managed to save roughly $300 of my budget on the printer. Do I apply that $300 to a different project, like the 2016 EconoNAS, or do I upgrade the 3D printer? Ultimately, I’ll wind up doing both, but I’ll spend that extra $300 on upgrading the printer. Here are the upgrades I’m most likely to do:

  1. Upgrade to a current version of the Sailfish Firmware: The firmware that came with my 3D printer is one of the very early MakerBot Creator firmwares. There is a laundry list of new features available in the latest Sailfish firmware that should improve the function of the 3D printer.
  2. Micro Swiss MK10 All-Metal Hotend Kit with .4mm Nozzle: Upgrading the hot ends of the printer should improve the consistency of the prints. The current extruders include some plastic tubing, which causes some variation in the temperature of the filament as it works through the extruder. Worst of all, this plastic tubing tends to get clogged with filament; I’ve got one extruder that I think is partially clogged for exactly this reason. Most importantly, the net effect of the all-metal hot ends is that print speed can be increased. At TheLab.ms, we’ve been able to increase print speed by 50% via this same upgrade.
  3. Removable Heated Build Plate Upgrade: The upgrade that I want the most is to increase the amount of print surface inside my printer. The stock build plate on the printer is 9” x 6”. Equivalent printers with larger print surfaces are the ones that tend to get quite expensive. The upgraded build plate measures 11” x 6”. Those added two inches increase the printable volume from 324 cubic inches to 396 cubic inches, a gain of right around 22%.
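The math behind that 22% figure is simple enough to sanity-check, assuming the stock ~6” of Z height applies to both build plates:

```python
# Build volume before and after the plate upgrade (dimensions in inches).
# The 6" Z height is the printer's stock travel and applies to both plates.
stock   = 9 * 6 * 6    # 324 cubic inches
upgrade = 11 * 6 * 6   # 396 cubic inches

gain = (upgrade - stock) / stock
print(f"{stock} -> {upgrade} cubic inches, a {gain:.1%} gain")
# prints: 324 -> 396 cubic inches, a 22.2% gain
```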

Between now and when I upgrade, I’m quite content to continue both working on my own 3D Models and printing things that I like off Thingiverse. If you’re interested in what I’ve been up to, feel free to follow me over on Thingiverse. I imagine I’ll be pretty social with the things that I’m printing. For now, I’m going to start wrestling with putting together a few more copies of the Velociraptor Business Card which I printed over the course of last weekend. What about you guys? What kinds of projects would you use a 3D printer for?

Nextion Enhanced HMI Touch Display (NX4024K032) Review


Earlier this year, I published a blog reviewing the Nextion HMI Display from ITEAD and I was really excited by the product. So naturally when ITEAD released the next iteration, the Nextion Enhanced HMI Display, I wanted to get my hands on one and think about building a project around it.

I wound up coming up with an idea for a project that ties a few blog topics together, including some yet-to-be-written blogs about my “new” 3D printer. I put together a very early and rudimentary prototype of that project to help me review the capabilities of the 3.2” Nextion Enhanced HMI Display.

Nextion Enhanced HMI Display

I received the NX4024K032 from ITEAD. Its key features are:

  • A 3.2” TFT display with a resolution of 400 x 240
  • Battery-powered real-time clock (RTC)
  • 16 MB of flash storage
  • 1,024 bytes of EEPROM
  • 3,584 bytes of RAM

The Nextion Enhanced HMI Displays appear to be similar enough to the earlier Nextion HMI Displays; the available resolutions seem to be mostly the same. The most exciting feature that I found on the Nextion Enhanced HMI Display was its 16 MB of flash storage for storing the interfaces that you build inside the Nextion Editor, which is quadruple the amount of flash on the earlier model.


Also, there appears to be an additional connector on the Nextion Enhanced HMI Display that wasn’t on the prior models at all. Its pins are labeled Ground, IO_0 through IO_7, and +5V. I’m assuming this is some sort of interface that could potentially be used with other hardware, like the expansion board for the Nextion Enhanced Display (the I/O Extended board).


Nextion Editor

The Nextion Editor continues to be available as a free download for building the interfaces uploaded to the flash storage on the Nextion Enhanced HMI Display. It’s possible to transfer the interface via serial directly to the device, or via the onboard MicroSD card reader. The Nextion Editor has also seen some progress and revisions since the end of last year. Keep in mind that I haven’t been a prodigious user of the editor over the past year, but the newer version is much easier to use than I remember the older version being.


Brian’s Server Monitor Project Prototype

As you may know, I’ve blogged about building my own DIY NAS server as well as building my own homelab server. The idea I came up with for my Nextion Enhanced HMI Display project was a simple little server monitor. I decided I wanted to pull together a number of different blogs all into one project: my ESP8266, my DIY NAS, my homelab server, and my 3D Printer (blogs coming soon!). What I decided to do was build a little “server monitor” that sits here on my desktop by my computer whose purpose was to keep an eye on my NAS, my homelab machine, and my website.

For my prototype, I decided that I’d start off simple and develop some code for the ESP8266 that would ping each server. Based on the responses, it’d display a page on the Nextion Enhanced HMI Display indicating which servers are up and which are down. And to cap things off, I’d design and 3D print some sort of case to hold all the hardware and prop up the project.
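As a rough illustration of that logic (this is not the actual ESP8266 firmware, and the host names are placeholders), here’s a Python sketch: ping each host, then build the serial command that tells the Nextion display which page to show. Nextion instructions are plain ASCII terminated by three 0xFF bytes:

```python
import subprocess

SERVERS = ["nas.local", "homelab.local", "example.com"]  # placeholder hosts

def is_up(host, timeout=1):
    """Ping the host once (Linux ping flags) and report whether it answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def nextion_cmd(command):
    """Frame a Nextion instruction: ASCII text plus the 0xFF 0xFF 0xFF terminator."""
    return command.encode("ascii") + b"\xff\xff\xff"

def status_page(statuses):
    """Pick page 0 (all up) or page 1 (something down), like my mocked-up screens."""
    return nextion_cmd("page 0" if all(statuses) else "page 1")

# On the real device, statuses would come from [is_up(h) for h in SERVERS]
# and the framed command would be written to the display's serial port.
# Example: pretend the NAS responded but the homelab server didn't.
print(status_page([True, False]))
```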

At this point, the prototype is more of a proof of concept than anything else. It’s a long ways off from being a finished product, and there’s a laundry list of features that I’d like to incorporate into it. However, for the sake of demonstrating the 3.2” Nextion Enhanced HMI Display, I mocked up a few screens and loaded them up.

All told, it took me maybe a couple of hours to create the screens and get them loaded onto the Nextion Enhanced HMI Display. And to be honest, most of that time was spent staring at images and getting them resized to all fit the way that I wanted to on the display.

Conclusion

I was pretty excited about the Nextion HMI Displays at the beginning of this year, and nothing about the new Nextion Enhanced HMI Displays has tempered that excitement. The displays are both low-cost and easy to develop solutions on. They are capable enough to run standalone interfaces that you create in the Nextion Editor. But what really has me excited is the ability to incorporate other hardware like the Arduino and Raspberry Pi in order to create more complicated devices.

Regardless of your expertise and interest, the Nextion Enhanced HMI Displays offer something for most tinkerers. You could build a little device like a smart picture frame or a touchscreen menu using only the display and the Nextion Editor. Or, if you wanted to get more complicated, you could easily add an interface to your Arduino and Raspberry Pi projects.

Working inside the Nextion Editor was pretty simple. It was quite easy to throw together a few screens full of images, buttons, text, gauges, etc. The Nextion Editor “compiled” the entire thing into a single file that I copied to a MicroSD card. I plunked the card into the Nextion Enhanced NX4024K032 display and powered the unit back up; once it booted, it copied down the new file. At that point, all that was left was to power off the display and remove the card. The next time it booted up, it was running the new interface!

I said nice things about the Nextion HMI Display at the beginning of the year, and that’s also the case for the newest Nextion Enhanced HMI Displays. I’m especially pleased with the progress that ITEAD Studio has made in developing the Nextion Editor, which I found much easier to use this time around. I’m also pretty excited that the new enhanced displays tout more flash storage. I’m intrigued by the new input/output interface and hoping I can find some additional documentation or examples of how to put it to use. But above all else, I’m really jazzed at how affordable this product is. The exact unit that I’m reviewing is currently listed for $24.50 on the ITEAD Studio website. The other displays range from 2.4” all the way up to 7.0”, and the price range on those products is about $18 to $82. For what you can do with them, they all seem to be priced very competitively.

I’m also pretty jazzed about my little “server monitor” project that’ll feature this Nextion Enhanced HMI Display (NX4024K032). It’s going to be a fun little project that touches on a few of my blog topics. I’ll be using my 3D printer to print a case to hold one of my ESP8266 boards, perhaps a motion sensor, hopefully an LED, and also this Nextion Enhanced HMI Display. With some luck, I’ll write an Arduino application that can monitor both the web interfaces and the ping responses of my blog out on the Internet, my homelab server, and my DIY NAS. I love it when half a dozen or so blog topics all converge into another topic! What kinds of things would you use the Nextion Enhanced HMI Display for?

Building a Homelab Server


A few years back, I built my first NAS, and just this past spring, I upgraded my NAS to bring it up to date. In between building those two machines, I got into the habit of building a new NAS every six months (or so), because it continues to be an interesting project to repeat and a rewarding one to write about.

One of the things I always lamented about my NAS machines is that I wasn’t really thoroughly utilizing them. There’s plenty of free storage space that’s slowly being nibbled away by backups of my Windows machines, but I don’t really have any dramatic need for storage beyond backups of a few PCs: no staggeringly large collections of media, games, or anything else that I imagine starts to take up quite a bit of space. In discussing this unused storage space, Pat convinced me that I should get off my butt and build a homelab server like he did ages ago, but in my case leveraging my FreeNAS box for storage.

What’s a Homelab Server for, Anyways?

I’m probably not the best guy to ask to define what a homelab server is, but I’ll still take a stab at it. Nearly twenty years ago, I remember being envious of a friend’s home office. He had quite the collection of secondhand computers from his office fulfilling a variety of purposes. He even had all of his networking equipment set up in something very similar to a Lack Rack. What’d he do with these computers? God only knows! If I recall correctly, he was working on numerous different certifications, and he used all of that hardware to practice and prepare for his tests.

Fast-forward to today, and we have the computing power to do all that on a single machine thanks to virtualization, and this purpose is at the core of what a homelab server is. Effectively, what people are doing is using a single machine to emulate all those secondhand servers that my friend had in his spare bedroom.

Technically, my DIY NAS machine could be used as a homelab server; the latest version of FreeNAS runs atop FreeBSD 10, which features the bhyve hypervisor for hosting virtual machines. Right up until I upgraded my NAS this year, I was quite interested in the possibility of running my various virtual machines alongside FreeNAS. Ultimately, Pat wound up convincing me that separate hardware was the better direction to go in.


Important Features and Functionality

So, what exactly did I need a homelab server for in the first place? My initial reason is pretty silly—I wanted to show off by using my NAS as the primary storage of other machines! I built a series of three two-node 10GbE networks here at the house which interconnect my primary desktop PC, my NAS, and now my homelab server. Just for the sake of doing it, I’ve wanted to host a machine’s (virtual or otherwise) primary storage on my NAS and still get faster performance than a typical platter hard-disk drive. The fact that I can do that affordably at my house is a bit mind-blowing, and I really wanted to see it in action.

On top of that, I had some practical uses that I wanted to dedicate virtual machines to:

  • Dedicated OctoPrint machine for my “new” 3D printer (a future blog topic)
  • A better test web server for working on my blog
  • A multimedia server that pushes content to my different Fire TV and Chromecast devices
  • Home Automation using openHAB

I’m not unfamiliar with virtual machines. I’ve personally tinkered with a number of different virtualization packages over the years: VMware, VirtualBox, Kernel Virtual Machine, etc. And professionally, it’s been over a decade since I worked directly with machines that weren’t virtualized.

I cobbled together a few key requirements that I wanted my homelab server to have.

  • Free or Open Source: Seems pretty straightforward. Who doesn’t like free things?
  • Manageable via some Web Front-end: FreeNAS spoiled me by mostly making it unnecessary to spend effort at the command-line. I’d really like to be able to manage my Virtual Machines much like my NAS, via some sort of web front end.
  • Enterprise-quality Hardware: I mostly wanted this for bragging rights, but I’d also like the platform to be rock-solid stable.
  • Intelligent Platform Management Interface (IPMI): This goes hand-in-hand with the above requirement, but it’s way more practical. I’ve enjoyed being able to manage my NAS via the IPMI interface on the ASRock C2550d4i motherboard, and I think an IPMI interface is also a must-have for my homelab machine.

Hardware

CPU

For the CPU, I picked out a pair of Intel® Xeon® Processor E5-2670 CPUs (specs). The inspiration for this selection came from an article I’d read recently: Building a 32-Thread Xeon Monster PC for Less Than the Price of a Haswell-E Core i7. In this article, I learned that the market is flooded with inexpensive used Intel® Xeon® Processor E5-2670 CPUs. The premise of the article is that you could build a very robust primary workstation around the Xeon E5-2670, but after researching the CPU prices on eBay, I knew I’d found the right CPU for my homelab machine—it made “two” (Haha! Dual-CPU pun!) much sense to build a dual-Xeon machine. With 16 cores capable of running up to 32 threads at up to 3.3GHz for around $100, it was an incredible value and perfectly suited for my homelab server. To cool each of the Xeon E5-2670 CPUs, I picked out a Cooler Master Hyper 212 EVO (specs), a CPU cooling solution that I’ve been happily using for quite some time and that had my utmost confidence for this build.

Motherboard

The CPUs might have been extremely affordable, but dual-CPU motherboards that accept them are still quite expensive. I tinkered around eBay, hoping that I could find a good source for inexpensive motherboards that’d run the CPUs I picked, but I didn’t have much luck. Instead, I opted for a new motherboard. Using the criteria above, I eventually decided on the Supermicro X9DRL-IF (specs). Aside from the dual LGA-2011 sockets and support for my inexpensive Xeon CPUs, I was also pretty excited that there are 8 total DIMM slots supporting up to 512GB of memory, numerous PCIe slots, 10 total SATA ports, and dual onboard Intel Gigabit NICs.

Memory

Memory wound up being my second-largest expense, coming in just over $200. I wound up picking four Crucial 8GB DDR3-1600 ECC RDIMMs. I’m guessing that 32GB is a pretty good starting point for my adventures with different virtual machines. There are an additional four empty slots on the Supermicro X9DRL-IF motherboard, so adding more RAM in the future would be quite easy. Hopefully some day the market will be flooded with inexpensive DDR3-1600 ECC DIMMs like it was with Xeon E5-2670s. If that happens, I’ll look to push my total amount of RAM towards the maximum supported by the Supermicro X9DRL-IF motherboard and CPUs.

Network

I planned my homelab server, my NAS upgrade, and my inexpensive 10Gb Ethernet network all simultaneously. In addition to the two onboard Intel Gigabit connections on the Supermicro X9DRL-IF, I also wound up buying a dual-port Chelsio S320e (specs) network card. I talk about it in quite a bit more detail in my cost-conscious faster-than-Gigabit network blog, but each of the ports on the card is plugged into my NAS or my primary desktop computer.

Storage

The bulk of my storage is ultimately going to come from my FreeNAS machine, but for the sake of simplicity and a bit of a performance boost, I decided to put a pair of Samsung SSD 850 EVO 120GB SSDs (specs) into the machine and placed them in a RAID-1 mirror.

Case, Power Supply, and Adapters

As I have many times when being frugal in the past, I decided to use the NZXT Source 210 (specs) for my case. The Source 210 is getting harder and harder to find at the great prices I’ve grown accustomed to finding it at, but I was able to find it at a reasonable price for this build. It’s inexpensive, well made, fits all of the components, and has lots of empty room for future expansion.

Of all the praises that I heap on the NZXT Source 210, I discovered it had one shortcoming that I didn’t account for—it lacked 2.5” drive mounting solutions. I was briefly tempted to break out my black duct tape and tape my two Samsung SSD 850 EVO 120GB SSDs inside the case, but I eventually decided to just pick up a 2.5” to 3.5” adapter tray that could hold both SSDs instead. Perhaps if I’d been willing to spend a few more dollars on a case, I would have found something that had some built-in 2.5” drive mounts for my SSDs, but I’m still quite happy with the Source 210.

Choosing a power supply was an interesting decision. My gut said I’d need a humongous power supply to power the two Intel® Xeon® Processor E5-2670 CPUs. But at 115W TDP for each CPU and hardly any other components inside the homelab server, I began to reconsider. Based on some guesswork and a little bit of elementary-school-level arithmetic, I was expecting to be using no more than 250-275 watts of power. Ultimately, I wound up deciding that the Antec EarthWatts EA-380D Green (specs) would be able to provide more than enough power for my homelab server.

The one flaw in my selection of the Antec EarthWatts EA-380D Green is that it lacked the dual 8-Pin 12-volt power connectors required by the Supermicro X9DRL-IF motherboard. When shopping for power supplies, I couldn’t find a reasonably priced or reasonably sized power supply which came with two of the 8-pin 12-volt connectors. Instead of paying too much money for a grossly over-sized power supply, I wound up buying a power cable that adapted the 6-pin PCI Express connector to the additional 8-pin connector that I needed. The existence of this cable is ultimately what allowed me to save quite a few dollars on my power supply by going with the Antec EarthWatts EA-380D Green.

Final Parts List


Component                          Part Name                                                        Count  Price
CPUs                               Intel® Xeon® Processor E5-2670 (specs)                           2      $99.98
Motherboard                        Supermicro X9DRL-IF (specs)                                      1      $341.55
Memory                             Crucial 8GB DDR3 ECC (specs)                                     4      $211.96
Network Card                       Chelsio S320E (specs)                                            1      $29.99
Case                               NZXT Source 210 (specs)                                          1      $41.46
OS Drives                          Samsung 850 EVO 120GB SSD (specs)                                2      $135.98
Power Supply                       Antec EarthWatts EA-380D Green (specs)                           1      $43.85
CPU Cooling                        Cooler Master Hyper 212 EVO (specs)                              2      $58.98
GPU-to-Motherboard Power Adapter   PCI Express 6-pin (male) to EPS ATX 12V 8-pin (4+4-pin) female   1      $7.49
SSD Mounting Adapter               2.5” to 3.5” Drive Adapter                                       1      $3.98
Total:                                                                                                     $975.22

Software

Operating System

For my homelab machine’s operating system, I chose the server distribution of Ubuntu 16.04 (aka Xenial Xerus). I chose this version largely because it includes the ZFS filesystem among its many features. The inclusion of ZFS interests me because I’d like to start using ZFS snapshots and ZFS send so that the homelab server can act as a backup target for my NAS. I’m always keeping an eye on hard drive prices, so the next time I see a good deal on some large drives, I may add three or four of them to my homelab server for this purpose.

Virtual Machine Management

Hypervisor

My experience managing virtual machines is pretty limited. In the past, I’ve used VirtualBox and VMware on Windows machines to host virtual machines, mostly out of curiosity. In my various professional positions, I’ve used plenty of virtual machines, but I’ve never been on the teams that have to support and maintain them.

When it came time to pick what I’d be running on my homelab server, I deferred to Pat’s endless wisdom from his own homelab experience and wound up electing to use KVM (Kernel Virtual Machine). I thoroughly appreciate that it is open source, that it can make use of either the Intel VT or AMD-V CPU instruction sets, and that it’s capable of running both Linux and Windows virtual machines. But ultimately, I wound up picking KVM because I have easy access to plenty of subject-matter expertise—as long as I can bribe Pat with coffee and/or pizza.

Virtual Machine Manager

Because I’m enamored with the ability to do almost all of my NAS management via the FreeNAS web interface, I was really hoping that I could find something similar to act as a front end to KVM. My expectation was that I’d be able to complete a significant percentage of virtual-machine management tasks through a browser from any of my computers. For anything else, I intend to have a Linux virtual machine running that I can remote into and use Virtual Machine Manager to do anything that I can’t do easily through the web interface.

Ultimately, I wound up deciding to give Kimchi a try. Initially, I was pretty excited, since Kimchi was available within Ubuntu’s Advanced Package Tool. However, what I found for the first time ever was that it didn’t “just work” like every other apt package I’d installed before. In fact, it took Pat and me quite some time to get Kimchi up and running using the apt package. And once it was actually running, we found it to be quite slow. Finally, I was a bit bummed that the version in the apt package (1.5) was decidedly older than what was available for download on the Kimchi page (2.10). Instead, I wound up following the directions on the Kimchi download page to install it manually, and to my surprise I was able to pull up the Kimchi interface in a browser and do some management of the virtual machines.


I found the Kimchi web interface to be handy for some basic virtual-machine configuration and remote access to the virtual machines. However, trickier configuration, like passing a USB device (my 3D printer) through to a virtual machine, just couldn’t be done via the Kimchi interface. For that kind of virtual-machine management, I’m planning to use something like MobaXterm on my Windows desktop to access an Ubuntu Desktop virtual machine that has virt-manager on it. It’s a tiny bit more complicated than I would’ve liked, but I’m still pretty happy with the amount of functionality that Kimchi provides via the web interface.
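For reference, USB passthrough under KVM/libvirt boils down to adding a `hostdev` element to the guest’s XML, whether you do it through virt-manager or `virsh edit`. A sketch of what that fragment looks like—the vendor and product IDs below are hypothetical; `lsusb` will report the real ones for your device:

```xml
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <!-- Replace with the IDs that lsusb reports for your printer. -->
    <vendor id='0x2b71'/>
    <product id='0x0001'/>
  </source>
</hostdev>
```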

DHCP

I’m a big fan of DHCP servers, primarily because I’m lazy and dislike manually configuring static IP addresses. I already had to manually configure six different network interfaces in building out my inexpensive 10Gb Ethernet network, and I wasn’t really looking forward to continuing to do that for each and every new virtual machine. Setting up a DHCP server to listen on the 10GbE links to my homelab server would make it a bit easier on me when spinning up new virtual machines.
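For the curious, the isc-dhcp-server configuration for something like this is short. A sketch for one of the point-to-point 10GbE links, with made-up addresses:

```
# /etc/dhcp/dhcpd.conf -- example subnet for a two-node 10GbE link.
# Addresses here are illustrative; substitute your own.
subnet 10.9.0.0 netmask 255.255.255.0 {
  range 10.9.0.100 10.9.0.200;
  # No routers option needed on an isolated point-to-point link.
}
```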

Conclusion

At the beginning of the year, I really wanted to have a single server at my house to take care of both my NAS and homelab needs. But as I thought about it more, I found that concept had some constraints I considered less than ideal. I’m still very pleased with FreeNAS, but ultimately, I wanted more options than being constrained to a hypervisor that runs on FreeBSD. Furthermore, I’m a big fan of being able to do maintenance on one set of hardware without simultaneously impacting both my NAS and my hosted virtual machines.

For just under $1,000, I wound up building a homelab server featuring dual Xeon E5-2670 CPUs (2.6GHz, octo-core), 32GB of RAM, two dedicated 10Gb links (to my NAS and desktop PC), and mirrored SSDs for the host’s operating system. As it stands right now, this machine is probably overkill for what I need. Pat’s inexpensive and low-power homelab machine is probably more in tune with my actual needs, but I relished the chance to build a cost-effective dual-Xeon machine.


What’s Next?

I need to finish putting together my OctoPrint virtual machine and get working on designing and printing things in the third dimension, which is sure to be a source for many upcoming blogs. After the OctoPrint virtual machine is sorted out, I’m going to tackle some sort of media-streaming virtual machine. Further down the road, I’d like to leverage the fact that Ubuntu 16.04 now ships with the ZFS filesystem; I wouldn’t mind buying a few large HDDs and using my homelab hardware as a destination for snapshots from my NAS. If you had 16 cores at your disposal in a homelab server, what other purposes would you have for it? What great idea am I currently overlooking?

Building a Cost-Conscious, Faster-Than-Gigabit Network


When we first moved into my house, my first project was to enlist Pat’s help and wire up nearly every room with CAT5e cable so that I had Gigabit throughout the house. At the time, we were both quite confident that Gigabit exceeded my needs. Then I built my first do-it-yourself NAS, and I remember being a tiny bit disappointed when my new NAS couldn’t fully saturate the Gigabit link on my desktop without opening many, many file copies. At the time, I hadn’t yet learned that I was bottlenecked by the NAS’s CPU, the AMD E-350 APU. But I began thinking about bottlenecks and quickly came to the conclusion that the network is the most probable first bottleneck. After building my first NAS, I began regularly building other DIY NAS machines, and thanks to Moore’s Law, I was building NAS machines capable of saturating a Gigabit link before it even dawned on me that my first NAS’s biggest deficiency was its CPU. Earlier this year, I upgraded my NAS and finally arrived at the point where my Gigabit network was the actual bottleneck.

Is a faster-than-Gigabit network really necessary?

Calling my Gigabit network a “bottleneck” is accurate but also a bit disingenuous. The term bottleneck has a negative connotation that implies some sort of deficiency. The Bugatti Veyron is the world’s fastest production car, and it too has some sort of bottleneck limiting its top speed to 268 miles per hour, but nobody in their right mind would describe 268 mph as slow. I was perfectly happy with file copies across my network that measured 105+ MB/sec. In the time that I’ve been using my NAS, I’ve moved all of my pictures and video to it, and I’ve never felt that it lacked the speed to do what I wanted.

This begs the question: Why am I even interested in a faster-than-Gigabit network? For a long time, I’ve wanted some hardware here at the house that can house some virtual machines. I’d like to build out a few little virtual servers for interests that have come up in the past, like media streaming, home automation, and a test server for working on my blog. My original plan was to run those VMs on the same hardware that my NAS is running on, but I ultimately wound up deciding that I didn’t want tinkering with my virtual machines to impact the availability of my NAS, especially since I’d started using my NAS for the primary storage of important stuff.

I was lamenting to Pat one day that I had tons of space available on my NAS, but I felt that the 105 MB/sec throughput was not fast enough for being the primary storage of my virtual machines. Furthermore, I didn’t want a bunch of disk activity from my virtual machines to possibly monopolize my network and impact my other uses of the NAS. Pat pointed out that the theoretical limits of a 10Gb network (1250 MB/sec) were well beyond the local max throughput of the ZFS array in my NAS (~580 MB/sec on a sequential read). With a 10Gbe (or faster) network, I’d have enough bandwidth available to use my NAS as the storage for my virtual machines.
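Pat’s back-of-the-napkin math is easy to sanity-check: a line rate in gigabits per second converts to megabytes per second by multiplying by 125. A minimal sketch (the function name is my own; the ~580 MB/sec figure is the sequential-read number quoted above):

```python
def gbps_to_mbytes_per_sec(gbps: float) -> float:
    """Convert a line rate in gigabits/sec to megabytes/sec (1 Gb = 125 MB)."""
    return gbps * 125.0

link_throughput = gbps_to_mbytes_per_sec(10)  # 1250.0 MB/sec, theoretical
zfs_sequential_read = 580                     # MB/sec, measured locally on the NAS
print(link_throughput > zfs_sequential_read)  # the 10Gb link out-runs the array
```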

Consequently, a seed had been sown; a faster-than-Gigabit network at home would enable me to build my homelab server and use my NAS as the primary storage for its virtual machines. I arbitrarily decided that if my NAS could exceed the read and write speeds of an enterprise hard-disk drive, it’d be more than adequate for my purposes.

Hardware

I immediately set out researching different faster-than-Gigabit networking hardware and quickly reached a conclusion: the majority of this stuff is prohibitively expensive, which makes sense. None of it is really intended for the home office or consumers; it’s intended for connecting much larger networks carrying far more traffic than takes place on my little network at home. All things considered, I think we’re still a long way from seeing people use anything faster than Gigabit in their everyday computing, and the end result is that the price of the equipment is likely to be out of the range of your average consumer’s budget.

What I wound up considering and choosing

Right out of the gate, I thought about re-cabling my entire house with CAT6, or at least running a few extra CAT6 drops to the computers that needed them. But then I researched the price of both network cards and switches that would do 10Gb over twisted-pair copper and quickly concluded that I wasn’t ready to spend hundreds, if not thousands, of dollars to supplement or upgrade my existing Gigabit network.

In talking to Pat, I immediately set off down the path of InfiniBand network hardware. In fact, our ruminating on this topic inspired Pat to build his own faster-than-Gigabit network using InfiniBand. When digging around on eBay, there’s no shortage of inexpensive InfiniBand gear; most shocking to me was routinely finding dual-port 40Gb InfiniBand cards under $20! I was very interested in InfiniBand until I did some research on the FreeNAS forums. Apparently, not many people have had luck getting InfiniBand to work with FreeNAS, and my understanding is that InfiniBand’s performance under FreeBSD has also been a bit disappointing. Without rebuilding my NAS to run on another OS (something I strongly considered), InfiniBand was not going to be the best choice for me.

What ultimately proved to be the best value was 10Gb Ethernet over SFP+ Direct Attach Copper (10GSFP+Cu). SFP+ Direct Attach Copper works for distances up to 10 meters, and my network cupboard is conveniently located on the other side of the wall my desk sits next to; 10-meter cables would easily reach from my desk to the network cupboard. However, running cables up into my network cupboard wound up being unnecessary due to the expense of switches and my desire to be frugal: there just wasn’t going to be room in my budget for a switch with enough SFP+ ports to build my 10Gbe network.

Because I decided to forgo a switch, that meant that each computer I wanted a 10Gb link between would need to have a dedicated connection to each and every one of the other computers in my 10Gb network. Thankfully, my 10Gb network is small and only contains 3 computers: my primary desktop PC, my NAS, and my homelab server. Each computer would be connecting to two other computers, so I’d need a total of six 10Gbe network interfaces and 3 SFP+ Direct Attach Copper cables.

What I Bought

For my desktop PC, I wound up buying a pair of Mellanox MNPA19-XTR ConnectX-2 NICs for just under $30 on eBay. I chose the Mellanox MNPA19-XTR on the recommendation from a friend who had used them in building his own 10Gbe network and said that they worked well under Windows 10. Throughout the writing of this blog, I routinely found dozens of these cards listed on eBay with many of those listings being under twenty dollars, and I was also able to find the MNPA19-XTR on Amazon at roughly the same price.

I wound up choosing a different network card for my NAS for a couple of reasons. For starters, room is an issue inside the NAS; there’s a bunch of hardware crammed into a tiny space, and because of that, there’s only room in the case for one PCI-e card. Since the NAS needed 10Gb links to two other machines but only had that single slot, I couldn’t go with the inexpensive single-port Mellanox MNPA19-XTR ConnectX-2 cards which seem to be abundant on eBay. Additionally, my research (Google-fu) on popular 10Gb SFP+ cards for use in FreeNAS pointed me to a particular family of cards: the Chelsio T3. Other intrepid FreeNAS fans have had good experiences with cards from that family, so I started looking for affordable network cards in it. In particular, I wound up buying a lot of 3 dual-port Chelsio S320E cards for around $90. At the time I bought mine, I could get the lot of three for roughly the same price as buying two individually; having a spare here at the house without spending any additional money seemed to make sense.

Finally, I sought out the SFP+ cables that I needed to interconnect the three different computers. Both my FreeNAS box and my homelab server are sitting in the same place, so I was able to use a short 1-meter SFP+ cable to connect between them. My desktop computer isn’t that far away but my cable management adds a bit of extra distance, so I picked up a pair of 3-meter SFP+ cables to connect my desktop to the FreeNAS machine and to the homelab server. Both lengths of cable, one and three meters, seem to be priced regularly at around $10 on eBay.

In total, I spent about $120 to connect my three computers: $90 on network cards ($15 each for the two Mellanox MNPA19-XTR ConnectX-2s and $30 each for the two Chelsio S320Es) and $30 on the SFP+ cables needed to connect the computers together. This is hundreds of dollars cheaper than if I had gone with CAT6 unshielded twisted pair; by my calculations, I would’ve spent anywhere from $750 to $1,300 more trying to build out a comparable CAT6 10Gbe network.

Assembly and Configuration

Because I’d decided to go without a switch and interconnect each of the three machines with 10Gb SFP+ cables, I needed to be what I consider a bit crafty. Saving hundreds to thousands of dollars still had an opportunity cost associated with it: I’m a network neophyte, and what I had to do completely blew my simple little mind, even though it wound up being a relatively simple task.

My first challenge wound up being that each cable had to plug into the appropriate 10Gbe network interface on each machine. For each end of every cable, there was only one correct network interface (out of the six) to plug the cable into. I solved this problem with my label maker: I labeled each network interface on each of the computers and then labeled both ends of each cable, identifying the machine name and the interface it needed to be plugged in to.

In configuring the 10Gb links, it was only important to me that each machine could talk to the other two machines over a dedicated 10Gb link. Each of those machines already had existing connectivity to my Gigabit network that went out to the Internet via our FiOS service. Each time Pat made suggestions on how this would work, I scratched my head and stared at him in a quizzical fashion. I am not ashamed to admit that I didn’t have enough of a background in networking to comprehend what Pat was describing. He patiently described the same thing over and over while I continued to stare at him blankly and ask ridiculously stupid questions. As he usually does when I’m not following along, Pat drew a picture on his huge DIY whiteboards, snapped a photo of it, and sent it to me. As the light-bulb above my head began to brighten from “off” to “dim”, I crudely edited that photo to come up with this:


Essentially, each of the 3 different 10Gb links would be its own separate network. There’d be no connectivity back to the DHCP server on my FiOS router, so I’d have to assign each of the network cards an IP address manually. I opted to be lazy and used the entire 10.0.0.0 private network for all of my home networking: I assigned one Class C-sized (/24) subnet for use on my Gigabit and WiFi network, and I assigned an additional unique /24 subnet to each of my three 10Gbe links. Because I hate memorizing IP addresses and didn’t want to come up with unique names for each machine’s numerous network interfaces, I edited the hosts file on each machine so that the other machines’ names resolved to the appropriate IP addresses of their 10Gb interfaces.
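For what it’s worth, the hosts-file entries on the desktop ended up looking something like this (the names and 10.x addresses here are illustrative examples, not my actual assignments):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on a Windows desktop)
# Point each machine's name at the far end of the desktop's dedicated 10Gb link
10.0.2.2    nas        # desktop <-> NAS point-to-point subnet
10.0.3.2    homelab    # desktop <-> homelab point-to-point subnet
```

The same trick is repeated on the NAS and the homelab server, each time pointing the other two names at the addresses on that machine’s own point-to-point subnets.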

At the end of my efforts, I put together this basic diagram outlining my entire network here at home:


Performance

The entire impetus for this project was to see my NAS out-perform a server-grade (15,000 rpm) hard-disk drive over the network while using Samba. In a recent article on Tom’s Hardware benchmarking various enterprise hard-disk drives, the highest average sequential read speed for any of the HDDs was 223.4 MB/sec. That number was attained by a relatively small hard drive, only 600GB. This isn’t surprising, since hard-drive speeds are impacted by the size of the platter and smaller drives tend to have smaller platters. Nonetheless, I set 223.4 MB/sec as my goal.

First off, I wanted to see some raw throughput numbers for the network itself. Because FreeNAS includes iperf, I went ahead and grabbed the Windows binaries for the matching iperf version (2.0.8b), fired up the iperf server on my NAS, and tinkered with the client from my desktop. In a 2-minute span, iperf was able to push 74.5 gigabytes across my network, which measured in at 5.34 Gb/sec, or roughly 53% of the link’s theoretical throughput.


Having a crude understanding of how iperf worked, I wanted to see the 10Gbe link saturated. I wound up launching numerous command windows and running iperf concurrently in each, something I later learned I could’ve easily done from a single command line had I bothered to do a little more reading. I lost count of the exact number of iperf sessions I had running at once, but at somewhere around 8 to 10 simultaneous iperf tests, I was seeing 95-98% utilization on the appropriate Mellanox MNPA19-XTR ConnectX-2 network interface on my desktop computer. I must admit that seeing it hit 9.6 Gbps was pretty exciting, and I started to look forward to my next steps.
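For anyone repeating this experiment, the single-command alternative I overlooked is iperf’s parallel-streams flag. A sketch, assuming iperf 2.x on both ends and using “nas” as a placeholder hostname:

```
# On the NAS (server side):
iperf -s

# On the desktop (client side): 8 parallel TCP streams for 120 seconds
iperf -c nas -P 8 -t 120
```

iperf then reports per-stream throughput plus a combined [SUM] line, which saves you from eyeballing numbers across a pile of command windows.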


Nearly full utilization via iperf was great, but it’s nowhere near a real-world test. The hardware in my NAS is very similar to the FreeNAS Mini, so out of curiosity, I dug into quite a few reviews of the FreeNAS Mini to compare its Samba performance to my own. Surprisingly, I found that their results were quite a bit faster than my own (250 MB/sec versus 70 MB/sec), which led me to discover that there are some issues with how I’ve been benchmarking my NAS performance to date, a topic I’m sure to tackle in a future blog so that I can remember how to test it better.

First off, I went ahead and used IOMeter to try and capture the fastest possible throughput. This is the equivalent of running downhill with a brisk wind behind you: I performed a sequential read test using a block size of 512KB. In that dream scenario, I was able to sustain 300 MB/sec for the entire duration of the IOMeter test. I was really excited about this result, as it surpassed my original goal by 34%.

Sequential reads are a great way to find the maximum throughput of a drive, but like most benchmarks, they’re not much of an actual real-world test. Because my NAS was able to surpass my original goal by such a large margin, I began to get hopeful that I would beat that throughput in both directions: reading a file from my NAS and then writing a file to the NAS. For my test, I decided to use an Ubuntu ISO as my test file and started off by moving it from my ISOs folder (on my NAS) to a temporary folder on my desktop. According to the Windows file copy dialog, the speed of that file copy ranged between 260 MB/sec and 294 MB/sec. Afterwards, I moved the file back from my desktop’s temporary folder into the ISOs folder on my NAS; in these file copies, I saw speeds between 220 MB/sec and 260 MB/sec.

In an actual real-world scenario, the NAS outperformed the enterprise HDD in both read operations as well as write operations, which was a pleasant surprise. Before the test, I would’ve guessed that the write speed would’ve been a bit slower, since there’s more work for the NAS to do on a write.

Conclusion

I’m having a hard time deciding what I’m more excited about: the fact that I was able to build this 10Gb Ethernet network between 3 computers for roughly $120, or the fact that my NAS now outperforms a 15,000 rpm drive over a Samba file share. Now that it’s all said and done, I think it’s the fact that the throughput to my NAS across my network is fast enough to beat an enterprise hard-disk drive. In the near term, this means that I can confidently use my NAS as the primary storage for the virtual machines that I’ll be hosting on my homelab machine. Furthermore, it also means that I could mount an iSCSI drive on one of my desktop computers and it’d work as a more-than-adequate replacement for local storage—this is an interesting alternative in the event of a catastrophic failure on one of our computers if we can’t wait for replacement hardware to show up.


But don’t let my preference diminish the other startling discovery from this little project. I think what might be even more exciting to the general public is that a 10Gb Ethernet network connecting two computers can be built for under $40. In my case, it cost an additional $80 to add a third computer. A fourth computer would be even more expensive (8 total network cards, 6 total cables), so at that point it probably starts to make more sense to consider buying a switch.
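That scaling is just full-mesh arithmetic: every pair of machines needs its own cable, and every cable occupies a port on both ends. A quick sketch of the math (the function and its dual-port-card assumption are my own, loosely mirroring the dual-port Chelsio cards used here):

```python
import math

def full_mesh(machines: int, ports_per_card: int = 2) -> dict:
    """Estimate hardware needed to directly interconnect N machines without a switch."""
    cables = machines * (machines - 1) // 2   # one cable per pair of machines
    ports = machines * (machines - 1)         # each cable occupies a port on both ends
    # How many cards that takes depends on the ports each card provides
    cards = machines * math.ceil((machines - 1) / ports_per_card)
    return {"cables": cables, "ports": ports, "cards": cards}

print(full_mesh(3))  # {'cables': 3, 'ports': 6, 'cards': 3} -- this build
print(full_mesh(4))  # {'cables': 6, 'ports': 12, 'cards': 8} -- adding a 4th machine
```

The cable count grows quadratically, which is exactly why a switch starts to look attractive at four or more machines.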

When it was all said and done, I was pretty pleased with myself. I was able to easily exceed my performance goals, and the icing on the cake is that it only cost me about $120 in order to build 10Gb Ethernet links between each of the most important machines in my household.

Nitrogenated Cold-Brew Coffee

| Comments

The first time I attended TheLab.ms’s monthly home brewing group, I just observed and sampled the prior month’s creations—from that point on, I was hooked. Based on the group’s suggestions, I decided to build a keezer for serving my beer and a fermentation refrigerator, aka “The Brewterus.” Among my criteria for the keezer was the ability to use both carbon dioxide and nitrogen to serve beers. Most beers are carbonated, but a few (particularly Guinness) are nitrogenated. Nitrogenated beers tend to have what is described as a creamier and smoother feeling in your mouth as well as a less bitter taste, since carbon dioxide is acidic.

Because I planned to serve nitrogenated brews from time to time, Pat suggested that when I don’t have a home-brewed nitrogen beer around, I should consider nitrogenating a cold-brew coffee and serve it on tap. As an experiment, we brewed a small one-gallon batch of cold-brew coffee and tried it out of the keezer and it was delicious! In fact, it was so delicious that I further modified the keezer so that I could add a dedicated cold-brew coffee tap.

What is Cold-Brew Coffee?

Essentially, cold-brew coffee is coffee brewed using water that’s at room temperature or cooler over a longer period of time, usually at least 12 hours. What’s the big deal in that? To me, the biggest difference is that cold-brew coffee is less acidic than traditional coffee; I personally find it quite a bit easier and more enjoyable to drink. Without pretending to have a doctorate in food chemistry, it appears that coffee’s fatty acids are much more water-soluble at higher temperatures.

Cold-brew coffee should not be confused with iced coffee. Iced coffee is brewed hot and then poured over ice to crash-cool it. Depending on the amount of coffee brewed and the amount of ice in the cup, this could also result in a drink that’s a bit watered down. But the same acidic taste that hot coffee has would also be present in iced coffee.

Beans from Craft Coffee

Pat is my local coffee expert, and a few years ago for Christmas, we bought him a subscription to Craft Coffee. Not really knowing anything about coffee, we were a bit concerned that the gift would miss its mark, but we’ve been pleasantly surprised to find that Pat’s continued his coffee subscription all this time. The beauty of Craft Coffee is that you answer a questionnaire about what kinds of coffee you like to drink and their properties, and then choose among a variety of options, which include shipping you small bags of different coffees that align to your preferences monthly (or on some other schedule of your choosing). In his various blogs about coffee, Pat has always spoken highly of the coffees he’s received as a result of his Craft Coffee subscription.

Based on their options, I wound up going with the Single Origin – Roaster’s Choice coffee. The advantage of a single-origin coffee is that all of the beans come from the same source instead of a blend of different beans as selected by the roaster. It’s my understanding that the geographic subtleties of a particular coffee bean are more pronounced with single-origin coffees. Single-origin beans tend to be roasted lightly, which also suits a personal preference of mine.

Our first shipment arrived on a Friday; in the box we found 72 ounces of coffee divvied up in six different twelve-ounce bags. Opening the box set free quite a bit of coffee-laced aroma, filling our kitchen with its pleasant smell. The Craft Coffee bags have a small hole that allow you to smell the coffee after a gentle squeeze on the bag. I smelled the bag first and tried to pick out the different subtle scents I could identify. I’ve always been a sucker for the way coffee smells, but this was quite a bit better. Firstly, it smelled quite fresh, which shouldn’t be surprising to me as I’ve probably almost always had stale coffee. The coffee also smelled a bit sweet with an undertone of something tangy. I couldn’t quite put my finger on what the scents reminded me of, but it definitely smelled fruity and quite citrus-like.

Craft Coffee, Brooklyn, NY
Producer: Bebes washing station
Origin: Obura Wanonara, Papua New Guinea
Variety: Typica, Bourbon, Caturra
Elevation: 1,500-1,700 meters above sea level
Process: Washed
Notes: Sweet, fruited and floral with notes of apricot, allspice, green tea, mild currant and lemon curd with grapefruit-like acidity.

Want to give Craft Coffee a try? I certainly recommend it! Using the code ‘brian1544’ will get you 15% off of your order! Even better? It might even help supplement my own cold-brew coffee addiction!

Materials Used

  1. 52 ounces of Craft Coffee
  2. 6 gallons of Crystal Geyser spring water
  3. 6-gallon Glass Carboy
  4. Cornelius Keg
  5. Auto-Siphon
  6. Cheesecloth
  7. 3-piece airlock

Recipe

Ultimately, what we decided to do was use 52 ounces of the coffee with 5 gallons of water. Because I’m impatient and didn’t want to spend the afternoon dispensing water from our refrigerator, I went ahead and bought 6 gallons of Crystal Geyser spring water, which was on sale at our local grocery store for $0.89 a gallon. I went with spring water because it seems to be the superior option for coffee brewing due to its mineral content.

First, we dumped all of the coffee grounds into the glass carboy, filled it up with 4 gallons of the spring water, and capped the carboy off with a three-piece airlock, although I think the airlock was probably overkill on our part; most cold-brew coffee recipes simply refer to covering the concoction while it rests. I hoisted the carboy into the Brewterus, which I had set at 52 degrees Fahrenheit. The Brewterus was set at that temperature for the final stages of fermentation of Das DoppelGanger, my most recent home-brewed beer. My understanding of cold-brewing coffee is that the brewing happens at any temperature well below hot coffee’s ideal brewing temperature of 205 degrees Fahrenheit. Most cold-brew recipes indicate that room temperature is satisfactory, which is what led me to believe that the 52 degrees in the Brewterus would be quite fine.

Roughly a day and a half later, I used my siphon to begin transferring the cold-brew syrup into the Cornelius Keg. I used the cheesecloth to strain out any of the coffee grounds that got sucked up by the siphon. I was a bit surprised when I was only able to siphon 3 gallons’ worth of cold-brew coffee syrup out of the carboy. I was prepared for the fact that a large amount of water would be retained forever by the coffee grounds, but I was a bit startled when those 52 ounces of coffee grounds wound up retaining a quarter of the water we added to the carboy.

This is where I worried that I’d made a pretty sizable mistake. Rather than taste the syrup and then dilute it down to my preference, I simply emptied my two remaining gallons of spring water into the keg. It wasn’t until just after the water drained from the last bottle that I thought to myself: I wonder if that’s too much water to add? My concern at this point was that I’d overly diluted my cold-brew syrup with the spring water. In the future, I plan to taste-test more frequently as I add water to the syrup.

During the cold-brew process, Pat had used his French press to brew us a couple cups of the month’s Craft Coffee. Prior to nitrogenating the cold-brew coffee syrup, I used a ladle to scoop up a glass of the cold-brew coffee. In my clear glass, the cold-brew coffee appeared to be a bit more opaque than what had come out of the French press, and while the two tasted quite different, that difference could be expected given the difference in brewing method. I decided that I’d go ahead and hook it up to the nitrogen gas and do a taste test again in a few days.

After giving Pat a sneak preview a day or two later, most of my fears were assuaged when he said that he found the cold-brew coffee to be every bit as drinkable as the cups of French press coffee we’d drunk while preparing the cold-brew concoction. This is especially exciting because Pat had not been very keen on either of our earlier cold-brew experiments.


(Photo gallery: the cold-brew ingredients and supplies, the coffee grounds being poured into the carboy, and the coffee and water resting in the carboy.)

First Impression

Why wait for that final taste test until later on? It takes a while for the pressure of the Nitrogen gas to be absorbed into the contents of the keg. Normally for my beers, I crank up the pressure and wait a couple days, and that has typically involved using carbon dioxide, which is much more soluble in liquids than nitrogen is. I keep my nitrogen at a much higher pressure (~50psi) in part to try and account for that solubility and to increase the amount of gas in the coffee when dispensing. At any rate, it takes a few days under pressure for the nitrogen to infiltrate the coffee and create that awesome cascading effect and wonderful mouthfeel.

My first conclusion? This coffee from Craft Coffee is every bit as delicious as Pat told me it’d be and that he’s been writing about in his blogs. The entire time I’ve been considering cold-brewing coffee and serving it out of my keezer, Pat’s been encouraging me to get my own subscription from Craft Coffee, and boy am I glad for that recommendation! I’ve tried this month’s coffee through a plain old drip coffee pot, brewed via a French press, and in cold-brew form. In every single form, no matter how badly I might’ve accidentally made it, I’ve enjoyed the coffee. I’m not sure how quickly I can drink five gallons of cold-brew coffee, but once it’s gone I’ll certainly be excited for whatever Craft Coffee sends my way next. My favorite feature of the Craft Coffee subscription is the variety of beans they’re capable of sending out and that every month will be different. I’m excited to see what comes next month. Want to give Craft Coffee a shot? Use my code brian1544 and get 15% off!

My most important conclusion from my first impression? Cold-brew coffee is tasty and different! Because of the colder brew temperature, the final product is very much different from either hot coffee or iced coffee; it’s quite a bit smoother and tastes less bitter and acidic. Brewing a gallon of your own cold-brew coffee would be pretty easy. Buy a gallon of spring water and pour off some room for the grounds (save the poured-off water). Then put 10.4 ounces of coarsely ground coffee beans into your gallon of water and fill it back up to the top. Let the grounds and water sit between 24 and 36 hours in the fridge. Finally, use some cheesecloth and another pitcher and carefully pour your cold-brew syrup out of the container through the cheesecloth to filter out the grounds. Get as much syrup out of the gallon of water as possible and then taste your brew—add additional water to taste in case it is too strong. Voila! Your own concoction of cold-brew coffee! It should keep in your fridge for roughly two weeks without problems.
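The one-gallon numbers above are just the keg recipe scaled down (52 ounces of grounds to 5 gallons of water); a trivial sketch of the ratio math, with a function name of my own invention:

```python
def grounds_needed(gallons: float, oz_coffee: float = 52, gal_water: float = 5) -> float:
    """Scale the keg recipe's 52 oz coffee : 5 gal water ratio to any batch size."""
    return gallons * oz_coffee / gal_water

print(grounds_needed(1))  # 10.4 oz of coarsely ground beans for a one-gallon batch
```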

Final Thoughts

In addition to everything I said above, nitrogenating the cold-brew coffee puts the whole thing over the top; it was enjoyable by itself, but once it finished being nitrogenated, it became delicious! Watching the nitrogen cascade up the glass to build the frothy head is mesmerizing. On top of that, the nitrogen that’s infiltrated the cold brew creates a very cream-like texture and mouthfeel, quite similar to the crema formed by milk in an espresso. It’s pretty awesome that it takes me about 20 seconds in the morning to pour myself a cold-brew coffee before I begin my adventures.

Depending on how quickly we can drink the cold-brew coffee, I expect to turn this into a running series of blogs. For each new coffee that Craft Coffee sends me, I intend to whip up a keg of cold brew coffee out of what they provide. Considering that the warmest months are sneaking up on us, it’ll be a nice treat to have on hand!

My 2016 DIY NAS Upgrade

| Comments

I spend a good chunk of every year researching, building, and writing about different NAS builds. While I’m doing this work, every now and then I get bit by a temporary onset of jealousy and selfishness: each of these NAS builds has been incrementally better than my own DIY NAS machine, and each time, the urge to keep the new NAS for myself has grown stronger!

Shortly after publishing the 2015 EconoNAS, I decided that the upcoming DIY NAS: 2016 Edition would serve a bit as a prototype for my own NAS upgrade. During the process of building and writing about the DIY NAS: 2016 Edition, I wound up learning a few lessons and made a few tweaks to suit my own needs a bit better.

What’s the same?

Case and Power Supply

I stayed with the U-NAS NSC-800 (specs). I absolutely love the features of this case, most of all its eight removable drive bays and its incredibly small footprint. But as much as I love this case, I hated working inside it, especially finally getting the motherboard mounted. Check out my timelapse video of assembling the DIY NAS: 2016 Edition into the same case to get an idea of how much fun I had. If you’re building a DIY NAS and you’re tight on space, the U-NAS NSC-800 is worth its price and the effort of cramming everything into it!

Along with the case, I also stuck with the Athena Power AP-U1ATX30A (specs) to provide the power. It was essentially the best deal on a 1U power supply that I could find, and that didn’t change in the weeks between ordering components for the two different NAS builds. I initially intended to use Pat’s Spacer Bracket for a 1U Power Supply to provide a bit of (unnecessary?) support to the backside of the power supply, but I actually wound up needing that object redesigned with new features to help solve a challenge unique to my own new requirements. More on that challenge below!

Storage Drives

Ultimately, my hard-drive configuration wound up the same as the DIY NAS: 2016 Edition, but this is purely coincidence. A few years ago, I bought new hard drives and an additional SATA controller card and rebuilt my ZFS zpool to hold seven 2TB hard drives in a RAIDZ2 configuration. In the last four years, I’ve had 3 drives fail and get replaced with 4TB drives. For my upgrade, I wound up buying replacements for each of the four remaining 2TB hard drives: a pair of Western Digital Red 4TB NAS hard drives (specs) and a pair of HGST Deskstar NAS 4TB hard drives (specs).

ZIL and L2ARC Cache Drives

Speaking of storage devices, I ultimately decided to stick with a pair of Samsung 850 EVO 120GB SSDs and use them as ZIL and L2ARC cache devices. Those of you who read the DIY NAS: 2016 Edition may recall I was a bit disappointed with the performance of the NAS with the ZIL and L2ARC cache devices compared to without. Ultimately, I decided that my usage of the NAS at the time didn’t really line up with the benefits that the ZIL and L2ARC provide; it’s also possible that my Gigabit network was the primary bottleneck. If you’ve been keeping up with me on Twitter, then you’ve probably observed that I plan to be using my NAS a bit differently in the upcoming few months.

What’s Different?

FreeNAS Flash Drive

Starting off with differences between my NAS and the DIY NAS: 2016 Edition is how I handled the FreeNAS OS drive. As I have for almost every NAS build, I stuck with the low-profile 16GB SanDisk Cruzer Fit USB flash drive (specs). But for my own NAS, I added a second flash drive to mirror the OS on. The SanDisk Cruzer Fit flash drives are inexpensive enough that I’ve slowly acquired quite a collection of them, so it made sense to use one of those extras to add a little bit of additional redundancy to my own NAS.

RAM

Much like the flash drive, I’m still using the same RAM, but instead of just one 16GB kit (2x8GB) of Unbuffered DDR3 PC3-12800 (specs), I opted for two in order to bring the total amount of RAM up to 32GB. Among the things I learned as part of my understanding of ZIL and L2ARC is that I would’ve seen more performance benefit had I spent those same dollars on more RAM instead of cache devices. For this build, I toyed with 16GB sticks and even potentially 64GB of RAM, but the cost of the suggested 16GB DIMMs (over $300!!!) wound up making it way more pragmatic to buy 32GB (4x8GB) of RAM and also use the ZIL/L2ARC SSDs to supplement performance.

CPU and Motherboard

For my own NAS upgrade, I wound up going back to the motherboard from the DIY NAS: 2015 Edition, the ASRock C2550D4I (specs), which is essentially the quad-core little brother of the ASRock C2750D4I that was used in the DIY NAS: 2016 Edition. Originally I had picked the ASRock C2750D4I because I’d wanted to use those additional four CPU cores to add a bit more functionality to the machine beyond storage. I was hoping that the extra CPU power would enable me to use the NAS to house a few virtual machines.

But then I re-re-re-read Pat’s Homelab Server build blog and rethought my approach. I wound up deciding that an additional machine to host my virtual machines made a bit more sense, hopefully something that I could build with considerable performance for a reasonable price. I hadn’t planned on building that machine until much later this year, but then this article about an affordable dual-Xeon machine got my attention. I finished ordering parts for my own homelab server as I worked on this blog.

I eventually decided that I could go with the ASRock C2550D4I in order to save some money. At the time of purchase, the ASRock C2550D4I was $150 less than the ASRock C2750D4I (specs). I used that money in part to increase the amount of RAM to 32GB and set what little was remaining aside for the parts needed for my homelab server buildout.

Network

The process of building, using, and testing the DIY NAS: 2016 Edition led me to feel I’d reached a point where my gigabit network had potentially become a limiting factor. On top of that, I am also planning on using my NAS for the storage of virtual machines hosted on my homelab machine. Because of this, I decided to build a small 10Gbe SFP+ network between my primary desktop, my NAS, and my homelab server by using either dual-port or multiple NICs and interconnecting each of the machines with twinaxial copper cable. My small 10Gbe network, and how it blew my network-neophyte mind, is a topic for a blog of its own. Due to the expense of 10Gbe network gear, I wound up trolling eBay for used NICs and found that dual-port Chelsio S320e (specs) network cards could be had relatively inexpensively; I bought a lot of 3 cards for $90.

Power Supply Bracket

Unfortunately, the footprint of that inexpensive dual-port 10Gbe network card was pretty large, large enough that the backside of the network card was bumping into the stack of two Samsung 850 EVO 120GB SSDs mounted in the U-NAS NSC-800. The default mounting method of these SSDs in the NSC-800 wound up preventing me from adding the Chelsio S320e NIC. I wrestled with the case for a few hours trying to find alternative ways of mounting the SSDs to make room, but the NSC-800 is a challenge in this regard since there’s not a whole lot of space to work with.

Ultimately I concluded that I could mount the SSDs and install the NIC in roughly the same spot, but not by using the mounting hardware that came with the NSC-800. Essentially, I decided that the best solution was to make a sandwich out of the NIC, mounting one SSD below it and another above it, but the stock mounting hardware was insufficient for that goal. In the process of listening to me complain, Pat had a brainstorm—modify the power supply bracket used in the DIY NAS: 2016 Edition by adding some sleeves that the SSDs would squeeze into to be held in place.

If you have access to a 3D printer then you can download and print Pat’s Spacer Bracket for a 1U Power Supply yourself from Thingiverse. Don’t have access to a 3D printer? No problem! Pat’s got the Spacer Bracket for a 1U Power Supply listed on the Patshead.com Store on Tindie.

FreeNAS Configuration

Since I imported my previous configuration, my settings should’ve been identical before and after the upgrade. This is roughly the same configuration that I would’ve made with the DIY NAS: 2016 Edition. However, after a disappointing initial run of benchmarks, I decided to give the FreeNAS Autotune feature a try. Here’s what the FreeNAS documentation says: “FreeNAS® provides an autotune script which attempts to optimize the system depending upon the hardware which is installed.” Because the hardware had changed significantly, I thought it was a good idea to go ahead and enable this feature. As a result, FreeNAS created a few tunables:
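As a hedged illustration of what Autotune produces, these are the kinds of loader tunables and sysctls it typically creates on a system like this one; the exact names and values vary with the installed hardware, and the numbers below are illustrative rather than copied from my machine:

```shell
# Loader tunable (applied at boot)
vfs.zfs.arc_max="24G"              # cap the ZFS ARC so it leaves RAM for the OS

# Sysctls (applied at runtime)
kern.ipc.maxsockbuf=2097152        # allow larger socket buffers
net.inet.tcp.recvbuf_max=2097152   # raise the TCP receive buffer ceiling
net.inet.tcp.sendbuf_max=2097152   # raise the TCP send buffer ceiling
```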

I won’t pretend to have expertise in all of the tweaks that Autotune made on my behalf, but I suspect a few Google searches would give me a decent idea of why each change was made and how it benefits performance.

Parts List

Component      Part Name                                        Count
Motherboard    ASRock C2550D4I (specs)                          1
Memory         Crucial 16GB Kit (8GBx2) DDR3 ECC (specs)        2
Case           U-NAS NSC-800 Server Chassis (specs)             1
Power Supply   Athena Power AP-U1ATX30A (specs)                 1
SATA Cables    Monoprice 18-Inch SATA III 6.0 Gbps (Pkg of 5)   2
OS Drive       SanDisk Cruzer 16GB USB Flash Drive (specs)      2
Cache Drives   Samsung 850 EVO 120GB SSD (specs)                2
Storage HDDs   Various 4TB HDD Models                           7


[Photo gallery: burning in the CPU, motherboard, and RAM before assembly; SATA cable labeling, installation, and management; SSDs mounted in the stock location; experimenting with alternate SSD mounting locations; SSDs test-fitted and mounted in Pat's 3D-printed bracket; Brian's NAS mounted in his media cart]


How Does it Measure up to the DIY NAS: 2016 Edition?

Out of curiosity, I executed the same IOMeter tests as I did in the DIY NAS: 2016 Edition to see exactly how my own NAS measured up performance-wise, and I also wanted to see the impact of the Autotune.

IOPS

Throughput

Overall, I had been expecting that my own NAS would be pretty comparable to the DIY NAS: 2016 Edition, and for the most part, I was right. Surprisingly, my NAS outperformed the DIY NAS: 2016 Edition in sequential writes by a good margin in both IOPS and MB/sec. However, for my uses, sequential writes (or reads) aren’t really a very real-world test. IOMeter’s “All Tests” mimics my real-world usage much better than the sequential read or sequential write tests. Within the “All Tests,” my NAS benchmarked at about 87% of what the DIY NAS: 2016 Edition scored. I was hoping to be within 10%, but I was close enough that I am pleased with the outcome once you factor in the additional money I was able to save by going with the ASRock C2550D4I.


What’s Next?

My ultimate goal for the upgrade to my FreeNAS machine is to create a box capable of serving as the disk storage for my yet-to-be-built homelab machine. As far as I’m concerned, I’m pretty certain that my upgraded NAS is up to that task. But I’ve got a couple projects to finish first: building out my poor man’s 10Gbe network and assembling my homelab server.

I’m pretty happy with both the performance of my NAS after all of its upgrades and its cost. Compared to my prior NAS, its performance is light years ahead of where I was before the upgrade. Depending on the test, IOPS and MB/sec for the benchmarks I performed ranged from 60% better to 4500% better. And while its performance lagged behind the DIY NAS: 2016 Edition, it was only by a half-step, and it even managed to outperform the DIY NAS: 2016 Edition in one test.

Hopefully, it’ll be at least another 4 years before I’m upgrading components again except for replacing/upgrading any hard-disk drives which manage to fail between now and the next major upgrade!

Mirroring the FreeNAS USB Boot Device


One of the things that I like best about FreeNAS is the fact that you have the option to run it off an inexpensive USB flash drive; in fact, that seems to be the preferred option and is the most encouraged by the FreeNAS community. Consequently, that means you have an additional SATA port available for fulfilling the primary function of your NAS—additional storage. Almost as beneficial is the fact that USB drives are quite inexpensive. However, it’s not been unusual for me to receive some incredulous comments, questions, and other reactions when I explain that I entrust my data to an operating system which is hosted on a USB flash drive.

Usually, after listing out the benefits of having the OS on a USB flash drive, most people come around and appreciate those same benefits. However, a minority remain a bit more skeptical, citing reasons like bad experiences with faulty USB drives in the past, or simply not believing that a USB drive can be counted on to be responsible for any kind of operating system.

Typically, what I’ve told the remaining skeptics was that losing your OS drive just isn’t that big of a deal in FreeNAS. In the event that the USB flash drive died, it’d be pretty easy to recover. First you’d need a bootable copy of the FreeNAS installation ISO, a replacement USB flash drive, and a few minutes of your time. FreeNAS would get installed on the new USB drive, then the existing zpool could be imported from the data drives, and finally the system configuration database could be restored from a daily backup that FreeNAS does automatically each morning. As part of an upgrade to my own NAS (a future blog topic), I went through these same exact steps just to see how long it’d take and how difficult it was. From start to finish, it took me about 30 minutes and it was not complicated at all.
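The recovery steps above can be sketched from a shell; this is a hedged outline that assumes a data pool named tank, and on a real FreeNAS box the configuration restore is done through the web UI rather than the command line:

```shell
# After installing FreeNAS on the replacement USB drive and booting from it,
# import the existing data pool; -f forces the import since the pool was
# last used by the previous installation
zpool import -f tank

# Verify the pool and all of its member disks came back healthy
zpool status tank
```

From there, uploading the automatically backed-up configuration database brings back your shares, users, and settings.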

Personally, I think 30 minutes of downtime is more than acceptable for the overwhelming majority of builders of DIY NAS machines, but that’s just my opinion. I certainly wouldn’t blame someone for saying that it isn’t acceptable for their own NAS. Thankfully, for people with standards a little bit higher than mine, FreeNAS will make a mirror out of your USB boot device. Even better? It’s really simple to set up. FreeNAS even wrote the exact steps in their user documentation (5.3.1. Mirroring the Boot Device):

How to Mirror the FreeNAS Boot Device

  1. Open your FreeNAS UI in a browser.
  2. From the System tab, select Boot.
  3. Click the Status button.
  4. Select either freenas-boot or stripe.
  5. Click the Attach button.
  6. Select the appropriate device from the Member Disk drop-down and click Attach Disk.

From this point, the freenas-boot zpool will be converted into a mirror (from a stripe) and the new device will be added to that zpool. Once that action completes, ZFS will begin resilvering, duplicating your data from your existing USB flash drive to the new one. Because it resilvers the zpool, you will get a system alert about how the freenas-boot pool is degraded. However, this is temporary and clears up once the resilver is complete. On my machine, that took just a few minutes.
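If you’re curious about the resilver’s progress, you can watch it from the FreeNAS shell; a quick sketch of the check:

```shell
# While the resilver runs, the pool reports DEGRADED along with a
# "resilver in progress" line and a completion percentage
zpool status freenas-boot
```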

You can create this mirror from the get-go during the installation too. All that you have to do during the installation is to have your two USB drives connected and then to select them both as targets for the installation. The FreeNAS installer will then create your mirrored boot devices as part of its initial setup.


[Screenshots: the FreeNAS System tab, boot device info and status, the boot device status with the added mirror, and the freenas-boot zpool resilvering and completing]

Gotcha!

The FreeNAS user documentation features this suggestion very prominently:

Note: When adding another boot device, it must be the same size (or larger) as the existing boot device. Different models of USB devices which advertise the same size may not necessarily be the same size. For this reason, it is recommended to use the same model of USB drive.

This warning neither surprised me nor worried me. I’ve been using the SanDisk Cruzer Fit line of USB drives for years now. In fact, before building the DIY NAS: 2016 Edition, I even bought a handful of these devices just to have a few extra around the house. When I decided to add a USB flash drive mirror on my own NAS, I decided I’d buy a couple more. I had enough USB flash drives from the same manufacturer and of the same model that I didn’t think anything of this notice when I made my first attempt. Imagine my surprise when this error message was the result: Error: Failed to attach disk: cannot attach da1p2 to gptid/b2be8286-f11e-a058-00074306bdff: device is too small

Apparently, there have been variations to the 16GB SanDisk Cruzer Fit over time. The drives that I had purchased previously were ever-so-slightly bigger than the ones I bought just this week. How could I work around this? I had a couple options:

  1. Manually back up the system configuration and reinstall FreeNAS while choosing to specify both USB devices. As a result, FreeNAS would size the mirror to the smaller of the two USB drives. Then boot from that new mirrored installation and restore the system configuration.
  2. Dig through my collection of 16GB SanDisk Cruzer Fit drives and try them one by one while hoping that at least one of them is the same size or bigger than the one in my own NAS.

Thankfully, after trying three or four different 16GB flash drives, I found one that was the same size or larger.
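Rather than swapping drives by trial and error, you can compare each candidate’s exact capacity from the FreeNAS shell before attempting the attach; a hedged sketch, assuming the current boot drive shows up as da0 and the candidate as da1:

```shell
# Print each drive's mediasize in bytes; the candidate (da1) must report
# a byte count equal to or larger than the existing boot drive (da0)
diskinfo -v da0 | grep mediasize
diskinfo -v da1 | grep mediasize
```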

Final Thoughts

Assuming you’re a bit more meticulous than I have been, you may want some sort of redundancy for your FreeNAS boot device. It’s wonderfully simple to do as part of the initial installation; just insert your two USB flash drives and select them both as destinations for the installation. If you miss it during the initial setup, it’s almost as easy to do through the FreeNAS user interface, as outlined in the user documentation on mirroring the boot device. About the only wrinkle is that when doing it after the fact, you need to be careful that the new device is the same size as or larger than your existing boot device. The complicating part is that you can’t necessarily count on two different USB drives being the same size, even if they are the same model!

What do you think? Have any of you been holding off because you don’t have much faith in USB flash drives? Does the FreeNAS feature to easily mirror multiple flash drives help with your concerns at all?

I (grudgingly) Realized that I Wanted a Smartwatch


Update (12/9/16): Pebble recently announced that they are shutting their doors and selling off their intellectual property to Fitbit. As such, I probably need to retract any nice things that I said about Pebble’s products down below. My new recommendation to everyone is: don’t buy Pebble smartwatches. The watch might work for now, but nobody’s going to honor any kind of warranty, provide any support, or further the platform. I’m sure retailers are going to purge their inventories at rock-bottom pricing, but considering what Pebble’s said lies in store for their products, it seems foolhardy to buy at any price. You’ve been warned—you’re almost certain to get far less than what you paid for.

Moreover, don’t buy anything from Fitbit either. While I commend their business acumen in acquiring the intellectual property but none of Pebble’s debt or obligations (for example: supporting the existing users), I think it’s a crummy move on their part to turn their backs on all of the existing Pebble users. I already had a frustrating experience with the Fitbit Force when Fitbit “voluntarily” recalled the Force before fulfilling a pending order that I had waited quite some time for. I hope they do amazing things with the pieces of Pebble that they acquired, but I’ll never buy any of their products again after they disappointed me both directly and indirectly.

I’ll enjoy my Pebble Time Steel as long as I can but it will quit working at some point. When that happens, will I replace it with another smartwatch? I’m not so certain.

For the longest time, the entire smartwatch craze befuddled me. I spent the last twenty years or so being very anti-watch. I spent the ’90s and the decade after wishing that my mobile phone would shrink down to a small enough size that it’d easily double as a pocket watch while liberating my wrist. In fact, when I eventually replaced my watch with my Nokia 8260, I was quite prideful in my ability to predict the future. For the next fifteen years, I scoffed at the notion of needing a watch at any point in the future.

Then a couple weekends ago I was at the hospital, precariously holding my newborn son, when my phone chirped at me as a text message came in, then a few moments later a phone call came in, and then immediately after that another phone call, ultimately all of this followed by a voice-mail notification! My brother, Jeff, was trying to get in touch with me in order to find out where he needed to go in order to come see his nephew for the first time (and to also bring the delicious pork he’d smoked.) But I was both unable and unwilling to reach into my pocket and answer his call. As my Nexus 6 rang and vibrated in vain from the depths of my pocket, I asked myself, “Oh crap. Am I going to need to get a smartwatch now?”

To be fair, I’ve been wearing something on my wrist for a couple years now. I have had a Fitbit Flex for quite some time now, so it’s not like my wrist has been completely naked since banishing watches sometime near the beginning of this millennium. But it’s still a pretty surprising 180-degree reversal on my part, especially when you consider my stubborn nature. I grudgingly resigned myself to the fact that I’d be shopping for a smartwatch in the near future and began to think about the features that I wanted to see in my smartwatch.

Smartwatch Requirements

  1. Battery Life: It seems these days I’m always in search of a charger for some piece of electronics that I’m carrying around. I’d really like to see at least 3 days’ worth of battery life and I’d be willing to pay more or sacrifice other features for a longer battery life.
  2. Fitness Tracking: I’m not an especially active guy, but I like the data that I get to see from my Fitbit Flex, especially its ability to keep count of steps and sleep tracking. In a perfect world, I wouldn’t have to move off from Fitbit as my fitness platform of choice.
  3. Mobile Platform Independent: I’m pretty much a devoted Android guy, but there’s a remote possibility that someday that might change. I’d prefer not to be shackled to any particular mobile operating system just because I happen to own one device from their ecosystem. There’s nothing special about a smartwatch’s functionality that would prevent it from working in numerous environments. If a manufacturer disagrees and sees the smartwatch as an opportunity to further their grip on my household, they’re going to be disappointed.
  4. Color Display: Even though it may consume more battery power than a black-and-white display, I’d still prefer a color display on my watch. My days of a monochromatic watch experience ended with whatever watch I was wearing at the end of the last millennium.
  5. Reasonably Priced: I wasn’t quite sure what dollar figure to place on this, but the smaller the amount the better. The rate at which mobile electronics become obsolete is way too high for me to spend much money on them. For the purposes of my shopping, I set my limit at around $300. I’d consider watches over that price, but they’d really need to blow my socks off.
  6. Chronometer: Oh yeah, it’s a watch—might as well make sure it can perform its primary function.

Determining my requirements didn’t really help me pick a watch at all, but it certainly did help eliminate the Apple Watch. The Apple Watch’s battery needs to be charged at least daily, it shamelessly requires an iPhone to work, and its cost starts above what I consider to be reasonable. Even if I had an iPhone, I would still be inclined to buy a different smartwatch than what Apple’s currently offering.

The Contenders

There are quite a few choices on the smartwatch market, which was a bit surprising. In fact, there are so many out there that I’m relatively certain I’ve overlooked quite a few products that might fit my needs. Ultimately, I narrowed down the list to the following watches.

Among my criteria, my battery requirement eliminated a number of watches. The Huawei Watch (1 to 2 days), LG Watch Urbane Wearable Smart Watch (1 to 2 days), Motorola 360 (~1 day), and Fossil Men’s FTW2001 (1 to 1.5 days) each failed to meet my minimum of 3 days’ use on a single charge. Furthermore, I was a bit disappointed to find out that each of these watches requires the screen to go to sleep in order to reach those “maximum” charge times. Considering the size of the batteries and the displays these watches have, this isn’t a surprising factoid, but that doesn’t stop it from being a disappointing one. I expect that the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch has a similar battery limitation, but it does have a tremendous advantage—price! At around ten bucks, you could buy one for every day of the month before you got to the price of the Huawei, LG, Fossil, or Motorola watches.

The Pebble Time Steel and Pebble Time both meet my battery criteria thanks to their e-paper displays. The best part about an e-paper display is that it only requires power to update, so not only does it use a fraction of the power that other smartwatches’ displays take, but it also means things like the time can be presented on the display and remain there without consuming any power until they require an update. I’ve owned a Kindle Paperwhite e-reader for a while, and I’ve enjoyed using it quite a bit, which gives me a measure of confidence in e-paper displays.

The Decision

Who says you can’t have your cake and eat it too? The Pebble Time, Pebble Time Steel, and Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch all met most, if not all, of my criteria. Both of the Pebble offerings actually met all of my criteria. By the time I was done shopping, I had made up my mind to buy the Pebble Time Steel, mostly due to its larger battery. But at only $10, it seemed like a no-brainer to also buy the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch too!

Both the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch and the Pebble Time Steel showed up on the same day, so what did I do? Put them both on, naturally! I actually expected that this would cause problems, but I wound up being pleasantly surprised to see that notifications were getting sent to both of my watches. It wound up being a bit difficult to use either watch with both on my left wrist, so I wound up wearing one on each wrist. Thank goodness I’ve been housebound with fatherly duties, as I looked like a much bigger dork than usual!

Fixing_DIY

The Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch really surprised me, considering its price of around $10. Because of the price, I had pretty low expectations. However, the smartwatch was quite capable and exceeded them. It instructed me to download an app, BT Notification, to manage which notifications would get passed on to the watch. One of the features present on the Fixing_DIY watch but missing on the Pebble Time Steel is that it includes the functionality of a Bluetooth headset. I was able to successfully call Pat and leave him a voice-mail despite his well-stated position on voicemail. Speaking of Pat, when I gave him the smartwatch to play around with, he discovered a feature that I had overlooked—the smartwatch also has the ability to access your phone’s camera remotely. We couldn’t think of many uses for having access to a remote camera on our wrists, but Pat pointed out that it’d come in handy if you had to see behind something that you couldn’t quite fit your head behind. As expected, you’re able to control (and listen to) your phone’s music, place a call to someone from your phone’s contacts, and read your text messages. On top of that, the smartwatch also contains a bushel of other miscellaneous built-in apps, including: a calculator, stopwatch, alarm clock, pedometer, calendar (separate from your phone’s calendar app and data), sleep tracker, and a couple others.

I wound up not caring much for the interface of this smartwatch—the touchscreen is just a bit too difficult to use precisely, and the design of the user interface is both basic and lacking. The act of acknowledging and dismissing a notification on the smartwatch was difficult enough that I’d probably prefer doing it from my Nexus 6 instead. I also found that the smartwatch’s configuration options left quite a bit to be desired. You can change the notification and ring tones, but the choices are all pretty crummy and there are only 2-3 for each. The battery life is also pretty lacking—I immediately charged the Fixing_DIY watch, and within a few hours of heavy use it needed another charge, which was a letdown. The poor battery life gave me doubts about whether or not it could survive an entire day. The watch was also a bit bigger than I would’ve liked, and noticeably bigger in all three dimensions than the Pebble Time Steel.

There were quite a few things about the smartwatch that I liked, especially the price and its handling of the phone’s notifications, but there were also things that I disliked: the touchscreen, size, the user interface, and the battery life. All that being said, I think the Fixing_DIY Bluetooth Android Smart Mobile Phone U8 Wrist Watch is a great value at $10. It ticks off quite a few of my “must-have” features for a smartwatch and does it all for less than the price of a movie ticket. I think that this smartwatch would be an excellent investment for anyone who isn’t quite sure if they want a smartwatch and aren’t willing to spend hundreds of dollars just to satisfy their curiosity.


[Photos: the Fixing_DIY watch head-on with its display active, lying on each side, connected to its charging cable, and at shallower viewing angles]

Pebble Time Steel

At the price of roughly 19 Fixing_DIY watches, I had pretty lofty expectations for the Pebble Time Steel, although in its defense, its price is quite a bit more reasonable than the offerings from Apple, Samsung, and Motorola. The Pebble Time Steel surprised me by being quite a bit smaller than I expected. A friend of mine has the original Pebble Watch, and I was surprised to find that the Pebble Time Steel is a bit smaller than her Pebble Watch. In fact, I’d wager that the Pebble Time Steel took up just about as much of my wrist as my beloved calculator watch from the ’80s, though my wrists are a bit bigger now than they were back then.

So what does the extra $180 get you when comparing the Pebble Time Steel to the Fixing_DIY watch? Quite a bit! First and foremost is battery life. I never charged the Pebble Time Steel when I first received it, and that initial charge lasted five days under pretty heavy use. And thanks to the properties of the e-paper display, the watch was always on. It was interesting how frustrated I got with having to hit a button on the Fixing_DIY watch just to wake up the display and see the time. Another exciting feature of the Pebble Time Steel is its plethora of apps and watchfaces. I definitely have a desire to display data from my Continuous Glucose Monitoring system as well as some of the data from my web-analytics platform, Piwik. While I couldn’t find exactly what I was looking for in Pebble’s app store, after looking at some of Pebble’s development material, I’m reasonably confident that I can build it myself. Lastly, the Pebble Time Steel is water resistant up to 30 meters with some limitations, which means all of my watery day-to-day activities (showering, washing dishes, getting peed on by the baby) aren’t likely to cause any ill effects.


[Photos: the Pebble Time Steel head-on with its display active, lying on each side, with its magnetic charging cable, connected to the charging cable, and at a shallower viewing angle]

Conclusion

I’m spending quite a bit more time these days with my hands full, completely unable to pull my smartphone from my pocket. I thought a smartwatch would help out in those situations, and I was mostly correct. But things that I assumed I could do one-handed all actually require two hands: one hand is tied up wearing the watch, and the other hand makes selections via the touchscreen or buttons, which I found a bit disappointing. On the flipside, I was already wearing something on my wrist, and I’d caught myself wishing a few times that it had a watch face and that I could somehow use it with my smartphone.

I’m actually pretty pleased that I bought the smartwatch, but a tiny bit disappointed it didn’t solve the exact problem that I purchased it for. I’m excited because it’s a fun little gadget that I get to tinker around with. The Pebble Time Steel wound up meeting all of my smartwatch criteria, and I truly am appreciating that all of the notifications I care for are getting forwarded to my watch. In fact, I’m tempted to mute the notification tone and vibration on my Nexus 6 as a result of buying a smartwatch.

It may not have been the perfect solution to the problem I was hoping it would solve, but overall I’m pleased with owning a smartwatch. There are a number of things that I wouldn’t mind incorporating into my smartwatch: keeping track of my traffic on my blog, keeping track of my blood-sugar data from my continuous glucose meter, and incorporating the watch into my own home automation. If I can accomplish those tasks then the smartwatch will wind up being a very useful addition to my arsenal of gadgets. Otherwise? Then it’s an expensive toy, but not the kind of toy I expect I’ll outgrow too soon.

How about you guys? What purposes do you have for smartwatches that I’m overlooking? And on the flip side, what concerns do you have that might be stopping you from seriously considering a smartwatch?

Home Brew: Das DoppelGanger


As I’ve mentioned in past blogs, one of the big reasons I decided to join our local makerspace, TheLab.ms, was their Brew of the Month program. I had always been interested in the prospect of brewing my own beer, but it took a group of other enthusiasts to finally act on my curiosity.

The last Brew of the Month that I attended was at the end of February, when we brewed TheLab DoppleGanger. The DoppleGanger is a doppelbock imitating a chocolate stout. Of the beers that I’ve participated in brewing, this was by far the most complicated. As I understand it, it’s the first time that TheLab’s brewers have attempted to brew a beer that included a triple infusion mash. Our brew master, Richard, warned us that we had a long night ahead of us when he shared the details of the month’s brew at TheLab’s monthly members’ meeting. Richard wasn’t exaggerating; the night we brewed, I didn’t make it home until well after two a.m.—much to the chagrin of my dogs, Crockett and Zoe. On top of the complicated brewing process, it was going to be a doppelbock, which meant that it was going to wind up fermenting in the Brewterus for twice the normal time, meaning it would be two months before we could all enjoy the fruits of our labor.

Over Easter weekend I kegged the DoppelGanger, and because I’m married to a German, I began referring to it as “Das DoppelGanger.” The smell of the chocolate malt really stood out as I transferred the beer from my carboy into the Cornelius keg. A pleasant chocolatey-beer aroma permeated the room where my keezer is located. If not for the power of Pine-Sol, that wondrous smell would’ve quickly enveloped the entire house due to the enormous mess I made while putting the brew into the keg.

Of the few beers I've brewed, I had the hardest time with Das DoppelGanger. The fermentation was a bit more complicated and wasn't the set-it-and-forget-it affair that my previous beers had been. Amid the excitement surrounding our newborn son's arrival, I wound up not exactly adhering to the recipe. Richard assured me that I was fine long before I kegged the beer, but I still had that inkling of doubt.

I really wish I had a sophisticated palate and an armory of descriptive adjectives, but sadly I lack some of the tools, and definitely the experience, to effectively describe what I'm tasting. Firstly, the DoppelGanger is quite a dark beer, reminding me quite a bit of a cup of coffee. I believe this is primarily thanks to the combination of the Munich malt, Carafa III, and chocolate malt. Because it's a doppelbock, it's got a pretty considerable amount of heartiness to it and a higher alcohol content than the beers I most typically drink.


[Gallery: Prepping the carboy and keg for transfer · The DoppelGanger is dark and quite coffee-like in appearance · It looks even darker with the camera flash active · Slowly being siphoned from the carboy into the keg · Updating the label on the beer tap handle · DoppelGanger handle installed and ready to go (#1, #2, #3) · The first poured glass of the DoppelGanger]

What did I think?

Historically, I've mostly enjoyed the lighter side of beers. I've always had a real bias towards beers that are crisp and smooth; the darker, heartier beers never really captured my fancy. That being said, I'm slowly coming around to the dark side. A few knowledgeable bartenders have helped me identify quite a few darker beers that I genuinely like.

If the DoppelGanger were on tap at one of my favorite watering holes, I'd definitely order a glass or two, though preferably on an empty stomach, since it's quite a filling beer. It's really quite smooth and enjoyable. I worked on this blog while sipping my very first glass of the DoppelGanger, finishing up both the glass and the first draft of the blog almost simultaneously. I'm going to enjoy drinking (and sharing… maybe) all five gallons of the DoppelGanger currently on tap in my keezer!