
Building a 2-Node ESXi Cluster with Centralized Storage for $2,500

What started as a simple goal… replacing my vSphere 4.1 whitebox with something that more closely resembles a production environment… became a design requirement for a multi-node ESXi lab cluster that can do HA, vMotion, DRS, and most of the other good stuff, without having to resort to nested ESXi.  I also wanted an iSCSI storage array that was fast.  Not Synology NAS “kind of” fast under certain conditions… but something with SSD-like performance, and a bunch of space.  And I didn’t want to spend more than $2,500 for everything.  In other words, what I really wanted was something akin to a 2-node ESXi cluster backed by a SAN with 10TB of “fast” disk I/O.  Almost something like a Dell VRTX, but for my home lab, at less than one-tenth of what it might cost with Enterprise-grade gear.  As I iterated through hardware configurations, checking the vSphere whitebox forums, comparing against the HCL, and quickly running out of budget, it became clear that the only way to do this was to figure out the storage piece first.

Key Design Decision

So the question became: did I really want to bother with a “real” iSCSI storage array?  If not, I could opt for multiple local SSD drives with a hardware RAID and drop the dedicated storage array entirely – or make some other compromises.  Sure, I would have had more IOPS than I would have known what to do with, and yeah, maybe I could have resigned myself to living in a nested ESXi world, but no.  As it turned out, someone had already done some good heavy lifting on this topic, and had built a fast homemade iSCSI storage array with hardware RAID for $1,500.  No VAAI of course, but still… not bad for $1,500.  More than that though, it left $1,000 to put together a couple of hosts – so plenty, right?

Critical Path Item – Inexpensive Storage

If the storage array as a build component was the biggest overall challenge, then raw storage was the critical path item in terms of procurement.  To come out with about 11TB of usable capacity, I needed 14 x 1TB drives in a RAID6, and I needed to spend less than $700.  Depending on when you read this article, that may be less of a challenge than it was in early 2014, but at the time $50 per TB on a 7200RPM drive was hard to come by.  Harder still was finding Hitachi Ultrastar 7200RPM drives with 32MB of cache – they’re Enterprise-grade and not always around on eBay.  After a couple of months of bidding on large lots, I eventually found 14 for $50 each.  Given no schedule constraints, I could perhaps have ended up with twice as much storage for a reasonable cost premium – but 11TB of usable space in a RAID6 configuration exceeded my need, and kept me on pace to acquire most of the hardware on schedule.
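
If you want to sanity-check the RAID6 math, here’s the quick back-of-the-envelope version (the drive count and pricing are the figures above; the TB-vs-TiB gap is just marketing math):

    # Back-of-the-envelope RAID6 sizing for the drive purchase above.
    DRIVES = 14              # 1TB Hitachi Ultrastar drives
    DRIVE_TB = 1.0           # marketed capacity, 10**12 bytes
    PRICE_PER_DRIVE = 50     # USD, per drive from the eBay lots

    PARITY_DRIVES = 2        # RAID6 survives two simultaneous drive failures
    usable_tb = (DRIVES - PARITY_DRIVES) * DRIVE_TB
    usable_tib = usable_tb * 10**12 / 2**40   # what the OS actually reports

    print(f"usable: {usable_tb:.0f} TB marketed (~{usable_tib:.1f} TiB reported)")
    print(f"drive cost: ${DRIVES * PRICE_PER_DRIVE}")
    # usable: 12 TB marketed (~10.9 TiB reported)
    # drive cost: $700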

Controller


The LSI MegaRAID 84016E is, albeit last generation, a SAS/SATA II workhorse of a controller card.  It supports up to 16 drives and RAID levels 0, 1, 5, 6, 10, 50, and 60 at 3Gb/s per port, takes a battery backup module, handles online capacity expansion, and it’s dirt cheap on eBay and nearly always available.  For my use case, it compared favorably against going the local SSD drive route, or using an LSI 9260-4i at roughly 4X the cost.  If you’re copying the build, be sure to pick up four Mini SAS (SFF-8087) Male to SATA 7-pin Female breakout cables so that you can plug your SATA drives into the LSI 84016E.
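
If you’re wondering where the “four cables” number comes from, the port math is simple enough to script out (the 4-drives-per-cable fan-out is just how SFF-8087 breakout cables work):

    import math

    # Port and cable math for hanging 14 SATA drives off the LSI 84016E.
    DRIVES = 14
    CONTROLLER_PORTS = 16        # the 84016E supports up to 16 drives
    DRIVES_PER_BREAKOUT = 4      # each SFF-8087 connector fans out to 4 SATA plugs
    PORT_GBPS = 3                # SATA II, 3Gb/s per port

    assert DRIVES <= CONTROLLER_PORTS, "more drives than the controller can handle"

    cables_needed = math.ceil(DRIVES / DRIVES_PER_BREAKOUT)
    aggregate_gbps = DRIVES * PORT_GBPS

    print(f"breakout cables needed: {cables_needed}")         # 4
    print(f"aggregate port bandwidth: {aggregate_gbps} Gb/s")  # 42 Gb/s, theoretical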

Storage Array: Everything Else

This section could almost be titled “everything else”, because by now we’ve spent about 31% of the budget and many of the remaining decisions are mostly inconsequential.  Still, I opted for a Rosewill RSV-L4500, because I knew it would fit all 14 drives (and it does – the design isn’t bad at all, and it’s easy enough to work in).  I added a spare SSD drive as the OS drive and Level-2 cache for PrimoCache (the SSD was scavenged from another box).  For CPU, memory, and power supply, I went with the ASRock 970 Extreme 3 ($65), an AMD FX-8320 8-core CPU (on sale for about $105), 32GB of the lowest-cost DDR3-1600 ECC RAM I could find, and a 750W Corsair HX750 power supply to drive all of those disks.  For the network interfaces, I had a spare Intel 2-port NIC lying around that I had picked up in a lot sale a couple of years ago.
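
To keep score against the $2,500 cap, here’s a rough running tally.  Only the drives are itemized above; the controller price below is an assumption back-solved from the “about 31% of the budget” figure, so treat it as illustrative:

    # Rough running-budget tally against the $2,500 cap.
    BUDGET = 2500

    spent_so_far = {
        "14 x 1TB Hitachi Ultrastar drives": 14 * 50,  # $700, as priced above
        "LSI MegaRAID 84016E (assumed)":     75,       # not itemized; back-solved from "about 31%"
    }

    total = sum(spent_so_far.values())
    print(f"spent: ${total} ({total / BUDGET:.0%} of budget)")   # ~$775, ~31%
    print(f"remaining for everything else: ${BUDGET - total}")   # ~$1,725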

Making the Storage Array Useful


There are a number of ways you can go with this.  Nexenta, Microsoft Storage Spaces, OpenIndiana, or FreeNAS, to name a few.  But I really wanted to take a look at Starwind’s SAN/iSCSI software combined with PrimoCache.  Short version… you install Windows 2008R2/2012R2, configure the LSI controller software (MegaRAID), install Starwind’s SAN/iSCSI software and carve out some storage to expose to ESXi, then install PrimoCache and configure it to use as much RAM as you can for the Level-1 cache (26GB) and as much SSD space as you can for the Level-2 cache (120GB).  In my usage scenario, this seems to work pretty well – keeping the disks from bogging down on I/O.
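
For a sense of scale, here’s how those two cache tiers stack up against the array (the sizes are the ones quoted above; how much it helps depends entirely on how concentrated your working set is):

    # How the PrimoCache tiers compare to the array they front.
    L1_RAM_GB = 26     # Level-1 cache carved out of the 32GB of RAM
    L2_SSD_GB = 120    # Level-2 cache on the scavenged SSD
    ARRAY_TB  = 11     # usable RAID6 capacity

    array_gb = ARRAY_TB * 1000
    cache_gb = L1_RAM_GB + L2_SSD_GB

    print(f"total cache: {cache_gb} GB")
    print(f"cache covers ~{cache_gb / array_gb:.1%} of the array")  # ~1.3%
    # Tiny relative to the array, but lab working sets (templates, OS images,
    # hot VMDK blocks) are small and re-read heavily, which is where it pays off.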

vSphere ESXi Cluster

With the remaining budget, I still needed to build two vSphere ESXi boxes.  As I started looking, I found it really challenging to come up with a lower-cost build that was still good, so I stuck with the same recipe: ASRock 970 Extreme 3 boards, AMD FX-8320 3.6GHz 8-core processors, 32GB of RAM, ATI Rage XL 8MB video cards, Logisys PS550E12BK power supplies, spare Intel Pro/1000 NICs, and 16GB USB sticks.  I built two more machines, installed vSphere ESXi 5.1 on the USB sticks, mounted the iSCSI volumes that I had exposed in Starwind’s SAN/iSCSI software, and began building out my vCenter box and templates.
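
Once the iSCSI volumes show up as datastores, it’s handy to be able to sanity-check them programmatically.  Here’s a minimal sketch using pyVmomi – not something from the original build, and the hostname and credentials are placeholders for whatever vCenter or host you stand up:

    # Minimal datastore sanity check via pyVmomi (pip install pyvmomi).
    # Hostname and credentials below are placeholders for your own lab.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab box with a self-signed cert
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        for dc in content.rootFolder.childEntity:
            if not isinstance(dc, vim.Datacenter):
                continue
            for ds in dc.datastore:                 # includes the iSCSI-backed LUNs
                s = ds.summary
                print(f"{s.name:24} {s.type:6} "
                      f"{s.capacity / 2**40:5.1f} TiB total, "
                      f"{s.freeSpace / 2**40:5.1f} TiB free")
    finally:
        Disconnect(si)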

Compromises to hit budget

In order to stay around $2,500, there were a few small compromises I had to make.  The first was the hard drives… I couldn’t wait for the 2TB Hitachi Ultrastar drives.  While not significant for my use case, it is nonetheless noteworthy.  Secondly, I ran out of budget to put a full 32GB of RAM in both of the hosts, so I have a total of 48GB across the two nodes instead of the 64GB I had hoped for.  Finally, for the hosts, I bought the lowest-cost cases I could find – $25 each.  Aside from scavenging a few parts I had lying around (an SSD drive and a few Intel Pro/1000 NICs), I managed to come in on budget.

Bottom Line

The project met all of my goals – a home lab, multi-node ESXi cluster with a dedicated iSCSI storage array that resembles a production environment – all on a budget of around $2,500.  I’m able to vMotion my VMs around, DRS is functioning, and Veeam Backup and Replication is working.  Better still, I can tear down and rebuild the environment pretty quickly now.  I didn’t really run into any show-stoppers, per se, or real problems with the build.  If there’s interest, I’ll post some additional information about the lab in the future.  A big thanks to Don over at The Home Server Blog for his work on Building a Homemade SAN on the Cheap, particularly for validating that you can actually buy decent drives in large lots on eBay at a discount, and for the motherboard recommendation, which was critical in hitting the budget.
