Wi-Fi Guide

The conversation always seems to start with slow Wi-Fi, doesn’t it?

Maybe it’s employees telling you that they’re getting kicked off the Wi-Fi at your office.  Or maybe you’re just tired of having the receptionist reboot your access points every other day (even if that seems to fix things for a bit).  Bottom line… if you can watch Netflix in bed from your iPad, then you’d better at least be able to check email at the office!  What if the quality of your Wi-Fi connection at work was top-notch?  What if you knew you could support hundreds of connections – phones, laptops, tablets – and you knew that they’d be able to work online without headaches, without downtime… and most of all, without complaints every other day?  Would you be interested?

Of course it’s possible.  In fact, the expectation today should be consistent, reliable Wi-Fi.

In 2015, we’re in what’s called the 5th generation of Wi-Fi technology, having finally arrived in the 802.11ac era.  This represents the fifth major iteration of Wi-Fi as a technology standard.  It’s hard to believe when you look back, but Wi-Fi is now nearly two decades old.  What started off in the late 1990s with 2 Mbps data rates, and a platform designed for occasional use in conference rooms, has evolved into a standard that can support gigabit speeds, dense user populations, and video streaming without breaking a sweat.

In other words, if you have slow Wi-Fi service today, it’s a problem that can be solved.

Why is your current Wi-Fi slow?

A common narrative goes something like this… “We’ve been using these wireless access points for a while, and up until recently we almost never had a problem.  Now?  Everything just seems really slow.  We reboot the APs every so often and that seems to help… for a while… but the problem isn’t going away.”

Oftentimes, a quick walkthrough reveals a mix of consumer-grade APs placed at random throughout the environment.  While that’s not necessarily a root cause, it is a warning sign, and at the very least something to investigate.  What can make this especially challenging for the average business owner is that the design of their wireless network (WLAN) probably served them reasonably well until fairly recently.  The explosion of portable devices and rising end-user expectations are the major market forces at work, combining to strain older WLANs.

The flavors of 802.11

The Institute of Electrical and Electronics Engineers (IEEE) defines the 802.11 family of protocols, which provide the basis for wireless network products using the Wi-Fi brand.  The whole history of the protocol family is beyond the scope of this article, but here’s a short summary:

  • 1997 – 802.11 supports up to 2 Mbps
  • 1999 – 802.11b supports up to 11 Mbps
  • 1999/2003 – 802.11a/g support up to 54 Mbps
  • 2009 – 802.11n supports up to 600 Mbps
  • 2013 – 802.11ac supports up to 6.8 Gbps

What’s a bit misleading is that just because draft-802.11ac products appeared in 2012 (the standard itself wasn’t ratified until late 2013) doesn’t mean that mature Enterprise 802.11ac access points were available at the time.  For example, while Quantenna and Broadcom were shipping 802.11ac Wi-Fi chipsets in 2012, and there were a couple of consumer-grade 802.11ac access points that year, it wasn’t until well over a year later that Cisco shipped 802.11ac in their Aironet product line-up.  It was even later than that for Ruckus.  What you may or may not realize is that vendors like Cisco, Ruckus, Aruba, HP, and Apple don’t actually make the Wi-Fi chipsets that go into their products.  Instead, similar to the manner in which Dell buys CPUs from Intel, these vendors contract with a chipset vendor like Atheros, Broadcom, or Marvell.  The chipset vendors take the 802.11 standard specifications and design chipsets that actually implement the 802.11ac standard.  Those chipsets typically ship with software drivers, and sometimes an operating system to run them.  The major integrators of the world then buy the chipsets and related components in volume from the upstream vendors, and implement them in products like Cisco’s Aironet or Ruckus’s R700 access points.

So if everyone is using the same basic building blocks, what differentiates the products?

The same thing that differentiates, say, a Samsung Galaxy Android phone from an Apple iPhone… it’s about how well the components work together, and the experience they provide end-users.  Early in a product lifecycle – say, the first generation of 802.11ac – there’s usually relatively little “value-add” that the vendors build into the product.  The first generation is often mostly about integrating the various hardware components effectively.  As they iterate through designs, vendors build their own intellectual property on top of the chipsets to diversify their offering relative to the competition.  In the case of Ruckus, a major component of their intellectual property, or their value-add, comes in the form of the antenna system (more below), which differentiates their offering fundamentally from most of the omnidirectional-antenna competition.  But all of this takes time.  Even today, with 802.11ac several years old (at least on paper), Enterprise-class 802.11ac hardware is still relatively recent, and still improving as “wave 2” 802.11ac devices start popping up on the horizon.

So what do I get out of 802.11ac?  

It’s faster, and can support more devices than the last generation of products.

How much faster?  That depends on quite a few variables.  Under certain conditions, 802.11ac is up to three times as fast as 802.11n today, and it has the potential to be quite a bit faster down the line, with the 802.11ac specification written to support up to 8 antennas, multi-user MIMO, and data rates up to 6.8 Gbps.
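As a sanity check on where those headline numbers come from, the specification’s ceiling follows directly from its own parameters: a 160 MHz channel carries 468 data subcarriers, 256-QAM encodes 8 bits per subcarrier at a 5/6 coding rate, a short-guard-interval symbol lasts 3.6 µs, and up to 8 spatial streams run in parallel:

```latex
R_{\max} \;=\; \frac{N_{SD} \cdot N_{BPSCS} \cdot R_c \cdot N_{SS}}{T_{SYM}}
        \;=\; \frac{468 \times 8 \times \tfrac{5}{6} \times 8}{3.6\,\mu\text{s}}
        \;\approx\; 6.93\ \text{Gbps}
```

That works out to just under 7 Gbps, which is the origin of the various 6.8–6.9 Gbps figures you’ll see quoted.  Real-world throughput is, of course, a good deal lower.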

But we’re getting a bit ahead of ourselves.

If all you’re interested in is whether it’s worth buying Enterprise-class 802.11ac devices today, the answer is yes – buy 802.11ac access points… even if you don’t yet have enough 802.11ac client devices to dictate the choice (and you probably don’t).  Why?  Because 802.11ac is backwards compatible with 802.11n, and APs supporting it are generally compatible with all major 802.11 variants in both the 2.4 GHz and 5 GHz segments of the spectrum.  Plus, because 802.11ac represents an evolution of the older 802.11n specification, by buying Enterprise-class 802.11ac access points (APs) today, what you’re really doing is buying the best of the 802.11n designs and getting 802.11ac as well.

For the majority of customers, it only makes sense to buy 802.11ac access points.

APs and Controllers? 

Wireless Access Points (APs) are the devices on your network which broadcast your Wi-Fi signals, implement encryption/security, and physically bridge your wired Ethernet network with your WLAN.  If you’re coming to the wireless market with experience limited to consumer APs, or Wi-Fi at home, the APs are what you’re probably already familiar with.

In addition to the APs themselves, most Enterprise-Class wireless solutions all but require a wireless controller.  For example, Cisco and Ruckus both have controllers in the form of the Cisco Wireless LAN Controller for the Aironet products, and the Ruckus ZoneDirector controller.  In broad terms, the wireless controllers extend the capabilities of the APs, and use data generated by the APs such that they’re able to work together to optimize signal, connection, and performance for the participating wireless clients.

But what do the controllers actually do?

In real-world terms, controllers do quite a few things.  First, controller-based APs look for and associate with a parent controller automatically, which enables the APs to self-configure and become immediately available for remote management.  Once that happens, the controller can automatically optimize the environment and adjust settings on-the-fly in response to real-world measurements and feedback from the APs.  This means adjusting power levels and RF channel assignments dynamically in order to minimize the impact of interference.  If you recall setting up consumer-grade APs and having to manually set the channels to keep them from interfering with each other, then you’ll be pleased to learn that optimizations like that happen automatically, without administrative burden, on an ongoing basis, providing the best possible signal quality to your end users’ devices.  Beyond optimizations, the controller also provides value-added troubleshooting capabilities that not only consolidate logging information, but also enable an administrator to examine individual client-device performance.  Translated, that means you’ll be able to drill down on complaints of “My Wi-Fi is slow!” in real time and do root-cause analysis (instead of just rebooting the AP and hoping).  Finally, controllers enable more advanced security integration, generally including RADIUS, captive portal, Active Directory integration, dynamic VLAN assignment, and a host of other capabilities.

Beamforming and MIMO

If you’re unfamiliar with how radio frequency (RF) energy propagates from an omni-directional antenna (e.g. a regular Wi-Fi antenna), the propagation looks like a donut-shaped Wi-Fi bubble of coverage emanating out from your APs in 3-dimensional space.


When you think about on-chip beamforming as a technology, you might think in terms of it directing RF energy in a beam-like manner toward the client devices that the AP is communicating with.  Unfortunately, the reality falls somewhere short of that.  The prior standard, 802.11n, specified several beamforming methods, but because there were so many options, and because implementing them drove cost and complexity, most vendors didn’t implement anything on the client side.  And when it comes to the on-chip variety (i.e. nearly every implementation), the benefit that beamforming created in terms of increased signal strength was oftentimes negligible, and occasionally destructive.  The problem really comes down to a lack of data.  Without getting into the weeds: in one version of 802.11n on-chip beamforming, a pair (or more) of omni-directional antennas emit identical signals, but because of timing or transmission paths, those signals arrive at slightly different points in time on the receiving end, with the goal being constructive interference at the point of the receiver.  Something like this…


The challenge is that dead spots and areas of destructive interference combine to minimize the on-chip beamforming advantage, resulting in coverage patterns with gaps in their effective coverage areas.  There’s a lot more to this topic, but that’s the short version.

802.11ac Beamforming

With the 802.11n beamforming iterations generally coming up short, only one beamforming model was specified in 802.11ac, and it works on both sides of the conversation (both the AP and the client devices).  The specification is an evolution of 802.11n, with support for more multiple-input, multiple-output (MIMO) spatial streams (up to 8).  In practical terms, that means up to 8 antennas on both the AP and client sides of the conversation.

Big-picture?  This translates into better & faster Wi-Fi.

To elaborate on MIMO briefly: MIMO relies on interference to create signal diversity, which enhances signal quality.  That may sound counterintuitive at first, so here’s a bit more on how it works.  Instead of using a single carrier transmitting a data set for a short period of time, where temporary interference can damage a signal’s payload, 802.11ac pairs MIMO with OFDM.  Essentially, this means multiple data payloads (“symbols”) are transmitted in parallel on the same frequency, on multiple radios, for longer periods of time, making it easier to recover the signal if there’s temporary interference.  All of the transmitters transmit on the same frequency, but with different data being sent out of each antenna (up to 8 antennas, in the specification).  So the obvious question becomes: how can multiple antennas operate at the same time, on the same frequency, with different data sets, and not create destructive interference?  The answer: a powerful digital signal processor (DSP) and matrix math enable either side to recover the actual signal from each antenna.  The receiving antennas are able to differentiate the data, improving spectrum efficiency to, in real terms, multiply bandwidth by the number of radios (up to 8).  In one application, 3 transmit antennas send 3 different sets of data on the same frequency at the same time, and the 3 receive antennas are still able to unscramble the data, because signal diversity causes the signals to arrive at different points in time.  Effectively, the signals bounce off of things as they propagate outward, ensuring that they all arrive at slightly different points in time; combined with a strong DSP, the AP is able to unscramble and recombine the data streams.
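A highly simplified way to picture that “matrix math” is the textbook narrowband MIMO channel model (a sketch, not any vendor’s implementation): the received vector y is the transmitted vector x of per-antenna symbols, mixed by the channel matrix H, plus noise n, and the DSP’s job is to undo the mixing:

```latex
\mathbf{y} \;=\; \mathbf{H}\,\mathbf{x} + \mathbf{n},
\qquad
\hat{\mathbf{x}} \;=\; \mathbf{H}^{-1}\,\mathbf{y}
\quad \text{(zero-forcing, when } \mathbf{H} \text{ is invertible)}
```

Multipath is exactly what keeps the rows of H distinct enough to invert – which is why the signals “bouncing off of things” described above is a feature, not a bug.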

(What I’ve presented on MIMO is a simplified version of what’s really going on.  I didn’t touch on the client end, the channel measurements that happen on both the transmit and receive sides, or multi-user MIMO.  So if you’re interested in a more comprehensive answer, including an explanation of the matrix math involved in unscrambling the data streams, check out 802.11ac: A Survival Guide, by Matthew S. Gast – you will not be disappointed.)

Put differently… MIMO is all about maximizing the efficient use of the RF spectrum, which translates into more, better, faster Wi-Fi for everyone.  Well, for the most part anyway.  For mobile devices, particularly phones and tablets, it’s going to be quite a while before we see 8 antennas implemented.  The reason is that each additional radio adds substantially to the power demands, shrinking the battery life of those mobile devices.  So while energy consumption may not matter much at the AP, on mobile devices like phones and tablets, every milliwatt counts.

Wouldn’t directional antennas be better than omni-directional antennas?

As I mentioned earlier, most APs use omni-directional antennas that radiate RF energy out indiscriminately in a 3D Wi-Fi coverage bubble.  And even when you add most vendors’ beamforming flavors, you’re not doing anything to increase the size of the coverage zone.  Instead, what you’re really doing is using some clever tricks to take the signal donut and alter the quality (both for good and ill) within the existing coverage range.  In practice, on-chip beamforming with omnidirectional antennas creates coverage areas with both improved and reduced coverage, as approximated below.


And we haven’t even discussed polarization yet.

An antenna provides gain, direction, and polarity to a radio signal.  Polarity is the orientation of the transmission from the antenna.  Antennas produce either vertically polarized (VPOL) or horizontally polarized (HPOL) signals – the polarization axis describes the orientation of the radio waves as they radiate out from the antenna.  In other words, RF energy moves in waves, and those waves move up-and-down, or back-and-forth, in space – that’s the polarization of the wave.


Why does polarization matter?

In a worst-case scenario, a perfectly horizontal receiving antenna may not hear anything transmitted by a perfectly vertical antenna.  While objects can impede or reflect a signal and distort the polarization, signal loss due to polarization mismatch is real, and can potentially prevent communications from occurring.  At the very least, signal strength is reduced.

In terms of a real-world example, recent MacBook designs have housed their antennas in the hinge section of the laptop, giving the antenna a horizontal orientation.  What this means is that a transmission coming from a vertically polarized AP will be harder for the MacBook to hear, and likewise the AP will have a harder time distinguishing the MacBook’s transmissions.  For laptops, the polarization is generally a static condition based on the orientation of the laptop.  Phones and tablets, though?  You tilt their orientation based on whatever you’re doing.  Need a wide screen to watch Netflix?  You hold your phone horizontally.  Reading an article on a web site?  You’ll orient it vertically.  Changes in the device’s physical orientation change the orientation of the RF signal, which results in complaints along these lines: “When I hold my phone a certain way and I’m standing in this location, the Wi-Fi slows down or stops working” (that, and other similarly unusual symptoms).

Here’s why… nearly all Wi-Fi access points use omni-directional dipole antennas that are vertically polarized.  These are considered the norm in the industry, because they were common in the wider field of RF prior to the mass adoption of Wi-Fi.  In the case of a Cisco Aironet 2700 series AP, the omnidirectional antennas housed within the chassis are vertically polarized when the unit is mounted in the traditional orientation (i.e. flat).

To contrast Ruckus’s antenna design with omnidirectional antennas: Ruckus’s design takes a large number of small antenna elements and hooks those up to a digital switch.  The AP learns about the environment, and then uses antenna element array combinations to produce a desirable coverage pattern.  Some of these antenna elements are vertically polarized, and others are horizontally polarized.  By leveraging the CPU in the AP, Ruckus then optimizes antenna patterns for performance, in terms of rate control, power selection, and antenna choices.  Those choices are remembered for each client device, enabling the Ruckus product to make increasingly better decisions as the devices communicate.  So even under 802.11n, where the client devices don’t provide any beamforming feedback to the APs, the Ruckus product is still able to optimize antenna patterns to maximize signal strength based on historical data for the individual client – something the Cisco Aironet products are fundamentally unable to do.  Moreover, since the Ruckus solution uses antenna element arrays, there’s a mix of horizontally polarized, vertically polarized, and directional antenna elements, which can create pattern optimizations that Ruckus is able to employ to optimize signal for both horizontally and vertically polarized client devices.  Put differently, Ruckus’s antenna designs go a long way toward optimizing signals for various device types in the current, bring-your-own-device (BYOD) real world.  What’s more, even though 802.11ac specifies a beamforming implementation that incorporates client-device feedback into channel optimizations, Ruckus is able to optimize for signal strength in a manner that’s fundamentally more diverse than any omni-directional antenna can.
While Cisco does take issue with that claim – in the form of a whitepaper arguing that DSP processing beats using thousands of unique antenna patterns for optimizing the RF signal in real-world situations – Ruckus’s antenna arrays are physically able to compensate for RF polarization, and can even increase the Wi-Fi bubble by employing directional antenna elements that extend RF energy physically in the direction of active clients.

In other words, Ruckus largely eliminates the polarization problem, self-optimizing polarization in favor of improved client communications.  Put differently, Ruckus’s BeamFlex technology is beamforming on steroids.


Unlike on-chip beamforming, the transmit beamforming via Ruckus’s antenna element array provides a fundamentally unique capability relative to the competition, with antennas capable of producing unique coverage patterns over a more focused coverage area, and with less potential for destructive interference than omni-directional antennas.

Where do I go from here?

The Enterprise Wi-Fi industry (i.e. the Enterprise WLAN market) continues to produce more capable access points, with more features.  As more Enterprise-class 802.11ac product sets are released and improved upon, they’re making increasingly efficient use of the available RF spectrum.  This translates into faster Wi-Fi, in denser user environments, with increasing features and capabilities.  This trend is expected to continue for the foreseeable future, as the 802.11ac specification was designed to grow around the anticipated near-term need for more bandwidth, largely by employing more radios and antennas, as well as through the rollout of MU-MIMO in wave 2.

From a marketshare standpoint, Cisco is clearly the dominant player.  As of the last published IDC marketshare report, while the Enterprise Wireless LAN market grew 7.6% year-over-year…

  • Cisco had 46.8% of the market, though down from 53% in the prior year
  • Ruckus saw 20.8% growth in the prior year, growing to 5.7% of the market
  • Aruba’s sales grew 7.9%, increasing to 9.8% of the market

In addition to marketshare, the biggest recent news of course, is that HP is buying Aruba, though it’s far too early to know how that will play out and what effect it will have on the market.

But which vendor is the best?

The major Enterprise WLAN vendors all offer generally competitive products, derived from just a few different chipsets.  From a technical standpoint, the most recent vendor-independent access point analysis and report comes courtesy of Wireless LAN Professionals in 2013.  Notably, the event was not vendor-sponsored, and as such represents the most unbiased and comprehensive assessment that I’ve seen.  It is, however, limited to the last generation of 802.11n products.  What you’ll find if you dig through the reports is that every single access point reached a choking point with respect to maximizing the use of the RF spectrum.  In that report, the best overall combination ranking went to the Ruckus 7982, followed by the Cisco 3602i.  If you look a bit further back, Tom’s Hardware did an excellent article on beamforming – “Beamforming: The Best WiFi You’ve Never Seen”.  While some of it is obviously a bit dated, having been published prior to 802.11ac being standardized, it’s still a great primer on the differences between on-chip and antenna-based beamforming.

The real question is: best for whom, or for what situation?

Absent an updated version of the Wireless LAN Professionals report incorporating the newer 802.11ac products, there’s no comprehensive vendor-neutral assessment that I can point to in order to tell you which device technically has the best overall coverage in this generation of product.  Even if there were, it wouldn’t effectively answer which is best for your environment.  Beyond RF capabilities, there are several factors which differentiate Enterprise 802.11ac WLAN products – everything from design, to ease of use, to management capabilities, scalability, price, and more.  For example, Ubiquiti offers a low-cost, controller-less product that’s generally easy to deploy and manage, and is probably reasonable for most IT resources to deploy.  However, you generally need more access points than you would with a Ruckus deployment, and that means potentially more troubleshooting work.  And unfortunately, Ubiquiti doesn’t really offer technical support comparable to any of the Enterprise-class AP vendors.  Meanwhile, the Cisco Aironet product is an Enterprise-class access point designed by the marketshare leader, and employs Cisco’s IOS, which makes it generally more suited toward larger environments, or environments where the IT resources involved have Cisco-specific experience.  Cisco has good support, but reaching the right support resources in a timely manner, and getting effective support, doesn’t always happen.  Ruckus, on the other hand, employs a unique phased antenna element array design that provides a fundamentally unique advantage relative to the competition – one that was apparent in the vendor-independent assessment of the last generation of products.  Further, Ruckus’s products are generally easy to manage and work with, can be deployed by IT resources with moderate Wi-Fi experience, and their technical support is excellent.

In other words, it’s not necessarily a question of which access point is best, it’s a question of which access point is best for your environment and situation.

My recommendation

Having recently implemented solutions from Cisco, Ruckus, and Ubiquiti, my recommendation would be based on your specific situation.  But generally speaking, I do have a preference for the Ruckus product set.  From a technology standpoint, Ruckus has developed a unique edge in terms of their antenna solution, which provides a fundamentally different capability relative to any of the Enterprise-class WLAN competitors.  What’s more, during the last generation of products, their ranking in the vendor-independent assessment implies that the adaptive antenna design is giving them an edge relative to their competition.  From a support standpoint, Ruckus’s support team has been subjectively better than Cisco’s in my experience.  From an ongoing management standpoint, the Ruckus ZoneDirector interface provides a good dashboard where you can easily drill down on performance and troubleshooting, without necessarily requiring the support of an IT organization or individual.  While Ruckus isn’t the right solution for every situation, it is a very attractive solution that is priced competitively with the Cisco Aironet offering.

Bottom line?

Having slow or poor Wi-Fi coverage is a solvable problem today, and 802.11ac represents the latest revision of the technology standard.  At this point in the cycle, we’re starting to see mature second-generation product sets coming from the major Enterprise WLAN competitors, and they all offer generally competitive and capable products.  So if you’re tired of struggling with unreliable Wi-Fi in your office, we’ll be able to help you find the best solution for your needs and your environment.

Multi-Site WordPress Walkthrough

This guide is intended to provide you with an A-Z walkthrough of installing and configuring a multiple-WordPress-site environment under a single Ubuntu instance, as well as getting your existing site(s) restored into the new environment.  This is typically useful for folks who want to move their WordPress sites from one VPS or hosting provider to another.  The guide may be particularly suited to users of DigitalOcean, as it consolidates many of their excellent knowledge base and how-to articles, and provides solutions to some of the problems that you might run into along the way.  That said, it’s applicable to many environments.  Before you get started, make sure that you’re using something reliable to back up your WordPress sites.

Deploy a new VPS instance 

If you’re restoring from a single-instance WordPress deployment (e.g. DigitalOcean’s 1-click WordPress on Ubuntu 12.x), and you want to move it to a different WordPress VPS instance (e.g. an Ubuntu 14.04 instance on DigitalOcean with WordPress in a multi-tenant configuration), you’ll first need to deploy and configure your new VPS instance.  Follow the “Initial Server Setup with Ubuntu 12.04” guide to get started (it’s close enough to Ubuntu 14.04 to follow).  It basically consists of:

  • Changing your root password
  • Creating a new user
  • Giving the new user root privileges via sudo (visudo)
  • Changing the default port that ssh runs on
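Condensed, the steps above might look something like this on a fresh instance (the user name deploy and port 2222 are placeholders for your own choices; run as root):

```shell
# Initial hardening sketch -- user name and ssh port are placeholders
passwd                                   # change the root password
adduser deploy                           # create a new user
usermod -aG sudo deploy                  # give the new user root privileges via sudo
sed -i 's/^Port 22$/Port 2222/' /etc/ssh/sshd_config   # change the default ssh port
service ssh restart                      # apply the new ssh configuration
```

Remember to reconnect on the new port (ssh -p 2222 deploy@your-server) before closing your root session.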

DNS and LAMP Stack

Next, follow the “How to Set Up a Host Name with DigitalOcean” guide, which is a short primer on DNS.  Then proceed with the “How To Install Linux, Apache, MySQL, PHP (LAMP) Stack on Ubuntu” guide on DigitalOcean.  Basically, what you’re doing here is getting your VPS instance deployed, active, and configured with a LAMP stack.  In short…

  • Install Apache
  • Install & Configure MySQL
  • Install PHP and verify that it’s working
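For reference, the short version of that guide boils down to a handful of commands (the package names here are the Ubuntu 14.04-era ones, e.g. php5 – adjust for newer releases):

```shell
# LAMP stack on Ubuntu 14.04 -- package names are the 14.04-era ones
sudo apt-get update
sudo apt-get install -y apache2                               # Apache
sudo apt-get install -y mysql-server                          # MySQL (prompts for a root password)
sudo apt-get install -y php5 libapache2-mod-php5 php5-mysql   # PHP plus the Apache/MySQL glue
php5 -r 'echo "PHP is working\n";'                            # quick sanity check
```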

Multiple WordPress Sites on a Single Ubuntu VPS

Finally, follow the “How To Set Up Multiple WordPress Sites on a Single Ubuntu VPS” guide.  It has everything you need to get your VPS instance configured to run multiple WordPress sites on a single Ubuntu instance.

  • Download WordPress
  • Create site databases and users in MySQL
  • Configure the site root directories in /var/www/FirstSite, /var/www/SecondSite, etc.
  • Use rsync (rsync -avP /source /var/www/FirstSite) to copy the directory hierarchy over to the site root directories.
  • Configure WordPress (/var/www/FirstSite/wp-config.php) – this is where you link the MySQL databases and users that you created above to this /var/www/FirstSite (etc.) instance.
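A sketch of the database and configuration steps above (FirstDatabase, firstuser, and the password are placeholders – substitute your own values, and repeat per site):

```shell
# One database and one user per site -- all names/passwords are placeholders
mysql -u root -p <<'SQL'
CREATE DATABASE FirstDatabase;
CREATE USER 'firstuser'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON FirstDatabase.* TO 'firstuser'@'localhost';
FLUSH PRIVILEGES;
SQL

rsync -avP ~/wordpress/ /var/www/FirstSite/     # copy the WordPress tree into the site root
cd /var/www/FirstSite
cp wp-config-sample.php wp-config.php           # start from the sample config
# point wp-config.php at the database and user created above
sed -i "s/database_name_here/FirstDatabase/;s/username_here/firstuser/;s/password_here/changeme/" wp-config.php
```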

From here you’ll configure the site’s virtual host:

  • In /etc/apache2/sites-available, copy the default config (cp 000-default.conf FirstSite.conf)
  • In FirstSite.conf, set ServerName, ServerAlias (wildcards work here too), and DocumentRoot /var/www/FirstSite
  • Install the PHP module
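Putting those directives together, a minimal FirstSite.conf might look like this (the domain names are placeholders for your own site):

```apache
<VirtualHost *:80>
    ServerName firstsite.com
    ServerAlias www.firstsite.com
    DocumentRoot /var/www/FirstSite
    <Directory /var/www/FirstSite>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

Enable it with sudo a2ensite FirstSite.conf, then reload Apache (sudo service apache2 reload).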

Restore your WordPress site from backup

At this point, you’re going to want to grab one of those WordPress backups that you’ve been making, and copy it over to your new VPS instance.  If you’re uploading it from a Windows machine, WinSCP will do the trick.  Check the “Restoring Your Database From Backup” guide for more information.

    • Copy the backup archive to your home directory ~/restore
    • unTar/unzip it… tar -zxvf backup.tar.gz
    • Put the backed-up SQL back into your MySQL database that you created earlier (e.g. FirstDatabase).
    • Save a copy of your site root WordPress config file (e.g. cp /var/www/FirstSite/wp-config.php ~/working/wp-config.php) so that you don’t have to re-create it.
    • Next, you’re going to need to dump the restored WordPress files into the site root directory… this can be accomplished via rsync (rsync -avP /source /var/www/FirstSite).
    • Copy your good wp-config.php file, which has the links to your MySQL database, back over (e.g. cp ~/working/wp-config.php /var/www/FirstSite/)
    • Restart Apache (sudo service apache2 restart).
    • Your site should now be working.

Fixing Post Name Permalinks

  • Make sure mod_rewrite is enabled (sudo a2enmod rewrite), then restart Apache (sudo service apache2 restart).
  • Change every occurrence of AllowOverride in /etc/apache2/apache2.conf to AllowOverride All.
  • To eliminate the non-critical informational warning on restarting apache2 (“Could not reliably determine the server’s fully qualified domain name”), edit /etc/apache2/apache2.conf and add the line ServerName localhost.
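The three fixes above can be scripted along these lines (paths are the stock Ubuntu ones; the sed shown flips the /var/www Directory block – repeat for other Directory blocks as needed):

```shell
# Permalink fixes -- assumes the stock Ubuntu apache2.conf layout
sudo a2enmod rewrite                                 # enable mod_rewrite
# allow .htaccess overrides in the /var/www Directory block
sudo sed -i '/<Directory \/var\/www\/>/,/<\/Directory>/ s/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
# silence the fully-qualified-domain-name warning
echo "ServerName localhost" | sudo tee -a /etc/apache2/apache2.conf
sudo service apache2 restart
```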

VAX Virtualization Explored under Charon


“Hey… you know that plant of ours in Europe… the one with all of the downtime?”

“Sure… “

“Did you know it runs on a 30-year old VAX that we’re sourcing parts for off of Ebay?”

“Really?! … I guess that makes 4 plants that I know of in the exact same situation!”

That conversation, or one very much like it, is being had at thousands of publicly traded companies and government organizations around the world.  If you’re a sysadmin, a VMware resource, or a developer who got their start anytime in the x86 era, you’ll be forgiven if the closest you’ve come to hardware from Digital Equipment Corp (DEC)/HP is maybe an Alpha/NT box somewhere along the line.  You’d also be forgiven for assuming that VAX hardware from the 1970s doesn’t still run manufacturing lines that produce millions of dollars in products a year.

But that’s exactly what’s happening.

… And so is the Ebay part of the equation.

To hear the Alpha folks talk, those old platforms were bulletproof and would run forever.  Perhaps not in exactly the same way that large swaths of the banking industry still run on COBOL, but it’s an apt comparison.  The biggest difference is that code doesn’t literally rust away.  The DEC/HP Alpha hardware is engineered to something like Apollo-era reliability standards… but while they stopped flying Saturn V’s 40 years ago, these VAX machines are still churning away.  Anyway, there’s a joke that goes something like… you know how some sysadmins used to brag about their *nix system uptimes being measured in years (before Heartbleed and Shellshock)?

Well, VAX folks brag about uptimes measured in decades.

Crazy, isn’t it?

You might be sitting there asking yourself how we got to this situation.  In simple business terms… if it ain’t broke (and you can’t make any extra margin by fixing it), don’t fix it!

I know lots of IT folks have this tendency to think in 1-3 year time-spans.  I get it.  We like technology, the latest gadgets, and sometimes have an unfortunate tendency to argue about technica-obscura.  But that’s only really because “technology moves so fast”, right?  Yes, there’s Moore’s law, and the Cloud, and mobility, and all of that stuff.  Yes, technology does move fast.  But business… business doesn’t really care about how fast technology moves beyond the context of whether it can benefit them.  In other words, you use assets for as long as you can extract value from them.

That’s just good business.

What’s the objective of this project?

The primary objective is to mitigate risk – the risk that a critical hardware failure will occur that takes production off-line for an indeterminate amount of time.  Secondary objectives include modernizing the solution, improving disaster recovery capabilities, eliminating proprietary or unsupported code, and cleaning up any hidden messes that might have collected over the years.

Put differently, the question really is – can we virtualize it and buy some more time, or do we need to re-engineer the solution?

Starting with a quick overview of the project in question… The CLI looks vaguely familiar, but requires a bit of translation (or a VAX/VMS resource) to interact with it.  Starting lsnrctl returns an Oracle database version… for which, unfortunately, several searches return precisely zero results.  Under-documented versions of Oracle are always a favorite.  Backups to tape are reportedly functioning, and there’s also a Windows reporting client GUI (binary only, of course) from a long-defunct vendor.  The good news this time around… the platform is apparently functional and in a relatively “good” working state.  The bad news… there is no support contract for anything.  Not for the hardware, not for Oracle, and certainly not for the Windows reporting client.  In this case, the legacy VAX is basically a magical black-box that just runs and gives the customer the data they need.  And at this point, all institutional knowledge beyond running very specific command sets has been lost – which isn’t atypical for 20-30 year old platforms.

Which brings us to the question – virtualize, or re-engineer?

Virtualizing a VAX

To start with, most VAX/VMS operating systems are designed for specific CPU types, so virtualizing directly using something like VMware or Hyper-V is a non-starter.  But those CPU architectures and configurations are pretty old now.  Like, 20-30 years old.  That makes them candidates for brute-force emulation.  And there are a few choices of emulator out there… including open-source options like SIMH and TS10, as well as commercial solutions like NuVAX and Charon.  After doing a bit of research, it’s pretty clear that there was only one leading commercial offering for my use case… Charon, from a company called Stromasys.  While there may be merit in exploring open-source alternatives further, the reality is that the open-source community for VAX system development isn’t exactly active in the same sense that the Linux OS community is active.  So if you do go down the open-source path, keep in mind that some of the solutions aren’t even going to be able to do what you might think of as simple and obvious things… like, say, boot OpenVMS.  Which is pretty limiting.

Charon Overview

Aside from the Greek mythology reference to the ferryman who transported the dead across the river Styx, Charon is also a brand name for a group of products (CHARON-AXP, CHARON-VAX) that emulate several CPU architectures, covering most of the common DEC platforms.  You know… things like OpenVMS, VAX, AlphaServer, MicroVAX 3100, and other legacy operating systems.  Why the name Charon?  Like the mythological boatman who, for a price, keeps the dead from being trapped on the wrong side of the river (e.g. old failing hardware), Charon transports the legacy platform unchanged between the two worlds (legacy and modern).  In the same way that running a P2V conversion on, say, Windows NT lets you run a 20-year-old Windows asset under vSphere ESXi, Charon lets you run your legacy VAX workloads unchanged on modern hardware.  In other words, you can kind of think of Charon like a P2V platform for your legacy VAX/VMS systems.  Of course, that’s a wildly inaccurate way to think about it, but that’s basically the result you effectively get.

How does Charon Work?

Charon is an emulator… its secret sauce is that it does the hard work of converting instructions written for a legacy hardware architecture so that you can run them on an x86/x64 CPU architecture, and do so quickly and reliably.  Because Charon enables you to run your environment unchanged on the new hardware, not only do you get to avoid the costly effort of re-engineering your solution, but you can also usually avoid the painful effort of reinstalling your applications, databases, etc.  Under the hood, what Charon is essentially doing is creating a new hardware abstraction layer (HAL) to sit on top of your x86/x64-compatible physical or virtual hardware.  The Charon emulator creates a model of the DEC/HP Alpha hardware and I/O devices.  Once you have the Charon emulator installed, you have an exact working model on which you can install your DEC/HP/VMS operating system and applications.  Charon systems then execute the same binary code that the old physical hardware did.  Here’s what the whole solution stack looks like mashed together:


Yes, lots of layers.  But even so, because of the performance difference between the legacy hardware and modern platforms, you typically get a performance boost in the process.

What do I need?

Assuming you have a running legacy asset that’s compatible with Charon, all you need is a destination server.  In my case, the customer had an existing vSphere environment, and existing backup/recovery capabilities, so all that was really needed was an ESXi host to run a new VM on, and the licensing for Charon.

The process at 30,000 feet looks like this:

  1. Add a new vSphere (5.5x) host
  2. Deploy a Windows 2008 R2 (or Linux) VM from a template
  3. Use image backups to move your system to the VM
  4. Restore databases from backup
  5. Telnet into your Charon instance

At a high-level, it really is that simple.

How challenging is the installation? 

If you skim the documentation before installing, it shouldn’t be an issue.  Assuming you have access to the legacy host, you can take an inventory of the legacy platform… you basically need to grab a list of things like CPU architecture, OS version, tape drive type, etc. (e.g. SHO SYS, SHO DEV, SHO LIC, SHO MEM, SHO CLU, etc.), which will enable you to get the right Charon licenses.  After that, you’ll be ready to step through the installation.  This isn’t a next/next/finish setup, but once you’ve got the USB dongle set up and created a VM based on the recommended hardware specifications, you’re well on your way.
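For reference, those abbreviated DCL commands expand as follows (the ! comments are mine; the exact output varies by system):

```
$ SHOW SYSTEM    ! OS version, uptime, and running processes
$ SHOW DEVICE    ! disks, tape drives, and other devices
$ SHOW LICENSE   ! installed license PAKs
$ SHOW MEMORY    ! physical memory configuration
$ SHOW CLUSTER   ! VMScluster membership, if any
```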

Restoring the data from the legacy hardware onto the new VM can be a bit more involved.  In a perfect world, you’d be able to restore directly from tape into the new Windows VM – assuming you have the right tape drive, good backups to tape, etc.  Short of that, you’ll need to back up and restore the legacy drives into the new environment.  So you’re going to take image backups of each drive, and then upload the backups to your new VM.  More specifically, do a backup from drive A0 to A1, then A1 to A2, etc.  Upload the A0 backup to your new VM and restore the data.  Proceed like that until you’ve completed all of the restores.  In this manner, you’ll be able to preserve your operating system, database installation, and any other applications without going through the time-consuming installation and configuration process.  As a result, you avoid troublesome things like version mismatches, missing media, and poor documentation.  After the backups are restored, Charon is able to take those restored files that exist on the parent VM and boot them as local storage – and you’re off and running.
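In DCL terms, each hop in that chain is an image backup of one drive into a save set on the next.  The device names below (DUA0:, DUA1:, DUA2:) are purely hypothetical stand-ins for the A0/A1/A2 drives described above – a sketch of the pattern, not the exact commands from this project:

```
$ ! Image backup of drive A0 into a save set on drive A1 (hypothetical device names)
$ BACKUP/IMAGE/VERIFY DUA0: DUA1:[BACKUPS]A0.BCK/SAVE_SET
$ ! ... then A1 onto A2, and so on down the chain
$ BACKUP/IMAGE/VERIFY DUA1: DUA2:[BACKUPS]A1.BCK/SAVE_SET
```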

What does Charon look like?

After you’ve installed Charon, the management interface is accessible via the system tray.


If you’re thinking that’s pretty bare-bones, then you’re right.  Once you’ve installed and configured Charon, there’s just not a lot to do from the management interface.

How do I login to the console?

In order to access the legacy OS console and CLI, you’ll simply fire up your favorite telnet client and point it at the IP address of your Charon system.


Which should resemble the old physical console.

Issues Encountered 

While the 30,000-foot process outlined earlier in the article is essentially what was followed, the biggest problem we ran into is probably exactly what you’d guess it to be.  The Oracle database.  Unsupported and under-documented as it was, we ran into several problems restoring successfully from tape.  While not a problem with Charon, the reality is that very old and unsupported platforms can have problems that go undetected for years.  While this was resolved within the planned budget, it was still inconvenient.  And it should serve as a reminder that, to the extent it’s possible to have a support contract in place for critical components, you should.  At the same time, that’s not always the boots-on-the-ground situation.

The verdict?

We successfully mitigated the risk associated with the failing hardware in the environment, which was our primary objective.  Using Charon, we were able to pull forward the legacy environment, running it on new, supported hardware.  Between re-installing the legacy OS under Charon and restoring our application data and backups via tape, we were able to meet some of the secondary objectives as well.  As a Windows 2008 R2 VM running on a dedicated vSphere host in the customer’s datacenter, we have something modernized (at least to some extent) that plugs into the existing backup infrastructure.  With Veeam Backup and Replication and a standard backup policy with defined RPO and RTO objectives, we have something the client has a high degree of confidence in.

Redmine LDAP Integration – Active Directory Configuration

After you have Redmine installed and configured to the point where you can log in – go ahead and do so. Browse to Administration>Settings>Authentication tab>LDAP Configuration (in the bottom right).

Before you go and start changing things here, there are a few things you should keep in mind that will save you some time. Realize that you can’t do an anonymous bind to Active Directory. So, you need to actually specify a valid set of credentials for the service account. Now, I suppose they could have done something different here to reduce the configuration work… like relying on user login credentials and passing them through to query AD. But in any event, a normal domain user account should do just fine – anything that can query Active Directory. Why a domain account? Think about it another way… if someone plugged their laptop into your network, would they be able to query AD for user or computer objects? No… they wouldn’t, because they’d be anonymous. Even if they knew your domain name, had a domain controller’s IP address, the distinguished name, etc… no luck. So create a service account. Just FYI, my domain was at the 2003 domain functional level.

As far as the Base DN goes – keep it simple… base DN means base. You probably don’t want CN=users, or CN=MyBusiness, or anything like that. In my case, I specified DC=domain,DC=local. As for the attributes, they all come right out of Active Directory… there’s a bunch of places you could find these if you wanted to spend the time to find them. Or, there’s a bunch of sites that already have this stuff listed (see below for my config).

When you’re specifying the attributes, keep in mind that you don’t want any extra spaces (blank spaces) after the attributes. For instance, it should be ‘SAMAccountName’ (no quotes), NOT ‘SAMAccountName ‘. If you add a space, it breaks. If you don’t have those “optional” attributes, it breaks. Also – just FYI… if you’re under Authentication and trying to run a “Test” of authentication, and it says successful – that doesn’t mean it’s actually working. You need to test Active Directory account logins from back on the main menu.

If you want to use on-the-fly account creation… you’ll need to make sure all of your attributes are set correctly and that, within Active Directory, the attribute fields actually contain data for your users. This is very important. For example, if you have a user trying to log in, but their account has the “First Name”, and/or “Last Name”, and/or “E-mail” address fields blank (like a “test” user account) – automatic user account creation in Redmine will fail. On top of that – it’s not very verbose about why it failed. So that might be something to file away in the back of your mind, so that when you find one account (or a group of accounts) somewhere that won’t log in – you can make sure to check that they have all of the Active Directory attributes specified (just open up Active Directory Users and Computers and check out the user object that is having a problem).

My Settings:

  • Name: YourDomainOrWhateverYouWant
  • Host: IP address of a Domain Controller (name is probably best)
  • Port: 389
  • Account: Domain\ServiceAccountRedmine01
  • Password: SavedPassword
  • Base DN: DC=domain,DC=local
  • Login: SAMAccountName
  • First Name: givenName
  • Last Name: SN
  • Email: mail

AD: How to Determine the Last Logon time of users

Your ability to determine last logon time really depends on the AD level that you’re at.

For information on the below attributes (and more), check here.

Pre-2003 AD: You can’t do it.
2003 AD: Look at the lastLogon attribute on each DC (it isn’t replicated).
2003 domain functional level: Look at the lastLogonTimestamp attribute (replicated).
2008: Check the msDS-LastSuccessfulInteractiveLogonTime attribute.

If you’re not at the 2008 or 2003 domain functional level and you want to determine the last logon time, you can use adfind to query each DC, get the timestamp in NT time epoch format (time measured in 100-nanosecond intervals since 1/1/1601), and then use w32tm /ntte to convert the stamp into a readable format… Date, Hour:min:second.

adfind -h DC1:389 -b dc=domain,dc=local -f "objectcategory=person" lastlogon > DC1.txt

adfind -h DC2:389 -b dc=domain,dc=local -f "objectcategory=person" lastlogon > DC2.txt

… and so on for each DC.

To convert the lastlogon time, take the timestamps for the users that you’re interested in and convert them…

w32tm /ntte value1
w32tm /ntte value2

… and so on.

Then you can compare each. At the 2003 functional level the lastLogonTimestamp attribute is replicated to each DC – so it’s a single source of truth. In 2008 it gets even better, with last logons, last failed logons, and more. With some diligence, you can probably take the above steps, do some further learning around them to improve things a bit, and then script the logic. But for one-offs and small networks, this works.
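If you’d rather not round-trip through w32tm, the conversion is simple arithmetic: divide the FILETIME value by 10^7 to get seconds since 1601, subtract the 11,644,473,600 seconds between 1601 and the Unix epoch, and hand the result to GNU date. The lastlogon value below is a made-up example, not real adfind output:

```shell
# Convert an NT FILETIME (100-ns ticks since 1601-01-01) to a readable date.
# 11644473600 = seconds between 1601-01-01 and 1970-01-01 (the Unix epoch).
LASTLOGON=133497000000000000   # hypothetical value pulled from an adfind dump
UNIX_SECS=$(( LASTLOGON / 10000000 - 11644473600 ))
date -u -d "@$UNIX_SECS" '+%Y-%m-%d %H:%M:%S'   # -> 2024-01-14 10:00:00
```

(date -d is GNU date; on BSD/macOS the equivalent is date -u -r "$UNIX_SECS".)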
