
Wi-Fi Guide

The conversation always seems to start with slow Wi-Fi, doesn’t it?

Maybe it’s employees telling you that they’re getting kicked off the Wi-Fi at your office.  Or maybe you’re just tired of having the receptionist reboot your access points every other day (even if that seems to fix things for a bit).  Bottom line… if you can watch Netflix in bed from your iPad, then you’d better at least be able to check email at the office!  What if the quality of your Wi-Fi connection at work was top-notch?  What if you knew you could support hundreds of connections – phones, laptops, tablets – and you knew that they’d be able to work online without headaches, without downtime… and most of all, without complaints every other day?  Would you be interested?

Of course it’s possible.  In fact, the expectation today should be consistent, reliable Wi-Fi.

In 2015, we’re in what’s called the fifth generation of Wi-Fi technology, having finally arrived in the 802.11ac era.  This represents the fifth major iteration of Wi-Fi as a technology standard.  It’s hard to believe when you look back, but Wi-Fi is now nearly two decades old.  What started off in the late 1990s with 2 Mbps data rates, and a platform designed for occasional use in conference rooms, has evolved into a standard that can support gigabit speeds, dense user populations, and video streaming without breaking a sweat.

In other words, if you have slow Wi-Fi service today, it’s a problem that can be solved.

Why is your current Wi-Fi slow?

A common narrative goes something like this… “We’ve been using these wireless access points for a while, and up until recently we almost never had a problem.  Now?  Everything just seems really slow.  We reboot the APs every so often and that seems to help… for a while… but the problem isn’t going away.”

Oftentimes, a quick walkthrough reveals a mix of consumer-grade APs, placed at random throughout the environment.  While that’s not necessarily a root cause, it is a warning sign, and at the very least something to investigate.  What can make this especially challenging for the average business owner is that the design of their wireless network (WLAN) probably served them reasonably well up until fairly recently.  The explosion of portable devices and rising end-user expectations are the major market forces at work, combining to strain older WLANs.

The flavors of 802.11

The Institute of Electrical and Electronics Engineers (IEEE) defines the 802.11 family of protocols, which provide the basis for wireless network products using the Wi-Fi brand.  The whole history of the protocol family is beyond the scope of this article, but here’s a short summary:

  • 1997 – 802.11 supports up to 2 Mbps
  • 1999 – 802.11b supports up to 11 Mbps
  • 1999/2003 – 802.11a/g support up to 54 Mbps
  • 2009 – 802.11n supports up to 600 Mbps
  • 2013 – 802.11ac supports up to 6.9 Gbps

What’s a bit misleading is that just because the first 802.11ac chipsets appeared in 2012 (the standard itself wasn’t formally ratified until late 2013), doesn’t mean that mature enterprise 802.11ac access points were available at the time.  For example, while Quantenna and Broadcom were shipping 802.11ac Wi-Fi chipsets in 2012, and a couple of consumer-grade 802.11ac access points shipped that year, it wasn’t until over a year later that Cisco shipped 802.11ac in their Aironet product line-up.  It was even later than that for Ruckus.  What you may or may not realize is that vendors like Cisco, Ruckus, Aruba, HP, and Apple don’t actually make the Wi-Fi chipsets that go into their products.  Instead, much as Dell buys CPUs from Intel, these vendors contract with a chipset vendor like Atheros, Broadcom, or Marvell.  The chipset vendors take the 802.11 standard specifications and design chipsets that actually implement the 802.11ac standard – typically the chips themselves, plus software drivers, and sometimes an operating system to run on them.  The major integrators of the world then buy the chipsets and related components in volume from the upstream vendors, and implement those chipsets in their hardware in the form of a product like Cisco’s Aironet, or Ruckus’s R700 access points.

So if everyone is using the same basic building blocks, what differentiates the products?

The same thing that differentiates, say, a Samsung Galaxy Android phone from an Apple iPhone… it’s about how well the components work together, and the experience they provide end-users.  Early in a product lifecycle – say, the first generation of 802.11ac – there’s usually relatively little “value-add” that the vendors build into the product.  The first generation is often mostly about integrating the various hardware components effectively.  As they iterate through designs, vendors build their own intellectual property on top of the chipsets to differentiate their offering from the competition.  In the case of Ruckus, a major component of their intellectual property, or their value-add, comes in the form of the antenna system (more below), which differentiates their offering fundamentally from most of the omnidirectional-antenna competition.  But all of this takes time.  Even today, with 802.11ac several years old (at least on paper), enterprise-class 802.11ac hardware is still relatively young, and is still improving as “wave 2” 802.11ac devices start popping up on the horizon.

So what do I get out of 802.11ac?  

It’s faster, and can support more devices than the last generation of products.

How much faster?  That depends on quite a few variables.  Under the right conditions, 802.11ac is up to 3x as fast as 802.11n, and it has the potential to be quite a bit faster down the line, with the 802.11ac specification written to support up to 8 antennas, multi-user MIMO, and data rates up to 6.9 Gbps.
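If you’re curious where headline numbers like that come from, the arithmetic is simple enough to sketch.  Here’s a rough calculation (a sketch, not vendor data) using the textbook 802.11ac building blocks – 468 data subcarriers in a 160 MHz channel, 256-QAM at a 5/6 coding rate, and 3.6 µs symbols with the short guard interval:

```python
# Peak 802.11ac PHY rates from the spec's building blocks.
DATA_SUBCARRIERS_160MHZ = 468   # data subcarriers in a 160 MHz channel
BITS_PER_SYMBOL_256QAM = 8      # 256-QAM carries 8 bits per subcarrier
CODING_RATE = 5 / 6             # highest 802.11ac coding rate
SYMBOL_TIME_S = 3.6e-6          # OFDM symbol time, short guard interval

def phy_rate_gbps(spatial_streams):
    bits_per_symbol = (DATA_SUBCARRIERS_160MHZ * BITS_PER_SYMBOL_256QAM
                       * CODING_RATE * spatial_streams)
    return bits_per_symbol / SYMBOL_TIME_S / 1e9

for streams in (1, 3, 8):
    print(f"{streams} spatial stream(s): {phy_rate_gbps(streams):.2f} Gbps")
# 1 stream -> 0.87 Gbps; 3 -> 2.60 Gbps; 8 -> 6.93 Gbps (the spec's ceiling)
```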

But we’re getting a bit ahead of ourselves.

If all you’re interested in is whether it’s worth buying enterprise-class 802.11ac devices today, the answer is yes – buy 802.11ac access points… even if you don’t have enough 802.11ac client devices yet to dictate it (and you probably don’t).  Why?  Because 802.11ac is backwards compatible with 802.11n, and APs supporting it are generally compatible with all major 802.11 variations in both the 2.4 GHz and 5 GHz segments of the spectrum.  Plus, because 802.11ac represents an evolution of the 802.11n specification, by buying 802.11ac enterprise-class access points (APs) today, what you’re really doing is buying the best of the 802.11n designs, and getting 802.11ac as well.

For the majority of customers, it only makes sense to buy 802.11ac access points.

APs and Controllers? 

Wireless Access Points (APs) are the devices on your network which broadcast your Wi-Fi signals, implement encryption/security, and physically bridge your wired Ethernet network with your WLAN.  If you’re coming to the enterprise wireless market with experience from consumer APs, or Wi-Fi at home, the APs are what you’re probably already familiar with.

In addition to the APs themselves, most enterprise-class wireless solutions all but require a wireless controller.  For example, Cisco and Ruckus both have controllers: the Cisco Wireless LAN Controller for the Aironet products, and the Ruckus ZoneDirector.  In broad terms, wireless controllers extend the capabilities of the APs, using data generated by the APs so that they can work together to optimize signal, connection quality, and performance for the participating wireless clients.

But what do the controllers actually do?

In real-world terms, controllers do quite a few things.  First, controller-based APs look for and associate with a parent controller automatically, which enables the APs to self-configure and become immediately available for remote management.  Once that happens, the controller can automatically optimize the environment and adjust settings on-the-fly in response to real-world measurements and feedback from the APs.  This means adjusting power levels and RF channel assignments dynamically in order to minimize the impact of interference.  If you recall setting up consumer-grade APs and having to manually set the channels to keep them from interfering with each other, then you’ll be pleased to learn that optimizations like that happen automatically, and without administrative burden, on an ongoing basis – providing the best possible signal quality to your end-users’ devices.

Beyond optimizations, the controller also provides value-added troubleshooting capabilities that not only consolidate logging information, but also enable an administrator to examine individual client-device performance.  Translated, that means you’ll potentially be able to drill down on complaints of “My Wi-Fi is slow!” in real-time and do root-cause analysis (instead of just rebooting the AP and hoping).  Beyond the above, controllers also enable more advanced security integration capabilities – RADIUS, captive portals, Active Directory integration, dynamic VLAN assignment, and a host of other capabilities.
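To make the channel-assignment piece concrete, here’s a toy sketch of the kind of optimization a controller automates: give each AP the non-overlapping 2.4 GHz channel least used by the neighbors it can hear.  Real controllers weigh measured interference, transmit power, and airtime; the AP names and neighbor graph below are purely hypothetical:

```python
# Greedy channel assignment: each AP takes the 2.4 GHz channel
# (1, 6, or 11) least used among the APs it can hear.
NON_OVERLAPPING = (1, 6, 11)

# Hypothetical neighbor graph: which APs can hear which.
neighbors = {
    "ap-lobby":     ["ap-office", "ap-warehouse"],
    "ap-office":    ["ap-lobby", "ap-warehouse"],
    "ap-warehouse": ["ap-lobby", "ap-office"],
}

channels = {}
for ap, heard in neighbors.items():
    in_use = [channels[n] for n in heard if n in channels]
    # Pick the channel with the fewest audible neighbors already on it.
    channels[ap] = min(NON_OVERLAPPING, key=in_use.count)

print(channels)  # {'ap-lobby': 1, 'ap-office': 6, 'ap-warehouse': 11}
```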

Beamforming and MIMO

If you’re unfamiliar with how radio frequency (RF) energy propagates from an omni-directional antenna (i.e. a regular Wi-Fi antenna), the propagation looks like a donut-shaped bubble of Wi-Fi coverage emanating out from your AP in three-dimensional space.

[Figure: donut-shaped Wi-Fi coverage emanating from an AP in 3D]

When you think about on-chip beamforming as a technology, you might imagine it directing RF energy in a beam-like manner toward the client devices the AP is communicating with.  Unfortunately, the reality falls somewhat short of that.  The prior standard, 802.11n, specified several beamforming methods, but because there were so many options, and because implementing them drove cost and complexity, most vendors didn’t implement anything on the client side.  And when it comes to the on-chip variety (i.e. nearly every implementation), the benefit that beamforming created in terms of increased signal strength was often negligible, and occasionally destructive.  The problem really comes down to a lack of data.  Without getting into the weeds: in one version of 802.11n on-chip beamforming, a pair (or more) of omni-directional antennas emit identical signals, but because of timing or transmission paths, those signals arrive at slightly different points in time on the receiving end, with the goal being constructive interference at the point of the receiver.  Something like this…

[Figure: constructive and destructive interference pattern between two antennas]

The challenge is that dead spots and areas of destructive interference combine to erode the advantage of on-chip beamforming, resulting in coverage patterns with gaps in their effective coverage areas.  There’s a lot more to this topic, but that’s the short version.
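The underlying physics is easy to demonstrate numerically: two identical carriers arriving with a relative phase offset sum to an amplitude of 2·|cos(offset/2)| – double the signal when they’re in phase, a dead spot when they’re half a wavelength apart.  A quick sketch:

```python
import math

def combined_amplitude(phase_offset_rad):
    # Amplitude of sin(x) + sin(x + p), via the identity
    # sin(x) + sin(x + p) = 2 * sin(x + p/2) * cos(p/2).
    return abs(2 * math.cos(phase_offset_rad / 2))

print(combined_amplitude(0))            # 2.0   -> fully constructive
print(combined_amplitude(math.pi / 2))  # ~1.41 -> partial reinforcement
print(combined_amplitude(math.pi))      # ~0.0  -> destructive: a dead spot
```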

802.11ac Beamforming

With the 802.11n beamforming iterations generally coming up short, only one beamforming model was specified in 802.11ac, and it works on both sides of the conversation (both the AP and the client devices).  The specification is an evolution of 802.11n, with support for more multiple-input, multiple-output (MIMO) spatial streams (up to 8).  In practical terms, that means up to 8 antennas on both the AP and client side of the conversation.

Big-picture?  This translates into better & faster Wi-Fi.

To elaborate on MIMO briefly: MIMO relies on interference to create signal diversity, which enhances signal quality.  It may sound counterintuitive at first, so here’s a bit more on how it works.  Instead of using a single carrier transmitting a data set for a short period of time, where temporary interference can damage a signal’s payload, 802.11ac uses OFDM, spreading each transmission across many parallel subcarriers with longer symbol times, making it easier to recover the signal if there’s temporary interference.  MIMO then adds spatial multiplexing on top: all of the transmitters transmit on the same frequency, but with different data being sent out of each antenna (up to 8 antennas, in the specification).  So the obvious question becomes: how can multiple antennas operate at the same time, on the same frequency, with different data sets, and not create hopeless interference?  The answer?  A powerful digital signal processor (DSP) and matrix math enable either side to recover the actual signals on each antenna.  The receiving antennas are able to differentiate the data, improving spectrum efficiency to, in real terms, multiply bandwidth by the number of radios (up to 8).  In one application, 3 transmit antennas send 3 different sets of data on the same frequency at the same time, and the 3 receive antennas are still able to unscramble the data, because the signal diversity causes the signals to arrive at different points in time.  Effectively, the signals bounce off of stuff as they propagate outward, ensuring that each stream takes a slightly different path, and when combined with a strong DSP, the receiver is able to unscramble and recombine the data streams.
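Here’s a minimal numeric sketch of that unscrambling, using NumPy.  Three streams are mixed together by a multipath channel matrix H, and the receiver recovers them by inverting the channel (the zero-forcing approach).  It’s deliberately simplified: real receivers estimate H from training symbols and contend with noise, neither of which is modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.choice([-1.0, 1.0], size=3)  # 3 different symbol streams, sent at once
H = rng.normal(size=(3, 3))          # multipath channel: 3 TX -> 3 RX paths
y = H @ x                            # what the 3 receive antennas hear: mixed!

x_hat = np.linalg.solve(H, y)        # the "matrix math": invert the channel
print(np.allclose(x, x_hat))         # True -- all 3 streams recovered
```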

(What I’ve presented on MIMO is a simplified version of what’s really going on.  I didn’t touch on the client end, the channel measurements that happen on both the transmit and receive sides, or multi-user MIMO.  So if you’re interested in a more comprehensive answer, including an explanation of the matrix math involved in unscrambling the data streams, check out 802.11ac: A Survival Guide, by Matthew S. Gast, and you will not be disappointed.)

Put differently… MIMO is all about maximizing the efficient use of the RF spectrum.  Which translates into more, better, faster Wi-Fi for everyone.  Well, for the most part anyway.  For mobile devices, particularly phones and tablets, it’s going to be quite a while before we see 8 antennas implemented.  The reason is that each additional radio adds significantly to the power demands, shrinking the battery life of those mobile devices.  So while energy consumption may not matter much at the AP, on mobile devices like phones and tablets, every milliwatt counts.

Wouldn’t directional antennas be better than omni-directional antennas?

As I mentioned earlier, most APs use omni-directional antennas that radiate RF energy out indiscriminately in a 3D coverage bubble.  And even when you add most vendors’ beamforming flavors, you’re not doing anything to increase the size of the coverage zone.  Instead, what you’re really doing is using some clever tricks to take the signal donut and alter the quality (both for good and ill) within the existing coverage range.  In practice, on-chip beamforming with omnidirectional antennas creates coverage areas with regions of improved, as well as reduced, coverage, as approximated below.

[Figure: approximate on-chip beamforming coverage pattern, with improved and degraded regions]

And we haven’t even discussed polarization yet.

An antenna provides gain, direction, and polarization to a radio signal.  Polarization is the orientation of the transmission from the antenna.  Antennas produce either vertically polarized (VPOL) or horizontally polarized (HPOL) signals – the polarization axis describes the orientation of the radio waves as they radiate out from the antenna.  In other words, RF energy moves in waves, and those waves move up-and-down, or back-and-forth, in space – that’s the polarization of the wave.

[Figure: vertical vs. horizontal polarization of a radio wave]

Why does polarization matter?

In a worst-case scenario, a perfectly horizontal receiving antenna may not hear anything transmitted by a perfectly vertical antenna.  While objects can impede or reflect a signal and distort the polarization, signal loss due to polarization mismatch is real, and can potentially prevent communications from occurring.  At the very least, signal strength is reduced.
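You can put rough numbers on this.  In a simplified free-space model, received power scales with cos² of the angle between the transmit and receive polarization axes, so the mismatch loss in decibels is -20·log10(cos θ).  Reflections scramble polarization in the real world and soften the worst case, but the trend holds:

```python
import math

def mismatch_loss_db(theta_degrees):
    # Received power ~ cos^2(theta) => loss = -20 * log10(cos(theta)) dB.
    c = abs(math.cos(math.radians(theta_degrees)))
    return float("inf") if c < 1e-12 else -20 * math.log10(c)

for angle in (0, 30, 45, 60, 90):
    print(f"{angle:3d} deg mismatch -> {mismatch_loss_db(angle):6.2f} dB loss")
# 0 -> 0.00, 30 -> 1.25, 45 -> 3.01, 60 -> 6.02, 90 -> inf (cross-polarized)
```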

In terms of a real-world example, recent MacBook designs have housed their antennas in the hinge section of the laptop, giving the antenna a horizontal orientation.  What this means is that a transmission coming from a vertically polarized AP will be harder for the MacBook to hear, and likewise the AP will have a harder time distinguishing the MacBook’s transmissions.  For laptops, the polarization is generally a static condition based on the orientation of the laptop.  Phones and tablets, though?  You tilt their orientation based on whatever you’re doing.  Need a wide screen to watch Netflix?  You hold your phone horizontally.  Reading an article on a web site?  You’ll orient it vertically.  Changes in the device’s physical orientation change the orientation of the RF signal.  Which results in complaints along these lines: “When I hold my phone a certain way and I’m standing in this location, the Wi-Fi slows down or stops working” (that and other similarly unusual symptoms).

Here’s why… nearly all Wi-Fi access points use omni-directional dipole antennas that are vertically polarized.  These are considered the norm in the industry, because they were common in the wider field of RF prior to the mass adoption of Wi-Fi.  In the case of a Cisco Aironet 2700 series AP, the omnidirectional antennas housed within the chassis are vertically polarized when the AP is mounted in the traditional orientation (i.e. flat).

[Figure: Ruckus antenna element array]

To contrast Ruckus’s antenna design with omnidirectional antennas: Ruckus’s design takes a large number of small antenna elements and hooks those up to a digital switch.  The AP learns about the environment, and then uses combinations of antenna elements to produce a desirable coverage pattern.  Some of these antenna elements are vertically polarized, and others are horizontally polarized.  By leveraging the CPU in the AP, Ruckus then optimizes antenna patterns for performance, in terms of rate control, power selection, and antenna choices.  The choices are remembered for each client device, enabling the Ruckus product to make increasingly better decisions as the devices communicate.  So even under 802.11n, where the client devices aren’t providing any beamforming feedback to the APs, the Ruckus product is still able to optimize antenna patterns to maximize signal strength based on historical data for the individual client.  This occurs in a manner that the Cisco Aironet products are fundamentally unable to match.  Moreover, since the Ruckus solution uses antenna element arrays, there’s a mix of horizontally polarized, vertically polarized, and directional antenna elements, which can create pattern optimizations for both horizontally and vertically polarized client devices.  Put differently, Ruckus’s antenna designs go a long way toward optimizing signals for various device types in the current, bring-your-own-device (BYOD) real world.  What’s more, even though 802.11ac specifies a beamforming implementation that incorporates client-device feedback into channel optimizations, Ruckus is able to optimize for signal strength in a manner that’s fundamentally more diverse than any omni-directional antenna can.  Cisco does take issue with that statement, in the form of a whitepaper arguing that DSP processing beats using thousands of unique antenna patterns for optimizing the RF signal in real-world situations; still, Ruckus’s antenna arrays are physically able to compensate for RF polarization, and even extend the Wi-Fi bubble by employing directional antenna elements that push RF energy in the direction of active clients.

In other words, Ruckus largely takes the polarization problem off the table, self-optimizing polarization in favor of improved client communications.  Put differently, Ruckus’s BeamFlex technology is beamforming on steroids.

[Figure: Ruckus Wi-Fi signal coverage pattern]

Unlike on-chip beamforming, the transmit beamforming via Ruckus’s antenna element array provides a capability the competition fundamentally lacks: antennas capable of producing unique coverage patterns over a more focused coverage area, with less potential for destructive interference than omni-directional antennas.

Where do I go from here?

The enterprise Wi-Fi industry (i.e. the enterprise WLAN market) continues to produce more capable access points, with more features.  As more enterprise-class 802.11ac product sets are released and improved upon, they’re making increasingly efficient use of the available RF spectrum.  This translates into faster Wi-Fi, in denser user environments, with more features and capabilities.  Expect this trend to continue for the foreseeable future, as the 802.11ac specification was designed to grow around the anticipated near-term need for more bandwidth, largely by employing more radios and antennas, as well as through the rollout of MU-MIMO in wave 2.

From a market-share standpoint, Cisco is clearly the dominant player.  As of the last published IDC market-share report, while the enterprise wireless LAN market grew 7.6% year-over-year…

  • Cisco had 46.8% of the market, down from 53% the prior year
  • Ruckus saw 20.8% growth over the prior year, growing to 5.7% of the market
  • Aruba’s sales grew 7.9%, increasing to 9.8% of the market

In addition to market share, the biggest recent news of course is that HP is buying Aruba, though it’s far too early to know how that will play out and what effect it will have on the market.

But which vendor is the best?

The major enterprise WLAN vendors all offer generally competitive products, derived from just a few different chipsets – market forces being what they are.  From a technical standpoint, the most recent vendor-independent access point analysis and report comes courtesy of Wireless LAN Professionals in 2013.  Notably, the event was not vendor-sponsored, and as such represents the most unbiased and comprehensive assessment that I’ve seen.  It is, however, limited to the last generation of 802.11n products.  What you’ll find if you dig through the reports is that every single access point reached a choking point with respect to maximizing the use of the RF spectrum.  In that report, the best overall combination ranking went to the Ruckus 7982, followed by the Cisco 3602i.  If you look a bit further back, Tom’s Hardware did an excellent article on beamforming – “Beamforming: The Best WiFi You’ve Never Seen”.  While some of it is obviously a bit dated, having been published before 802.11ac was standardized, it’s still a great primer on the differences between on-chip and antenna-based beamforming.

The real question is: best for whom, and for what situation?

Failing an updated version of the Wireless LAN Professionals report incorporating the newer 802.11ac products, there’s no comprehensive vendor-neutral assessment I can point to in order to tell you which device technically has the best overall coverage in this generation of product.  Even if there were, it wouldn’t effectively answer which is best for your environment.  Beyond RF capabilities, there are several factors which differentiate enterprise 802.11ac WLAN products – everything from design, to ease of use, to management capabilities, scalability, price, and more.  For example, Ubiquiti offers a low-cost, controller-less product that’s generally easy to deploy and manage, and is probably reasonable for most IT resources to deploy.  However, you generally need more access points than you would with a Ruckus deployment, and that means potentially more troubleshooting work.  And unfortunately, Ubiquiti doesn’t really offer technical support comparable to any of the enterprise-class AP vendors.  Meanwhile, the Cisco Aironet product is an enterprise-class access point designed by the market-share leader, and employs Cisco’s IOS, which makes it generally more suited toward larger environments, or environments where the IT resources involved have Cisco-specific experience.  Cisco has good support, but reaching the right support resources in a timely manner, and getting effective support, doesn’t always happen.  Ruckus, on the other hand, employs a unique phased antenna element array design that provides a fundamental advantage relative to the competition – one that was apparent during the last generation of products in the vendor-independent assessment.  Further, Ruckus’s products are generally easy to manage and work with, can be deployed by IT resources with moderate Wi-Fi experience, and their technical support is excellent.

In other words, it’s not necessarily a question of which access point is best, it’s a question of which access point is best for your environment and situation.

My recommendation

Having recently implemented solutions from Cisco, Ruckus, and Ubiquiti, my recommendation would be based on your specific situation.  But generally speaking, I do have a preference for the Ruckus product set.  From a technology standpoint, Ruckus has developed a unique edge in terms of their antenna solution, which provides a fundamentally different capability relative to any of the enterprise-class WLAN competitors.  What’s more, during the last generation of products, their ranking in the vendor-independent assessment implies that the adaptive antenna design is giving them an edge relative to their competition.  From a support standpoint, Ruckus’s support team has been subjectively better than Cisco’s in my experience.  From an ongoing management standpoint, the Ruckus ZoneDirector interface provides a good dashboard where you can easily drill down on performance and troubleshooting, without necessarily requiring the support of an IT organization or individual.  While Ruckus isn’t the right solution for every situation, it is a very attractive solution that is priced competitively with the Cisco Aironet offering.

Bottom line?

Slow or spotty Wi-Fi coverage is a solvable problem today, and 802.11ac represents the latest revision of the technology standard.  At this point in the cycle, we’re starting to see mature second-generation product sets from the major enterprise WLAN competitors, and they all offer generally competitive and capable products.  If you’re tired of struggling with unreliable Wi-Fi in your office, we’ll be able to help you find the best solution for your needs and your environment.

VAX Virtualization Explored under Charon


 “Hey… you know that plant of ours in Europe… the one with all of the downtime?”

“Sure… “

“Did you know it runs on a 30-year-old VAX that we’re sourcing parts for off of eBay?”

“Really?! … I guess that makes 4 plants that I know of in the exact same situation!”

That conversation, or one very much like it, is being had at thousands of publicly traded companies and government organizations around the world.  If you’re a sysadmin, a VMware resource, or a developer who got their start anytime in the x86 era, you’ll be forgiven if the closest you’ve come to hardware from Digital Equipment Corporation (DEC)/HP is maybe an Alpha/NT box somewhere along the line.  You’d also be forgiven for assuming that VAX hardware from the 1970s doesn’t still run manufacturing lines that produce millions of dollars in products a year.

But that’s exactly what’s happening.

… And so is the eBay part of the equation.

To hear the Alpha folks talk, those old platforms were bulletproof and would run forever.  Perhaps not in exactly the same way that large swaths of the banking industry still run on COBOL, but it’s an apt comparison – the biggest difference being that code doesn’t literally rust away.  The DEC/HP Alpha hardware was engineered to something like Apollo-era reliability standards… but while they stopped flying Saturn Vs 40 years ago, these VAX machines are still churning away.  Anyway, there’s a joke that goes something like this: you know how some sysadmins used to brag about their *nix system uptimes being measured in years (before Heartbleed and Shellshock)?

Well, VAX folks brag about uptimes measured in decades.

Crazy, isn’t it?

You might be sitting there asking yourself how we got to this situation.  In simple business terms… if it ain’t broke (and you can’t make any extra margin by fixing it), don’t fix it!

I know lots of IT folks have a tendency to think in 1-3 year time spans.  I get it.  We like technology, the latest gadgets, and sometimes have an unfortunate tendency to argue about technica obscura.  But that’s only really because “technology moves so fast”, right?  Yes, there’s Moore’s law, and the cloud, and mobility, and all of that stuff.  Yes, technology does move fast.  But business… business doesn’t really care how fast technology moves beyond the context of whether it can benefit them.  In other words, you use assets for as long as you can extract value from them.

That’s just good business.

What’s the objective of this project?

The primary objective is to mitigate risk – the risk that a critical hardware failure will occur and take production off-line for an indeterminate amount of time.  Secondary objectives include modernizing the solution, improving disaster recovery capabilities, eliminating proprietary or unsupported code, and cleaning up any hidden messes that might have collected over the years.

Put differently, the question really is – can we virtualize it and buy some more time, or do we need to re-engineer the solution?

Starting with a quick overview of the project in question… The CLI looks vaguely familiar, but requires a bit of translation (or a VAX/VMS resource) to interact with it.  Starting lsnrctl returns an Oracle database version… for which, unfortunately, several searches return precisely zero results.  Un(der)documented versions of Oracle are always a favorite.  Backups to tape are reportedly functioning, and there’s also a Windows reporting client GUI (binary only, of course) from a long-defunct vendor.  The good news this time around… the platform is apparently functional and in a relatively “good” working state.  The bad news… there is no support contract for anything.  Not for the hardware, not for Oracle, and certainly not for the Windows reporting client.  In this case, the legacy VAX is basically a magical black box that just runs and gives the customer the data they need.  And at this point, all institutional knowledge beyond running very specific command sets has been lost – which isn’t atypical for 20-30 year old platforms.

Which brings us to the question – virtualize, or re-engineer?

Virtualizing a VAX

To start with, most VAX/VMS operating systems are designed for specific CPU types, so virtualizing directly using something like VMware or Hyper-V is a non-starter.  But those CPU architectures and configurations are pretty old now.  Like, 20-30 years old.  That makes them candidates for brute-force emulation.  And there are a few choices of emulator out there… including open-source options like SIMH and TS10, as well as commercial solutions like NuVAX and Charon.  After doing a bit of research, it’s pretty clear that there was only one leading commercial offering for my use case… Charon, from a company called Stromasys.  While there may be merit in exploring open-source alternatives further, the reality is that the open-source community for VAX system development isn’t active in the same sense the Linux OS community is active.  So if you do go down the open-source path, keep in mind that some of the solutions aren’t even going to be able to do what you might think of as simple and obvious things… like, say, boot OpenVMS.  Which is pretty limiting.

Charon Overview

Aside from being the Greek mythological ferryman who transported the dead across the river Styx, Charon is also a brand name for a group of products (CHARON-AXP, CHARON-VAX) that emulate several CPU architectures, covering most of the common DEC platforms.  You know… things like VAX, AlphaServer, and MicroVAX 3100 hardware running OpenVMS and other legacy operating systems.  Why the name Charon?  Like the mythological boatman who, for a price, keeps the dead from being trapped on the wrong side of the river (i.e. old, failing hardware), Charon transports the legacy platform unchanged between the two worlds (legacy and modern).  In a similar manner to how running a P2V conversion on, say, Windows NT lets you run a 20-year-old Windows asset under vSphere/ESXi, Charon lets you run your legacy VAX workloads unchanged on modern hardware.  In other words, you can kind of think of Charon like a P2V platform for your legacy VAX/VMS systems.  Of course, that’s a wildly inaccurate way to think about it, but it’s basically the result you effectively get.

How does Charon Work?

Charon is an emulator… its secret sauce is that it does the hard work of converting instructions written for a legacy hardware architecture so that you can run them on an x86/x64 CPU, quickly and reliably.  Because Charon enables you to run your environment unchanged on the new hardware, not only do you get to avoid the costly effort of re-engineering your solution, but you can also usually avoid the painful effort of reinstalling your applications, databases, etc.  Beneath the hood, what Charon is essentially doing is creating a new hardware abstraction layer (HAL) to sit on top of your x86/x64-compatible physical or virtual hardware.  The Charon emulator creates a model of the DEC/HP hardware and I/O devices.  Once you have the Charon emulator installed, you have an exact working model on which you can install your DEC/HP/VMS operating system and applications.  Charon systems then execute the same binary code that the old physical hardware did.  Here’s what the whole solution stack looks like mashed together:

[Figure: the full Charon solution stack, from the physical host up through the emulated legacy environment]

Yes, lots of layers.  But even so, because of the performance difference between the legacy platform and the modern platform, you typically get a performance boost in the process.
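If “brute-force emulation” sounds abstract, a fetch-decode-execute loop is all it fundamentally is.  Here’s a toy sketch built around an invented two-instruction machine – purely illustrative, and nothing like the real VAX instruction set or Charon’s heavily optimized internals:

```python
# Toy fetch-decode-execute loop: the skeleton of any CPU emulator.
def run(memory):
    regs = {"r0": 0, "pc": 0}
    while True:
        opcode = memory[regs["pc"]]               # fetch
        if opcode == 0x01:                        # decode: ADD-immediate
            regs["r0"] += memory[regs["pc"] + 1]  # execute
            regs["pc"] += 2
        elif opcode == 0xFF:                      # decode: HALT
            return regs
        else:
            raise ValueError("unknown opcode: %#x" % opcode)

# A tiny "guest binary": add 2, add 3, halt.
print(run([0x01, 2, 0x01, 3, 0xFF]))  # {'r0': 5, 'pc': 4}
```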

What do I need?

Assuming you have a running legacy asset that’s compatible with Charon, all you need is a destination server.  In my case, the customer had an existing vSphere environment, and existing backup/recovery capabilities, so all that was really needed was an ESXi host to run a new VM on, and the licensing for Charon.

The process at 30,000 feet looks like this:

  1. Add a new vSphere (5.5x) host
  2. Deploy a Windows 2008 R2 (or Linux) VM from a template
  3. Use image backups to move your system to the VM
  4. Restore databases from backup
  5. Telnet into your Charon instance

At a high-level, it really is that simple.

How challenging is the installation? 

If you skim the documentation before installing, it shouldn’t be an issue.  Assuming you have access to the legacy host, you can inventory the legacy platform – grabbing things like CPU architecture, OS version, tape drive type, etc. (e.g. SHO SYS, SHO DEV, SHO LIC, SHO MEM, SHO CLU, and so on) – which will enable you to get the right Charon licenses.  After that, you’ll be ready to step through the installation.  This isn’t a next/next/finish setup, but once you’ve got the USB license dongle set up, and created a VM based on the recommended hardware specifications, you’re well on your way.

Restoring the data from the legacy hardware onto the new VM can be a bit more involved.  In a perfect world, you’d be able to restore directly from tape into the new Windows VM – assuming you have the right tape drive, good backups to tape, and so on.  Short of that, you’ll need to back up and restore the legacy drives into the new environment.  So you’re going to take image backups of each drive, and then upload the backups to your new VM.  More specifically: do a backup from drive A0 to A1, then A1 to A2, etc.  Upload the A0 backup to your new VM and restore the data.  Proceed like that until you’ve completed all of the restores.  In this manner, you’ll be able to preserve your operating system, database installation, and any other applications, without going through the time-consuming installation and configuration process.  As a result, you avoid troublesome things like version mismatches, missing media, poor documentation, etc.  After the backups are restored, Charon is able to take those restored files that exist on the parent VM and boot them as local storage – and you’re off and running.

What does Charon look like?

After you’ve installed Charon, the management interface is accessible via the system tray.

[Figure: the Charon management interface, accessed from the system tray]

If you’re thinking that’s pretty bare-bones, then you’re right.  Once you’ve installed and configured Charon, there’s just not a lot to do from the management interface.

How do I login to the console?

In order to access the legacy OS console and CLI, you simply fire up your favorite telnet client and point it at the IP address of your Charon system.

[Figure: the Charon telnet console session]

Which should resemble the old physical console.
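If you’d rather script the connection than use an interactive client, here’s a minimal sketch using Python’s standard telnetlib module (shipped with Python up through 3.12).  The address is a placeholder, and the exact prompt text will vary with your OS version and console configuration – treat both as assumptions to adjust:

```python
import telnetlib

HOST = "192.0.2.10"  # placeholder: your Charon system's IP address

# Connect to the emulated console and read up to the login prompt.
tn = telnetlib.Telnet(HOST, 23, timeout=10)
banner = tn.read_until(b"Username:", timeout=5)  # prompt text varies by config
print(banner.decode("ascii", errors="replace"))
tn.close()
```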

Issues Encountered 

While the 30,000-foot process outlined earlier in the article is essentially what was followed, the biggest problem we ran into is probably exactly what you’d guess it to be: the Oracle database.  Unsupported and underdocumented as it was, we ran into several problems restoring successfully from tape.  While not a problem with Charon, the reality is that very old and unsupported platforms can have problems that go undetected for years.  While this was resolved within the planned budget, it was still inconvenient.  It should serve as a reminder that, to the extent it’s possible to have support contracts in place for critical components, you should.  At the same time, that’s not always the boots-on-the-ground situation.

The verdict?

We successfully mitigated the risk associated with the failing hardware in the environment, which was our primary objective.  Using Charon, we were able to pull the legacy environment forward, running it on new, supported hardware.  Between re-installing the legacy OS under Charon and restoring our application data and backups via tape, we were able to meet some of the secondary objectives as well.  As a Windows 2008 R2 VM running on a dedicated vSphere host in the customer’s datacenter, we have something modernized (at least to some extent) that plugs into the existing backup infrastructure.  With Veeam Backup & Replication and a standard backup policy with defined recovery point and recovery time objectives (RPO/RTO), we have something the client has a high degree of confidence in.

Must Have WordPress Plugins for 2015

(This article is part of my WordPress Toolkit series)

Best plugin advice?

Keep it simple.

It’s not only good advice when it comes to plugins – it’s good advice for your site’s design too.  This probably goes doubly for entrepreneurs and small businesses that aren’t in the web-design business.  Why?  Because it’s too easy to go overboard… adding features and functionality until your site becomes slow and distracting (or worse).  This isn’t to say that plugins are bad, or should be avoided; it’s just that you need to be selective about the types of plugins you use, and have clear objectives in mind when using them.  In this case, I’m talking about utility plugins… plugins that provide basic and critical functionality common to most WordPress sites.

… which is why this is a list of only 5 plugins.

 

Akismet

Akismet – this one almost goes without saying.  It pretty effectively handles comment spam (assuming you allow comments on your site).  It comes pre-installed with every WordPress installation, and just needs to be activated.  Is comment spam a real problem?  Absolutely.  On popular web sites, spam can account for up to 85% of all comments!  Experience also suggests that it’s even higher for new sites (as in, 99.9%).  Akismet does a solid job of eliminating the vast majority of comment spam.  It also does an effective job of preventing false positives (i.e. legitimate comments flagged as spam).  While some users have criticized the false-positive detection, by and large Akismet does an excellent job.  Configuration is fairly straightforward, and there are plenty of Akismet how-to articles available should you need them.

BackupWordPress


BackupWordPress enables you to back up your entire WordPress site (including all of your files, and your WordPress database) on a scheduled basis.  It does exactly the job you’d expect it to do – mainly, it keeps your sites backed up should something bad happen.  The free version does enable you to get scheduled backups emailed to you, but your site will quickly grow beyond what email can accommodate.  The $99 bundle lets you direct your backup jobs somewhere convenient, like Dropbox, Google Drive, Amazon S3, SFTP, etc., and is well worth the cost.

Contact Form 7

 

Contact Form 7 is one of the best contact form plugins for WordPress, and it can be particularly useful if your theme doesn’t come with a contact form (or if the one it has is clunky).  It’s highly customizable, and doesn’t require you to do any coding.  It also supports useful things like CAPTCHA, Akismet spam filtering, and more.

WP-DBManager


WP-DBManager enables you to keep your WordPress database in check.  You may not realize it yet, but one of the challenges you’ll encounter as you build out content for your sites is that the revisions, drafts, and re-revisions start to add up.  Don’t believe me?  Even this short article took more than a few revisions!  One of the more important aspects of running a successful WordPress site is keeping your database healthy – WP-DBManager enables you to do just that.

WP Mail SMTP


Depending on your host, you may or may not need a tool to change the configuration of wp_mail() so that it sends via SMTP.  If you find yourself unable to generate email messages from your host, consider WP Mail SMTP – configuration is fairly straightforward.

Other Advice

While you could probably spend years playing with the tens of thousands of plugins available, I’ll refer you back to the advice offered at the beginning of this post – keep it simple!  At some point, you’ll probably find a need for some extra bit of functionality that WordPress doesn’t offer out of the box, and when you do… look at your theme first and see if it offers what you’re looking for; only if you can’t accomplish it that way should you consider adding plugins.  When you check out the WordPress Plugin Directory – the comprehensive inventory for all things WordPress-plugin related – be sure to check the “Last Updated” field and the number of “Downloads” for whatever you’re interested in, as they’ll give you an idea of the quality of the plugin you’re looking at, or at least the size of the community.  Generally speaking, my suggestion is to stick to plugins that have been updated recently or have a high download count (or both), and whenever possible, have been recommended to you by someone you trust.

There you have it… my list of 5 must-have WordPress plugins for 2015.  While by no means an exhaustive list, it should be more than enough to keep you busy getting your basic site built out.

Part 6 – Responsive and Mobile

Do you remember a few years ago, when everything looked terrible on mobile devices, and that was just the accepted reality of things?  Obviously that’s no longer the case.  Today, web sites need to look great on both the desktop and mobile devices, because your typical audience is just as likely to be checking out your site from an iPad as from a desktop computer at the office.  The ability of a site to display well across all types of devices is called responsive design.  Put differently, it’s web design that responds to its environment, optimizing the site’s appearance and navigation so that it looks good on whatever it’s being displayed on – a computer, a tablet, or a phone.

[Figure: side-by-side comparison of this site on an iPhone vs. desktop Chrome]

As an example, here’s a comparison of what this site looks like when viewed from an iPhone vs. Chrome on a desktop.  The responsive design enables the content to display in a useful manner on a mobile device.  Put differently, responsive design is about creating a user-friendly, appealing experience across many devices and screen sizes.  While you can build a responsive design yourself, or contract a company to do it for you, in most cases finding the right theme that’s already responsive is the best approach.  In the prior article in this series, I mentioned several themes which are both responsive and useful.

Is responsive design really for me?

The answer is almost certainly yes.  Here’s why: even if you expect that the vast majority of your site’s visitors will be coming in via a desktop web browser, the reality today is that your site is going to rank better in terms of Search Engine Optimization (SEO) if you have a responsive design that works well with mobile devices.  In other words, Google loves responsive web design, and your search results will rank higher automatically just by choosing a responsive theme.  If for no other reason, that’s why you should consider one.  Beyond that, one of the added benefits of responsive design is less management effort.  In the majority of cases, once you have a responsive site, there’s no longer a reason to maintain both a mobile and a desktop version of your site.  That means no separate content management tools, no separate sites to maintain, no worrying about separate URLs, and no building the authority and rank of each site independently.  Which makes sharing content on social media that much easier – both for you, and your audience.  All of that extra stuff (and cost) just goes away.

Okay, so say none of this is you… maybe you don’t think you really care about responsive design, and maybe you just think of your web site as a “me too” or “must have” – a placeholder for your business.  That’s fair… many niche companies and organizations with very complex products just have a site because they have to.  But even if all of that’s true of your business – why wouldn’t you want your site to look good on a tablet?  Why wouldn’t you want to automatically improve your page rank?  Why wouldn’t you want a new potential avenue for clients?  If you’re going to be making any investment in a site, choose a responsive theme.

Part 5 – Theme Recommendations

By this point, you’ve followed the previous setup information and should be ready to begin working with a demo site.  Which means that you’ve chosen a CMS platform, registered a domain name, selected an appropriate web host, configured DNS, and are now either at the point of choosing a 1-click WordPress instance, or your web host has sent you a link to get started.  Fantastic!  Pick a username and password, click “Install WordPress”, and then click “Log In”.

You’ll be presented with the WordPress Dashboard.  From this console, you’ll be able to change everything about the look and feel of your site, create content, and so on.  Good thing, too… because by default your site probably looks something like this…

[Figure: a default WordPress site, before any theme customization]

 

Now, before you get started changing your theme, let me introduce you to one of the biggest challenges of working with WordPress today: separating the good from the bad (and the ugly).  This isn’t a problem unique to WordPress, or themes, or plug-ins… it’s a problem shared by any platform that has hit critical mass.  As with Microsoft Windows, or Android… the problem usually isn’t finding a tool or an app for the job… the real problem is often finding a good tool for the job.  In few places is this more true than with themes.  There are just so many out there, and if you’re getting started, it’s really hard to know what’s good or bad until you’ve invested quite a bit of time building familiarity.  So, let me try and save you some time.  In general, you want something that’s being actively developed or maintained by an organization with some staying power… you don’t want the theme to disappear next week.  You probably also want something that’s easy to manage.  Because even if you, or some of your technical resources, are doing the initial work, it’s likely that you’ll want to turn over maintenance to someone else – marketing, or HR, etc. – and in that case, you’ll need a theme that’s not too terribly hard to work with.  Finally, you’ll want something that’s responsive and looks good on mobile devices (more on this later).

[Figure: browsing available WordPress themes]

Beginner: If you’re just starting out and want to get your feet wet without spending hours looking at “free” themes, check out Themify.me.  Themify.me has plenty of good options – as in, themes that look pretty good, are generally easy to work with, and tend not to break your WordPress install.  It might be worth picking up “The Master Club” offering ($139), as it offers good value and saves you from having to wade through a lot of junk.  At the same time, it gives you enough different themes and options to understand what’s possible.

Intermediate: While not necessarily as easy to get started with, the Avada theme from Theme Fusion is a popular and powerful option.  At the risk of overselling it: it’s the #1 selling theme on ThemeForest, running on more than 85,000 web sites.  While that might make it sound overused, the reality is that because it’s highly customizable, you probably won’t find lots of other sites with a similar look and feel.  It’s not quite as beginner-friendly as Themify.me’s stable of quality themes, but Avada looks pretty good with only a modicum of configuration work, and probably isn’t going to disappear overnight.

Advanced: In addition to standard themes, there are also theme frameworks.  Two of the more popular commercial theme frameworks are the Genesis Framework and Thesis.  Theme frameworks allow you to extend the capabilities of WordPress.  You can think of them as platforms that sit on top of WordPress and enable you to add functionality (e.g. drag-and-drop site layouts, or replacing plug-in functionality with small code snippets).  You still need to add a child theme to skin the theme framework and provide a look-and-feel, so for starters you may just want to stick to the Themify.me or Avada route, if for no other reason than economics.  If you really do want to work with a theme framework, let me save you some research.  There are a lot of conflicting articles and opinions discussing Genesis and Thesis.  When it comes down to it, Genesis is probably going to be easier to work with than Thesis 2.  Thesis lost some of its momentum when Thesis 2 was released: there was painfully little documentation at launch, and so it was more than a bit confusing for newcomers.  Today?  Most of those documentation problems and bugs have been fixed.  Personally, I like Thesis 2… so your mileage may vary.  Unless there’s a child theme that you absolutely have to have, and you want to use a theme framework, Genesis is probably going to be the easier of the two to work with.

 
