StackAccel


How-To: Install GrapheneOS

If you’ve arrived at this page, I’ll assume you’re interested in a “de-Google’d” phone.  I won’t cover the history of the GrapheneOS project, or why you might be interested in such a solution, in this post, but I will follow up if there’s sufficient interest.

I was pleased with the CLI install notes provided by GrapheneOS, found them to be more than sufficient, and was able to get GrapheneOS loaded onto a Pixel 4a without issue.  Presented below are the notes I produced while getting it installed.  While I briefly played around with installing it from my Windows 10 machine, my Linux machine already had most of the prerequisites in place, so it was just easier to work from it.


sudo apt install libarchive-tools
curl -O https://dl.google.com/android/repository/platform-tools_r30.0.5-linux.zip
echo 'd6d72d006c03bd55d49b6cef9f00295db02f0a31da10e121427e1f4cb43e7cb9 platform-tools_r30.0.5-linux.zip' | sha256sum -c
bsdtar xvf platform-tools_r30.0.5-linux.zip
export PATH="$PWD/platform-tools:$PATH"
fastboot --version
- confirm it reports version 30.0.5 (e.g. "fastboot version 30.0.5-6877874")
sudo apt install android-sdk-platform-tools-common
fastboot flashing unlock
sudo apt install signify-openbsd
alias signify=signify-openbsd
curl -O https://releases.grapheneos.org/factory.pub
cat factory.pub
- make sure it matches (https://twitter.com/GrapheneOS/status/1145259815851253762)
curl -O https://releases.grapheneos.org/sunfish-factory-2021.01.23.03.zip
curl -O https://releases.grapheneos.org/sunfish-factory-2021.01.23.03.zip.sig
signify -Cqp factory.pub -x sunfish-factory-2021.01.23.03.zip.sig && echo verified
Now you’re ready to flash the factory image – which will replace the existing OS
bsdtar xvf sunfish-factory-2021.01.23.03.zip
cd sunfish-factory-2021.01.23.03
./flash-all.sh
The installation occurs…

fastboot flashing lock

As your phone boots, you’ll notice the GrapheneOS splash screen  – you’re finished!

How-To: Delete Glacier Vaults

As you may have realized, it’s not possible to delete a non-empty Glacier vault from the AWS S3 Glacier management console.  The purpose of this article is to provide a walkthrough for getting rid of Glacier vaults, and to address the issues below that you may encounter in the process.

Errors this article will help you solve:

  1. “Error deleting vault – vault not empty or recently written to”
  2. “Unknown options: inventory-retrieval}”
  3. “The job is not currently available for download”
  4. Now that I finally have an output.json file, how do I delete everything?
I’ll assume that you’ve already followed the Installing, Updating, and Uninstalling AWS CLI walkthrough, as well as the Configuring the AWS CLI walkthrough, which primarily consists of running the “aws configure” command.

You’ll need to have the following handy:

  • AWS Access Key
  • AWS Secret Key
  • Default region name.
Everything you need for that should be available under the Identity and Access Management (IAM) console in AWS, which you hopefully have documented somewhere.  If not, it’s a relatively straightforward process to create a new account, assign it access to Glacier, and document the AWS Access key (found in the Security Credentials tab), Secret key, etc.
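For reference, a typical “aws configure” session looks something like this (the key shown is obviously redacted, and the region is just an example – use whichever region your vaults live in):

aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json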
Assuming that’s all sorted out, AWS provides a template for Deleting an Archive in Glacier.  One of the first issues you may encounter is the “Unknown options: inventory-retrieval}” error, which Chris documents in his blog post.  Essentially, create a file aws-json.txt with the following:
{
  "Type": "inventory-retrieval"
}

You’ll then be able to run the command:

aws glacier initiate-job --vault-name VAULT-NAME --account-id AccountID1234 --job-parameters file://aws-json.txt

This will produce some json output which will include the job-id you’ll need for the next command.

aws glacier describe-job --vault-name VAULT-NAME --account-id AccountID1234 --job-id "longstring"

The inventory process will likely take several hours to complete.  As a result, if you run the command below before it has completed, you’ll get an error message to the effect of “The job is not currently available for download”
aws glacier get-job-output --vault-name VAULT-NAME --account-id AccountID1234  --job-id "insert-long-string-here" output.json
This will produce a file called output.json, which will have all of the archive IDs you need to delete.  In my case, I had over 75k archive IDs that needed to be deleted before the vault could be deleted.  Conveniently, BytesAndBolts has a delete-aws-glacier-archives script that will take precisely that output and loop through deleting the archives.  Be aware that this may take many hours to complete (in my case, about 30 hours).
You’ll just need to edit the python script filling in the below lines before you run it:
json_file='output.json'
account_id=''
vault_name=''

Now you should be able to run it via Python…

python glacier_delete.py
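If you’d rather not use the script, the same idea can be expressed as a short shell loop.  This is just a hedged sketch (it is not the BytesAndBolts script), it assumes jq is installed, and the account ID and vault name are placeholders:

# loop over every archive ID in the inventory and delete it
for id in $(jq -r '.ArchiveList[].ArchiveId' output.json); do
    # use --archive-id= (with the equals sign), since archive IDs can begin with a hyphen
    aws glacier delete-archive --account-id AccountID1234 --vault-name VAULT-NAME --archive-id="$id"
done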

After you run it, the script will start looping through the records and deleting everything… a “returned value: 0” indicates each delete command was successful.  Once the script finishes (it took about 30 hours to run against one of my vaults), go back to the AWS console; the vault in question should now show as empty, and you can finally delete it.
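You can delete the now-empty vault from the console, or via the CLI with something like the following (again, the account ID and vault name are placeholders):

aws glacier delete-vault --account-id AccountID1234 --vault-name VAULT-NAME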

Wi-Fi Guide

The conversation always seems to start with slow Wi-Fi, doesn’t it?

Maybe it’s employees telling you that they’re getting kicked off of the Wi-Fi at your office.  Or maybe you’re just tired of having the receptionist reboot your access points every other day (even if that seems to fix things for a bit).  Bottom line… if you can watch Netflix in bed from your iPad, then you’d at least better be able to check email at the office!  What if the quality of your Wi-Fi connection at work was top-notch?  What if you knew you could support hundreds of connections – phones, laptops, tablets… and you knew that they’d be able to work online without headaches, without downtime… and most of all, without complaints every other day?  Would you be interested?

Of course it’s possible, in fact the expectation today should be consistent and reliable Wi-Fi.

In 2015, we’re in what’s called the 5th generation of Wi-Fi/wireless technology, having finally arrived in the 802.11ac era.  This represents the fifth major iteration of Wi-Fi as a technology standard.  It’s hard to believe when you look back, but Wi-Fi is now more than 18 years old.  What started off in the late 1990s with 2 Mbps data rates, and a platform designed for occasional use in conference rooms, has evolved into a standard that can support gigabit speeds, dense user populations, and video streaming without breaking a sweat.

In other words, if you have slow Wi-Fi service today, it’s a problem that can be solved.

Why is your current Wi-Fi slow?

A common narrative goes something like this… “We’ve been using these wireless access points for a while, and up until recently we almost never had a problem.  Now?  Everything just seems really slow.  We reboot the APs every so often and that seems to help… for a while… but the problem isn’t going away.”

Oftentimes, a quick walkthrough reveals a mix of consumer-grade APs, placed at random throughout the environment.  While that’s not necessarily a root cause, it is a warning sign, and at the very least something to investigate.  What can make this especially challenging for the average business owner is that the design of their wireless network (WLAN) probably served them reasonably well up until fairly recently.  The explosion of portable devices and increasing end-user expectations are the major market forces at work, combining to strain older WLANs.

The flavors of 802.11

The Institute of Electrical and Electronics Engineers (IEEE) describes the 802.11 family of protocols, which provide the basis for wireless network products using the Wi-Fi brand.  The whole history of the protocol family is beyond the scope of this article, but here’s a short summary:

  • 1997 – 802.11 supports up to 2 Mbps
  • 1999 – 802.11b supports up to 11 Mbps
  • 2002 – 802.11a/g supports up to 54 Mbps
  • 2007 – 802.11n supports up to 600 Mbps
  • 2012 – 802.11ac supports up to 3.6 Gbps

What’s a bit misleading is that just because the 802.11ac standard was ratified in 2012, doesn’t mean that mature Enterprise 802.11ac access points were available at the time.  For example, while Quantenna and Broadcom were shipping Wi-Fi chipsets in 2012, and there were a couple of consumer-grade 802.11ac access points that year, it wasn’t until over a year later that Cisco shipped 802.11ac in their Aironet product line-up.  It was even later than that for Ruckus.  What you may or may not realize is that vendors like Cisco, Ruckus, Aruba, HP, Apple, etc. don’t actually make the Wi-Fi chipsets that go into their products.  Instead, similar to the manner in which Dell buys CPUs from Intel, the vendors contract with a chipset vendor like Atheros, Broadcom, or Marvell.  The chipset vendors take the 802.11 standard specifications and design chipsets that actually implement the 802.11ac standard.  Those chipsets typically include the chips that implement the standard, and may also include software drivers and an operating system to run them.  The major integrators of the world then buy the chipsets and related components in volume from the upstream vendors, and implement those chipsets in their hardware in the form of a product like Cisco’s Aironet, or Ruckus’s R700 access points.

So if everyone is using the same basic building blocks, what differentiates the products?

The same thing that differentiates, say, a Samsung Galaxy Android phone from an Apple iPhone… it’s about how well the components work together, and the experience they provide end-users.  Early in a product lifecycle – say, the first generation of 802.11ac – there’s usually relatively little “value-add” that the vendors build into the product.  The first generation is often mostly about integrating the various hardware components effectively.  As they iterate through designs, vendors build their own intellectual property on top of the chipsets to diversify their offering relative to the competition.  In the case of Ruckus, a major component of their intellectual property, or their value-add, comes in the form of the antenna system (more below), which differentiates their offering fundamentally from most of the omnidirectional-antenna competition.  But all of this takes time.  Even today, with 802.11ac several years old (at least on paper), 802.11ac Enterprise-class hardware is still somewhat recent, and is still improving as “wave 2” 802.11ac devices start popping up on the horizon.

So what do I get out of 802.11ac?  

It’s faster, and can support more devices than the last generation of products.

How much faster?  That depends on quite a few variables.  Today, under certain conditions, 802.11ac is up to 3x as fast as 802.11n devices, and it has the potential to be quite a bit faster down the line, with the 802.11ac specification being written to support up to 8 antennas, multi-user MIMO, and up to 6.8 Gbps.

But we’re getting a bit ahead of ourselves.

If all you’re interested in is if it’s worth buying 802.11ac Enterprise class devices today, the answer is yes – buy 802.11ac access points… even if you don’t have enough 802.11ac client devices yet to dictate 802.11ac (and you probably don’t).  Why?   Because 802.11ac is backwards compatible with 802.11n, and APs supporting it are generally compatible with all major 802.11 variations in both the 2.4 GHz, and 5 GHz segments of the spectrum.  Plus, because 802.11ac represents an evolution of the old 802.11n specification, by buying 802.11ac Enterprise class access points (APs) today, what you’re really doing is buying the best of the 802.11n designs, and getting 802.11ac as well.

For the majority of customers, it only makes sense to buy 802.11ac access points.

APs and Controllers? 

Wireless Access Points (APs) are the devices on your network which broadcast your Wi-Fi signals, implement encryption/security and physically bridge your wired Ethernet network with your WLAN.  If you’re coming at the wireless market and have experience with consumer APs, or Wi-Fi at home, the APs are what you’re probably already familiar with.

In addition to the APs themselves, most Enterprise-Class wireless solutions all but require a wireless controller.  For example, Cisco and Ruckus both have controllers in the form of the Cisco Wireless LAN Controller for the Aironet products, and the Ruckus ZoneDirector controller.  In broad terms, the wireless controllers extend the capabilities of the APs, and use data generated by the APs such that they’re able to work together to optimize signal, connection, and performance for the participating wireless clients.

But what do the controllers actually do?

In real-world terms, controllers do quite a few things.  First, controller-based APs look for and associate with a parent controller automatically, which enables the APs to self-configure and associate with the controller making them immediately available for remote management.  Once that happens, the controller can automatically optimize the environment and adjust settings on-the-fly in response to real-world measurements and feedback from the APs.  This means adjusting power levels and RF channel assignments dynamically in order to minimize the impact of interference.  If you recall setting up consumer-grade APs and having to manually set the channels to keep them from interfering with each other, then you’ll be pleased to learn that optimizations like that will happen automatically and without administrative burden on an on-going basis, providing the best possible signal quality to your end-user’s devices.  Beyond optimizations, the controller also provides value-added troubleshooting capabilities that can not only consolidate logging information, but also enable an administrator to determine individual client-device performance.  Translated, that means you’ll potentially be able to drill-down on complaints of “My Wi-Fi is slow!” in real-time and do root-cause analysis (instead of just rebooting the AP and hoping).  Beyond the above, controllers also work to enable more advanced security integration capabilities, generally enabling RADIUS, captive portal, ActiveDirectory integration, dynamic VLAN assignment, and a host of other capabilities.

Beamforming and MIMO

If you’re unfamiliar with how radio frequency (RF) energy propagates from an omni-directional antenna (e.g. a regular Wi-Fi antenna), the propagation looks like a donut-shaped Wi-Fi bubble of coverage emanating out from your APs in 3-dimensional space.

[Image: Wi-Fi coverage propagating in 3D]

When you think about on-chip beamforming as a technology, you might think in terms of it directing RF energy in a beam-like manner toward the client devices that the AP is communicating with.  Unfortunately, the reality falls somewhere short of that.  In the prior standard, 802.11n specified several beamforming methods, but because there were so many options, and because implementing them drove cost and complexity, the result was that most vendors didn’t implement anything on the client side.  And when it comes to the on-chip variety (e.g. nearly every implementation), the benefit that beamforming created in terms of increased signal strength was oftentimes negligible, and occasionally destructive.  The problem really comes down to a lack of data.  Without getting into the weeds, in a version of the 802.11n on-chip beamforming variety, a pair (or more) of omni-directional antennas emit identical signals, but because of timing or transmission paths, those signals arrive at slightly different points in time on the receiving end, with the goal being constructive interference at the point of the receiver.  Something like this…

[Image: constructive and destructive interference pattern]

The challenge is that dead-spots and areas of destructive interference combine to minimize the benefit of on-chip beamforming advantage, resulting in coverage patterns with gaps in their effective coverage areas.  There’s a lot more to this topic, but that’s the short version.

802.11ac Beamforming

With 802.11n beamforming iterations generally coming-up short, only one beamforming model was specified in 802.11ac, which works on both sides of the conversation (both the AP, and the client devices).  The specification is an evolution of 802.11n, with support for more Multiple input, multiple output (MIMO) spatial streams (up to 8).  In practical terms, that means up to 8 antennas on both the AP and client side of the conversation.

Big-picture?  This translates into better & faster Wi-Fi.

To elaborate on MIMO briefly: MIMO relies on interference to create signal diversity, which enhances signal quality.  It may sound counterintuitive at first, so here’s a bit more on how it works.  Instead of using a single carrier transmitting a data set for a short period of time, where temporary interference can damage a signal’s payload, 802.11ac MIMO specifies OFDM.  Essentially, this means multiple data payloads (“symbols”) are transmitted in parallel on the same frequency, on multiple radios, for longer periods of time, making it easier to recover the signal if there’s temporary interference.  All of the transmitters transmit on the same frequency, but with different data being sent out of each antenna (up to 8 antennas, in the specification).  So the obvious question becomes: how can multiple antennas operate at the same time on the same frequency with different data sets, and not create interference?  The answer?  A powerful digital signal processor (DSP) and matrix math enable either side to recover the actual signals on each antenna.  The receiving antennas are able to differentiate the data, and improve spectrum efficiency to, in real terms, multiply bandwidth by the number of radios (up to 8).  In one application, 3 transmit antennas send 3 different sets of data on the same frequency at the same time, and the three receive antennas are still able to unscramble the data, because the signal diversity causes the signals to arrive at different points in time.  Effectively, the signals bounce off of stuff as they propagate outward, ensuring that the signals all arrive at slightly different points in time, and when combined with a strong DSP, the AP is able to unscramble and recombine the data streams.
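As a rough, hedged illustration of the matrix math being referenced (a simplified, per-subcarrier view – the notation here is mine, not something lifted from the 802.11 specification), the received signals can be written as

\[ \mathbf{y} = \mathbf{H}\,\mathbf{x} + \mathbf{n}, \qquad \hat{\mathbf{x}} = \mathbf{H}^{+}\,\mathbf{y} \]

where x is the vector of symbols sent simultaneously from the transmit antennas, H is the channel matrix whose entries describe the path from each transmit antenna to each receive antenna (estimated from known training/sounding symbols), n is noise, and y is what the receive antennas hear.  Because multipath makes the rows of H sufficiently different from one another, the receiver can effectively invert the system (the pseudo-inverse estimate above) and recover the individual streams – which is why the “interference” ends up multiplying throughput rather than destroying it.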

(What I’ve presented on MIMO is a simplified version of what’s really going on.  I didn’t touch on the client end, the channel measurements that happen on both the transmit and receive sides, or multi-user MIMO.  So if you’re interested in a more comprehensive answer, including an explanation of the matrix math involved in unscrambling the data streams, check out 802.11ac: A Survival Guide, by Matthew S. Gast, and you will not be disappointed.)

Put differently… MIMO is all about maximizing the efficient use of the RF spectrum.  Which translates into more, better, faster Wi-Fi for everyone.   Well, for the most part anyway.  For mobile devices, particularly phones and tablets, it’s going to be quite a while before we see 8 antennas implemented.  The reason for this is that each additional radio doubles the power demands, shrinking the battery life of those mobile devices.  So while energy consumption may not matter at the APs, on mobile devices like phones and tablets, every milliwatt counts.

Wouldn’t directional antennas be better than omni-directional antennas?

As I mentioned earlier, most APs use omni-directional antennas that radiate RF energy out indiscriminately in a 3D Wi-Fi coverage bubble.  And even when you add most vendors’ beamforming flavors, you’re not doing anything to increase the size of the coverage zone.  Instead, what you’re really doing is using some clever tricks to take the signal donut and alter the quality (both for good and ill) within the existing coverage range.  In practice, on-chip beamforming with omnidirectional antennas creates coverage areas with improved as well as reduced coverage, as approximated below.

[Image: on-chip beamforming coverage pattern]

And we haven’t even discussed polarization yet.

An antenna provides gain, direction, and polarity to a radio signal.  Polarity is the orientation of the transmission from the antenna.  Antennas produce either vertically polarized (VPOL) or horizontally polarized (HPOL) signals – the polarization axis describes the orientation of the radio waves as they radiate out from the antenna.  In other words, RF energy moves in waves, and those waves move up-and-down, or back-and-forth, in space – the polarization of the wave.

[Image: vertical vs. horizontal polarization]

  Why does polarization matter? 

In a worst case scenario a perfectly horizontal receiving antenna may not hear anything transmitted by a perfectly vertical antenna.  While objects can impede, or reflect a signal and distort the polarization, signal loss due to polarization differences is real, and can potentially prevent communications from occurring.  At the very least, signal strength is reduced.

In terms of a real-world example, recent MacBook designs have housed their antennas in the hinge section of the laptop, and the antenna has a horizontal orientation.  What this means is that a transmission coming from a vertically polarized AP will be harder for the MacBook to hear, and likewise the AP will have a harder time distinguishing the MacBook’s transmission.  For laptops, the polarization is generally a static condition based on the orientation of your laptop.  Phones and tablets, though?  You tilt their orientation based on whatever you’re doing.  Need a wide screen to watch Netflix?  You hold your phone horizontally.  Reading an article on a web site?  You’ll orient it vertically.  The changes in the device’s physical orientation change the orientation of the RF signal.  Which results in complaints along these lines: “When I hold my phone a certain way and I’m standing in this location, the Wi-Fi slows down or stops working” (that and other similarly unusual symptoms).

Here’s why… nearly all Wi-Fi access points use omni-directional dipole antennas that are vertically polarized.  These are considered the norm in the industry.  The reason is that they were common in the wider field of RF prior to the mass adoption of Wi-Fi.   In the case of a Cisco Aironet 2700 series antenna, the omnidirectional antennas housed within the chassis are vertically polarized antennas when mounted in the traditional orientation (e.g. flat).

[Image: Ruckus antenna element array]

To contrast Ruckus’s antenna design with omnidirectional antennas: Ruckus’s design takes a large number of small antenna elements and hooks those up to a digital switch.  The AP learns about the environment, and then uses antenna element array combinations to produce a desirable coverage pattern.  Some of these antenna elements are vertically polarized, and others are horizontally polarized.  By leveraging the CPU in the AP, Ruckus then optimizes antenna patterns for performance, in terms of rate control, power selection, and antenna choices.  Then, the choices are remembered for each client device, enabling the Ruckus product to make increasingly better decisions as the devices communicate.  So even under 802.11n, where the client devices aren’t providing any beamforming feedback to the APs, the Ruckus product is still able to optimize antenna patterns to maximize signal strength potential based on historical data for the individual client.  This occurs in a manner that the Cisco Aironet products are fundamentally unable to do.  Moreover, since the Ruckus solution uses antenna element arrays, there’s a mix of horizontally polarized, vertically polarized, and directional antenna elements, which can create pattern optimizations that Ruckus is able to employ in order to optimize signal for both horizontally and vertically polarized client devices.  Put differently, Ruckus’s antenna designs go a long way toward optimizing signals for various device types in the current, bring-your-own-device (BYOD) real world.  What’s more, even though 802.11ac specifies a beamforming implementation that incorporates client device feedback into channel optimizations, Ruckus is able to optimize for signal strength in a manner that’s fundamentally more diverse than any omni-directional antenna is able to do.  While Cisco does take issue with that statement, in the form of a whitepaper outlining why DSP processing is better than using thousands of unique antenna patterns for optimizing the RF signal for real-world situations, Ruckus’s antenna arrays are physically able to compensate for RF polarization, and even increase the “Wi-Fi” bubble by employing directional antenna elements which extend RF energy physically in the direction of active clients.

In other words, Ruckus largely eliminates the polarization topic and self-optimizes polarization in favor of improved client communications. Put differently, Ruckus’s Beamflex technology is beamforming on steroids.

RuckusWiFiSignal

Unlike on-chip beamforming, the transmit beamforming via Ruckus’s antenna element array provides a fundamentally unique capability relative to the competition, with antennas capable of producing unique coverage patterns over a more focused coverage area, with less potential for destructive interference relative to omni-directional antennas.

Where do I go from here?

The Enterprise Wi-Fi industry (e.g. the Enterprise WLAN market) continues to produce more capable access points, with more features.  As more 802.11ac Enterprise-class product sets are released, and improved upon, they’re increasingly making more efficient use of the available RF spectrum.  This translates into faster Wi-Fi, in denser user environments, with increasing features and capabilities.  This trend is expected to continue for the foreseeable future, as the 802.11ac specification was designed to grow around the anticipated near-term needs for more bandwidth, largely by employing more radios and antennas, as well as through the rollout of MU-MIMO in wave 2.

From a marketshare standpoint, Cisco is clearly the dominant player.  As of the last published IDC marketshare report, while the Enterprise Wireless LAN market grew 7.6% year-over-year…

  • Cisco had 46.8% of the market, down though from 53% in the prior year
  • Ruckus saw 20.8% growth in the prior year, growing to 5.7% of the market
  • Aruba’s sales grew at 7.9%, increasing to 9.8% of the market.

In addition to marketshare, the biggest recent news of course, is that HP is buying Aruba, though it’s far too early to know how that will play out and what effect it will have on the market.

But which vendor is the best?

The major Enterprise WLAN vendors all offer generally competitive products, derived from just a few different chipsets.  With market forces being what they are, their capabilities tend to converge.  From a technical standpoint, the most recent vendor-independent access point analysis and report comes courtesy of Wireless LAN Professionals in 2013.  Notably, the event was not vendor-sponsored, and as such represents the most unbiased and comprehensive assessment that I’ve seen.  It is, however, limited to the last generation of 802.11n products.  What you’ll find if you dig through the reports is that every single access point reached a choking point with respect to maximizing the use of the RF spectrum.  In the case of the report, the best overall combination ranking went to the Ruckus 7982, followed by the Cisco 3602i.  If you look a bit further back, Tom’s Hardware did an excellent article on beamforming – “Beamforming: The Best WiFi You’ve Never Seen”.  While some of it is obviously a bit dated, having been published prior to 802.11ac being standardized, it’s still a great primer on the differences between on-chip and antenna beamforming.

The real question is: best for whom, or for what situation?

Failing an updated version of the Wireless LAN Professionals report incorporating the newer 802.11ac products, there’s not a comprehensive vendor-neutral assessment that I can point to in order to tell you which device technically has the best overall coverage in this generation of product.  Even if there were, it wouldn’t effectively answer which is best for your environment.  Beyond RF capabilities, there are several factors which differentiate Enterprise 802.11ac WLAN products.  These include everything from design, to ease-of-use, to management capabilities, scalability, price, and more.  For example, Ubiquiti offers a low-cost controller-less product that’s generally easy to deploy and manage, and is probably reasonable for most IT resources to deploy.  However, you generally need more access points than you would with a Ruckus deployment, and that means potentially more troubleshooting work.  And unfortunately, Ubiquiti doesn’t really offer technical support that would be comparable to any of the Enterprise-class AP vendors.  Meanwhile, the Cisco Aironet product is an Enterprise-class access point designed by the marketshare leader, and employs Cisco’s IOS, which makes it generally more suited toward larger environments, or for environments where the IT resources involved have Cisco-specific experience.  Cisco has good support, but reaching the right support resources in a timely manner, and getting effective support, doesn’t always happen.  Ruckus, on the other hand, employs a unique phased-antenna element array design that provides a fundamentally unique advantage relative to the competition, which was apparent during the last generation of products in the vendor-independent assessment.  Further, Ruckus’s products are generally easy to manage and work with, can be deployed by IT resources with moderate Wi-Fi experience, and their technical support is excellent.

In other words, it’s not necessarily a question of which access point is best, it’s a question of which access point is best for your environment and situation.

My recommendation

Having recently implemented solutions from Cisco, Ruckus, and Ubiquiti, my recommendation would be based on your specific situation.  But generally speaking, I do have a preference for the Ruckus product set.  From a technology standpoint, Ruckus has developed a unique edge in terms of their antenna solution, which provides a fundamentally different capability relative to any of the Enterprise-class WLAN competitors.  What’s more, during the last generation of products, their ranking in the independent vendor assessment implies that the adaptive antenna design is giving them an edge relative to their competition.  From a support standpoint, Ruckus’s support team has been subjectively better than Cisco’s in my experience.  From an ongoing management standpoint, the Ruckus ZoneDirector interface provides a good dashboard where you can easily drill down on performance and troubleshooting, without necessarily requiring the support of an IT organization or individual.  While Ruckus isn’t the right solution for every situation, it is a very attractive solution that is priced competitively with the Cisco Aironet offering.

Bottom line?

Having slow or poor Wi-Fi coverage is a solvable problem today, and 802.11ac represents the latest revision of the technology standard.  At this point in the cycle, we’re starting to see mature second generation product sets coming from the major Enterprise WLAN competitors, and they all offer generally competitive and capable products.  If you’re tired of struggling with unreliable Wi-Fi in your office, it’s a very solvable problem today and we’ll be able to help you find the best solution for your needs and your environment.

Multi-Site WordPress Walkthrough

(Image credit: www.flickr.com/photos/brodiekarel/11382310205)

This guide is intended to provide you with an A-to-Z walkthrough of installing and configuring a multiple-WordPress-site environment under a single Ubuntu instance, as well as getting your existing site(s) restored into the new environment.  This would typically be useful for folks who want to move their WordPress sites from one VPS or hosting provider to another.  This guide may be particularly suited for users of DigitalOcean, as it really consolidates many of their excellent knowledge base and how-to articles, as well as provides solutions to some of the problems that you might run into throughout the process.  That having been said, it’s applicable to many environments.  Before you get started, make sure that you’re using something reliable to back up your WordPress sites.

Deploy a new VPS instance 

If you’re restoring from a single-instance WordPress deployment (e.g. DigitalOcean’s 1-click WordPress on Ubuntu 12.x), and you want to move it to a different WordPress VPS instance (e.g. an Ubuntu 14.04 instance on DigitalOcean with WordPress configured in a multi-tenant configuration like so), you’ll first need to deploy and configure your new VPS instance.  Follow the “Initial Server Setup with Ubuntu 12.04” guide to get you started (it’s close enough to Ubuntu 14.04 to follow).  That basically consists of the following (a minimal command sketch follows the list):

  • Changing your root password
  • Creating a new user
  • Giving the new user root privileges via sudo (visudo)
  • Changing the default port that ssh runs on
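Roughly, the commands involved look like this (the username “deploy” and port 2222 are just placeholders – pick your own):

passwd                          # change the root password
adduser deploy                  # create a new user
visudo                          # grant the new user sudo, e.g. add:  deploy ALL=(ALL:ALL) ALL
nano /etc/ssh/sshd_config       # change "Port 22" to something else, e.g. "Port 2222"
service ssh restart             # apply the new SSH port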

DNS and LAMP Stack

Next, follow the “How to Set Up a Host Name with DigitalOcean” guide, which is a short primer on DNS.  Then proceed with the “How to Install Linux, Apache, MySQL, PHP (LAMP) on Ubuntu” guide on DigitalOcean.  Basically, what you’re doing here is getting your VPS instance deployed, active, and configured with a LAMP stack.  In short (a command sketch follows the list)…

  • Install Apache
  • Install & Configure MySQL
  • Install PHP and verify that it’s working
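On an Ubuntu 14.04-era instance, that works out to something like the following (the package names shown are the 14.04 ones – adjust for your release):

sudo apt-get update
sudo apt-get install apache2
sudo apt-get install mysql-server
sudo mysql_secure_installation
sudo apt-get install php5 libapache2-mod-php5 php5-mysql
php -v    # verify PHP is installed and working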

Multiple WordPress Sites on a Single Ubuntu VPS

Finally, follow the “How To Set Up Multiple WordPress Sites on a Single Ubuntu VPS”.  This guide has everything you need to get your VPS instance configured to run multiple WordPress sites on a single Ubuntu instance.

  • Download WordPress
  • Create site databases and users in MySQL
  • Configure the site root directories in /var/www/FirstSite, /var/www/SecondSite, etc.
  • Use rsync (rsync -avP /source /var/www/FirstSite) to copy the directory hierarchy over to the site root directories.
  • Configure WordPress (/var/www/FirstSite/wp-config.php) – this is where you link the MySQL databases and users that you created above (a sketch of those MySQL commands follows this list) to this /var/www/FirstSite (etc.) instance.
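For reference, the database and user creation step looks roughly like this (the database, user, and password names are placeholders – whatever you choose here must match the values in wp-config.php):

mysql -u root -p
CREATE DATABASE FirstDatabase;
CREATE USER 'firstsiteuser'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON FirstDatabase.* TO 'firstsiteuser'@'localhost';
FLUSH PRIVILEGES;
EXIT;

Then, in /var/www/FirstSite/wp-config.php, set DB_NAME, DB_USER, and DB_PASSWORD to match.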

From here you’ll configure the site’s Virtual Host configuration (a worked example follows the list):

  • From /etc/apache2/sites-available – cp 000*.conf to FirstSite.conf
  • In here you’ll want to add ServerName FirstSite.com, ServerAlias www.firstsite.com (you can also use *.com, etc.), DocumentRoot /var/www/FirstSite
  • Install the PHP Module
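Put together, a minimal version of that looks something like this (firstsite.com is a placeholder domain):

cd /etc/apache2/sites-available
sudo cp 000-default.conf FirstSite.conf
# edit FirstSite.conf so the <VirtualHost *:80> block contains:
#   ServerName firstsite.com
#   ServerAlias www.firstsite.com
#   DocumentRoot /var/www/FirstSite
sudo a2ensite FirstSite.conf
sudo service apache2 reload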

Restore your WordPress site from Backup

At this point, you’re going to want to grab one of those WordPress backups that you’ve been doing, and copy it over to your new VPS instance.  If you’re uploading it from a Windows machine, WinSCP will do the trick.  Check the “Restoring Your Database From Backup” guide for more information.  A consolidated command sketch follows the list below.

    • Copy the backup archive to your home directory ~/restore
    • unTar/unzip it… tar -zxvf backup.tar.gz
    • Put the backed-up SQL back into your MySQL database that you created earlier (e.g. FirstDatabase).
    • Save a copy of your site root WordPress config file (e.g. cp /var/www/FirstSite/wp-config.php ~/working/wp-config.php) so that you don’t have to re-create it.
    • Next, you’re going to need to dump the restored WordPress files into the site root directory… this can be accomplished via rsync (rsync -avP /source /var/www/FirstSite).
    • Copy your good wp-config.php file, which has the links to your MySQL database, back over (e.g. cp ~/working/wp-config.php /var/www/FirstSite/)
    • Restart Apache (sudo service apache2 restart).
    • Your site should now be working.
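Put together, and assuming the backup archive unpacks into a wordpress/ directory containing a database dump named database.sql (both of those names are assumptions – adjust to whatever your backup actually contains), the restore looks roughly like:

cd ~/restore
tar -zxvf backup.tar.gz
cp /var/www/FirstSite/wp-config.php ~/working/wp-config.php     # keep the working config safe
mysql -u root -p FirstDatabase < database.sql                   # load the backed-up SQL
rsync -avP ~/restore/wordpress/ /var/www/FirstSite/             # copy the site files into place
cp ~/working/wp-config.php /var/www/FirstSite/                  # put the good config back
sudo service apache2 restart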

 Fixing Post Name Permalinks

  • Make sure mod_rewrite is enabled by running “a2enmod rewrite” (then restart Apache – service apache2 restart).
  • Set AllowOverride All for every occurrence of AllowOverride in /etc/apache2/apache2.conf.
  • To eliminate the non-critical informational warning on restarting apache2 (“Could not reliably determine the server’s fully qualified domain name, using 127.0.0.1 for ServerName”), edit /etc/apache2/apache2.conf and add a line reading ServerName localhost.  (A consolidated sketch of these steps follows.)
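In command form, that’s roughly (the paths shown are the stock Ubuntu ones):

sudo a2enmod rewrite
sudo nano /etc/apache2/apache2.conf   # set "AllowOverride All" in the /var/www <Directory> block, and add "ServerName localhost"
sudo service apache2 restart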

VAX Virtualization Explored under Charon


 “Hey… you know that plant of ours in Europe… the one with all of the downtime?”

“Sure… “

“Did you know it runs on a 30-year old VAX that we’re sourcing parts for off of Ebay?”

“Really?! … I guess that makes 4 plants that I know of in the exact same situation!”

That conversation, or one very much like it, is being had at thousands of publicly traded companies and government organizations around the world.  If you’re a sysadmin, a VMware resource, or a developer who got their start anytime in the x86 era, you’ll be forgiven if the closest you’ve come to hardware from Digital Equipment Corp (DEC)/HP Alpha is maybe an Alpha/NT box somewhere along the line.  You’d also be forgiven for assuming that VAX hardware from the 1970s doesn’t still run manufacturing lines that produce millions of dollars in products a year.

But that’s exactly what’s happening.

… And so is the Ebay part of the equation.

To hear the Alpha folks talk, those old platforms were bulletproof and would run forever.  Perhaps not in exactly the same way that large swaths of the banking industry still run on COBOL, but it’s an apt comparison.  The biggest difference is that code doesn’t literally rust away.  The DEC/HP Alpha hardware is engineered to something like Apollo-era reliability standards… but while they stopped flying Saturn V’s 40 years ago, these VAX machines are still churning away.  Anyway, there’s a joke that goes something like… you know how some sysadmins used to like to brag about their *nix system uptimes being measured in years (before Heartbleed and Shellshock)?

Well, VAX folks brag about uptimes measured in decades.

Crazy, isn’t it?

You might be sitting there asking yourself how we got to this situation.  In simple business terms… if it ain’t broke (and you can’t make any extra margin by fixing it), don’t fix it!

I know lots of IT folks have a tendency to think in 1-3 year time-spans.  I get it.  We like technology, the latest gadgets, and sometimes have an unfortunate tendency to argue about technical obscurities.  But that’s only really because “technology moves so fast”, right?  Yes, there’s Moore’s law, and the cloud, and mobility, and all of that stuff.  Yes, technology does move fast.  But business… business doesn’t really care about how fast technology moves beyond the context of whether it can benefit them.  In other words, you use assets for as long as you can extract value from them.

That’s just good business.

What’s the objective of this project?

The primary objective is to mitigate risk – the risk that a critical hardware failure will occur that takes production off-line for an indeterminate amount of time.  Secondary objectives include modernizing the solution, improving disaster recovery capabilities, eliminating proprietary or unsupported code, and cleaning up any hidden messes that might have collected over the years.

Put differently, the question really is – can we virtualize it and buy some more time, or do we need to re-engineer the solution?

Starting with a quick overview of the project in question… The CLI looks vaguely familiar, but requires a bit of translation (or a VAX/VMS resource) to interact with it.  Starting Lsnrctl returns an Oracle database version… which, unfortunately, several searches return precisely zero results for.  Un(der)documented versions of Oracle are always a favorite.  Backups to tape are reportedly functioning, and there’s also a Windows reporting client GUI (binary only, of course) from a long-defunct vendor.  The good news this time around… the platform is apparently functional and in a relatively “good” working state.  The bad news… there is no support contract for anything.  Not for the hardware, not for Oracle, and certainly not for the Windows reporting client.  In this case, the legacy VAX is basically a magical black box that just runs and gives the customer the data they need.  And at this point, all institutional knowledge beyond running very specific command sets has been lost – which isn’t atypical for 20-30 year old platforms.

Which brings us to the question – virtualize, or re-engineer?

Virtualizing a VAX

To start with, most VAX/VMS operating systems are designed for specific CPU types, so virtualizing directly using something like VMware or Hyper-V is a non-starter.  But those CPU architectures and configurations are pretty old now.  Like, 20-30 years old.  That makes them candidates for brute-force emulation.  And there are a few choices of emulator out there… including open-source options like SIMH and TS10, as well as commercial solutions like NuVAX and Charon.  After doing a bit of research, it’s pretty clear that there was only one leading commercial offering for my use case… Charon, from a company called Stromasys.  While there may be merit in exploring open-source alternatives further, the reality is that the open-source community for VAX system development isn’t exactly active in the same sense the Linux OS community is active.  So if you do go down the open-source path, keep in mind that some of the solutions aren’t even going to be able to do what you might think of as simple and obvious things… like, say, boot OpenVMS.  Which is pretty limiting.

Charon Overview

Aside from the Greek mythology reference to the ferryman who transported the dead across the river Styx, Charon is also a brand name for a group of products (CHARON-AXP, CHARON-VAX) that emulate several CPU architectures, covering most of the common DEC platforms.  You know… things like OpenVMS, VAX, AlphaServer, MicroVAX 3100, and other legacy operating systems.  Why the name Charon?  Like the mythological boatman who, for a price, keeps the dead from being trapped on the wrong side of the river (e.g. old failing hardware), Charon transports the legacy platform unchanged between the two worlds (legacy and modern).  In a similar manner that running a P2V conversion on, say, Windows NT lets you run a 20-year-old Windows asset under vSphere ESXi, Charon lets you run your legacy VAX workloads unchanged on modern hardware.  In other words, you can kind of think of Charon like a P2V platform for your legacy VAX/VMS systems.  Of course, that’s a wildly inaccurate way to think about it, but that’s basically the result you effectively get.

How does Charon Work?

Charon is an emulator… its secret sauce is that it does the hard work of converting instructions written for a legacy hardware architecture so that you can run them on an x86/x64 CPU architecture, and do so quickly and reliably.  Because Charon enables you to run your environment unchanged on the new hardware, not only do you get to avoid the costly effort of re-engineering your solution, but you can also usually avoid the painful effort of reinstalling your applications, databases, etc.  So beneath the hood, what Charon is essentially doing is creating a new hardware abstraction layer (HAL) to sit on top of your x86/x64-compatible physical or virtual hardware.  The Charon emulator creates a model of the DEC/HP Alpha hardware and I/O devices.  Once you have the Charon emulator installed, you have an exact working model on which you can install your DEC/HP/VMS operating system and applications.  Charon systems then execute the same binary code that the old physical hardware did.  Here’s what the whole solution stack looks like mashed together:

[Image: the full Charon solution stack]

Yes, lots of layers.  But even still, because of the difference between the legacy platform and the modern platform, you still typically get a performance boost in the process.

What do I need?

Assuming you have a running legacy asset that’s compatible with Charon, all you need is a destination server.  In my case, the customer had an existing vSphere environment, and existing backup/recovery capabilities, so all that was really needed was an ESXi host to run a new VM on, and the licensing for Charon.

The process at 30,000 feet looks like this:

  1. Add a new vSphere (5.5x) host
  2. Deploy a Windows 2008 R2 VM (or Linux) template
  3. Use image backups to move your system to the VM
  4. Restore databases from backup.
  5. Telnet into your Charon instance

At a high-level, it really is that simple.

How challenging is the installation? 

If you skim the documentation before installing, it shouldn’t be an issue.  Assuming you have access to the legacy host, you can get an inventory of information about the legacy platform in order to get the right Charon license… you basically need to grab a list of things like CPU architecture, OS version, tape drive type, etc. (e.g. SHO SYS, SHO DEV, SHO LIC, SHO MEM, SHO CLU, etc.), which will enable you to get the right Charon licenses.  After that, you’ll be ready to step through the installation.  This isn’t a next/next/finish setup, but once you’ve got the USB dongle set up and created a VM based on the recommended hardware specifications, you’re well on your way.

Restoring the data from the legacy hardware onto the new VM can be a bit more involved.  In a perfect world, you’d be able to restore directly from tape into the new Windows VM – assuming you have the right tape drive, good backups to tape, etc.  Short of that, you’ll need to back up and restore the legacy drives into the new environment.  So you’re going to take image backups of each drive, and then upload the backups to your new VM.  More specifically, do a backup from drive A0 to A1, then A1 to A2, etc.  Upload the A0 backup to your new VM and restore the data.  Proceed like that until you’ve completed all of the restores.  In this manner, you’ll be able to preserve your operating system, database installation, and any other applications, without going through the time-consuming installation and configuration process.  As a result, you avoid troublesome things like version mismatches, missing media, poor documentation, etc.  After the backups are restored, Charon is able to take those restored files that exist on the parent VM, and boot them as local storage – and you’re off and running.
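As a hedged sketch only (the device and save-set names here are made up, and the exact qualifiers will depend on your OpenVMS version and hardware), the per-drive image backup on the VMS side uses the native BACKUP utility, along these lines:

$ ! create an image save set of one disk onto another disk
$ BACKUP/IMAGE/VERIFY DKA0: DKA100:[BACKUPS]DKA0_IMAGE.BCK/SAVE_SET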

What does Charon look like?

After you’ve installed Charon, the management interface is accessible via the system tray.

[Image: the Charon management interface in the system tray]

If you’re thinking that’s pretty bare-bones, then you’re right.  Once you’ve installed and configured Charon, there’s just not a lot to do from the management interface.

How do I login to the console?

In order to access the legacy OS console and CLI, you’ll simply fire-up your favorite telnet client and point it at the IP address of your Charon system.

[Image: the Charon console session via telnet]

Which should resemble the old physical console.

Issues Encountered 

While the 30,000-foot process outlined earlier in the article is essentially what was followed, the biggest problem that we ran into is probably exactly what you’d guess it to be: the Oracle database.  Unsupported and under-documented as it was, we ran into several problems restoring successfully from tape.  While not a problem with Charon, the reality is that very old and unsupported platforms can have problems that go undetected for years.  While this was resolved within the planned budget, it was still inconvenient.  And it should serve as a reminder that, to the extent it’s possible to have a support contract in place for critical components, you should.  At the same time, that’s not always the boots-on-the-ground situation.

The verdict?

We successfully mitigated the hardware risk associated with the failing hardware in the environment, which was our primary objective.  Using Charon, we were able to pull the legacy environment forward, running it on new, supported hardware.  Between re-installing the legacy OS under Charon and restoring our application data and backups via tape, we were able to also meet some of the secondary objectives.  As a Windows 2008 R2 VM running on a dedicated vSphere host in the customer’s datacenter, we have something modernized (at least to some extent) that plugs into the existing backup infrastructure.  With Veeam Backup and Replication, and a standard backup policy with standard RPO and RTO objectives, we have something that the client has a high degree of confidence in.
