
Multi-Site WordPress Walkthrough

This guide is intended to provide you with an A-Z walkthrough of installing and configuring a multiple-WordPress-site environment under a single Ubuntu instance, as well as getting your existing site(s) restored into the new environment.  This is typically useful for folks who want to move their WordPress sites from one VPS or hosting provider to another.  The guide may be particularly suited to users of DigitalOcean, as it consolidates many of their excellent knowledge base and how-to articles, and provides solutions to some of the problems that you might run into along the way.  That said, it’s applicable to many environments.  Before you get started, make sure that you’re using something reliable to back up your WordPress sites.

Deploy a new VPS instance 

If you’re restoring from a single-instance WordPress deployment (e.g. the DigitalOcean 1-click WordPress app on Ubuntu 12.x) and want to move it to a different WordPress VPS instance (e.g. an Ubuntu 14.04 instance on DigitalOcean with WordPress set up in a multi-tenant configuration, as described here), you’ll first need to deploy and configure your new VPS instance.  Follow the “Initial Server Setup with Ubuntu 12.04” guide to get started (it’s close enough to Ubuntu 14.04 to follow), which basically consists of:

  • Changing your root password
  • Creating a new user
  • Giving the new user root privileges via sudo (visudo)
  • Changing the default port that ssh runs on
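Concretely, those four steps come down to just a few commands on the new instance – a minimal sketch, assuming an example user name of “demo” and an example SSH port of 2222 (pick your own values):

```shell
passwd                        # 1. change the root password
adduser demo                  # 2. create the new user
visudo                        # 3. add:  demo    ALL=(ALL:ALL) ALL
nano /etc/ssh/sshd_config     # 4. change "Port 22" to "Port 2222"
service ssh restart           #    then restart sshd to apply the new port
```

Keep your current SSH session open while you verify that you can log in as the new user on the new port, so you don’t lock yourself out.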

DNS and LAMP Stack

Next, follow the “How to Set Up a Host Name with Digital Ocean” guide, which is a short primer on DNS, and then proceed with the “How To Install Linux, Apache, MySQL, PHP (LAMP) Stack on Ubuntu” guide, also on DigitalOcean.  Basically, what you’re doing here is getting your VPS instance deployed, active, and configured with a LAMP stack.  In short…

  • Install Apache
  • Install & Configure MySQL
  • Install PHP and verify that it’s working
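On Ubuntu 14.04, the three bullets above translate to roughly the following (package names are from the 14.04-era repositories; later releases name the PHP packages differently):

```shell
sudo apt-get update
sudo apt-get install apache2                      # Apache
sudo apt-get install mysql-server php5-mysql      # MySQL + PHP bindings
sudo mysql_install_db
sudo mysql_secure_installation                    # set the MySQL root password, etc.
sudo apt-get install php5 libapache2-mod-php5     # PHP itself
```

You can verify that PHP is working by dropping a file containing <?php phpinfo(); ?> into /var/www/html and browsing to it.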

Multiple WordPress Sites on a Single Ubuntu VPS

Finally, follow the “How To Set Up Multiple WordPress Sites on a Single Ubuntu VPS”.  This guide has everything you need to get your VPS instance configured to run multiple WordPress sites on a single Ubuntu instance.

  • Download WordPress
  • Create site databases and users in MySQL
  • Configure the site root directories in /var/www/FirstSite, /var/www/SecondSite, etc.
  • Use rsync (rsync -avP /source /var/www/FirstSite) to copy the directory hierarchy over to the site root directories.
  • Configure WordPress (/var/www/FirstSite/wp-config.php) – this is where you link the MySQL database and user that you created earlier to this /var/www/FirstSite (etc.) instance.
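For reference, the database-and-user creation step above looks roughly like this for each site (the database, user, and password here are example values; substitute your own):

```
mysql> CREATE DATABASE FirstDatabase;
mysql> CREATE USER 'firstuser'@'localhost' IDENTIFIED BY 'ChangeMe';
mysql> GRANT ALL PRIVILEGES ON FirstDatabase.* TO 'firstuser'@'localhost';
mysql> FLUSH PRIVILEGES;
```

The matching entries in /var/www/FirstSite/wp-config.php are the DB_NAME, DB_USER, and DB_PASSWORD defines (with DB_HOST left at localhost).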

From here, you’ll configure the site virtual host configuration:

  • From /etc/apache2/sites-available, copy the default config to a per-site file (cp 000-default.conf FirstSite.conf)
  • In here you’ll want to add ServerName and ServerAlias (you can also use wildcards, etc.), and set DocumentRoot /var/www/FirstSite
  • Install the PHP Module
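Put together, a minimal FirstSite.conf might look like the following (the domain names are placeholders for your own):

```
<VirtualHost *:80>
    ServerName firstsite.com
    ServerAlias www.firstsite.com
    DocumentRoot /var/www/FirstSite
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```

Enable it with sudo a2ensite FirstSite.conf, reload Apache (sudo service apache2 reload), and repeat for SecondSite, etc.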

Restore Your WordPress Site from Backup

At this point, you’re going to want to grab one of those WordPress backups that you’ve been doing, and copy it over to your new VPS instance.  If you’re uploading it from a Windows machine, WinSCP will do the trick.  Check this “Restoring Your Database From Backup” guide for more information.

    • Copy the backup archive to your home directory ~/restore
    • unTar/unzip it… tar -zxvf backup.tar.gz
    • Put the backed-up SQL back into your MySQL database that you created earlier (e.g. FirstDatabase).
    • Save a copy of your site root WordPress config file (e.g. cp /var/www/FirstSite/wp-config.php ~/working/wp-config.php) so that you don’t have to re-create it.
    • Next, you’re going to need to dump the restored WordPress files into the site root directory… this can be accomplished via rsync (rsync -avP /source /var/www/FirstSite).
    • Copy your good wp-config.php file, which has the links to your MySQL database, back over (e.g. cp ~/working/wp-config.php /var/www/FirstSite/)
    • Restart Apache (sudo service apache2 restart).
    • Your site should now be working.
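The unpack-and-copy portion of the steps above can be dry-run locally.  Here’s a minimal, self-contained sketch using throwaway paths under /tmp (your real paths would be ~/restore and /var/www/FirstSite; the file names are stand-ins):

```shell
# Set up a throwaway workspace with a fake backed-up site and its archive.
set -e
demo=/tmp/wp-restore-demo
rm -rf "$demo" && mkdir -p "$demo/backup/wp-content" "$demo/FirstSite"
echo '<?php // stub' > "$demo/backup/wp-content/index.php"
tar -czf "$demo/backup.tar.gz" -C "$demo" backup   # stands in for your real backup.tar.gz

# Untar/unzip it (same flags as the guide's tar -zxvf, minus -v).
tar -zxf "$demo/backup.tar.gz" -C "$demo"

# Sync the restored files into the site root.  cp -a is used here so the demo
# has no extra dependencies; on the server you'd use rsync -avP instead.
cp -a "$demo/backup/." "$demo/FirstSite/"

ls "$demo/FirstSite/wp-content"
```

On the real server, the final step before restarting Apache is copying your saved wp-config.php back into the site root, exactly as listed above.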

Fixing Post Name Permalinks

  • Make sure mod_rewrite is enabled (a2enmod rewrite), then restart Apache (service apache2 restart).
  • Change AllowOverride to All for every relevant occurrence of AllowOverride in /etc/apache2/apache2.conf.
  • To eliminate the non-critical warning on restarting apache2 (“Could not reliably determine the server’s fully qualified domain name”), edit /etc/apache2/apache2.conf and add a line reading ServerName localhost.
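Taken together, the apache2.conf changes look like this (the <Directory> path shown matches the default /var/www web root on Ubuntu 14.04’s Apache 2.4):

```
ServerName localhost

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
```

With mod_rewrite enabled and AllowOverride All in place, the .htaccess rewrite rules WordPress generates for pretty permalinks will take effect after the Apache restart.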

VAX Virtualization Explored under Charon


 “Hey… you know that plant of ours in Europe… the one with all of the downtime?”

“Sure… “

“Did you know it runs on a 30-year old VAX that we’re sourcing parts for off of Ebay?”

“Really?! … I guess that makes 4 plants that I know of in the exact same situation!”

That conversation, or one very much like it, is being had at thousands of publicly traded companies and government organizations around the world.  If you’re a sysadmin, a VMware resource, or a developer who got their start anytime in the x86 era, you’ll be forgiven if the closest you’ve come to hardware from Digital Equipment Corp (DEC)/HP Alpha is maybe an Alpha/NT box somewhere along the line.  You’d also be forgiven for assuming that VAX hardware from the 1970s doesn’t still run manufacturing lines that produce millions of dollars in products a year.

But that’s exactly what’s happening.

… And so is the Ebay part of the equation.

To hear the Alpha folks talk, those old platforms were bulletproof and would run forever.  Perhaps not in exactly the same way that large swaths of the banking industry still run on COBOL, but it’s an apt comparison.  The biggest difference is that code doesn’t literally rust away.  The DEC/HP Alpha hardware is engineered to something like Apollo-era reliability standards… but while they stopped flying Saturn V’s 40 years ago, these VAX machines are still churning away.  Anyway, there’s a joke that goes something like… you know how some sysadmins used to brag about their *nix system uptimes being measured in years (before Heartbleed and Shellshock)?

Well, VAX folks brag about uptimes measured in decades.

Crazy, isn’t it?

You might be sitting there asking yourself how we got into this situation.  In simple business terms… if it ain’t broke (and you can’t make any extra margin by fixing it), don’t fix it!

I know lots of IT folks have a tendency to think in 1-3 year time-spans.  I get it.  We like technology, the latest gadgets, and sometimes have an unfortunate tendency to argue about technica-obscura.  But that’s only really because “technology moves so fast”, right?  Yes, there’s Moore’s law, and the Cloud, and mobility, and all of that stuff.  Yes, technology does move fast.  But business… business doesn’t really care about how fast technology moves beyond the context of whether it can benefit them.  In other words, you use assets for as long as you can extract value from them.

That’s just good business.

What’s the objective of this project?

The primary objective is to mitigate risk – the risk that a critical hardware failure will occur that takes production off-line for an indeterminate amount of time.  Secondary objectives include modernizing the solution, improving disaster recovery capabilities, eliminating proprietary or unsupported code, and cleaning up any hidden messes that might have collected over the years.

Put differently, the question really is – can we virtualize it and buy some more time, or do we need to re-engineer the solution?

Starting with a quick overview of the project in question… the CLI looks vaguely familiar, but requires a bit of translation (or a VAX/VMS resource) to interact with it.  Starting lsnrctl returns an Oracle database version… for which, unfortunately, several searches return precisely zero results.  Under-documented versions of Oracle are always a favorite.  Backups to tape are reportedly functioning, and there’s also a Windows reporting client GUI (binary only, of course) from a long-defunct vendor.  The good news this time around… the platform is apparently functional and in a relatively “good” working state.  The bad news… there is no support contract for anything.  Not for the hardware, not for Oracle, and certainly not for the Windows reporting client.  In this case, the legacy VAX is basically a magical black box that just runs and gives the customer the data they need.  And at this point, all institutional knowledge beyond running very specific command sets has been lost – which isn’t atypical for 20-30 year old platforms.

Which brings us to the question – virtualize, or re-engineer?

Virtualizing a VAX

To start with, most VAX/VMS operating systems are designed for specific CPU types, so virtualizing directly using something like VMware or Hyper-V is a non-starter.  But those CPU architectures and configurations are pretty old now.  Like, 20-30 years old.  That makes them candidates for brute-force emulation.  And there are a few choices of emulator out there… including open-source options like SIMH and TS10, as well as commercial solutions like NuVAX and Charon.  After doing a bit of research, it’s pretty clear that there is only one leading commercial offering for my use case… Charon, from a company called Stromasys.  While there may be merit in exploring open-source alternatives further, the reality is that the open-source community for VAX system development isn’t active in the same sense that the Linux OS community is active.  So if you do go down the open-source path, keep in mind that some of the solutions aren’t even going to be able to do what you might think of as simple and obvious things… like, say, boot OpenVMS.  Which is pretty limiting.

Charon Overview

Aside from the Greek mythology reference to the ferryman who transported the dead across the river Styx, Charon is also a brand name for a group of products (CHARON-AXP, CHARON-VAX) that emulate several CPU architectures, covering most of the common DEC platforms… things like VAX, AlphaServer, and MicroVAX 3100 hardware running OpenVMS and other legacy operating systems.  Why the name Charon?  Like the mythological boatman who, for a price, keeps the dead from being trapped on the wrong side of the river (e.g. old failing hardware), Charon transports the legacy platform unchanged between the two worlds (legacy and modern).  In a similar manner that running a P2V conversion on, say, Windows NT lets you run a 20-year-old Windows asset under vSphere ESXi, Charon lets you run your legacy VAX workloads unchanged on modern hardware.  In other words, you can kind of think of Charon like a P2V platform for your legacy VAX/VMS systems.  That’s a technically inaccurate way to think about it, but it’s effectively the result you get.

How does Charon Work?

Charon is an emulator… its secret sauce is that it does the hard work of converting instructions written for a legacy hardware architecture so that you can run them on an x86/x64 CPU architecture, and does so quickly and reliably.  Because Charon enables you to run your environment unchanged on the new hardware, not only do you get to avoid the costly effort of re-engineering your solution, but you can also usually avoid the painful effort of reinstalling your applications, databases, etc.  Beneath the hood, what Charon is essentially doing is creating a new hardware abstraction layer (HAL) that sits on top of your x86/x64-compatible physical or virtual hardware.  The Charon emulator creates a model of the DEC/HP Alpha hardware and I/O devices.  Once you have the Charon emulator installed, you have an exact working model on which you can install your DEC/HP/VMS operating system and applications.  Charon systems then execute the same binary code that the old physical hardware did.  Here’s what the whole solution stack looks like mashed together:


Yes, lots of layers.  But even so, because of the performance gap between the legacy platform and the modern platform, you typically still get a performance boost in the process.

What do I need?

Assuming you have a running legacy asset that’s compatible with Charon, all you need is a destination server.  In my case, the customer had an existing vSphere environment, and existing backup/recovery capabilities, so all that was really needed was an ESXi host to run a new VM on, and the licensing for Charon.

The process at 30,000 feet looks like this:

  1. Add a new vSphere (5.5x) host
  2. Deploy a Windows 2008 R2 VM (or Linux) template
  3. Use image backups to move your system to the VM
  4. Restore databases from backup.
  5. Telnet into your Charon instance

At a high-level, it really is that simple.

How challenging is the installation? 

If you skim the documentation before installing, it shouldn’t be an issue.  Assuming you have access to the legacy host, you can take an inventory of the legacy platform in order to get the right Charon license… you basically need to grab a list of things like CPU architecture, OS version, tape drive type, etc. (e.g. SHO SYS, SHO DEV, SHO LIC, SHO MEM, SHO CLU, etc.).  After that, you’ll be ready to step through the installation.  This isn’t a next/next/finish setup, but once you’ve got the USB dongle set up and created a VM based on the recommended hardware specifications, you’re well on your way.

Restoring the data from the legacy hardware onto the new VM can be a bit more involved.  In a perfect world, you’d be able to restore directly from tape into the new Windows VM – assuming you have the right tape drive, good backups to tape, etc.  Short of that, you’ll need to back up and restore the legacy drives into the new environment.  So you’re going to take image backups of each drive, and then upload the backups to your new VM.  More specifically, do a backup from drive A0 to A1, then A1 to A2, etc., upload the A0 backup to your new VM, and restore the data.  Proceed like that until you’ve completed all of the restores.  In this manner, you’ll be able to preserve your operating system, database installation, and any other applications without going through the time-consuming installation and configuration process.  As a result, you avoid troublesome things like version mismatches, missing media, poor documentation, etc.  After the backups are restored, Charon is able to take those restored files that exist on the parent VM and boot them as local storage – and you’re off and running.
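As a hedged sketch of that drive-to-drive step (the device names and save-set file name here are illustrative, not taken from the actual system), the per-drive image backups on the VMS side are done with the OpenVMS BACKUP utility:

```
$ BACKUP/IMAGE/LOG DUA0: DUA1:[BACKUPS]A0IMAGE.BCK/SAVE_SET
```

Each resulting save set then gets uploaded to the new VM, restored, and presented to Charon as a local disk.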

What does Charon look like?

After you’ve installed Charon, the management interface is accessible via the system tray.


If you’re thinking that’s pretty bare-bones, then you’re right.  Once you’ve installed and configured Charon, there’s just not a lot to do from the management interface.

How do I login to the console?

In order to access the legacy OS console and CLI, you’ll simply fire up your favorite telnet client and point it at the IP address of your Charon system.


Which should resemble the old physical console.

Issues Encountered 

While the 30,000-foot process outlined earlier in the article is essentially what was followed, the biggest problem that we ran into is probably exactly what you’d guess it to be.  The Oracle database.  Unsupported and under-documented as it was, we ran into several problems restoring successfully from tape.  While not a problem with Charon, the reality is that very old and unsupported platforms can have problems that go undetected for years.  While this was resolved within the planned budget, it was still inconvenient.  And it should serve as a reminder that, to the extent it’s possible to have a support contract in place for critical components, you should.  At the same time, that’s not always the boots-on-the-ground situation.

The verdict?

We successfully mitigated the risk associated with the failing hardware in the environment, which was our primary objective.  Using Charon, we were able to pull the legacy environment forward, running it on new, supported hardware.  Between re-installing the legacy OS under Charon and restoring our application data and backups via tape, we were able to meet some of the secondary objectives as well.  As a Windows 2008 R2 VM running on a dedicated vSphere host in the customer’s datacenter, we have something modernized (at least to some extent) that plugs into the existing backup infrastructure.  With Veeam Backup and Replication and a standard backup policy with standard RPO and RTO targets, we have something that the client has a high degree of confidence in.

Must Have WordPress Plugins for 2015

(This article is part of my WordPress Toolkit series)

Best plugin advice?

Keep it simple.

It’s not only good advice when it comes to plugins – it’s good advice for your site’s design too.  This probably goes doubly for entrepreneurs and small businesses that aren’t in the web-design business.  Why?  Because it’s too easy to go overboard… adding features and functionality until your site becomes slow and distracting (or worse).  This isn’t to say that plugins are bad, or should be avoided; it’s just that you need to be selective about the types of plugins you use and have clear objectives in mind when using them.  In this case, I’m talking about utility plugins… plugins that provide you with basic and critical functionality that’s common to most WordPress sites.

… which is why this is a list of only 5 plugins.



Akismet – this one probably almost goes without saying.  It pretty effectively handles comment spam (assuming you allow comments on your site).  It comes pre-installed with every WordPress installation, and just needs to be activated.  Is comment spam a real problem?  Absolutely.  On popular websites, spam can account for up to 85% of comments!  Experience also suggests that it’s even higher for new sites (as in, 99.9%).  Akismet does a solid job of eliminating the vast majority of comment spam.  It also does an effective job of preventing false positives (e.g. legitimate comments flagged as spam).  While some users have criticized the false-positive detection, by and large Akismet does an excellent job.  Configuration is fairly straightforward, and there are plenty of Akismet how-to articles available should you need them.




BackupWordPress enables you to back up your entire WordPress site (including all of your files and your WordPress database) on a scheduled basis.  It does exactly the job you’d expect it to do.  Mainly, it keeps your sites backed up should something bad happen.  The free version does enable you to get scheduled backups emailed over to you, but your site will quickly grow beyond what email can accommodate.  The $99 bundle lets you direct your backup jobs somewhere convenient, like Dropbox, Google Drive, Amazon S3, SFTP, etc., and is well worth the cost.



Contact Form 7 is one of the best contact form plugins for WordPress, which can be particularly useful if your theme doesn’t come with one (or if it’s clunky).  Highly customizable and doesn’t require you to do any coding to make it happen.  It also supports useful things like CAPTCHA, Akismet spam filtering, and more.



WP-DBManager enables you to keep your WordPress database in check.  You may not realize it yet, but one of the challenges that you’ll encounter as you build out content for your sites is that the number of revisions, drafts, and re-revisions starts to add up.  Don’t believe me?  Even this short article took more than a few revisions!  One of the more important aspects of running a successful WordPress site is keeping your database healthy – WP-DBManager enables you to do just that.




Depending on your host, you may or may not need a tool to modify the configuration of wp_mail() such that you can use SMTP.  If you find yourself unable to generate email messages with your host, consider using WP Mail SMTP – configuration is fairly straightforward.
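If you’d rather avoid another plugin, the same result can be had from code.  Here’s a hedged sketch using WordPress’s phpmailer_init hook, which hands you the PHPMailer instance behind wp_mail() – the host, port, and credentials below are placeholders you’d replace with your provider’s settings:

```
// In your theme's functions.php (or a small custom plugin):
add_action( 'phpmailer_init', function ( $phpmailer ) {
    $phpmailer->isSMTP();
    $phpmailer->Host       = 'smtp.example.com';  // placeholder SMTP host
    $phpmailer->Port       = 587;
    $phpmailer->SMTPSecure = 'tls';
    $phpmailer->SMTPAuth   = true;
    $phpmailer->Username   = 'user@example.com';  // placeholder credentials
    $phpmailer->Password   = 'app-password';
} );
```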

Other Advice

While you could probably spend years playing with the tens of thousands of plugins available, I refer you back to the advice offered at the beginning of this post.  Mainly – keep it simple!  At some point, you’ll probably find a need for some extra bit of functionality that WordPress doesn’t offer out of the box, and when you do… look at your theme first to see if it offers what you’re looking for, and only if you can’t accomplish it that way should you consider adding plugins.  When you check out the WordPress Plugin Directory – the comprehensive inventory for all things WordPress-plugin related – be sure to check the “Last Updated:” field and the number of “Downloads” for whatever you’re interested in, as they’ll give you an idea of the quality of the plugin you’re looking at, or at least of the size of its community.  Generally speaking, my suggestion is to stick to plugins that have been updated recently, have a high download count (or both), and, whenever possible, have been recommended to you by someone you trust.

There you have it… my list of 5 Must Have WordPress plugins for 2015.  While by no means an exhaustive list, it should be more than enough to keep you busy getting your basic site built out.

Part 7, A Fast and Always-On Site

What if your site’s popularity skyrockets unexpectedly one day?  Maybe you get linked to from a popular industry blog, or your product gets featured on TV, or who knows what… Is your site ready to handle the surge in volume?  Or is there a chance new visitors might be greeted with something that resembles an “Unavailable” message?

Are you sure about that?

One of the most important reasons you have a website is so that you can capitalize on incoming traffic.  If your site is sitting there unresponsive, or gasping, just as a slew of new potential clients are checking you out for the first time, is that really the first impression you want to make?  Before you can say, “We really can’t afford to engineer high availability into our site…”, what if having a fast site that’s always on were a feature you could flip on like a switch?  No expensive server farms or round-robin fail-over configurations.  More importantly, what if you could cache copies of your website all over the world, pushing your content as close as possible to your visitors and reducing or eliminating the pain of having a “slow site”?  What if you could do all of this without changing your hosting provider, or making any significant changes to your website?  Would you do it?

Of course you would.

And you can, by adding a Content Delivery Network to your WordPress Toolkit.

Content Delivery Networks (CDN)

A CDN is a large distributed infrastructure deployed in multiple data centers around the world.  The objective is to serve content (e.g. text, graphics, video, etc.) to end users fast, and with high availability (e.g. without downtime).  CDNs aren’t a new concept.  For example, Akamai Technologies, founded in 1998, is one of the largest and most well-known CDN providers, with some very high-profile customers… including Adobe, AMD, NBC Sports, and Yahoo.  These customers use Akamai’s proprietary platform, caching their content out to a network of over 100,000 servers around the world in order to maximize the performance and availability of their sites.  In the case of Akamai, customers pay hundreds of millions of dollars for these services.  That’s the bad news.  The good news?  The types of services that used to be available only to the largest Fortune 500 customers are now available to everyone for a reasonable cost.
Enter CloudFlare, founded in 2009.  While a much smaller company than Akamai, CloudFlare’s CDN services are available to everyone at a relatively low cost, and can be bolted on to virtually any existing website with a simple DNS change.  CloudFlare built their own CDN from the ground up, combining technologies like SSD drives and Anycast routing along with geo load balancing to make their service as fast and efficient as possible.  By adding CloudFlare services to your existing site, you no longer need to think of your website as an individual web server in a physical location – instead it exists as a globally distributed entity that sits physically close to your visitors, no matter where they are.


This means that your site is not only more responsive to your customers than ever before, but also more resilient, because it sits on CloudFlare’s content delivery network.  Should a single server of theirs go down, the service will automatically heal as one of the dozens of other servers takes over for the failed one.  CloudFlare also provides a number of other services aimed at further improving the responsiveness of your site, as well as improving its overall security by reducing its exposure to targeted DDoS attacks and eliminating your site’s static IP address as a choke point (it also masks your static IP).

The Bottom Line

CloudFlare isn’t the only game in town.  Amazon’s CloudFront, MaxCDN, KeyCDN, and others all compete in the content delivery network space; all can be added to your existing website, and they offer varying cost/benefit models.  Based on my experience with the platforms, I tend to recommend either CloudFlare or CloudFront.

Part 6 – Responsive and Mobile

Do you remember a few years ago, when everything looked terrible on mobile devices and that was kind of just the accepted reality of things?  Obviously that’s no longer the case.  Today, websites need to look great on both desktop and mobile devices, because most of your typical audience is just as likely to be checking out your site from their iPad as from their desktop computer at the office.  The ability of a site to display well across all types of devices is called responsive design.  Put differently, it’s web design that responds to and optimizes site appearance and navigation so that the site looks good on whatever device it’s being displayed on… meaning the site looks good on a computer, on a tablet, or on a phone.  It doesn’t matter what you view it on, because the design responds to the environment that it’s being used in.

As an example, here’s a comparison of what this site looks like when viewed from an iPhone vs. Chrome on a desktop.  The responsive design enables the content to display in a useful manner on a mobile device.  Put differently, responsive design is about creating a user-friendly site experience, making the site more appealing, and creating a great user experience across many devices and screen sizes.  While you can build a responsive design yourself, or contract a company to do this for you, in most cases finding the right theme that’s already responsive is the best approach.  In the prior article in this series, I mentioned several themes which were responsive and useful.

Is responsive design really for me?

The answer is almost certainly yes.  Here’s why… even if you expect that the vast majority of your site’s visitors will be coming in via a desktop web browser, the reality today is that your site is going to rank better in terms of Search Engine Optimization (SEO) if you have a responsive design that works well with mobile devices.  In other words, Google loves responsive web design, and your search results will rank higher automatically just by your choosing a responsive theme.  If for no other reason, that’s why you should consider one.  Beyond that, one of the added benefits of responsive design is less management effort.  In the majority of cases, once you have a responsive site, there’s no longer a reason to maintain both a mobile and a desktop version of your site.  That means no separate content management tools, or sites to maintain, or having to worry about separate URLs and building the authority and rank of each site independently.  Which makes sharing content on social media that much easier – both for you and your audience.  All of that extra stuff (and cost) just goes away.

Okay, so say none of this is you… maybe you don’t think you really care about responsive design, and maybe you just think of your website as a “me too” or “must have” – a placeholder for your business.  That’s fair… many niche companies and organizations with very complex products just have a site because they have to.  But even if all of that’s true of your business – why wouldn’t you want your site to look good on a tablet?  Why wouldn’t you want to automatically improve your page rank?  Why wouldn’t you want a new potential avenue for clients?  If you’re going to be making any investment in a site, choose a responsive theme.
