
The Enterprise Storage Market in 2014 (storage for the rest of us)

Small breakthroughs applied in new ways can re-shape the world.  What was true for the Allied powers in 1942 – by then monitoring coded Japanese transmissions in the Pacific theater of World War II – is just as relevant in the Enterprise storage business today.  In both cases, more than a decade of incremental improvement and investment went into incubating technologies with the potential to disrupt the status quo.  In the case of the Allied powers, the outcome changed the world.  In the case of the Enterprise storage business, the potential exists to disrupt the conventional Enterprise SAN business, which has remained surprisingly static for the past 10 years.

In large and small enterprises alike, one thing that everyone can agree on is that storage chews through capital.  In some years it’s a capacity problem.  In other years, it’s a performance problem.  And occasionally, when it happens to be neither of those… it’s a vendor holding you hostage with a business model reminiscent of mainframe computing.  You’re often stuck between developers demanding a bit more of everything, leadership wanting cost reductions, and your operations team struggling with their dual mandate.

Storage, it seems… is a constant problem.

If you roll up all of your storage challenges, the root of the problem with Enterprise storage and the SAN market over the better part of the past decade is that storage has all too often looked like a golden hammer.  Running out of physical capacity this year?  Buy a module that gives you more capacity.  A performance issue, you say?  Just buy this other expensive module with faster disks.  Historically, it’s been uncreative.  If that pretty much resembles what your assessment of the storage market was the last time you looked, you’d certainly be forgiven for having low expectations.

Fortunately, it looks like we’re finally seeing disruption in Enterprise storage by a number of technologies and businesses, and this has the potential to change the face of the industry in ways that just a few years ago wouldn’t have even been envisioned.

Where were we a few years ago?

When you break down your average SAN, by and large it consists of commodity hardware.  From standard Xeon processors and run-of-the-mill memory, to the same basic mix of enterprise HDDs… for the most part it’s everything you’d expect to find.  Except perhaps for that thin layer of software where the storage vendor’s intellectual property lives.  That intellectual property – more specifically, the reliability of that intellectual property – is why there are so few major storage vendors.  Dell, EMC, NetApp, IBM, and HP are the big players that own the market.  They own the market because they’ve produced reliable solutions, and as a result they’re the ones that we trust not to make a mess of things.  Beyond that, they’ve also acquired the strategic niche players that popped up over the years.  In a nutshell, trust and reliability are why enterprise storage costs what it does, and that’s exactly why these companies are able to sustain their high margins… they simply have a wide moat around their products.  Or, at least, they used to.  Fast-forward to today, and things are finally starting to change.

What’s changed?  Flash memory:

For years the only answer to storage has been more spinning magnetic disks (HDDs).  Fortunately, over the past few years, we’ve had some interesting things happen.  First is the same thing that caused laptops and tablets to outpace desktops: flash memory.  Why flash memory?  Because it’s fast.  Over the past couple of years, SSDs and flash technology have finally been applied to the Enterprise storage market – a fact which is fundamentally disruptive.  As you know, any given SSD outperforms even the fastest HDD by orders of magnitude.  The fact that flash memory is fast and the fact that it’s changing the Enterprise storage market might not be news to you, but if there was any doubt… here are some of my takeaways concerning SSD adoption trends from August 2013, based on the work done by Frank Berry and Cheryl Parker at IT Brand Pulse.

These changes range from the obvious…

  • As SSDs approach HDDs’ dollar-per-GB cost, organizations are beginning to replace HDDs with SSDs
  • Quality is the most important HDD feature
  • Organizations are mixing disk types in their arrays in order to achieve the best cost for reliability, capacity, and performance required
  • More organizations added SSDs to their storage arrays in 2013 than in 2012

… to the disruptive…

  • Within 24 months, the percentage of servers accessing some type of SSD is expected to double
  • SSDs will comprise 3x the total percentage of storage within the next 24 months
  • IT is depending increasingly on its storage vendors to embed the right underlying NAND flash technology in a manner that balances cost, performance, and reliability

In other words, we’re finally starting to see some real traction in flash-based memory replacing hard disk drives in the Enterprise tier, and that trend appears to be accelerating.
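To put that “orders of magnitude” performance gap into rough numbers, here’s a quick back-of-envelope comparison.  The figures are ballpark, era-typical assumptions for illustration, not benchmarks of any particular drive:

```python
# Ballpark random-I/O comparison behind the "orders of magnitude" claim.
# Figures are illustrative assumptions, not measurements of specific drives.
hdd_15k_rpm_iops = 200        # typical small-block random IOPS for a 15k RPM HDD
enterprise_ssd_iops = 50_000  # typical small-block random IOPS for an SSD of the era

print(f"SSD delivers roughly {enterprise_ssd_iops / hdd_15k_rpm_iops:.0f}x "
      f"the random IOPS of a 15k RPM HDD")
# Sequential throughput is much closer; random I/O is where arrays feel the pain.
```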

What’s changed?  The Cloud:

It seems that no IT article today is complete without a conversation about the Cloud.  While some have been quick to either dismiss or embrace it, the Cloud is already starting to pay dividends.  Perhaps the most obvious change is applying the Cloud as a new storage tier.  If you can take HDDs, mix in some sort of flash memory, and then add the Cloud… you could potentially have the best of all possible worlds.  Add in some intellectual property that abstracts out the complexity of dealing with these inherently different subsystems, and you get a mostly traditional-looking SAN that is fast, unlimited, and forever.  Or, at least, that’s the promise from vendors like Nasuni and StorSimple (now a Microsoft asset), who have taken HDDs, SSDs, and the Cloud and delivered a fairly traditional SAN-like appliance.  However, these vendors have taken the next step and inserted themselves between you and the Cloud.  Instead of you having to spin up nodes on AWS or Azure, vendors like Nasuni have taken that complexity out and baked it into their service.  On the surface, your ops team can now leverage the Cloud transparently.  Meanwhile, Nasuni has successfully inserted themselves as a middleman in a transaction that is ongoing and forever.  Whether that’s a good thing, I’ll leave up for debate.  But it works quite well and solves most of your storage problems in a convenient package.

The Hidden Cloud?

The storage industry’s first pass at integrating the Cloud has been interesting.  If not yet transformative in terms of Enterprise storage, it’s definitely on everyone’s radar.  What’s arguably more interesting and relevant – and what has the potential to be truly transformative – are the trickle-down benefits that come from dealing with Big Data.  In short, it’s large companies solving their own problems in the Cloud, and enabling their customer base as a byproduct.  The Cloud today, much like the Space Race of the 1960s and the cryptography advancements of the 1940s, is a transformative pressure with the potential to reshape the world.

… and the most disruptive advancements are probably coming in ways you haven’t suspected.

Data durability. 

In a similar way that the Japanese assumed their ciphers were secure in 1942, IT organizations often assume RAID to be the basic building block of Enterprise storage.  As in, it goes without saying that your storage array is going to rely on RAID6, or RAID10, or what have you.  After all, when was the last time you really gave some thought to the principles behind RAID technologies?  RAID relies on classic erasure codes (a type of algorithm) for data protection, enabling you to recover from drive failures.  But we’re quickly reaching a point where disk-based RAID approaches combined with classic erasure codes, like the Reed-Solomon coding in RAID6 (which can tolerate up to 2 drive failures), simply aren’t enough to deal with the real-world risk inherent in large datasets with many physical drives.  Unsurprisingly, this is a situation that we tend to find in the Cloud.  And interestingly, one type of solution to this problem grew out of the space program.
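To make “erasure code” a little more concrete, here’s a toy sketch of the simplest possible case: single-parity XOR, the same principle RAID5 uses to survive one drive failure (RAID6 and Reed-Solomon extend the math to tolerate more).  This is purely illustrative, not how any particular array implements it:

```python
# Toy illustration of erasure coding via XOR parity (the single-failure case).
# Real RAID6 / Reed-Solomon codes use richer math to survive multiple
# simultaneous failures; this just shows the core idea: extra parity data
# lets you recompute a lost block instead of losing it.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data blocks striped across three hypothetical drives...
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
# ...plus one parity block stored on a fourth drive.
parity = xor_blocks(data_blocks)

# Simulate losing drive 1, then rebuild its contents from the survivors + parity.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]
print("Rebuilt lost block:", rebuilt)
```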

The medium of deep space is often noisy.  There’s background radiation, supernovae, solar wind, and other phenomena that conspire to damage data packets in transit, rendering them undecipherable by the time they reach their destination.  When combined with situations where latency can be measured in hours or days, re-transmission is at best an inconvenience and at worst a show stopper.  Consider the Voyager 1 probe for a moment: with a round trip of about 34 hours and bandwidth on the order of 1.4 kbit/s, instructing Voyager to re-transmit data can frustrate the science team, or could even put the mission at risk.  In other cases, where two-way communication isn’t possible, or the window for operation is narrow – say, if you had to ask Huygens to re-transmit from the surface of Titan – re-transmission is simply a non-starter.  Out of these and other needs, the case for erasure coding was obvious.
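Using the round-trip and link-rate figures above, a quick back-of-envelope calculation shows why re-transmission is so painful; the payload size below is purely an assumption for illustration:

```python
# Back-of-envelope cost of asking a deep-space probe to re-send data, using
# the round-trip time and link rate cited above. Payload size is an assumption.
round_trip_hours = 34          # command uplink + response downlink
link_rate_bps = 1.4e3          # ~1.4 kbit/s downlink
payload_bits = 1e6             # assume a ~125 KB science packet

resend_hours = payload_bits / link_rate_bps / 3600
print(f"Re-sending ~125 KB costs roughly {round_trip_hours + resend_hours:.1f} hours")
# ...and that assumes the retry itself arrives intact.
```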

When erasure coding is applied to storage, the net of it is that you’re using CPU (math) to create new storage efficiencies (by storing less).  In real-world terms, when rebuild times for 2TB disks (given a certain I/O profile) are measured in hours or days, traditional RAID simply isn’t able to deal with the risk that a second (or third) drive fails while you’re waiting on the rebuild.  What’s the net result?  Erasure coding prevents data loss and decreases the need for storage, as well as its supporting components (fewer drives, which translate into less maintenance, lower power requirements, etc.).  The need for data durability has spurred the implementation of new erasure codes, targeting multi-level durability requirements, which can reduce storage demands and increase efficiencies.
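Here’s the rough arithmetic behind both halves of that claim – why rebuilds of big disks take so long, and how an erasure code trades CPU for capacity compared with simple replication.  The rebuild rate and the (10+4) code are illustrative assumptions:

```python
# Rough numbers behind the rebuild-time and storage-efficiency claims above.
# All inputs are illustrative assumptions, not measurements.

# 1) Best-case rebuild time for a 2 TB drive at a sustained 100 MB/s
#    (real rebuilds are slower because the array keeps serving production I/O).
drive_bytes = 2e12
rebuild_rate_bytes_per_sec = 100e6
print(f"Best-case rebuild: {drive_bytes / rebuild_rate_bytes_per_sec / 3600:.1f} hours")

# 2) Capacity overhead: 3-way replication vs. a hypothetical (10+4) erasure
#    code that can tolerate any 4 simultaneous fragment losses.
replication_overhead = 3.0            # three full copies of everything
erasure_overhead = (10 + 4) / 10      # 1.4x raw-to-usable ratio
print(f"Replication: {replication_overhead:.1f}x raw storage")
print(f"(10+4) erasure code: {erasure_overhead:.1f}x raw storage")
```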

Two examples of tackling the data durability problem through the use of new erasure codes include Microsoft’s Local Reconstruction Codes (LRC), which are a component of Storage Spaces in Windows Server 2012 R2, and Amplidata’s Bitspread technology.

In the case of Microsoft’s LRC encoding, it can yield 27% more IOPS given the same storage overhead as RAID6, or 11% less storage overhead given the same reconstruction I/O relative to RAID6.
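The intuition behind those numbers is that LRC adds “local” parities so a single failure can be rebuilt from a small group of fragments rather than the whole stripe.  The sketch below assumes a (12, 2, 2) layout similar to the one Microsoft has described publicly for Azure; treat the exact layout and counts as illustrative rather than a specification:

```python
# Why Local Reconstruction Codes cut reconstruction I/O. Assumes a (12, 2, 2)
# layout: 12 data fragments in two local groups of 6, one local parity per
# group, plus 2 global parities. Numbers are illustrative, not a spec.
data_fragments = 12
local_groups = 2
local_parities = 2
global_parities = 2

# A Reed-Solomon-style code over all 12 data fragments needs to read 12
# surviving fragments to rebuild a single lost data fragment.
rs_reconstruction_reads = 12

# LRC rebuilds a single lost data fragment from its local group alone:
# the 5 surviving group members plus the group's local parity.
lrc_reconstruction_reads = data_fragments // local_groups

overhead = (data_fragments + local_parities + global_parities) / data_fragments
print(f"Storage overhead: {overhead:.2f}x raw-to-usable")
print(f"Single-failure reads: RS={rs_reconstruction_reads}, LRC={lrc_reconstruction_reads}")
```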

Bitspread in action

Amplidata’s approach to object-based storage is their Bitspread erasure coding technology, which distributes data redundantly across a large number of drives.  Amplidata claims it requires 50% to 70% less storage capacity than traditional RAID.  It works by encoding (as in, mathematically transforming) data (files, pictures, etc.) at the storage nodes, using their proprietary equations.  Based on a user-definable policy, you can lose multiple drives, or whole nodes, and Bitspread can still heal the loss.  Amplidata’s intellectual property is such that it can work for small or very large, exabyte-sized data sets.  The result in a failure situation is much faster recovery, with fewer total drives and less overhead than RAID6.
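Amplidata’s actual codes are proprietary, but the policy-driven “any k of n fragments” idea can be sketched generically.  The fragment counts below are made-up examples showing how a policy trades raw capacity against failure tolerance:

```python
# Generic sketch of a policy-driven "any k of n" dispersal scheme, in the
# spirit of (but not equivalent to) Amplidata's proprietary Bitspread codes:
# an object is encoded into n fragments spread across drives/nodes such that
# any k of them are sufficient to reconstruct it.

def dispersal_policy(n_fragments: int, k_required: int):
    """Return (failures tolerated, raw-capacity overhead) for a policy."""
    return n_fragments - k_required, n_fragments / k_required

for n, k in [(16, 12), (18, 12), (20, 16)]:
    losses, overhead = dispersal_policy(n, k)
    print(f"{n} fragments, any {k} recover: survives {losses} losses "
          f"at {overhead:.2f}x raw capacity")
```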

The Hidden Cloud: Continued…

When Microsoft was building Azure for their internal services a few years ago, they knew they needed a scale-out, highly available storage solution.  Like the rest of us, they didn’t want to pay a storage tax to one of the big storage vendors for something they clearly had the technical capability to architect in-house.  What’s more, for Azure to be a viable competitor to AWS, they were driven to eliminate as much unnecessary cost from their datacenters as possible.  The obvious low-hanging fruit in this scenario is storage, which took the form of Windows Azure Storage (WAS).

Stepping back, Microsoft implemented the LRC erasure coding component within Storage Spaces, enabling software-defined storage within the context of the Windows Server OS, which you can use in your own datacenter to create elastic storage pools out of just about any kind of disk – just like Microsoft does in Azure.

One of the most interesting trickle-down benefits that I’ve seen from the Cloud comes in the form of a Microsoft Windows Server 2012 R2 feature known as Storage Spaces.  The pitch for Storage Spaces boils down to this… Storage Spaces is a physically scalable, continuously available storage platform that’s more flexible than traditional NAS/file-sharing, offers performance similar to a traditional SAN, and does so at a commodity-like price point.  In other words, Storage Spaces is Cloud storage for the Enterprise without the storage tax.

The obvious question becomes, “So how is Storage Spaces, as an implementation, any different from the intellectual property of the traditional SAN vendors?  Isn’t Microsoft just swapping out the vendor’s intellectual property for Microsoft Storage Spaces technology?”  Yes, that’s exactly what’s happening here.  The difference is that you’re now just paying for the OS license plus commodity hardware, instead of paying for the privilege of buying hardware from a single storage vendor as well as their intellectual property.  In the process, you’re effectively eliminating much of the storage-tax premium.  You still need to buy storage enclosures, but instead of vendor-locked-in arrays, they’re JBOD arrays, like one of the many offered by DataON Storage, as well as other Storage Spaces-certified enclosures.


The net result is dramatically lower hardware costs, which Microsoft calculates – based on their Azure expenses – to be on the order of 50% lower.  As another data point, CCO was able to avoid purchasing a new $250,000 SAN and instead acquire a $50,000 JBOD/Storage Spaces solution.  Now granted, that’s a Microsoft-provided case study, so your mileage may vary.  But the promise is to dramatically cut storage costs.  In a sense, Storage Spaces resembles a roll-your-own SAN type of approach, where you can build out your Storage Spaces-based solution, using your existing Microsoft infrastructure skill sets, to deliver a scale-out, continuously available storage platform with auto-tiering that can service your VMs, databases, and file shares.  Also, keep in mind that Storage Spaces isn’t limited to Hyper-V VMs, as Storage Spaces can export NFS mount points that your ESXi, Xen, etc. hosts can use.

The Real World:

What attributes are desirable in an Enterprise storage solution today?

  • Never Fails.  Never Crashes. Never have to worry.
  • Intelligently moves data around to the most appropriate container (RAM, Flash-memory, Disk)
  • Reduces the need for storage silos that crop up in enterprises (auto-tiers based on demand)
  • Reduces the volume of hardware, by applying new erasure encoding implementations
  • Inherits the elastic and unlimited properties of the Cloud (while masking the undesirable aspects)
  • Requires less labor for management
  • Provides automated disaster recovery capabilities

As IT decision makers, we want something that eliminates the storage tax, drives competition among the big players, and most importantly is reliable.  While flash memory, the Cloud, and new technologies are all driving the evolution of Enterprise storage, it’s the trickle-down benefits that come in the form of emerging technologies that are oftentimes the most relevant.

While the U.S. was monitoring the coded Japanese transmissions in the Pacific, they picked up on a target known as “objective AF”.  Commander Joseph J. Rochefort and his team sent a message via secure undersea cable, instructing the U.S. base at Midway to radio an uncoded message stating that their water purification system had broken down and that they were in need of fresh water.  Shortly after planting this disinformation, the U.S. team received and deciphered a Japanese coded message which read, “AF was short on water”.  The Japanese, still relying on a broken cipher, not only revealed that AF was Midway, they went on to transmit the entire battle plan along with the planned attack dates.  With the strategy exposed, U.S. Admiral Nimitz entered the battle with a complete picture of Japanese strength.  The outcome of the battle was a clear victory for the United States, and more importantly, the Battle of Midway marked the turning point in the Pacific.

Outcomes can turn on a dime.  The Allied powers’ code-breaking capabilities in both the European and Pacific theaters are credited with playing pivotal roles in the outcome of World War II.  In business, as in war, breakthrough and disruptive technologies are often credited with changing the face of the world.  Such breakthroughs are often hard-fought, the result of significant investments of time and resources with unknown, and oftentimes surprising, outcomes.  Enterprise storage pales when viewed in comparison with the need to protect lives (perhaps the ultimate incubator).  But nevertheless, the Cloud is incubating new technologies and approaches to problems previously thought of as solved – like data durability – and has the potential to fundamentally change the storage landscape.

With this article, I tried to step back, look at the industry at a high level, and see where the opportunities for disruption are, where we’re headed, and the kind of challenges that a sysadmin taking a fresh look at Enterprise storage might be faced with today.  As part of my option analysis effort, I dug a bit deeper into Nasuni and Microsoft Storage Spaces, and compiled that information, along with some “getting started” pricing, in the 4800+ word Enterprise Storage Guide.  So if you sign up for my newsletter here, you’ll get the Enterprise Storage Guide as a free download (PDF format) in the welcome email.  No spam, ever.  I’ll just send you the occasional newsletter with content similar to this article, or the guide.  And of course, you can unsubscribe at any time.

Phone System Guide Part 1: Choosing the right phone system for your business

In the late 1990s, 3Com was the number-two network equipment company, competing with Cisco in the high-end, high-margin business of core routers and network gear.  This was back before the HP acquisition… before the Palm spin-off… even before 3Com started to fall apart.  If you remember back then, plenty of capital was floating around telecom-related stocks, and the fortunes of even quality companies were not immune to irrational exuberance.  It was then that 3Com, buoyed in part by the price of its stock, acquired up-and-comer NBX Corp. for $90 million in cash and stock.  The NBX acquisition made sense… they were one of the first to market with a viable small-to-midsized business (SMB) VoIP PBX, and by partnering with regional telecoms, 3Com quickly established a foothold in the emerging segment with a game-changing product.


Industry Status Today

Even though I’ve deployed, managed, supported, and otherwise been responsible for quite a few PBX systems… including systems from Toshiba, Inter-Tel/Mitel, Asterisk, Allworx, NBX (and more), telephony has never really been something that got me excited.  Phones are necessary.  Which means you need a cost-effective PBX solution.  Therefore, the quicker you can make a good decision and move on, the better.  So that’s the approach I’ve taken… give you the highlights, help you understand the industry, and then accelerate your option analysis phase.  In short, I want to help you move forward more quickly than you might otherwise have.

Over the course of the past decade or so, the phone system market has gone through three major disruptions.  First the industry was transformed by VoIP, next came Asterisk and the open-source ecosystem that grew up around it, and finally the Cloud and the “bandwidth transformation” that we’re living through today.  While the Cloud is undoubtedly going to change phone systems, many PBX Clouds look very much like the first-generation attempts that they actually are.  One significant change over the past 5 or so years is that Asterisk-based systems have gone from hobby to good enough, and many integrators offer low-cost alternatives to what you might think of as the “big names” of the phone system business (e.g. Tier 1 competitors).  That said, there are good reasons to opt for a Tier 1 mid-sized business phone system… you just need to understand what might drive you to such a system, and how to identify the right solution for your business.

Is the PBX business a good business to be in?

Among the Tier 1 competitors, ShoreTel (SHOR), Mitel (MITL), and Cisco (CSCO) are publicly traded, NASDAQ-listed companies – which means we have access to some relevant financial data.  Setting aside Cisco (because they’re involved in so many businesses), I can’t say the performance of either Mitel’s or ShoreTel’s stock has historically really been worth taking note of.  Indeed, as I write this, analysts are projecting a 7% decrease in revenue for Mitel, while a recent 10-Q shows an unsettling uptick in inventory levels.  ShoreTel’s revenue forecasts fare better, and indicate continued growth in terms of their on-premises product, as well as their hosted product (via the M5 acquisition).  But in neither case does the current trend suggest anything resembling a secular growth story.  So is it a good business to be in?  Just because their stocks aren’t all that interesting doesn’t mean this is necessarily a bad business to be involved with.  Looking no further than the annual sales of these companies ($200 – $650 million in revenue), and the compensation packages that exist for key executives, one realizes that someone is making money doing this.  So while I don’t see myself picking up SHOR or MITL as an investment… the phone system business is still a decent one to be involved in.

Does their business matter?

The extent to which I care about the business of the company whose products I’m buying is really limited to my expectation of the product’s long-term viability, the anticipated life-cycle of the product, and perhaps the ongoing R&D investments that the company is making.  For instance, after 3Com acquired NBX, they made significant inroads via their regional telephony partners in bringing “turn-key” VoIP systems into mid-sized organizations, and establishing VoIP as a viable technology.  More importantly, even though 3Com self-destructed, the NBX assets continued to be supported by HP, and despite some unpleasantness the platform remained viable.  For what it’s worth, it seems that the average life-cycle for a product set in this business is about 7 years.  While your purchase may be viable for longer, it’s probably reasonable to assume, based on past behavior, that vendors go through a major product change about every 7 years.  In the current market, just about everyone appears to be trying to figure out what the Cloud means to their customers, their businesses, and how they can have a competitive solution in that space.

Getting Started

If you were to create a diagram comparing the major features in prominent PBX systems that make sense for a mid-sized organization, you’d probably find about 90% overlap between the platforms.  That wasn’t the case back in the NBX days, when VoIP was relatively new to the space.  While many phone systems are similar today, they’re not perfect substitutes for one another.  So, when you go through the vendor selection process, unless you know in advance what features to care about, you might find yourself at a loss for where to start.

Terminology: Is it a phone system, PBX, or Unified Communication platform?

PBX switchboard

The answer, of course, is all of the above.  PBX (Private Branch Exchange) is a generic term that conveys the concept of a “phone system”… it’s the aggregation and interface point between endpoints (e.g. phones) and the public switched telephone network (PSTN).  Continuing down that line of thought… Unified Communications (UC) extends the concept of a PBX to integrate all types of communications.  So think of UC as aggregating telephony (phones), instant messaging, presence status, video conferencing, email, cellular, SMS… and so on, so that your communications “follow you” and you’re reachable via a single contact point.  You no longer have to separate out the concepts of work phone, cell phone, email, IM, etc… because now it’s all unified.  While Microsoft’s Lync is promising, most of the systems I’ve looked at today haven’t quite bridged the gap between PBX and everything else, and therefore, in my mind, aren’t quite yet unified platforms.  So for the most part, I tend to think of UC as a check-box on the feature list that falls somewhere short of my top priority at this point.

The things that matter – critical features

Before you survey the entire market, bring in integrators, evaluate the demos, and stratify your options based on features or cost… my first suggestion is to determine what functionality is critical to you and your business.  If you haven’t been in the market for a telephony solution in some time, that question might be difficult to answer.  If your perception of a phone system is limited to something that lets you and your employees pick up a phone and make calls… and you see every extra feature as “nice to have”, it might be worth digging a bit deeper.

  • Redundancy Requirements
  • Employee Headcount / Geographic Dispersion

Let’s size this up pretty quickly… did your business do $200+ million in sales last year, employing a geographically diversified workforce numbering in the thousands, in a business where phone system downtime can be measured in hundreds of thousands of real dollars per hour?  Then you need something bulletproof, and you need to get a team of stakeholders involved, hopefully driven by the CTO (or your equivalent role), to get the right solution for your organization.  Otherwise, if you’re a mid-sized organization, redundancy requirements and headcount might just be your driving decision factors.

Redundancy Requirements

Setting aside most of the feature overlap that exists in these systems, a key differentiator is redundancy capability.  We can talk about web-conferencing capabilities (yes, most have some sort of GoToMeeting imitation product), ease of use, Unified Communications, market share, and whatever else might matter to you when looking at phone systems… but understanding your redundancy requirement really points you in a particular direction.  It will also save you a great deal of time and energy during vendor selection to know whether you only need to consider Tier 1 players.

Let me put it to you another way… can you tolerate any downtime on your phone system?  If a critical component of your phone system were to fail, and result in the system being offline for a period of time (minutes to hours), would that materially affect your business?  I think the knee-jerk reaction that most of us would give is “Yes, of course that would kill us!“.  But are you sure about that?  And there’s the rub… you need to be able to answer the question of just how important the phone system is to you, and better still, put a dollar value on downtime.  So if your revenues live and die by the phone, or if you run a busy call center and being without the PBX for a few hours every X number of years would be significantly disruptive, then you need to consider a Tier 1 system that supports N+1 redundancy or Active/Active redundancy.  Expect to pay on the order of 2x more for systems that support this kind of redundancy.
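If you want to replace that gut reaction with a number, the arithmetic is straightforward.  Every input below is a placeholder; plug in your own revenue and outage estimates:

```python
# Put a rough dollar figure on phone-system downtime. All inputs are
# placeholders -- substitute your own estimates.
annual_phone_driven_revenue = 5_000_000   # revenue that depends on the phones
business_hours_per_year = 2_000           # ~8 hours/day, ~250 days/year
expected_outage_hours_per_year = 4        # estimate for a non-redundant system

revenue_per_hour = annual_phone_driven_revenue / business_hours_per_year
expected_annual_cost = revenue_per_hour * expected_outage_hours_per_year
print(f"~${revenue_per_hour:,.0f}/hour at risk; "
      f"~${expected_annual_cost:,.0f}/year in expected downtime cost")
# Weigh that figure against the ~2x premium for N+1 or active/active redundancy.
```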

Employee Headcount

At the risk of oversimplifying the conversation, 600 employees is often the break point.  For instance, if you have fewer than 600 employees, and the majority are based out of only a few locations, then look at the Tier 2 or Tier 3 options, because you’re going to significantly reduce your CapEx.  Once you get over 600, and certainly when you get into the thousands, you should focus more on Tier 1 systems.

Other Features

Once you move beyond redundancy requirements and employee count, you’ll quickly find yourself approaching the 90% feature-overlap realm.  When it comes to vendor selection, understanding your business and the real-world value you should expect to derive from the considerable (exhausting) list of features will go a long way to filtering out the noise of the selection process.  Two remaining features worth a closer look are CRM integration and mobility capabilities.

Mobility:

By mobility, I really mean everything that occurs outside of your office.  The extent to which you have mobility requirements is often driven by your remote workers and your sales force’s needs.  Mobility as a “feature” exists within all 3 tiers; the details, though, are what will drive your requirements.

  • Android/iOS app for call handling
  • Mobile workers (phones in home offices)
  • Data/minutes integration for mobility (or lack thereof)

Mobility can become a bit of a quagmire, as nearly all of the systems that I’ve evaluated have some type of mobility functionality.  But the details of how it works, the relative security of the platform, and the “ease-of-use” are where you start to see the complexity of the solution you opt for really increasing.  Here are some things to think about…

  • Is there an Android/iOS app that currently exists that I can use to integrate my employees’ handsets into the PBX’s infrastructure?
  • What specific capabilities do the Android/iOS apps have, and to what extent are they transparent to the employee (and what does “transparent” even mean here)?
  • Do the Android/iOS apps utilize data, minutes, or both?  And is it configurable?
  • How is the call quality when using data on 3G?
  • Does the Android/iOS app have its own dialer?  Does it mask caller ID?
  • How does voicemail integration happen on the Android/iOS app?
  • Is the Android/iOS integration seamless?  Or does the PBX bridge the remote caller and the current caller in a non-intuitive manner?

The mobility conversation can go on at length, because the “how” of it is still evolving for most platforms.  From my perspective, the above is all critical to know as you dig down into an individual system’s capability, but more important than the above is the question… “Will your sales people actually use the mobility features, or will they just call from their cell phones all of the time?”  This is really the point where the rubber meets the road, because in my experience across many clients, the sales folks (mostly) aren’t leveraging these features, even though they’re the target audience and stand to benefit the most.  There are many reasons why this may be the case, but if you’ve spent any time in sales, the why is probably not a mystery.  In any case, if they’re not going to use the features, or if the features aren’t obviously intuitive… then I’m not sure that these are important features for your organization (yet).  Sure, geeks might play with this type of functionality – but if mobility is a key topic for you, then you need to understand how your sales force (or other target stakeholders) will benefit from this feature, and I recommend you pull them into the decision-making process early.

CRM Integration

Customer Relationship Management (CRM) and/or Enterprise Resource Planning (ERP) integration means varying things on different platforms.  To generalize, CRM integration enables an incoming or outgoing call to sense caller-ID information and automatically do something relevant with your CRM software (e.g. pull up client information, display last-touch data, etc.).  Different platforms have different integration levels and capabilities, with some of the more common being Microsoft CRM and Salesforce.com integration.
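Conceptually, the “screen pop” side of CRM integration is just a lookup keyed on caller ID.  Here’s a trivial sketch of the idea; the record store and fields are made up for illustration, and a real deployment would hit the CRM vendor’s API through the PBX’s CTI/connector layer:

```python
# Conceptual sketch of a CRM "screen pop" on an inbound call: match the
# caller ID against CRM records and surface something useful. The records
# and fields here are invented for illustration only.
crm_records = {
    "+15555550123": {"account": "Acme Corp", "owner": "J. Smith",
                     "last_touch": "2014-02-11: renewal quote sent"},
}

def screen_pop(caller_id: str) -> str:
    record = crm_records.get(caller_id)
    if record is None:
        return f"Unknown caller {caller_id} -- offer to create a new lead"
    return (f"{record['account']} (owner: {record['owner']}) -- "
            f"last touch: {record['last_touch']}")

print(screen_pop("+15555550123"))   # known client: pop account details
print(screen_pop("+15555559999"))   # unknown caller: prompt to create a lead
```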

The Bottom Line

While the 3Com acquisition of NBX may have served as the beginning of the VoIP adoption cycle for many organizations, unfortunately for 3Com and many of their 12,000 employees, that acquisition also marked a less-than-strategic shift away from 3Com’s Enterprise equipment cash cow.  The NBX survived the acquisition, outlived 3Com, and outlasted HP’s interest in inventorying parts – leaving many organizations with no one to support the NBX and similar products.  If you find yourself in the market once again for a new phone system, realize that the core functionality of most phone systems is “good enough”, and that the big differentiators have become redundancy and scalability.  Beyond that, understanding how and to what extent mobility, CRM integration, and any other key features are relevant to you and your organization will go a long way toward saving you time and frustration, and help you hit the ground running.

The above article is based in large part on a vendor selection process that I recently assisted a client with.  The platforms that we evaluated included Avaya’s IP Office, ShoreTel, Digium’s Switchvox, Trixbox, FreePBX, and Allworx.  If you’re interested in what our decision criteria included, approximate platform costs, and what my client-facing deliverable looked like, just sign up for the newsletter here, and you’ll be able to download this article along with my deliverable (which includes some additional commentary).  I use the newsletter to send the occasional update, often with content that’s exclusive to the newsletter (e.g. the client-facing deliverable).  No spam ever.  Promise.  Unsubscribe at any time.
