I love working with converged (CI) and hyper-converged (HCI) infrastructure technologies. These types of data center systems have allowed administrators in all sorts of environments to realize some big benefits when it comes to optimization and architecture.
First, it’s important to define and understand HCI and note that there are numerous similarities between HCI and CI environments. Both are deployed as blocks, and both converge critical resources to deliver higher levels of density. However, the biggest difference comes in how these environments are managed. In HCI, the management of resources such as storage is handled at the virtual layer. Specifically, HCI incorporates a virtual appliance that runs within the cluster. This virtual controller runs on each node within the cluster to ensure better failover capabilities, resiliency, and uptime.
Benefits of hyper-converged infrastructure include:
- Integrated VDI, convergence, and hypervisor management
- Rapid-scale deployment of VMs and applications
- Smaller data center footprint
- Greater levels of application, desktop, and user densities
- Direct integration with the software layer
- Creation of hyper-scale capabilities
- Leveraging all-flash systems
- Integration with cloud systems
- Increased capabilities around resiliency, business continuity, and disaster recovery
HCI is a quickly growing market. According to IDC’s Worldwide Quarterly Converged Systems Tracker, worldwide converged systems vendor revenues increased 6.2 percent year over year to $3.15 billion during the second quarter of 2017. The market consumed 1.78 exabytes of new storage capacity during the quarter, which was up 5.6 percent compared to the same period a year ago.
"The converged systems market is benefiting from an expansion into new environments and a new set of customers," said Eric Sheppard, research director, Enterprise Storage & Converged Systems. "This expansion is driven by products that are offering new levels of automation, tighter integration between technologies, and, in many cases, software-defined solutions based on scale-out architectures."
IDC’s numbers indicate that Dell EMC has taken the top spot when it comes to hyper-converged infrastructure. However, it needs to be noted that the Dell EMC XC appliances are powered by Nutanix software, which is arguably the engine that’s driving a lot of that growth. Also, let’s not forget that Nutanix sells its own hyper-converged infrastructure appliances too.
Here’s an overview of the top players in hyper-converged infrastructure and their solutions:
In April of last year, I wrote a post about what HyperFlex was and wasn’t good at. Well, at version 2.5, I can honestly say that HyperFlex has come a long way. First of all, HyperFlex Connect, a standalone HTML5 interface for the management and orchestration of HyperFlex from any device, makes management way simpler. Connect acts as an extensible interface that is hypervisor agnostic and has built-in automation with RESTful API.
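To give a feel for what managing a cluster through a RESTful interface looks like, here’s a minimal sketch of building an authenticated API call. The host name, endpoint path, and token below are placeholders I made up for illustration, not Cisco’s documented API surface; consult the HyperFlex REST API reference for the real routes.

```python
import urllib.request

# Hypothetical values -- replace with your real Connect host and token.
HX_HOST = "hx-connect.example.com"
TOKEN = "example-bearer-token"

def build_cluster_health_request(host: str, token: str) -> urllib.request.Request:
    """Construct (but don't send) an authenticated GET for cluster health.

    The /api/clusters/1/health path is illustrative only.
    """
    url = f"https://{host}/api/clusters/1/health"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",  # token-based auth header
            "Accept": "application/json",        # ask for a JSON payload
        },
        method="GET",
    )

req = build_cluster_health_request(HX_HOST, TOKEN)
print(req.full_url)
```

The point is less the specific endpoint and more the pattern: any tool that can issue HTTP requests can drive the cluster, which is what makes this kind of interface friendly to automation pipelines.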
They also added higher-capacity all-flash nodes, which are now coupled with their 40-Gbps UCS fabric. All of this translates to big performance enhancements, more density, better VDI multi-tenancy, and optimized resource controls. Another big addition was native replication of clusters, which helps protect data and applications. HyperFlex now also includes data-at-rest security options using self-encrypting drives.
Finally, remember that CliQr acquisition? We’re seeing even deeper integration between Cisco CloudCenter and HyperFlex. Beyond that, integration with existing Cisco systems has been made much easier as well: working with existing UCS domains and incorporating HyperFlex has been greatly simplified. So, if you’re a Cisco shop that wants to support remote offices or leverage Cisco’s hyper-converged infrastructure, HyperFlex is a great option!
HPE has been in the CI space for quite some time. However, they became a real HCI player with the 2017 acquisition of SimpliVity. In its own space, SimpliVity was a solid product, going head-to-head with Nutanix. Starting out in 2009, they quickly gained more than a thousand partners with customers worldwide. They had some very cool key innovations behind their success, which HPE is now leveraging.
What is now branded the HPE OmniStack Data Accelerator Card performs inline deduplication, compression, and data optimization across primary and backup storage repositories, offloading this processing so VMs suffer no performance penalty. As per HPE SimpliVity, the median data efficiency rate is 40:1.
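The core idea behind inline deduplication is content addressing: hash each incoming block, and if a block with that hash is already stored, just record a pointer instead of writing it again. Here’s a simplified sketch of that mechanism; it is a toy illustration of the general technique, not HPE’s actual algorithm, and real-world ratios like the 40:1 figure depend heavily on how much redundancy exists in the data (backups and VDI images dedupe especially well).

```python
import hashlib

BLOCK_SIZE = 4096

def dedupe(blocks):
    """Content-addressed store: identical blocks are kept only once.

    Returns the unique-block store and an ordered 'recipe' of hashes
    that can reconstruct the original stream.
    """
    store = {}    # hash -> unique block payload
    recipe = []   # ordered hashes to rebuild the logical stream
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # write only if unseen
        recipe.append(digest)
    return store, recipe

# Four logical blocks, but only two distinct payloads.
data = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE, b"A" * BLOCK_SIZE]
store, recipe = dedupe(data)
ratio = (len(data) * BLOCK_SIZE) / (len(store) * BLOCK_SIZE)
print(f"{ratio:.0f}:1")  # prints 2:1 -- 4 logical blocks, 2 unique
```

Reconstructing the stream is just a lookup pass over the recipe, which is why dedup-aware systems can clone and back up VMs almost for free.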
From there, HPE SimpliVity Data Virtualization Platform operates as a virtual controller on vSphere ESXi and abstracts data from the underlying hardware. Designed for a bunch of use cases, the HPE SimpliVity 380 HCI architecture is a solid option for organizations looking to support remote office or new virtualization deployments.
We’re seeing some real muscle-flexing from Dell EMC (and VMware). At the last VMworld, Dell EMC and VMware announced two joint solutions, VxRail 4.5 and VxRack. In its newest version, VxRail 4.5 includes automation and lifecycle management for VMware’s vSAN and vSphere. The really cool part here is that upgrading and patching software is now highly automated. This helps reduce configuration errors and allows admins to focus on more valuable operations. This level of automation is awesome for DevOps, higher levels of scale, and fast deployments.
Updates also include multi-node scaling, which automates the scaling of a single VxRail appliance to multi-node environments. Finally, you’ll see some cool updates around REST-based APIs for programmatic lifecycle management. You can now manage a single appliance or entire clusters.
I didn’t forget about VxRack, the beefier version of VxRail. At VMworld we saw improved capabilities around a self-contained system via integration with VMware Cloud Foundation for simplified management of VMware vSphere 6.5, vSAN 6.6, and the network virtualization product NSX 6.3.
The other cool part here is the hybrid cloud option. You can now run Dell EMC’s Enterprise Hybrid Cloud (EHC) on top of VxRack. When it comes to Dell EMC, whether you’re a smaller shop or a large data center, there are options for your use cases here. Plus, deep integration with your underlying VMware environment makes this tech a must when examining HCI.
I’d call Nutanix one of the original companies behind the hyper-converged infrastructure revolution. And they’re still here and making waves. The Acropolis Operating System (AOS), formerly known as the Nutanix Operating System, has continued to see updates and improvements. Their recent 5.1 release allows customers to add performance to their clusters simply by increasing their SSD tier, for example. This is accomplished by adding an all-flash node to an existing hybrid cluster, and the new SSDs are seamlessly added to existing storage containers.
Furthermore, instead of doing forklift migrations from hybrid systems to all-flash systems, users can add all-flash nodes to existing clusters and retire their older hybrid gear.
In 5.1, we also saw capacity optimization improvements. According to Nutanix, the erasure coding algorithm is more intelligent in 5.1: every time a node is added, new EC strips (and existing EC strips, on writes) automatically take advantage of the new nodes. This functionality improves capacity utilization across the cluster while still maintaining the same protection levels as the cluster grows and shrinks.
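If erasure coding is new to you, the capacity win comes from storing parity instead of full replicas: with n data strips plus one parity strip, the overhead is 1/n of the data rather than a full extra copy, yet any single lost strip can still be rebuilt. Here’s a toy single-parity (RAID-5-style XOR) sketch of that idea; it is a stand-in for the general technique, not Nutanix’s actual EC implementation.

```python
def xor_parity(data_strips):
    """Compute a single parity strip as the XOR of all data strips."""
    parity = bytes(len(data_strips[0]))
    for strip in data_strips:
        parity = bytes(a ^ b for a, b in zip(parity, strip))
    return parity

def rebuild(strips, parity, lost_index):
    """Recover one missing strip by XOR-ing parity with the survivors."""
    recovered = parity
    for i, strip in enumerate(strips):
        if i != lost_index:
            recovered = bytes(a ^ b for a, b in zip(recovered, strip))
    return recovered

# Three data strips (e.g., living on three different nodes) + one parity.
strips = [b"node1data", b"node2data", b"node3data"]
parity = xor_parity(strips)

# Simulate losing the strip on node 2 and rebuilding it.
assert rebuild(strips, parity, 1) == b"node2data"
```

In this 3+1 layout the protection overhead is ~33 percent, versus 100 percent for two-way replication, which is exactly why adding nodes (and therefore widening strips) improves usable capacity as a cluster grows.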
Another really cool function has been the further enhancement around containerization. In 5.0 we saw some cool support for things like Docker. In 5.1 we see even deeper integration with the Acropolis Container Services.
Another cool addition is general availability for support of XenServer. This helps further support workloads like XenApp, XenDesktop, and virtual NetScaler appliances.
Overall, Nutanix is an absolute leader in the hyper-converged infrastructure space. However, their strength isn’t just in the hardware. Their software architecture around AOS is truly impressive. Nutanix should be a consideration in almost any HCI scenario.
Starting out in 2007, Scale Computing is one of the last standalone HCI vendors on the market. With thousands of customers and deployments, this is a mature solution offering serious benefits to the customer. The new HC3 architecture has big improvements around storage deduplication, multi-cluster remote management, disaster recovery capabilities, and even user management. Plus, HC3 allows you to deploy single appliances – instead of the previously required minimum of three. You’d still want a cluster for HA and primary production systems, but if you’re a smaller business and have no need for all that extra horsepower, the single appliance will work for you.
Scale has also done a solid job getting into the automation space. They’ve created an automated, intelligent mechanism to allocate storage across the tiers. According to the company, this tuning capability allows you to increase the allocation of flash for workloads that have higher I/O requirements while decreasing the allocation for virtual disks and workloads that have minimal I/O needs. I’ve always been a fan of Scale Computing. If you’re looking to support smaller offices and are on a budget (but still want awesome tech), look to Scale as a solid option.
Let’s be clear -- I know this isn’t the full list of hyper-converged infrastructure vendors. Plus, there are going to be more hardware vendors supporting CI using software (like Pivot3 or Nutanix OEM) to deliver HCI solutions. Lenovo is a great example of that. Furthermore, we’re seeing a broadening market around whitebox integration with HCI software options. Whichever way you approach it, the HCI landscape continues to change and evolve.
Goodbye Atlantis Computing; Hello Hive-IO
Atlantis Computing has been in the market for a long time. If you’ve worked with virtualization technologies (XenApp and XenDesktop in particular) you’ll know about Atlantis. They came to market with their ILIO products and then further impacted the industry with USX. Then, they released their Hyperscale HCI appliance and attempted to enter a very volatile market. Sometimes it works, and sometimes it doesn’t. Their concepts behind hyper-scale were actually really awesome, but there were challenges with the hardware, where it could be deployed, and issues with the deployment itself.
And so, Atlantis Computing’s assets were sold to Hive-IO, a young software-defined-focused organization. According to Hive-IO, they’ll continue to support all of Atlantis’ products and work to retain its essence and technology to help expand Hive-IO’s storage offering to include intelligent software-defined solutions. The focus will revolve around an area which both Hive-IO and Atlantis know very well: VDI.
Over the past few quarters, I can honestly say that CI and HCI have dominated a lot of the projects we’ve been working on. We’ve seen use cases in healthcare, government, pharma, education, manufacturing, and other verticals. Furthermore, we’re seeing growth in how HCI is being deployed within remote and branch locations.
For HCI to be successful, make sure you know your use case and where you’re deploying the environment. Do your best to reduce complexity and fragmentation by leveraging hyper-converged infrastructure systems that easily integrate with existing data center and virtualization components. Finally, I always recommend testing these systems out. Deploying HCI in parallel with your existing environment can help you better understand utilization, best practices, and where the design can positively impact your specific requirements.