John Williams is Vice President of Marketing and Product Management for AppliedMicro.
The technologies that power the data center have grown at an incredible rate in recent years. What was once considered “state of the art” is now deemed outdated.
Next-generation servers have broken the commodity mold, offering capabilities ranging from customizable solutions with application accelerators to appliances that address specific workloads. Data center software is also evolving at breakneck speed. The use of high-level languages, virtualization and open source software has opened the door to a new generation of server solutions and deployment models – solutions that are no longer constrained to the almost 40-year-old x86 instruction set.
In short, the "one-size-fits-all" data center is dead. Achieving order-of-magnitude increases in application performance and reductions in operational costs requires new approaches. The future of the data center is a broad set of solutions that combine cost-effective, energy-efficient processors, new platform architectures, and workload accelerators to achieve maximum performance, power efficiency and scalability.
Server-based compute is rapidly becoming a commodity. In the past, symmetric multiprocessing (SMP) was used to scale compute resources behind a unified memory and input/output (I/O) subsystem, and there was always more demand for compute cycles. For years, enhancements to pipeline architectures and new fabrication process technologies drove performance upwards – gigahertz was a relevant performance metric. All of that changed in 2005, when dual-core server processors arrived and dramatically changed both the amount of compute available and the power efficiency of these devices. Today, server processors with 8, 16, or more compute cores are common.
Server utilization in 2005 was generally poor, with rates of 10 percent or less not uncommon. Virtualization enabled server consolidation to better utilize compute resources, but it simply moved the bottleneck. Compute has become a commodity in today's data center. IT organizations rarely invest in the highest performance "top-bin" processors. Why? They are expensive, and the extra compute is difficult to monetize in the majority of data center workloads. Servers need more memory and better I/O subsystems to scale performance, not more compute; few workloads are compute-bound today. The problem this creates is that to gain access to the memory that is badly needed, one must add processors whose additional compute resources contribute little to workload performance.
So what does this all mean to the data center of tomorrow?
- Adoption of scale-out compute platforms running distributed workloads across servers with a healthy balance of compute, memory and I/O will continue at a rapid pace.
- Performance will increasingly be a workload metric – not a processor metric. Synthetic CPU benchmarks will become an increasingly weak predictor of delivered application performance.
- Adoption of new server compute architectures like ARM with the 64-bit ARMv8 architecture will accelerate based on a rapidly expanding enterprise software ecosystem.
- Data center costs will drop. With ARM and other broadly available architectural alternatives, multiple suppliers will offer differentiated, workload-optimized solutions driving competition and innovation – something that has been sorely lacking in recent years.
- Server platforms will become more differentiated based on rack density, underlying compute architecture, memory, storage and networking. Platform vendors will offer more appliance-like solutions – ‘the right tool for the job.’
- You won’t care what instruction set the processor is running.
The availability of ARM-based solutions is an important step in the evolution of the data center. IT organizations are clearly seeing that a solution offering a balance of strong compute, high integration, large memory and excellent power efficiency is a powerful tool for addressing critical workloads ranging from web serving and caching to in-memory databases and data analytics. What is key to these new solutions?
- Large memory is not an upsell. The processor solution is the same whether one chooses to address 32 gigabytes of memory or 256 gigabytes of memory.
- Power efficiency is not an upsell. ARM is inherently power efficient. There is no premium for a low-power 35-watt processor – that is just what the product is.
ARM-based server platforms for both compute and storage workloads are in production and available today. The software ecosystem has developed and matured rapidly and is enterprise-ready. Data centers are deploying the technology now. The list of silicon and platform suppliers continues to grow as we enter 2016. After multiple years of ARMv8 development by silicon vendors, original equipment manufacturers (OEMs) and software vendors, the ARM adoption cycle is accelerating. The data center will never be the same.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.