Nanci Vogtli is the Director of Product Marketing at GigaIO.
For edge environments, whether built to deliver cloud-level services from the base of a cell tower or to run complex, compute-intensive workloads at a remote industrial site, space is at a premium, with the power to operate these mini data centers not far behind. Compute density therefore needs to be fully optimized, without adding a lot of specialized, expensive, and power-hungry hardware.
Technologies and architectures that work quite well in hyperscale data centers are not so easily transferred to their smaller edge siblings. It's a bit like going from a spacious suite at a big-city hotel, with room service and running water, to camping in a tent in the wilderness and cooking over a campfire. Successfully navigating each of these environments requires a different set of skills, tools, and resources.
For instance, Ethernet is a very reliable network technology for sending packets of data from one server to another and running predictable workloads. It also scales out quite nicely: just call room service and order another rack of servers with a switch on top, to continue the analogy. Challenges at the edge, however, demand more ingenious use of the limited resources available. One option is to use an interconnect fabric instead of a network.
There are multiple benefits to replacing a network with an interconnect, all of which improve compute density and reduce power consumption. Mostly, though, it means less physical hardware. By interconnecting all the storage and compute elements deployed at an edge site into a single, yet disaggregated, system, typical hyperscale practices, such as adding power-hungry processing units to handle network traffic or provisioning excessive amounts of underutilized storage, become unnecessary. Compute and storage resources can instead be pooled and shared, which reduces hardware redundancy and improves overall utilization. And because a unified computing machine eliminates the time-consuming conversions involved with network protocols, it can run the same workloads far more efficiently on a fraction of the servers otherwise required.
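To make the pooling idea concrete, here is a minimal Python sketch of site-wide resource composition. It is illustrative only: the `ResourcePool` class and its methods are hypothetical, not a GigaIO or vendor API. The point is that devices belong to one shared pool rather than being stranded inside individual servers.

```python
# Hypothetical sketch: composable resource pooling at an edge site.
# One shared pool replaces static per-server allocation, so devices
# idled by one workload can immediately serve another over the fabric.

class ResourcePool:
    def __init__(self, gpus, nvme_tb):
        self.free_gpus = gpus          # fabric-attached accelerators
        self.free_nvme_tb = nvme_tb    # fabric-attached storage (TB)
        self.allocations = {}          # workload -> (gpus, nvme_tb)

    def compose(self, workload, gpus=0, nvme_tb=0):
        """Attach pooled devices to a workload; fail if the pool is exhausted."""
        if gpus > self.free_gpus or nvme_tb > self.free_nvme_tb:
            raise RuntimeError(f"insufficient pooled resources for {workload}")
        self.free_gpus -= gpus
        self.free_nvme_tb -= nvme_tb
        self.allocations[workload] = (gpus, nvme_tb)

    def release(self, workload):
        """Return a workload's devices to the pool for reuse elsewhere."""
        gpus, nvme_tb = self.allocations.pop(workload)
        self.free_gpus += gpus
        self.free_nvme_tb += nvme_tb

pool = ResourcePool(gpus=8, nvme_tb=64)
pool.compose("inference", gpus=6, nvme_tb=16)
pool.release("inference")              # devices return to the pool
pool.compose("analytics", gpus=8, nvme_tb=32)  # all eight GPUs, one workload
```

With static per-server allocation, the analytics job above would need every GPU to live in one box; with a pooled fabric, the same eight devices are simply re-bound as priorities shift.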
Diverse sets of next-generation applications such as autonomous vehicles, telemedicine, and industrial robotics demand fast response times. Successful edge deployments will require innovative architectures and inventive use of the limited resources on hand, rather than simply replicating, on a smaller scale, the same methodologies developed and deployed by highly staffed, well-provisioned hyperscale data centers. Despite the lack of amenities, the edge represents exciting opportunities in previously undeveloped terrain for those prepared to think in new ways.
In seeking to optimize compute density in edge environments, data center designers may want to consider the new PCIe-based fabrics emerging on the scene. Not only does this interconnect architecture allow several racks to scale into a single high-performance computing machine, it also requires fewer servers, storage elements, and energy-consuming processors to direct data traffic.
An added bonus of a PCIe fabric is the flexibility to remotely reconfigure and reallocate system resources to other applications as workload demands or priorities shift. Since you can't just call room service to deliver another server, this capability also comes in handy in the event of a hardware failure, where it will take a truck roll to reach the edge site and replace the faulty equipment. The site can generally be reconfigured to stay up and running in the meantime.
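The failover scenario can be sketched in a few lines of Python. Everything here is hypothetical, a stand-in for whatever management plane a real fabric exposes; it only illustrates the idea that fabric-attached devices owned by a failed host can be re-homed remotely, keeping the site running until the truck roll.

```python
# Hypothetical sketch: surviving a node failure at an edge site.
# Devices attached to the fabric (not captive inside the failed box)
# are re-bound to a surviving host by a remote operator.

site = {
    "host-a": {"status": "up", "devices": ["gpu0", "gpu1"]},
    "host-b": {"status": "up", "devices": ["gpu2", "nvme0"]},
}

def fail_over(site, failed_host):
    """Mark a host down and re-home its fabric devices on a healthy host."""
    site[failed_host]["status"] = "down"
    orphaned = site[failed_host]["devices"]   # devices live on the fabric
    site[failed_host]["devices"] = []
    survivor = next(h for h, s in site.items() if s["status"] == "up")
    site[survivor]["devices"].extend(orphaned)
    return survivor, orphaned

survivor, moved = fail_over(site, "host-a")
# host-b now drives all four devices; the site degrades but stays up
```

In a network-of-servers design, by contrast, the GPUs and NVMe drives inside the failed server would be unreachable until it was physically replaced.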
To accommodate a wide range of workloads despite limited power and space, designers of new edge data centers must rethink conventional industry architectures, including traditional network technologies. When it comes to maximizing the delivery of services, density matters.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.