(Image: An Open19 brick server)

Packet Readies Custom Open19 Servers for the Edge

LinkedIn’s hardware standard makes its way outside LinkedIn data centers and to the bases of cell towers.

People behind the new Open19 data center hardware standard that came out of LinkedIn hope that it can bring the scale and standardization advantages of hyperscale cloud infrastructure and hardware innovation to enterprises that will never operate at hyperscale. Another goal is to streamline deployment of edge computing infrastructure in new locations, such as cell towers.

One of the first adopters of Open19 outside of LinkedIn and one of the first companies deploying computing infrastructure at a cell tower is bare metal cloud provider Packet. The New York-based startup is getting ready to install high-density microservers with Netronome SmartNICs, built to the Open19 form factor, at six edge test sites as well as in its 18 existing cloud data center locations.

Packet’s Open19 design fits up to 120 microservers into a 42U rack and supports Intel, AMD, and Arm processors, with SmartNICs integrated on the motherboard to offer network acceleration for telemetry, security, switching, and load balancing for Linux workloads.

While most customers want x86 systems today, Arm is increasingly relevant for custom silicon, Jacob Smith, Packet’s chief marketing officer, told Data Center Knowledge in an interview. “Our customers are trying to innovate with different combinations or brand-new hardware, so being somewhat agnostic to that very valuable CPU or IPU, or GPU, or whatever you're using to do that special thing, is critical to our design.”

The lowest-power option will be the Arm systems, at around 100W per microserver. The Intel and AMD SKUs will use around 600W to 700W, estimated Paul Teich, principal analyst at Liftr Cloud Insights.

Today, Arm systems are suitable for specific applications, especially in the telco space, where Open19 is getting some traction, he told us. But in the future the low-power SKUs may become increasingly useful, especially since key Linux workloads already do run on 64-bit Arm.

Offloading the network component to the SmartNIC plays a big role here, since it provides more flexibility for software-defined networking without taxing the central processor, Smith explained, leaving more of the Arm processing capacity to the applications that run on the server.

SmartNICs Allow Squeezing More Power Out of Arm Servers

Because its customers’ requirements frequently change, the design is a disaggregated system that takes storage out of compute nodes. “We’re trying to give different servers to different users who each have single tenancy, even though they’re in the same brick, or the same enclosure,” Smith explained.

In Open19 parlance, “bricks” are predefined server-chassis form factors compatible with the Open19 “cage,” which is designed to fit into standard 19-inch racks.

Packet’s SmartNICs support customized network acceleration. They’re also a good fit for new concepts like network service meshes, which handle routing and security between microservices in a containerized application. Cloud platforms like Amazon Web Services and Microsoft Azure offer similar SmartNIC capabilities, but they abstract the base networking away from developers, while Packet gives them access to network functions all the way down in the Linux kernel.

“You’re not sharing resources with others, like in a container or VM environment. You’re getting access to a small CPU and a network-offload capability, so people can do things with the networking functions in the kernel itself,” Ron Renwick, Netronome senior director for products and product marketing, told Data Center Knowledge.

“Traditional network cards just get traffic and they might do a little bit of filtering or VXLAN termination, but for the most part they push it up to the CPU to figure out, ‘Is this app data? Do I send it to a coprocessor first? Do I punt it over to storage?’”
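What that looks like in practice is an eBPF program attached at the kernel’s XDP hook: the packet logic runs inside the Linux kernel at the driver level, and NICs that support eBPF offload (Netronome’s among them) can push the same program down onto the NIC itself, so filtered traffic never touches the host CPU. The sketch below is a hypothetical, minimal illustration written with the BCC toolkit, not Packet’s or Netronome’s code; the interface name and the drop-ICMP policy are assumptions made for the example.

```python
#!/usr/bin/env python3
# Hypothetical sketch: a tiny XDP packet filter loaded from Python via BCC.
# Assumes the BCC toolkit (github.com/iovisor/bcc), root privileges, and a
# NIC named "eth0"; a real deployment would implement telemetry, load
# balancing, or VXLAN termination here instead of a blanket ICMP drop.
import time
from bcc import BPF

prog = r"""
#define KBUILD_MODNAME "xdp_filter"
#include <uapi/linux/bpf.h>
#include <linux/in.h>
#include <linux/if_ether.h>
#include <linux/ip.h>

int xdp_filter(struct xdp_md *ctx) {
    void *data_end = (void *)(long)ctx->data_end;
    void *data     = (void *)(long)ctx->data;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    /* Example policy: drop ICMP before it reaches the host network stack. */
    if (ip->protocol == IPPROTO_ICMP)
        return XDP_DROP;
    return XDP_PASS;
}
"""

device = "eth0"                          # assumption: the interface to filter
b = BPF(text=prog)
fn = b.load_func("xdp_filter", BPF.XDP)
b.attach_xdp(device, fn, 0)              # 0 = let the kernel pick native mode
print("XDP filter attached to %s; Ctrl-C to detach" % device)
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    b.remove_xdp(device, 0)
```

On a NIC without eBPF offload the same program still runs in the kernel’s driver path; on a SmartNIC that supports offload it can run in NIC hardware, which is the “smaller CPU plus network offload” combination Renwick describes.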

Disaggregation for the Sake of Acceleration

Speeding up compute-node performance by offloading networking functions to the SmartNIC could give the microservers wide appeal, Liftr’s Teich suggested. “By putting in a network accelerator and pre-integrating the driver stack for that SmartNIC, they're providing a pretty significant off-the-shelf capability for a smaller service provider,” he said. “The Arm SKU will get some attention, but they have the mainstream Intel and AMD Epyc SKUs, so they have a good spread of instance capabilities. They have the right type of networking to disaggregate the storage and disaggregate a lot of switch bandwidth, so there's a choice of how to deploy a rack of this equipment.”

Network acceleration will sit alongside more familiar accelerators, Renwick said, as part of the trade-offs needed to get enough processing capacity in environments where access to power is limited. “This disaggregation is about a multi-accelerator world,” he said. “As Moore's Law is dying, especially if you’re trying to do AI processing very close to the infrastructure edge, you’re going to need GPUs at the edge. They’re power-hungry, but they’re necessary, so can you use a smaller CPU and offload the network so you can have a GPU? Packet can provide that by using a disaggregated architecture and provide the developer with what they need for total application performance and latency.”

Packet isn’t the only one doing this kind of accelerator-powered disaggregation, Teich pointed out. “AWS is basically disaggregating the GPU and their new Inferentia inferencing chip from EC2 instances, kind of in the same spirit,” he said. That makes this a trend worth watching.

‘The Last 10 Feet’

Open19 appeals to Packet, an Open19 Foundation member, because of the efficiency of standardization and the operating model it enables, Smith explained.

“The current way of building racks the way we all do it is so subscale that you can’t get the efficiencies [of hyperscale],” he said.

Hardware also needs to change all the time nowadays. “It's no longer, I’ll buy this, and it will last me for seven years. If you're using even last year’s Nvidia GPU, it’s orders of magnitude not as good.”

The way Open19 simplifies hardware installation helps Packet, which plans to add many cloud nodes at wireless towers in the coming years, avoid the high cost of sending specialized technicians to those sites to add or replace servers.

“It’s all about operational efficiency, and it’s all about that last ten feet,” Smith said. In places like Kansas City or Connecticut, where tech talent is scarce, it can cost “thousands of dollars in people time to do very basic things like make sure the cables aren't swapped. So, we’re trying to funnel a lot of our work into the Open19 design, because it removes a lot of the complexity. If the UPS driver can slot [a new server] in or take it out, it makes your cost basis completely different.”

Automation is another piece of the puzzle. Packet will use the same homegrown provisioning system for the new microservers. Infrastructure automation will extend to how customers will work with the hardware, including programming the SmartNICs.

“What developers ask for first and foremost is an API; they want to touch it with automation first, and then everything else waterfalls from that,” Smith explained. Automation and APIs will allow developers to tune the hardware their applications run on even in a bare metal cloud like Packet’s. “I can say, I have this workload and I want to only deploy to systems at this price point, with this power efficiency. That’s data we can feed to smart software like Kubernetes and have it schedule that and be much more aware of these business issues,” Smith said.

“I think we’ll see software come in and solve a lot of problems for us in collaboration with hardware.”
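As a rough sketch of the scheduling idea Smith describes (the label names, container image, and resource figures below are hypothetical, not Packet’s actual metadata), hardware attributes surfaced by a provisioning API could be attached to Kubernetes nodes as labels, and a workload could then ask the scheduler to place it only on machines that match. The example assumes the official Kubernetes Python client.

```python
# Hypothetical sketch using the official Kubernetes Python client: schedule a
# workload only onto nodes whose (made-up) hardware labels match the desired
# power class and price tier. Assumes nodes were labeled beforehand, e.g.:
#   kubectl label node n1 example.com/power-class=low example.com/price-tier=standard
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-worker"),
    spec=client.V1PodSpec(
        # Only consider nodes carrying these (hypothetical) hardware labels.
        node_selector={
            "example.com/power-class": "low",
            "example.com/price-tier": "standard",
        },
        containers=[
            client.V1Container(
                name="app",
                image="nginx:stable",
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "500m", "memory": "256Mi"},
                ),
            )
        ],
    ),
)

v1.create_namespaced_pod(namespace="default", body=pod)
```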
