New Configurations Allow for Real-Time Collaboration in the Metaverse

To address performance and latency issues, companies should look to edge computing solutions while also accounting for the significant power required to handle metaverse workloads.

It's no secret that rapidly evolving virtual reality (VR) and augmented reality (AR) technologies have already begun to change the fundamental ways in which humans interact with each other. Facebook's 2021 announcement of plans to create a "metaverse," along with growing innovation and investment in VR technologies and projects from numerous other companies, is driving the proliferation of new virtual worlds and applications that respond and react in real time.

While these new technologies mark the beginning of an exciting new era for consumers, they require a very different understanding and implementation of hardware systems. The rapid response time required by VR and AR experiences inherently demands higher-caliber equipment and therefore poses a new set of challenges for the data centers that make these environments possible. For both the manufacturers of data center equipment and the managers of data centers, metaverse workloads create even greater hurdles to overcome than advanced AI deployments.

Another World, But Still in the Data Center

First, it's not uncommon for these virtual worlds to exist remotely in data centers, where powerful, high-bandwidth servers track and process hundreds of thousands of users and user databases. These infrastructures must not only process real-time movements and interactions with precision but, in the case of AR, also retrieve the data needed to create impressive, lifelike images and overlays.

Data centers like these must have sufficient processing capabilities, cores/threads, and the fastest GPUs to render frames at the required rate and avoid latency issues that may create an unpleasant experience for the user. What's more, they have to handle fluctuating user traffic without sacrificing that speed. For large-scale metaverse realities based in data centers, this means multi-CPU servers, the highest-performance graphics cards, high networking bandwidth, and the supporting software and connections to ensure minimal lag or jitter.
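To put those rendering requirements in concrete terms, the short sketch below converts common headset refresh rates into per-frame time budgets. The refresh rates and the overhead figure are illustrative assumptions, not measurements from any particular system.

```python
# Rough per-frame time budgets for common VR refresh rates (illustrative assumptions).
REFRESH_RATES_HZ = [72, 90, 120]

# Assumed time already spent outside the GPU each frame (tracking, network, compositing),
# in milliseconds; a purely hypothetical figure for illustration.
ASSUMED_OVERHEAD_MS = 4.0

for hz in REFRESH_RATES_HZ:
    frame_budget_ms = 1000.0 / hz
    render_budget_ms = frame_budget_ms - ASSUMED_OVERHEAD_MS
    print(f"{hz} Hz -> {frame_budget_ms:.1f} ms per frame, "
          f"~{render_budget_ms:.1f} ms left for rendering after overhead")
```

At 90 Hz, for example, the whole pipeline has roughly 11 ms per frame, which is why every millisecond of server, network, or software delay shows up directly in the user's experience.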

Although this may sound like every other high-performance application, such as AI, it matters even more in AR and VR environments. These applications demand the utmost performance from a wide array of technologies to be incredibly responsive in real time and to render graphics as close to the user as possible. They necessitate extreme computational power (CPUs), robust graphics processing (GPUs), top-tier memory (on-chip and storage), high bandwidth (networking/PCIe), and unwavering reliability (minimal jitter and lag) to ensure a seamless experience, or at the very least one that isn't jarring or nausea-inducing.
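One way to see why every component matters is to tally a motion-to-photon latency budget, the time from a head movement to the updated image reaching the display. The stage names and values below are assumptions for illustration; the roughly 20 ms target is a commonly cited comfort threshold for VR.

```python
# Illustrative motion-to-photon latency budget for a remotely rendered VR frame.
# Component values are assumptions for illustration, not vendor measurements.
budget_ms = {
    "head tracking & input":   2.0,
    "network round trip":      5.0,
    "render queue + GPU":      9.0,
    "encode/decode + display": 4.0,
}

total = sum(budget_ms.values())
target = 20.0  # commonly cited comfort target for VR motion-to-photon latency

for stage, ms in budget_ms.items():
    print(f"{stage:<24} {ms:5.1f} ms")
print(f"{'total':<24} {total:5.1f} ms (target <= {target} ms)")
```

If any single stage slips, the whole budget is blown, which is why CPUs, GPUs, memory, and networking all have to be provisioned together rather than optimized in isolation.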

The Power of the Edge

To address these performance and latency issues, enterprises managing metaverse deployments are looking to the edge of the network to help ease some of these challenges.

The three types of AR and VR interaction (a non-immersive virtual environment, an immersive virtual environment, and augmented reality) all require two-way communication between the edge and the data center, which is why localized data centers can help lower latency and improve performance. By moving closer to the edge, or the user, companies can leverage different techniques to process information away from the central data center, namely at or near the edge. In addition to other benefits, this can cut down on transmission time and delays in responding to the user's actions, as massive volumes of compressed visual imagery no longer have to be transferred from the data center all the way to the headset.
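The physics of distance alone makes the case for edge placement. The sketch below estimates round-trip propagation delay over fiber for a few hypothetical server placements; real-world latency would be higher once routing, queuing, and processing are added.

```python
# Rough round-trip propagation delay over fiber for different server placements.
# Distances are hypothetical examples; ~200,000 km/s approximates light speed in fiber.
FIBER_SPEED_KM_PER_S = 200_000

scenarios_km = {
    "edge site in the same metro": 50,
    "regional data center":        500,
    "distant cloud region":        2000,
}

for name, km in scenarios_km.items():
    rtt_ms = 2 * km / FIBER_SPEED_KM_PER_S * 1000
    print(f"{name:<30} ~{rtt_ms:.1f} ms round-trip propagation delay")
```

A server two thousand kilometers away consumes most of a 20 ms motion-to-photon budget on the wire alone, while an edge site in the same metro area leaves almost all of it for rendering.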

One approach is to relay only commands from the data center and let servers at the edge render the scene from that command sequence. However, this can be challenging because of the need for incredibly low latency, even in cases where bandwidth is not as critical. Technically speaking, the most responsive technique for interacting in AR and VR environments is to render the graphics as close to the user as possible, something made practical by the past few years' advances in graphics performance. Depending on computational complexity and the level of interactivity, edge hardware can now deliver quality high enough to run these workloads.
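To illustrate why relaying commands rather than streaming rendered frames appeals on bandwidth grounds, the sketch below compares a hypothetical scene-command message against a single compressed video frame. The message format, bitrate, and frame rate are assumptions for illustration only.

```python
import json

# Hypothetical comparison: relaying a compact scene command to an edge renderer
# versus shipping a compressed video frame. All formats and sizes are assumptions.

# A compact command the data center might relay to an edge renderer.
command = {
    "frame": 18342,
    "avatar_id": "user-0042",
    "pose": {"x": 1.2, "y": 0.8, "z": -3.1, "yaw": 87.5},
    "action": "grab_object",
    "object_id": "cube-17",
}
command_bytes = len(json.dumps(command).encode("utf-8"))

# One compressed 4K video frame at an assumed ~50 Mbit/s stream and 90 fps.
video_frame_bytes = int(50_000_000 / 8 / 90)

print(f"scene command:          ~{command_bytes} bytes")
print(f"compressed video frame: ~{video_frame_bytes:,} bytes")
print(f"ratio: roughly {video_frame_bytes // command_bytes}x more data per frame")
```

The bandwidth savings are dramatic, but the edge renderer still has to receive and act on each command within the same tight latency budget, which is what makes the approach hard in practice.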

An infrastructure of devices, networking, and back-end servers must be designed to satisfy users' service-level agreements (SLAs) for virtual or augmented realities. Updating visuals based on the user's movement is just as crucial as rendering performance, as any lag in the program's reaction to user movements can quickly detract from the experience and its sense of realism.
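A minimal sketch of how such an SLA might be checked in practice is shown below, assuming a hypothetical target that 99% of frames complete the motion-to-photon path in under 20 ms; the threshold, percentile method, and sample values are all illustrative.

```python
# Minimal sketch: checking measured motion-to-photon latencies against a hypothetical SLA.

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

SLA_P99_MS = 20.0  # hypothetical target: 99% of frames under 20 ms

latencies_ms = [11.8, 12.4, 13.1, 12.9, 14.6, 12.2, 19.7, 13.3, 12.7, 15.1]
p99 = percentile(latencies_ms, 99)
print(f"p99 latency: {p99:.1f} ms -> {'within' if p99 <= SLA_P99_MS else 'violates'} SLA")
```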

Sustainability for Today and Tomorrow

As companies continue to push deeper into the metaverse, they need to think about data center power consumption and consider the environmental impact of a new or higher-powered facility. It's important that businesses keep this energy need top of mind, because the scale of metaverse deployments means huge power requirements. Just last year, Meta had to pause construction of its Netherlands data center over energy efficiency concerns, as it would have used as much energy as 22,000 residents.

The power needed to support metaverse workloads is a challenge for data center infrastructure, and it affects operating costs as well as the environment. For all these reasons, it is vital that companies entering this space make energy efficiency a priority. Before jumping in, they need to understand the complexities of sourcing the needed power and should adopt more energy-efficient hardware, server architectures, and data center designs so that deployments are sustainable.
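As a back-of-the-envelope illustration of the scale involved, the sketch below estimates annual energy consumption for a hypothetical GPU rendering cluster. The server count, per-server draw, and power usage effectiveness (PUE) figure are assumptions, not data from any real deployment.

```python
# Back-of-the-envelope estimate of annual energy use for a hypothetical rendering cluster.

NUM_SERVERS = 2000            # hypothetical GPU rendering servers
AVG_DRAW_KW_PER_SERVER = 1.5  # assumed average draw per server, including GPUs
PUE = 1.4                     # power usage effectiveness: total facility power / IT power

it_load_kw = NUM_SERVERS * AVG_DRAW_KW_PER_SERVER
facility_load_kw = it_load_kw * PUE
annual_mwh = facility_load_kw * 24 * 365 / 1000

print(f"IT load: {it_load_kw:,.0f} kW, facility load: {facility_load_kw:,.0f} kW")
print(f"Estimated annual consumption: ~{annual_mwh:,.0f} MWh")
```

Even under these modest assumptions the cluster draws several megawatts continuously, which is why lowering PUE and choosing more efficient hardware matter so much at metaverse scale.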

This means crafting agile, adaptable, and scalable architectures to enhance computing capabilities. Effective collaboration can also help businesses avoid over- and under-provisioning their operational AI environments, ensuring efficiency and value are maximized. To support modern enterprises' massive AI workloads, the database, rendering, and network layers of the infrastructure must operate as a well-oiled machine, each tuned to work harmoniously with the other components in the process.

Erik Grundstrom is Director, FAE & Business Development, at Supermicro.
