Hosting Graphics-Rich Apps in the Data Center

Karen Gondoly is CEO of Leostream.

Hosting workstations in the data center is a topic that deserves a second look. The mobile era is upon us, and with everyone demanding access to resources on the go, how do you mobilize graphically demanding applications in the data center for users who usually have workstations under their desks? Popular wisdom says that hosting graphics-rich applications is hard, but thanks to recent advancements in workstation and hypervisor technology, it may be easier than you think.

In today’s atmosphere of data consolidation and security, it’s important to know that you can store your corporate data in your corporate data center, and still provide users with the access and performance they need. What’s the best option for your organization? Here are a few approaches to consider:

Dedicated Hardware

In the past, your most viable option for running graphically demanding applications was to use dedicated hardware. In this scenario, a Windows or Linux client OS is installed directly on the same hardware where the applications are installed. The downside? This approach, and the hardware to support it, can be expensive. That said, if you have an application that requires the heavy lifting of dedicated hardware, then don't fight it. The key when using dedicated hardware is to maximize its usage by sharing the applications among users and monitoring usage so that you don't waste resources. Connection broker technology can help in this regard by tracking resource consumption, pooling resources together, and allocating those resources out to users as appropriate.
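
To make the pooling idea concrete, here is a minimal Python sketch of the kind of allocation and usage tracking a connection broker performs. The class names, pool contents, and first-free allocation policy are illustrative assumptions, not the behavior of any particular broker product.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Workstation:
    name: str
    assigned_to: Optional[str] = None   # user currently holding the machine

@dataclass
class WorkstationPool:
    """Toy model of connection-broker pooling: hand out dedicated
    workstations to users and track who is consuming what."""
    members: list[Workstation] = field(default_factory=list)

    def allocate(self, user: str) -> Optional[Workstation]:
        # Reconnect the user to a machine they already hold, if any.
        for ws in self.members:
            if ws.assigned_to == user:
                return ws
        # Otherwise hand out the first free machine in the pool.
        for ws in self.members:
            if ws.assigned_to is None:
                ws.assigned_to = user
                return ws
        return None  # pool exhausted

    def release(self, user: str) -> None:
        for ws in self.members:
            if ws.assigned_to == user:
                ws.assigned_to = None

    def utilization(self) -> float:
        in_use = sum(1 for ws in self.members if ws.assigned_to)
        return in_use / len(self.members) if self.members else 0.0

# Example: three dedicated CAD workstations shared among users
pool = WorkstationPool([Workstation("cad-01"), Workstation("cad-02"), Workstation("cad-03")])
pool.allocate("alice")
pool.allocate("bob")
print(f"Pool utilization: {pool.utilization():.0%}")   # Pool utilization: 67%
```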

Pass-Through GPU

GPU technology has begun to take off, and it provides an entirely new approach for organizations looking to run high-end workloads on virtual desktops. Pass-through GPU is the name of the game, and it simply means that each physical GPU in the workstation is passed through to its own virtual machine. Pass-through GPU has opened up windows of opportunity for those running 3D, CAD, video editing, and similar workloads. How does it work? The virtual machines are hosted on the hypervisor that is installed on the workstation. For example, if your workstation has two GPUs, you can host two virtual machines; four GPUs support four virtual machines, and so on.

With pass-through GPU, the operating system on each virtual machine has full and direct access to a dedicated GPU and can use the native graphics driver loaded in the VM. In the described environment, each physical workstation hosts multiple operating systems, which improves the density in your data center without compromising performance.
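
As one concrete, hedged illustration of what "passing a GPU through" looks like in practice, the sketch below assumes a Linux workstation running the KVM hypervisor with libvirt and VFIO. The PCI address and virtual machine name are placeholders, and other hypervisors expose the same capability through their own tooling.

```python
import libvirt

# PCI passthrough definition for a GPU at host address 0000:03:00.0.
# The address is a placeholder; on a real host, find it with `lspci`.
GPU_HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
dom = conn.lookupByName("cad-vm-01")       # placeholder VM name
# Attach the physical GPU to the VM's persistent configuration; the guest OS
# then sees a dedicated GPU and loads its native graphics driver.
dom.attachDeviceFlags(GPU_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```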

Virtualized GPU

Virtualized GPU takes things a step further. Instead of passing each physical GPU directly through to a single virtual machine, the hypervisor sits between the VMs and the GPU, and each physical GPU is shared by multiple virtual machines. (Again, the virtual machines are hosted on the hypervisor that is installed on the workstation.) The hypervisor provides additional technology that gives each virtual machine's operating system direct access to the GPU, giving the performance of pass-through GPU while allowing greater density. Note that the virtual machines do share the GPU's processing power.

To date, only Windows operating systems are supported by virtualized GPU; for Linux users, this option is not yet available.
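
For a sense of the moving parts, here is a hedged sketch of one way a virtual GPU can be carved out and handed to a VM on a KVM/libvirt host that uses the kernel's mediated-device (mdev) framework. The PCI address, profile name, and VM name are placeholders; the available profiles and the exact workflow depend entirely on the GPU vendor's virtualization software.

```python
import uuid
import libvirt

# All identifiers below are placeholders for illustration only.
PHYSICAL_GPU = "0000:03:00.0"      # vGPU-capable card on the host
MDEV_TYPE = "nvidia-63"            # hypothetical vendor-defined vGPU profile
VGPU_UUID = str(uuid.uuid4())

# Step 1: create a mediated device (a slice of the physical GPU) by writing
# a UUID into the profile's sysfs 'create' node.
create_node = f"/sys/bus/pci/devices/{PHYSICAL_GPU}/mdev_supported_types/{MDEV_TYPE}/create"
with open(create_node, "w") as f:
    f.write(VGPU_UUID)

# Step 2: attach that virtual GPU to one of the VMs sharing the card.
VGPU_HOSTDEV_XML = f"""
<hostdev mode='subsystem' type='mdev' model='vfio-pci'>
  <source>
    <address uuid='{VGPU_UUID}'/>
  </source>
</hostdev>
"""
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("cad-vm-02")   # placeholder VM name
dom.attachDeviceFlags(VGPU_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```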

Connecting the Users

Dedicated hardware, pass-through GPU, and virtualized GPU all provide a path to securely hosting your data in the data center. Next, your users need a way to connect. There are two components you will need to add to the mix. The first is a high-performance display protocol that is specifically designed to handle graphics-heavy applications.

At a minimum, the display protocol connects the user's client device to the remote desktop and is responsible for remoting the graphical display back to that device. Ideally, the display protocol goes above and beyond this and is responsible for the complete end-user experience, which includes things like redirecting USB devices from the client to the remote desktop, redirecting audio, and more.

Second, unless you want your end users to memorize IP addresses or hostnames, you need a connection broker to offer resources to users and connect them to those resources.

A connection broker provides the login portal for the users who need access to the hosted desktops and applications. Behind that login portal, the administrator defines the connection broker logic that directs the user to the correct desktop based on who that user is, and where they log in from.
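
The routing logic itself can be as simple as a table of rules. The toy Python sketch below shows the flavor of that logic; the groups, network ranges, and pool names are invented for illustration and are not how any specific connection broker is configured.

```python
import ipaddress
from typing import Optional

# Hypothetical policy table: the pool a user is offered depends on who the
# user is (group) and where they log in from (client network).
POLICIES = [
    # (group,      client network,  desktop pool)
    ("cad-users",  "10.1.0.0/16",   "passthrough-gpu-workstations"),  # on-site
    ("cad-users",  "0.0.0.0/0",     "vgpu-virtual-desktops"),         # remote fallback
    ("office",     "0.0.0.0/0",     "standard-virtual-desktops"),
]

def route_user(group: str, client_ip: str) -> Optional[str]:
    """Return the desktop pool to offer, based on group and login location."""
    addr = ipaddress.ip_address(client_ip)
    for policy_group, network, pool in POLICIES:
        if policy_group == group and addr in ipaddress.ip_network(network):
            return pool
    return None  # no matching policy: nothing is offered

print(route_user("cad-users", "10.1.42.7"))    # passthrough-gpu-workstations
print(route_user("cad-users", "203.0.113.9"))  # vgpu-virtual-desktops
```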

Connection brokers record user information for the lifecycle of the user's connection, from the moment they log in, to when they lock the desktop, to when they log out, allowing you to track and report on resource consumption. By watching the trends in application use, you know which applications are underutilized, or which you need to purchase more of, ensuring that expensive applications are utilized to their greatest potential!
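
As a final illustration, the short sketch below turns a made-up log of login and logout events into hours of use per application. Real brokers expose this kind of reporting through their own consoles, so treat this purely as a sketch of the idea.

```python
from collections import defaultdict
from datetime import datetime

# Minimal event log of the kind a connection broker keeps for each session.
# Users, applications, and timestamps are illustrative.
events = [
    ("alice", "cad-suite",    "login",  datetime(2015, 6, 1,  9, 0)),
    ("alice", "cad-suite",    "logout", datetime(2015, 6, 1, 17, 0)),
    ("bob",   "video-editor", "login",  datetime(2015, 6, 1, 10, 0)),
    ("bob",   "video-editor", "logout", datetime(2015, 6, 1, 11, 0)),
]

def hours_per_application(log):
    """Sum connected hours per application from paired login/logout events."""
    open_sessions = {}
    totals = defaultdict(float)
    for user, app, event, when in log:
        if event == "login":
            open_sessions[(user, app)] = when
        elif event == "logout":
            start = open_sessions.pop((user, app), None)
            if start:
                totals[app] += (when - start).total_seconds() / 3600
    return dict(totals)

print(hours_per_application(events))
# {'cad-suite': 8.0, 'video-editor': 1.0} -> video-editor looks underutilized
```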

