Five Steps to Preparing Your Data Center for VDI

With greater data center resource capabilities, you can now deliver rich content to many distributed users. Every enterprise has its own set of business requirements, shaped by the type of organization, its vertical, how users interact with their desktops, and much more. Even so, there are five key steps that should always be followed when creating a truly powerful virtual desktop infrastructure (VDI) solution.

Virtualization at the desktop level has reached maturity and is being used in all types of organizations.

Virtualization, at least at the server level, has been in use for some time. Since then, the concept has expanded to user, application, network, security, storage, and, of course, desktop virtualization (VDI). This new approach took the market by storm, and many thought it was the direct answer to their enterprise's desktop problems. Initially, though, there were some challenges – serious challenges. Data center teams never really understood what initial VDI technologies required, so the onset of the VDI push saw some very flawed deployments.

Now, the maturity is certainly here. Many companies understand where VDI fits. Labs, kiosks, call centers, educational institutions, and healthcare organizations are all finding powerful uses for VDI. Of course, as with any technology deployment, each organization is unique and will have its own set of business requirements, which can depend on the type of organization, its vertical, how users interact with their desktops, and much more. Still, there are five key steps that should always be followed when creating a truly powerful VDI solution.

The Big 5 VDI Considerations

Virtualizing a desktop requires several data center components to be in place for the solution to succeed. Furthermore, planning and designing the infrastructure goes well beyond simply sizing and scoping out the virtual desktop itself.

  • Consider the end-point. Many organizations are not just pursuing better computing options; they're simultaneously aiming for greener computing technology. The option to replace a thick terminal with a very thin or even zero client is very enticing. With VDI, you're able to deploy highly efficient end-points that look to a central boot server to pull their images. Here's the point: hardware manufacturers are seeing the end of the PC days. New types of end-points, like those from nComputing, are creating powerful platforms while pulling less than 5 watts. Furthermore, mobile technologies like the Chromebook and Chrome OS allow for complete application and desktop delivery to truly mobile devices. These are web-enabled platforms capable of streaming very rich content directly to the end-user. The important part here is that these SoC designs will only continue to proliferate as we redefine the end-point, the data, and how that data is delivered.
  • QoS, LAN and WAN optimization. VDI can be very resource intensive, and that includes traffic over the wire. A good core switching infrastructure helps alleviate this pain by allowing administrators to create rules and policies around traffic flow. Setting QoS metrics for VDI-specific traffic can help remove congestion and ensure that the right traffic gets the proper priority. As for traffic leaving the data center, knowing where the user is located and optimizing the experience based on certain criteria becomes very important. New VDI technologies allow users to connect over 3G/4G networks and still have their traffic optimized. The protocols delivering this rich media keep improving, and WAN optimization (WANOP) systems and bandwidth in general have come a long way as well. (For a rough way to size this traffic and map it to QoS classes, see the first sketch after this list.)
  • Persistent vs. pooled. Or possibly both, or maybe just apps. When deploying VDI, there are two major options an administrator can choose from when designing the actual image. A persistent desktop saves the changes a user makes to it. A pooled desktop, on the other hand, reverts to its original state when rebooted. In some cases, many users will touch the same end-point, and the administrator may want the device to boot into a clean state each time. In many environments, user groups will actually require that both pooled and persistent desktops be deployed: some users will have one type of image, while another set will have a completely different one. Keep in mind that virtual resource delivery doesn't have to mean desktops only. In many cases, it's much more efficient to deliver just applications instead of entire desktops, and some of those users can utilize a hosted (or shared) desktop model instead of their own dedicated image. In all of these situations, the users and their behavior have to be understood to deliver the appropriate computing experience (the second sketch after this list shows one way to frame that mapping).
  • Storage preparation. Large organizations will oftentimes have numerous storage controllers, while some smaller organizations will be using only one. Regardless of the number of storage controllers available, they need to be sized properly for VDI. To prevent boot and processing storms, organizations must look at the IOPS requirements of their images (the third sketch after this list shows a rough sizing model). To alleviate processing pains, administrators can look at flash technologies (NetApp, Fusion-IO, XtremIO) or SSD technologies (Violin, Nimbus) to help offload that kind of workload. Furthermore, intermediary platforms like Atlantis ILIO run as a virtual machine that uses massive amounts of RAM as the key storage repository. Developments around this technology now allow both persistent and non-persistent images to reside on RAM-based storage.
  • The infrastructure consideration. High-density, multi-tenant computing has truly changed how we utilize resources within the modern data center. Massive blades and chassis now make up the DNA of data center and cloud computing. The introduction of truly converged systems created an even more efficient way of delivering all core computing resources from one massive chassis plane. Fast-forward to today, and we see even more progression in the converged infrastructure field. When Cisco acquired Whiptail, it introduced a new model that integrates millions of IOPS directly into a UCS blade chassis. This trend will only continue as the digitization of the modern business becomes the norm. There will be more users accessing data via the cloud, more resources delivered over the WAN, and entire workloads delivered to a variety of end-points. All of this will require platforms capable of integrating network, storage, and compute to deliver true data acceleration.
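
A quick illustration of the QoS and bandwidth point above: the sketch below estimates aggregate VDI bandwidth and maps traffic types to priority classes. All of the per-user throughput figures, concurrency assumptions, and DSCP markings are hypothetical placeholders for illustration; real values depend on your display protocol, workload mix, and switch configuration.

```python
# Rough VDI bandwidth sizing and QoS classification sketch (illustrative only).
# Per-user kbps figures are placeholder assumptions; measure real display-protocol
# traffic before writing actual switch policies.

PROFILE_KBPS = {
    "task_worker": 150,       # light office apps
    "knowledge_worker": 400,  # web, some multimedia
    "power_user": 1000,       # rich graphics and video
}

# Hypothetical DSCP markings for VDI-related traffic classes.
DSCP_CLASSES = {
    "display_protocol": "AF41",    # interactive desktop traffic: highest of the three
    "usb_redirection": "AF21",     # peripheral traffic: medium priority
    "image_provisioning": "AF11",  # boot/image pulls: background priority
}

def estimate_bandwidth_mbps(user_counts, concurrency=0.8, headroom=1.25):
    """Estimate peak VDI bandwidth in Mbps for a mix of user profiles.

    concurrency: assumed fraction of users active at peak.
    headroom:    assumed safety multiplier for bursts.
    """
    total_kbps = sum(PROFILE_KBPS[profile] * n for profile, n in user_counts.items())
    return total_kbps * concurrency * headroom / 1000.0

if __name__ == "__main__":
    users = {"task_worker": 300, "knowledge_worker": 150, "power_user": 25}
    print(f"Estimated peak VDI bandwidth: {estimate_bandwidth_mbps(users):.1f} Mbps")
    for traffic, dscp in DSCP_CLASSES.items():
        print(f"Mark {traffic} as {dscp}")
```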
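
The persistent-versus-pooled decision is ultimately a mapping from user groups to delivery models. The small helper below frames that mapping; the group names and rules are invented for illustration and do not reflect any broker's actual policy API.

```python
# Hypothetical helper mapping user groups to a VDI delivery model.
# Group names and rules are illustrative; real brokers expose their own
# policy constructs for this decision.

def delivery_model(group):
    """Suggest a delivery model for a user group (illustrative rules)."""
    if group in ("kiosk", "lab", "call_center"):
        # Shared end-points should boot into a clean state each time.
        return "pooled (non-persistent) desktop"
    if group in ("developer", "designer"):
        # Users who install software and keep local changes need persistence.
        return "persistent desktop"
    if group == "task_worker":
        # Often only a few apps are needed, not a whole desktop.
        return "published applications or hosted shared desktop"
    # Unknown group: study the users before picking a model.
    return "needs assessment"

if __name__ == "__main__":
    for g in ("kiosk", "developer", "task_worker", "contractor"):
        print(f"{g}: {delivery_model(g)}")
```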
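
Finally, boot storms are at heart an IOPS-aggregation problem, so a rough sizing model is useful when preparing storage. The figures below are placeholder assumptions, not vendor specifications; profile your own golden image, since per-desktop IOPS vary widely by OS and workload.

```python
# Back-of-the-envelope VDI storage IOPS sizing (illustrative assumptions only).

STEADY_IOPS_PER_DESKTOP = 15  # assumed steady-state average per desktop
BOOT_IOPS_PER_DESKTOP = 200   # assumed burst while a desktop boots

def required_iops(desktops, boot_fraction=0.2, write_ratio=0.7, raid_write_penalty=2):
    """Estimate front-end and back-end IOPS for a VDI deployment.

    boot_fraction:      assumed share of desktops booting at once (morning storm).
    write_ratio:        VDI is typically write-heavy; 70% is a common rule of thumb.
    raid_write_penalty: back-end writes per front-end write (e.g., RAID 10 = 2).
    """
    booting = int(desktops * boot_fraction)
    steady = desktops - booting
    front_end = steady * STEADY_IOPS_PER_DESKTOP + booting * BOOT_IOPS_PER_DESKTOP
    # Writes are amplified by the RAID write penalty on the back end.
    back_end = front_end * (1 - write_ratio) + front_end * write_ratio * raid_write_penalty
    return front_end, int(back_end)

if __name__ == "__main__":
    fe, be = required_iops(500)
    print(f"500 desktops: ~{fe} front-end IOPS, ~{be} back-end IOPS to size for")
```

Numbers like these are what push designs toward the flash- and RAM-backed approaches mentioned above.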

As the virtual platform continues to evolve, organizations will need to make sure their infrastructure stays directly in line with business needs. Never forget that modern businesses are joined at the hip with their IT environments. A lack of technological understanding can let the competition jump ahead.

So, as with any new innovation or technology, do not overlook management and training. Take the time to learn the key metrics that revolve around keeping a virtual environment proactively healthy. Furthermore, educate your staff so that they can not only support the end-user more efficiently but also understand the true power of their virtual infrastructure. This in-line training and communication will help align the vision of the entire organization directly with the IT department.
