The days of the PC, as we know it, are numbered. Corporations are already dealing with IT consumerization and demands around mobility, and the evolution of the data center has helped IT departments deliver more using a lot less. Current data center platforms have become the home of many new technologies. With more high-density and multi-tenancy computing, increased resiliency, and better overall resource utilization, many more organizations are centralizing their entire business model around their data center platform. With better bandwidth and resource capabilities behind them, cloud computing and virtualization have helped digitize the industry.
With that comes the next-generation end-point. What’s the point of having big, resource-intensive machines sitting at every user’s location? Why dedicate extra hours to repairs, maintenance and life cycle management? Why create this extra work when the entire end-user experience can now be delivered directly from your data center down to a tiny end-point device?
In fact, virtualization and compute technologies have come even further by allowing heavier, resource-intensive applications to function better within the data center. For example, NVIDIA’s GRID technology integrates directly with the hypervisor and allocates GPU memory to a virtual machine running, say, on XenDesktop 7. That virtual desktop can then be streamed down to a very small hardware footprint. Although GPU pass-through has been available in the past, the big difference now is better resource utilization per virtual desktop and the ability to place more users on a single GPU.
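The users-per-GPU gain comes down to simple arithmetic: pass-through dedicates an entire card to one VM, while a shared (vGPU-style) configuration carves the card’s frame buffer into fixed per-user slices. A minimal sketch, with card and profile sizes that are illustrative assumptions rather than published NVIDIA GRID figures:

```python
# Illustrative only: estimating how many virtual desktops one GPU can host.
# The memory sizes below are example values, not vendor specifications.

def users_per_gpu(gpu_memory_mb: int, profile_mb: int) -> int:
    """Number of desktops a GPU supports when its frame buffer is
    divided into fixed per-user profiles."""
    return gpu_memory_mb // profile_mb

# Pass-through: the whole card goes to a single VM.
print(users_per_gpu(4096, 4096))   # 1 user per GPU

# Shared profiles: smaller slices mean more users per GPU.
print(users_per_gpu(4096, 512))    # 8 users per GPU
```

The same arithmetic is why smaller per-user profiles trade graphics headroom for density.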
Smaller, Faster End Points
In designing more efficient corporate environments, IT managers must look to end-points which are easier to manage, faster to deploy and require less overhead. The introduction of thin-clients paved the way for a small, easy-to-control end-point. The challenge, in many cases, has been the price. These terminals would still cost between $300 and $400, and many IT managers would argue that the minor savings in management were outweighed by the performance a bigger PC might deliver. Still, as the IT infrastructure continued to evolve, virtual applications, desktops, and the data center that supports them all became much more efficient. And, as a result, the end-point evolved. Here’s a look at where we’re heading:
- Breaking the $100 barrier. What’s the point of deploying an end-point if it’s not cost-effective? Zero-clients aim to change that by breaking the $100 barrier. These devices will deliver workloads, connect to the central data center, and be easy to manage. Already we see devices at $150 and below. As data center resources become even more centralized and powerful, much of the processing will be offloaded to the data center, allowing the end-point to get even smaller and less expensive.
- Centralizing the data and the management. With faster network closets and better data-delivery mechanisms, the end-point really doesn’t need to be complex. With no moving parts and really just one main board, zero-clients act as direct conduits for virtual workloads. All of the data will be centrally managed and controlled. This means that if a device is lost, the data will always be safe. Plus, pushing new images and controlling versions will be even simpler. Centralized management consoles will allow for full control and visibility of the end-point environment.
- Rip/replace methodology. It takes more time and money to replace a hardware component, reinstall software, or troubleshoot an issue at the end-point level than many may think. For $100, it’ll become standard practice to go to the end-point, unplug it, and put a new one in – making the workload available immediately after a network connection is established.
- Content redirection and management. A zero-client isn’t just some weak little end-point. In fact, these devices are able to deliver HD content with no lag. Furthermore, administrators are able to control whether the devices process some of the information or the content is rendered at the data center level. This type of visibility into how traffic flows allows managers to deliver an even more powerful end-user experience. The idea isn’t to lock down or restrict the user: if these devices deliver a poor user experience, deployment will be a serious challenge. That’s why next-generation end-points are designed to run efficiently and leverage the bandwidth and resources that they are provided.
- Flexibility around security and compliance. The great part about zero-clients is the flexibility around security and compliance. Not only is the data always centrally held, the end-point never retains the information. If the device is stolen, no data can be pulled from the machine. Administrators are able to centrally control data, how it’s delivered and where it’s being accessed from. With visibility into the information that’s flowing in and out of these zero-clients, security administrators are able to better set data loss prevention policies and gain greater visibility into data flow.
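The rip/replace point above is really a break-even calculation: once a truck roll to diagnose and repair a device costs more than the device itself, swapping the unit wins. A quick sketch with placeholder figures (the rates and parts costs are assumptions for illustration, not vendor pricing):

```python
# Hypothetical break-even sketch for the rip-and-replace model.
# All dollar amounts are illustrative assumptions.

def repair_cost(tech_hourly_rate: float, hours_on_site: float,
                parts: float) -> float:
    """Total cost of diagnosing and fixing one device in the field."""
    return tech_hourly_rate * hours_on_site + parts

ZERO_CLIENT_PRICE = 100.0  # the sub-$100 target discussed above

fix = repair_cost(tech_hourly_rate=75.0, hours_on_site=2.0, parts=40.0)
print(fix)                      # 190.0
print(fix > ZERO_CLIENT_PRICE)  # True: unplugging and replacing is cheaper
```

Under these assumed numbers, even a modest two-hour service call already costs nearly twice the device itself.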
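The content-redirection decision described above can be thought of as a simple policy: render media on the client when it has the link speed and decode capability, otherwise render in the data center and stream the pixels down. A hedged sketch of that logic, where the field names and the 10 Mbps threshold are illustrative assumptions, not any vendor’s actual API:

```python
# Illustrative render-placement policy for HD content on a zero-client.
# Field names and thresholds are hypothetical, for explanation only.

from dataclasses import dataclass

@dataclass
class Endpoint:
    bandwidth_mbps: float   # measured link speed to the data center
    can_decode_hd: bool     # local hardware decode support

def render_location(ep: Endpoint, min_mbps: float = 10.0) -> str:
    """Return where HD content should be rendered for this end-point."""
    if ep.can_decode_hd and ep.bandwidth_mbps >= min_mbps:
        return "client"      # redirect the stream; client decodes locally
    return "data-center"     # server renders; client just receives pixels

print(render_location(Endpoint(50.0, True)))   # client
print(render_location(Endpoint(4.0, True)))    # data-center
```

The design point is that the administrator, not the device, decides where rendering happens, which is exactly the visibility the bullet above describes.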
Vendors like nComputing and Wyse are working hard to replace the big PC end-point with better and more efficient computing platforms. New chips, more bandwidth, and faster networks are all simplifying the end-point and enhancing the data delivery process. As cloud computing and virtualization continue to pick up steam, the end-point community will benefit. By creating an easy-to-manage end-point environment, managers can focus on improving the end-user experience without having to worry about the machine that they’re deploying. The ability to consistently deliver a fast, easy-to-access workload will create a more efficient (and happier) end-user.