
Representatives From Dell and Switch Ponder the Next Phase in Business IT at Data Center World 2022

Four recommendations from a keynote by Dell CTO Ihab Tarazi and Switch CRO Jonathan King.

Live technology events are finally making a return in 2022, and in one of the opening executive keynotes at Data Center World in Austin, Dell CTO Ihab Tarazi and Switch CRO Jonathan King took the stage to discuss where they believe data center technology is heading.

The major theme was the changing nature and growth of data in light of the development of IoT, and some of the many evolving challenges IT faces in collecting, processing, and extracting value from the new forms and massive quantities of incoming data.

Much of this evolution is driven by the need to connect increasingly remote data with the applications and infrastructure appropriate for the task, and to establish the optimum combination of performance, security, and data protection for these new types of data, a far more complex problem than a single keynote could cover.

To break it down, Tarazi and King focused on four key points: modernize the existing data center, adopt colocation, optimize connectivity, and implement new data services to manage the new forms of data generated by a rapidly growing number of sources. Of course, there’s a lot to unpack within those four points, so we’ll drill down a bit into each.

New data = new infrastructure

Modernizing the existing data center should always be a part of any data center strategy, but with the massive adoption of hybrid cloud (not to mention a worldwide pandemic and a near-total disruption of traditional supply chains), it’s been easier than usual to lag behind the state of the art. Tarazi and King contended that a combination of on-premises, public cloud, and multicloud should be table stakes for most IT shops. It’s hard not to agree, yet there remains substantial room for improvement in the form of the standardization needed to accomplish seamless resource utilization spanning multiple clouds. They also suggested that companies could become their own “fourth cloud” to provide connectivity and extend IT services beyond the traditional firewall.
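To make that standardization point concrete, here is a minimal sketch of one common approach: using a single S3-compatible client interface to reach object storage across several clouds, so the same code path serves on-prem, public cloud, and a third-party provider alike. The endpoints, bucket name, and object key below are hypothetical, and credentials are assumed to come from the usual environment configuration; this illustrates the idea, not the speakers’ implementation.

```python
# Sketch: one S3-compatible interface spanning multiple clouds.
# Endpoints and names are hypothetical placeholders.
import boto3

ENDPOINTS = {
    "aws": None,                                # native AWS S3 (default endpoint)
    "on_prem": "https://s3.corp.example.com",   # hypothetical on-prem S3 gateway
    "partner": "https://s3.partner.example",    # hypothetical third-party cloud
}

def put_everywhere(bucket: str, key: str, payload: bytes) -> None:
    """Write the same object to every configured S3-compatible endpoint."""
    for name, endpoint in ENDPOINTS.items():
        client = boto3.client("s3", endpoint_url=endpoint)
        client.put_object(Bucket=bucket, Key=key, Body=payload)
        print(f"stored {key} in {name}")

put_everywhere("shared-data", "telemetry/2022-03-28.json", b'{"status": "ok"}')
```

The design choice worth noting is that standardizing on one storage API keeps application code identical no matter which cloud ultimately holds the data, which is precisely the seamless resource utilization the speakers described as still out of reach for many shops.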

Adopting colocation that utilizes extended locations and infrastructure as a bridge between edge, cloud, and core can help ensure reliability and reduce latency. While the idea of colocation is nothing new and has been part of many BC/DR strategies for decades, in the context of IoT there is a case to be made for placing enterprise-class systems closer to these new sources of data. The relative popularity of colocation ebbs and flows, but the increasingly mobile nature of IoT applications makes a strong case for moving a number of IT services and supporting infrastructure much closer to new data sources.

Optimizing connectivity is a task that has become a much greater challenge throughout the evolution of IoT initiatives. In the wired data center, Ethernet speeds of 50 to 100 Gbps are becoming commonplace, with 400 Gbps well on the way, so there’s no lack of network performance options on the wire. Unfortunately, many IoT projects can’t be tethered by wires, and though wireless networking options have been ubiquitous for the last decade, they are only now reaching enterprise-grade reliability and performance. Today’s Wi-Fi 6 and 5G wireless networking offer multi-gigabit connectivity, but the speakers noted that bandwidth isn’t the only issue. The rest of the challenge lies in the need for better network automation to ensure that network latency, security, and performance keep pace with the growth in data sources and endpoints.
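As a small illustration of what that automation implies, the sketch below measures TCP connect latency to a set of endpoints and flags anything over a budget. The hostnames, ports, and the 20 ms budget are hypothetical placeholders; a production system would feed results into a monitoring and alerting pipeline rather than printing them.

```python
# Sketch: flag endpoints whose TCP connect latency exceeds a budget.
# Hosts, ports, and the budget are hypothetical placeholders.
import socket
import time

ENDPOINTS = [("edge-gw-01.example.com", 443), ("core-db.example.com", 5432)]
LATENCY_BUDGET_MS = 20.0  # hypothetical per-endpoint budget

def connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the time taken to open (and close) a TCP connection, in ms."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

for host, port in ENDPOINTS:
    try:
        ms = connect_latency_ms(host, port)
        status = "OK" if ms <= LATENCY_BUDGET_MS else "OVER BUDGET"
        print(f"{host}:{port} {ms:.1f} ms [{status}]")
    except OSError as err:
        print(f"{host}:{port} unreachable ({err})")
```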

Implementing new data services may be the greatest challenge outlined in the keynote. Data from next-generation IoT applications is different from that generated in the past; along with traditional database information, it will include a broad range of unstructured data elements spanning documents, log files, images, video, and other dense forms of content. Some of these production challenges can be addressed at the edge, given the substantial amount of computing power available on many new endpoint devices, but there is also a need to develop a downstream data management plan that can automate the management of data throughout its lifecycle.
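One simple form such lifecycle automation can take is age-based tiering. The sketch below moves files past a retention window from a hot tier to an archive tier; the paths and the 90-day cutoff are hypothetical placeholders standing in for whatever policy a real data management plan would define.

```python
# Sketch: age-based tiering from hot storage to an archive tier.
# Paths and the 90-day retention window are hypothetical placeholders.
import shutil
import time
from pathlib import Path

HOT_TIER = Path("/data/hot")          # hypothetical fast local storage
ARCHIVE_TIER = Path("/data/archive")  # hypothetical cheap/cold storage
MAX_AGE_DAYS = 90                     # hypothetical retention policy

def archive_stale_files() -> None:
    """Move files older than MAX_AGE_DAYS from the hot tier to the archive."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    ARCHIVE_TIER.mkdir(parents=True, exist_ok=True)
    for path in HOT_TIER.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            dest = ARCHIVE_TIER / path.relative_to(HOT_TIER)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))
            print(f"archived {path} -> {dest}")

archive_stale_files()
```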

Predicting the future of IT is no small task, and, as Mr. Tarazi pointed out, “We’re at the beginning of the cycle, and the cycle is long. Much of this is doing the initial groundwork to support the future.” There’s no shortage of remarkable resources available to businesses today through the hybrid cloud, but there’s typically a gap between what’s available and what’s actually functional for a given task.

Increasingly intelligent automation is needed at every level of the data center of the future, but ultimately, it still comes down to getting a handle on business needs and having a clear idea of what the goals of any new application should be. Determining that depends on establishing good communication between business leaders, end users, developers, and now data scientists and engineers, to figure out in advance what data is useful and how it can be collected, processed, and delivered. But a data plan should also go on to address data movement, security, protection, and long-term availability. Data may well be at the core of the Fourth Industrial Revolution, but it’s only a revolution if we can learn to optimize the way we collect, understand, and use that information to its fullest.
