
Powering AI in the Enterprise

Gerry Fassig is the Vice President of Cloud and Hosting Services at CoreSite.

When a novel concept becomes a viable business tool, every company is eager to hop on the bandwagon and capitalize on the buzz and the technology.

The latest buzz surrounds Artificial Intelligence (AI) and Machine Learning (ML). To say AI/ML is big right now would be a massive understatement. Everyone from 100-year-old stalwart tech giants to innovative three-person startups is actively investing time and resources in accelerating the technology’s development and mobilizing it for business.

But AI is more than a passing fad. Analyst firm Tractica projects that global enterprise AI spending will grow from $644 million in 2016 to nearly $39 billion by 2025, and that AI will be the driving force behind everything from highly efficient sales platforms and virtual digital receptionists to children’s toys, autonomous vehicles, and products or services that don’t yet exist.

AI and ML will eventually power most of our world. But what powers AI?

Data and processing power. Lots of it.

Great Potential, Great Limitations

The potential impact of AI on every business, vertical, or industry can’t be overstated. As unsupervised machine learning, natural language processing (NLP), and deep learning capabilities improve, the applications of each will continue to grow and expand into new use cases.

Already, companies across the business spectrum are investigating how AI/ML technologies can be used for object recognition and tracking, localizing geodata, preventing fraud, improving marketing outcomes, and many other applications. While business leaders in those arenas wait for the technology to catch up to the promise, other luminaries are already putting the innovations to practical use in today’s market—autonomous vehicles, call centers and customer care, and cybersecurity.

Companies already employing AI have been systematically and strategically aggregating data for years. They have a running head start on organizations that are only now beginning to focus on data collection and organization. But they’re also running up against the biggest limitation of AI and ML technologies: capacity.

Power, Capacity, Speed Critical to Smart Technologies

The Artificial Neural Networks (ANNs) that drive AI and ML are designed to model and process relationships between inputs and outputs in parallel. To do that, they need to store massive volumes of input data and call upon large-scale computing to make sense of those relationships and deliver the appropriate output.
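For a sense of why that translates into heavy compute and memory demands, here is a minimal sketch in Python with NumPy of a single fully connected layer; the dimensions are hypothetical, chosen only to illustrate scale, not drawn from any particular production system:

```python
import numpy as np

# One fully connected layer: every output neuron is computed from every
# input in a single parallel matrix multiplication. Sizes are illustrative.
batch, n_in, n_out = 512, 4096, 4096   # hypothetical batch and layer widths

x = np.random.randn(batch, n_in).astype(np.float32)  # input activations
W = np.random.randn(n_in, n_out).astype(np.float32)  # learned weights
b = np.zeros(n_out, dtype=np.float32)                # learned biases

y = np.maximum(x @ W + b, 0.0)  # forward pass with a ReLU activation

# Each layer performs roughly batch * n_in * n_out multiply-adds; modern
# networks stack many such layers, which is what drives the appetite for
# GPUs, high-bandwidth memory, and the power to feed them.
print(f"multiply-adds in this one layer: {batch * n_in * n_out:,}")
```

Even this single toy layer involves billions of multiply-adds, and training requires repeating that work across enormous datasets, many times over.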

Consider a chatbot deployed to provide customer self-service, assisting a team of customer service agents in a contact center. Ideally, the bot answers questions accurately, directs customers to the appropriate resources, and generally interacts with them in a personal, natural manner.

To accomplish that, the bot’s back end needs to quickly compare each inquiry against the entire lexicon a company’s customer base might use (namely, their native language), “understand” the context of the interaction, and “make a decision” based on those inputs that hopefully arrives at the right response—and do it all instantly, just as a human representative would.
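As a toy illustration of that matching step, the sketch below routes an inquiry to the closest of a few hypothetical intents using TF-IDF similarity. This is a deliberately simplified keyword matcher, not how any production chatbot is built; real systems use far larger lexicons and trained language models, but the comparison workload scales the same way:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical intent examples a chatbot back end might match against.
intents = {
    "billing":  "question about my bill invoice charge payment",
    "shipping": "where is my order package delivery tracking",
    "returns":  "return refund exchange broken item",
}

vec = TfidfVectorizer()
matrix = vec.fit_transform(intents.values())  # index the toy lexicon

def route(inquiry: str) -> str:
    """Return the closest intent for a customer inquiry."""
    scores = cosine_similarity(vec.transform([inquiry]), matrix)[0]
    return list(intents)[scores.argmax()]

print(route("I never received my package"))  # -> "shipping"
```

Swap the three-entry lexicon for millions of utterances and a deep language model, and the memory and compute footprint of that one lookup grows accordingly.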

But the processors and memory resources (DRAM) required for those processes consume vast amounts of bandwidth, beyond what most on-premises networks are designed to handle. They also add considerable power-consumption overhead, given the number of CPUs or GPUs involved, that goes well beyond what most organizations are prepared to spend. And trying to do all of that within a single data center geographically removed from where the interaction is taking place introduces latency that can wreck whatever the product or app is trying to accomplish.
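A back-of-the-envelope calculation shows why distance matters so much: light in optical fiber covers roughly 200 kilometers per millisecond, so geography alone sets a hard floor on round-trip time. The distances below are hypothetical:

```python
# Light in optical fiber travels at roughly two-thirds the speed of light,
# or about 200 km per millisecond one way, before any routing or queuing.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Ideal round-trip time over fiber, ignoring switching overhead."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical placements: a nearby edge site vs. a distant central one.
for km in (50, 500, 2000):
    print(f"{km:>5} km away -> at least {round_trip_ms(km):4.1f} ms round trip")
```

Real-world latency is higher still once routing, queuing, and processing are added, which is why placing compute near users matters so much for interactive AI.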

So, what’s a business to do?

Maximizing AI Performance with Direct Cloud Connections

Increasingly, companies running process-intensive AI applications are turning to hybrid-ready edge data centers to resolve bandwidth and compute challenges, lower operating costs, and all but eliminate latency concerns. Hybrid-ready data centers should:

  • Provide easy on-ramps to cloud providers within the facility in order to significantly reduce latency and data transfer costs. A direct cloud interconnect product can lower latency and data transfer costs by as much as 50 percent compared to the public internet – all while eliminating the need to manually provision private WAN connections to each provider.
  • Be in close proximity to cloud providers’ core compute nodes to further reduce latency between dedicated environments and the cloud providers of choice.
  • Be in close proximity to as many end users and devices as possible to enable processing information closer to the user or device, which can significantly improve performance and reliability. This is especially beneficial for supporting latency-sensitive AI applications like autonomous vehicles or cybersecurity operations, while also maximizing workload flexibility and cost management.
  • Feature scalable and configurable central infrastructure to facilitate sustainable growth.

AI and machine learning technologies are continuing to mature, advance, and become increasingly common in our daily lives. As they do, the companies offering those products and services will need to think strategically about how best to balance the competing demands on their business so they can realize the full potential of their technologies and maintain a competitive advantage.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

 
