According to the financial analysts at Raymond James, the ongoing AI boom will create a "bandwidth opportunity" inside data centers worth up to $6.2 billion in sales by 2027. And that's music to the ears of companies like Corning, Coherent, Lumentum and others that also play in the wider global telecom industry.
For example, Corning CEO Wendell Weeks suggested big data center operators are going to need to build a "second optical network" in order to connect all the GPUs that underpin the development of artificial intelligence.
"Overall orders grew in the fourth quarter and we're seeing the earliest edge of AI-related network builds in our order books," Weeks said during Corning's quarterly conference call this week, according to Seeking Alpha. Corning sells fiber cabling.
Optical component vendor Coherent also has nodded to the opportunity. The company said orders of its 800G transceiver "significantly increased" late last year due to demand from companies in the data center market looking to bulk up their operations for AI computing.
"During the quarter we enjoyed both expanding AI-related product engagements with existing customers and a number of new significant AI-related customer engagements including with some of the largest webscale and networking equipment manufacturer (NEM) companies," Coherent explained.
"AI will catalyze the networking optical component market," wrote the analysts at Raymond James in a note to investors last week.
From Cloud to AI
Data centers are massive buildings housing hundreds or thousands of computers. They are where hyperscale companies like Google, Microsoft and Amazon run their cloud computing operations.
Now, with the rise of cloud-based AI services like ChatGPT and Microsoft's Copilot, demand for space in those cloud computing data centers is skyrocketing. "We believe the stage is set for 2024 hyperscale data center leasing to set another record, eclipsing the record leasing seen in 2023 after a tsunami of AI demand hit the data center market," wrote the financial analysts at TD Cowen in a recent note to investors.
According to the latest figures from Synergy Research Group, AI helped drive enterprise spending on cloud infrastructure to $68 billion worldwide in the third quarter of 2023, up by a whopping $10.5 billion year over year.
Indeed, data center operator Vantage Data Centers just this week announced it raised a record $10 billion to address "unprecedented demand" for AI and cloud services.
But cutting-edge AI services can't run on everyday data center servers. Instead, they need computers with high-performance graphics processing units (GPUs) from a supplier like Nvidia.
In turn, high-performance clusters of GPUs running AI services need additional power and cooling technologies. And they also need a super-high-speed network to move AI-related bits and bytes around inside a data center and then out to the wider world.
The Network Inside the Network
A top Corning executive hinted at that concept in a DCD article late last year. "An AI network within a data center is, essentially, a network within a network. Within the AI network, GPUs and CPUs [central processing units] function like the two halves of the human brain. Large server farms with this setup can effectively act as a supercomputer, speeding up the time to train AI models," Corning's Nate Hefner wrote.
The analysts at Raymond James noted that AI computing requirements have scaled around 215 times every two years, "with next-gen models now requiring ~10⁸ petaFLOPS of training."
Floating-point operations per second (FLOPS) is a measure of computing performance; one petaFLOPS is 10¹⁵ floating-point operations per second.
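The scale of that training figure can be sanity-checked with a little arithmetic. The sketch below assumes the analyst note's figure means a total training budget of roughly 10⁸ petaFLOPs (about 10²³ floating-point operations); the cluster size and per-accelerator throughput are purely illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope check on the training-compute figure cited above.
# The ~1e8 petaFLOPs budget is read from the Raymond James note; the
# cluster below is a hypothetical example, not any real system.

PETA = 1e15

training_compute_flops = 1e8 * PETA          # ~1e23 total operations

# Illustrative assumption: 1,000 accelerators sustaining 100 teraFLOPS each.
cluster_throughput_flops = 1_000 * 100e12    # 1e17 FLOPS sustained

seconds = training_compute_flops / cluster_throughput_flops
days = seconds / 86_400

print(f"Total operations: {training_compute_flops:.1e} FLOPs")
print(f"Wall-clock time at assumed throughput: ~{days:.0f} days")
```

Even under these generous assumptions, a single training run occupies the hypothetical cluster for nearly two weeks, which is why every doubling of model scale translates directly into demand for more GPUs and more bandwidth between them.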
"Fiber is the key to enable a system to grow smarter and smarter at exponential rates," continued Hefner, the Corning executive. "For example, when a person poses a question to a digital assistant, AI functions will be interlinked with fiber connections that will analyze untold amounts of data and possible answers in real time. And as those answers become faster, more accurate and more 'human' sounding, these features will become more useful and more integrated into everyday life."
But other networking elements are also necessary to speed up AI computing within a data center. "If you want to go any more than a few meters from the switch to another appliance, you have to transmit optically. The way you do that today is with pluggable optical transceivers," Manish Mehta, vice president of marketing and operations for optical systems at Broadcom, told Semiconductor Engineering. "As a reference point, each of these transceivers is 400 gigabits per second of bandwidth, and one of these switches can have up to 32 plugged in. That's a 12.8-[terabit] switch."
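Mehta's arithmetic is easy to verify: 32 ports at 400 Gb/s each yields an aggregate of 12.8 terabits per second (terabits, not terabytes). A minimal check:

```python
# Verify the pluggable-transceiver math quoted above: a switch with 32
# pluggable ports, each carrying 400 Gb/s, offers 12.8 Tb/s of aggregate
# bandwidth (terabits per second, not terabytes).

GIGA = 1e9
TERA = 1e12

ports = 32
per_port_bps = 400 * GIGA        # 400 Gb/s per pluggable transceiver

total_bps = ports * per_port_bps
total_tbps = total_bps / TERA

print(f"Aggregate switch bandwidth: {total_tbps:.1f} Tb/s")
```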
Such metrics ought to be familiar to most telecom network operators. Fiber connections and optical components are common inside telecom networks worldwide, and Corning, Coherent and others sell their offerings to both data center operators and telecom network operators.
To be clear, the scale of the AI networking opportunity inside data centers doesn't compare with the scale of the global telecom industry. But advances in one sector will undoubtedly trickle down into the other.