Last year, the Greater London Authority told real estate developers that new housing projects in West London may not be able to move forward until 2035 because data centers have taken all of the excess grid capacity. Ireland's EirGrid said it won't accept new data center applications until 2028. Beijing and Amsterdam have placed strict limits on new facilities. Cities in the American Southwest and elsewhere, meanwhile, are increasingly worried about water consumption, as a mega-sized data center can use over 1 million gallons a day.
Add in the additional computing cycles needed for AI and applications like ChatGPT, and the conflict only sharpens.
On the other hand, we know we can't live without data centers. Modern society, with remote work, digital streaming, and modern communications, depends on them. Data centers are also one of sustainability's biggest success stories. Although workloads grew approximately 10x over the last decade with the rise of SaaS and streaming, total power consumption stayed almost flat at around 1% to 1.5% of worldwide electricity thanks to technology advances, workload consolidation, and new facility designs. Try to name another industry that increased output by 10x on a relatively fixed energy diet.
Data centers, in fact, will play a pivotal role in reducing the carbon emissions of other industries by fine-tuning the power consumption of homes, factories, buildings, and equipment: The World Economic Forum asserts that digital technology could reduce emissions by 15% by 2030, or nearly one-third of the reduction target. You can see similar dynamics in telecommunications, where companies such as Telefonica increased traffic by 5x while lowering overall power consumption at the same time.
Thus, the debate isn't really about whether we should build new data centers. We need to, and we will. The debate is really about how they get built.
DSPs and the Digital Archipelago
Instead of building big, we should think small, shifting from massive, million-square-foot data centers to archipelagoes of more moderately sized centers linked by optical connections to form virtual hyperscalers. Pluggable modules — which contain Digital Signal Processors (DSPs), amplifiers, controllers, optical modulators, photodetectors, lasers, and other devices — essentially aggregate server traffic, convert it so it can flow across faster optical networks, and then fine-tune signals so they can travel longer distances without dropping bits. Think of how a flashlight beam disperses and blurs the farther away you point it: DSPs counteract that effect in fiber.
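The blur the DSP corrects is chromatic dispersion, and its magnitude can be estimated with the standard formula Δt = D · L · Δλ. A minimal sketch with illustrative numbers — the fiber coefficient, span length, and spectral width below are textbook assumptions, not figures from this article:

```python
# Back-of-envelope chromatic dispersion, the fiber-optic analogue of
# the blurring flashlight beam. All values are illustrative.

def pulse_broadening_ps(dispersion_ps_nm_km: float,
                        length_km: float,
                        linewidth_nm: float) -> float:
    """Temporal spread (ps) a pulse accumulates over a fiber span:
    delta_t = D * L * delta_lambda."""
    return dispersion_ps_nm_km * length_km * linewidth_nm

D = 17.0          # ps/(nm*km), typical for standard single-mode fiber at 1550 nm
span_km = 80.0    # a common amplified span length (assumption)
dl_nm = 0.1       # assumed spectral width of the optical source

spread = pulse_broadening_ps(D, span_km, dl_nm)
print(f"Pulse spread over {span_km:.0f} km: {spread:.0f} ps")
```

At 136 ps of spread, neighboring symbols at high baud rates would smear into each other without the DSP electronically undoing the distortion.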
At a high level, modules come in two varieties. To connect racks inside data centers, operators deploy modules built around PAM4 DSPs. Modules capable of transferring 200 gigabits of data per second (200G) over a single wavelength of light, double the former 100G standard, were announced earlier this year. These effectively let clouds double bandwidth capacity within the same space while lowering cost per bit. (PAM4 modules are also now starting to connect assets within a rack.)
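The doubling falls directly out of the modulation format: PAM4 uses four amplitude levels, encoding two bits per symbol versus one bit for the older NRZ format. A quick sketch — the 100-GBaud symbol rate is an assumption for illustration, not a figure from the article:

```python
import math

def bits_per_symbol(levels: int) -> int:
    """A PAM-N signal encodes log2(N) bits in each symbol."""
    return int(math.log2(levels))

symbol_rate_gbaud = 100  # assumed symbol rate for illustration

nrz_gbps = symbol_rate_gbaud * bits_per_symbol(2)    # NRZ: 2 levels, 1 bit/symbol
pam4_gbps = symbol_rate_gbaud * bits_per_symbol(4)   # PAM4: 4 levels, 2 bits/symbol

print(f"NRZ:  {nrz_gbps}G per wavelength")
print(f"PAM4: {pam4_gbps}G per wavelength")
```

Same symbol rate, same fiber, twice the bits — which is why PAM4 lowers cost per bit rather than just raising the top-line speed.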
To connect data centers or other geographically distant facilities, modules with coherent DSPs are employed. Both bandwidth and distance have steadily increased since their debut in the middle of the last decade to the point where modules can replace older, more complex equipment sets in most use cases. Distributed systems, of course, will also require improvements in software for distributing and coordinating workloads.
Now go back to data centers. Instead of building a 100,000-square-foot data center in a remote, rural area, what if a cloud provider built ten 10,000-square-foot data centers (about the size of a grocery store) in a more heavily populated area? The impact on local grids and electrical distribution networks could be spread out across different neighborhoods, reducing the impact on housing and possibly lessening the need for additional grid investments. Overall power could be reduced as well: Instead of serving consumers video streams from a data center 1,500 miles away, the latest releases could be cached in a downtown urban corridor close to consumers. Distributing content this way might even reduce bandwidth congestion.
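The latency side of that argument is simple physics: light in silica fiber travels at roughly c/1.47, so distance translates directly into delay. A hedged sketch comparing the remote data center to a nearby cache — the local distance and refractive index are illustrative assumptions:

```python
# One-way fiber propagation delay: remote data center vs. nearby cache.
# Ignores routing, queuing, and path inefficiency; numbers are illustrative.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per ms
FIBER_INDEX = 1.47                 # typical refractive index of silica fiber

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation delay through fiber over a straight-line distance."""
    return distance_km * FIBER_INDEX / C_KM_PER_MS

remote_km = 1500 * 1.609  # ~1,500 miles, the article's streaming example
local_km = 15             # assumed distance to a downtown cache

print(f"remote: {one_way_delay_ms(remote_km):.1f} ms one way")
print(f"local:  {one_way_delay_ms(local_km):.3f} ms one way")
```

Roughly 12 ms one way from the remote site versus well under a tenth of a millisecond locally — and every mile of long-haul transit avoided is also amplifier and router power that never gets burned.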
These smaller data hubs could be housed in closed strip malls, abandoned industrial sites, and shuttered office buildings where power infrastructure still exists and that communities want to revitalize. Water? Some companies are experimenting with using gray and/or tainted water for cooling. Virtual, modular data center pods could also be sited at solar farms or in areas where cold, ambient winds could be harnessed to replace mechanical air conditioning. By taking advantage of underutilized infrastructure, optical technology, and creative ways of thinking about facilities, a service provider could create a virtual hyperscale data center with little of the downside communities see now.
So are massive data centers doomed? No. They just need a retrofit.
Radha Nagarajan is SVP and CTO of Connectivity at Marvell and is a Visiting Professor at the National University of Singapore.