
Generative AI and the Need for New Data Center Designs

The explosion of generative AI is breaking the classic data center model: rack densities are jumping from 5–10 kW to 80–100 kW (and heading toward 600 kW), demanding new electrical, thermal, and deployment designs to keep delivering availability, efficiency, and adherence to ESG goals.

1. Generative AI Changed the Data Center Profile

AI clusters already operate at up to 100 kW per rack, against the typical 15–20 kW of non-AI IT racks. With current configurations, each next-generation GPU consumes between 700 W and 1,200 W, and a rack with approximately 80 GPUs can require 80 kW of continuous power, equivalent to 20–30 traditional racks.

Recent studies show clusters with tens of thousands of H100 GPUs consuming around 31 MW of IT load distributed across approximately 700 racks, not even accounting for cooling overhead. Manufacturers like NVIDIA are already projecting infrastructure for the next generation (Rubin/Blackwell), discussing AI racks that can reach 600 kW, completely changing the logic of room design, power supply, and cooling.
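The arithmetic behind these figures can be sanity-checked in a few lines. This is a minimal sketch using the article's own numbers; the 1,000 W GPU draw is an assumed midpoint of the 700–1,200 W range, not a vendor specification:

```python
# Back-of-the-envelope check of the densities cited above.
# The 1,000 W per GPU is an assumed midpoint of the 700-1,200 W range.

gpu_power_w = 1_000          # assumed midpoint per next-generation GPU
gpus_per_rack = 80
rack_kw = gpu_power_w * gpus_per_rack / 1_000
print(f"AI rack: {rack_kw:.0f} kW")              # ~80 kW of continuous power

cluster_it_mw = 31           # H100 cluster IT load cited above
cluster_racks = 700
avg_rack_kw = cluster_it_mw * 1_000 / cluster_racks
print(f"Cluster average: {avg_rack_kw:.0f} kW per rack")
```

The per-rack average of such a cluster lands in the mid-40 kW range, already several times the density of a conventional enterprise rack.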

This combination of high density and practically continuous load (24/7 training and inference) eliminates the typical margins of corporate data centers and leads design toward the logic of "AI factories," where IT power in MW becomes the central planning unit.

2. Extreme Density Requires Different Physical Design

Traditional data center racks historically operate in the 5–15 kW per rack range; in environments optimized for AI, the reference is migrating to 30–80 kW per rack as the new "normal." Some Blackwell projects with 72 GPUs per rack already speak of approximately 120 kW, ten times the power of a typical rack.

This triggers important physical changes:

  • Smaller technical rooms with much higher density, which increases thermal load per square meter and requires rethought aisle layouts, containment, and technical passages.
  • Rack weight limits (around 3–3.5 tons) become real constraints for raised floors, anchoring, and transport, even in environments with fewer racks.
  • Power and network cable paths must carry much larger currents per run, pushing designs toward busways and shorter vertical distribution runs.

Instead of designing rooms for "number of racks," the design becomes sized by "MW of IT" per room or module, with thermal and electrical envelopes dedicated to generative AI islands.
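Under this logic, a module is sized from its IT power envelope downward. A hypothetical example, with both the module envelope and the rack density chosen from the ranges cited above:

```python
# Sizing an AI module by MW of IT rather than by rack count.
# Both inputs are illustrative values within the ranges cited above.

module_it_kw = 2_000       # hypothetical 2 MW AI module envelope
rack_density_kw = 60       # within the 30-80 kW/rack "new normal"

racks_per_module = module_it_kw // rack_density_kw
print(f"{racks_per_module} racks of {rack_density_kw} kW per 2 MW module")
```

The same 2 MW envelope that once housed hundreds of 5–10 kW racks now supports only a few dozen AI racks, which is why floor area stops being the primary planning constraint.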

3. Liquid Cooling Becomes Mandatory

With GPUs consuming 700–1,200 W each, air cooling alone can no longer remove the heat economically within noise and space limits. Manufacturers and operators already treat liquid cooling (direct-to-chip, rear-door, immersion) as the de facto standard for high-density AI clusters.

Turning points include:

  • Racks based on Blackwell GPUs, with approximately 72 GPUs each, generate 100–120 kW of heat, requiring high-capacity liquid solutions; vendors already offer plug-and-play systems for this thermal level.
  • Platforms like GB200 NVL72 use liquid cooling to operate with warmer water, reducing or eliminating mechanical chillers and thus reducing energy and water consumption of the HVAC system.
  • The transition to medium-temperature water (up to ~45 °C) opens space for heat reuse in district heating or industrial processes, improving the data center's ESG balance.

Manufacturer reports estimate that in a 50 MW data center, the combination of optimized liquid cooling and dedicated AI architecture can reduce annual energy and water costs by millions of dollars and significantly improve metrics such as PUE and WUE.

4. Power in MW: New Electrical Requirements

AI consumes 4 to 8 times more energy per server than traditional loads, straining internal electrical distribution and the utility grid itself. This shifts the electrical design focus to medium-voltage solutions, high-efficiency UPS, and leaner distribution.

Clear trends in AI-focused data center projects include:

  • Increasing use of medium-voltage UPS, with efficiency close to 98% up to 24 kV, reducing losses, cable sections, and in some cases even eliminating the need for conventional generator sets.
  • Distribution architectures prioritizing short and scalable paths, with high-capacity busways feeding 1–3 MW AI "pods" each, instead of long low-voltage meshes of 400/480 V.
  • Network connection planning based on multiple dedicated feeders in high/medium voltage and, increasingly, renewable energy contracts (PPAs) and use of BESS batteries for peak support.

The pressure for efficiency appears directly in the PUE metric: studies show a global average PUE of around 1.57, while cutting-edge data centers already operate below 1.2 using advanced cooling techniques and optimized electrical distribution. In an AI context, every 0.1 reduction in PUE represents millions per year in OPEX savings.
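To see where the "millions per 0.1 of PUE" comes from, consider an illustrative calculation. The 50 MW IT load and the $80/MWh electricity price are assumptions for the sketch, not figures from the studies cited:

```python
# OPEX impact of a 0.1 PUE improvement on a hypothetical 50 MW IT load.
# The $80/MWh electricity price is an assumption for illustration only.

it_load_mw = 50.0
price_usd_per_mwh = 80.0
hours_per_year = 8_760

def facility_mwh(pue: float) -> float:
    """Total facility energy = IT energy x PUE."""
    return it_load_mw * hours_per_year * pue

annual_savings = (facility_mwh(1.57) - facility_mwh(1.47)) * price_usd_per_mwh
print(f"${annual_savings:,.0f} per year per 0.1 of PUE")
```

At this scale, even a modest tariff turns a single 0.1 PUE gain into roughly $3.5 million a year, which is why the metric dominates AI data center OPEX discussions.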

5. Modularity and New Deployment Formats

The pace of AI demand growth and the investment ticket are pushing the sector toward more modular, prefabricated, and scalable solutions.

Two lines stand out:

  • Prefabricated modular data centers: complete modules of power, IT, and cooling that leave the factory tested, with competitive PUE and time-to-market up to 50% lower than conventional construction.
  • AI pods or containers: standardized high-density units (for example, 100–300 GPU servers per module), which can be coupled to existing facilities or installed at new sites near cheap energy sources.

This approach allows:

  • Growing in MW steps, aligning CAPEX to the real demand curve of AI.
  • Developing hybrid layouts: part of the site focused on general cloud loads, part on AI "pods" with radically different thermal and electrical design.

For colocation operators, new designs include specific "AI ready" zones, with floors prepared for 30–80 kW per rack, redundant high-capacity power supply, and pre-installed liquid cooling infrastructure for GPU clients.

6. Sustainability, PUE/WUE, and ESG Goals

Even as energy consumption increases, AI has raised the bar for sustainability demands. The discussion moves beyond just "how much it consumes" to include "how it consumes" and "where that energy comes from."

Concrete vectors include:

  • Data center projects with AI operating with 100% certified renewable energy (I-REC), PUE around 1.4, and WUE near zero, using hydro, solar, and wind sources and technologies that avoid direct water use.
  • Use of AI and AIOps to dynamically optimize PUE, control temperature setpoints, activate fans and pumps predictively, and reduce waste without compromising SLAs.
  • Application of consolidated metrics – PUE, WUE, and CUE – as central indicators in ESG reports, with large operators assuming explicit Net Zero targets for 2030–2040.

Studies indicate that global data center consumption may double by 2030, with AI as the main driver. Projects that do not incorporate renewable sources, high efficiency, and clear carbon-offset plans from the outset are becoming untenable.

7. Implications for ROI and Planning of New Projects

New data center designs for generative AI have higher CAPEX per MW installed, but also concentrate more revenue and productivity per square meter and per rack. ROI analysis moves beyond looking only at construction cost per MVA to considering:

  • Revenue potential per AI rack of 30–80 kW, often multiples of traditional 5–10 kW rack revenue.
  • OPEX reductions via improved PUE (for example, moving from 1.6 to 1.2 cuts total facility energy by roughly 25%, as non-IT overhead falls from 0.6 to 0.2 times the IT load) and via liquid cooling that displaces energy-intensive mechanical chillers.
  • Competitive differentiation of sites with 100% renewable energy and carbon-neutral certifications, increasingly demanded by global cloud and AI customers.

Consultancies point out that large technology companies plan to invest approximately $1 trillion in new data centers over the next five years, driven precisely by AI loads. To capture a relevant portion of this flow, operators will need to:

  • Redesign portfolios with zones or sites dedicated to AI, prepared for densities of 30–100 kW per rack and native integration of liquid cooling.
  • Review electrical architecture (more medium voltage, more efficient UPS, integration with renewables and, in many cases, BESS) to support peaks and guarantee resilience.
  • Incorporate modularity and prefabrication to reduce timelines and dilute investment risks in a rapidly changing market.

In summary, generative AI is not just another IT load: it requires a new generation of data centers, thought through from site planning to the last rack, with a focus on high density, extreme efficiency, and sustainability as a business condition – no longer as an optional differentiator.