Evaluating power, water, and operational limits as AI workloads reshape infrastructure design.
As AI workloads scale and compute density rises, many data center teams are running into hard physical limits, not because of cooling technology choices, but because power availability, water usage, space, and operational complexity are no longer elastic.
In these environments, traditional air-cooled servers increasingly struggle to keep pace. The question for IT and facilities leaders is not whether to adopt a specific cooling technology, but how to align their cooling architecture to the constraints created by high-density compute. When power, water, and cooling limits converge, often alongside large AI and GPU deployments, immersion cooling tends to emerge as a practical response rather than a theoretical alternative.
Start With Power and Density
The most successful outcomes begin with a clear understanding of power and density constraints, followed by selection of the architecture that fits those realities.
In power-constrained facilities, cooling infrastructure often consumes a disproportionate share of available capacity. When that overhead limits IT growth, cooling architectures that dramatically reduce cooling energy, sometimes by as much as ~90% compared to traditional air, can free meaningful headroom for additional compute without waiting years for utility upgrades.
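To make the headroom argument concrete, here is a minimal back-of-envelope sketch in PUE terms. The facility cap and PUE values are illustrative assumptions, not measurements from any specific site, and PUE bundles all overhead (not just cooling), so treat the result as directional.

```python
# Hedged sketch: IT headroom freed when cooling overhead shrinks in a
# power-capped facility. All figures below are illustrative assumptions.

def it_capacity_kw(facility_kw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so IT = total / PUE."""
    return facility_kw / pue

FACILITY_KW = 2_000   # assumed utility cap for the site
PUE_AIR = 1.5         # assumed PUE for a traditional air-cooled design

# If the overhead portion of PUE (0.5 here) drops ~90%, overhead -> 0.05.
PUE_IMMERSION = 1.0 + (PUE_AIR - 1.0) * 0.10

air_it = it_capacity_kw(FACILITY_KW, PUE_AIR)
imm_it = it_capacity_kw(FACILITY_KW, PUE_IMMERSION)
print(f"Air-cooled IT capacity: {air_it:,.0f} kW")
print(f"Immersion IT capacity:  {imm_it:,.0f} kW")
print(f"Headroom freed:         {imm_it - air_it:,.0f} kW")
```

Under these assumptions, the same 2 MW utility feed supports roughly 570 kW of additional IT load, compute gained without any new utility capacity.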
Density is often the second forcing function. When the business needs to double compute capacity within the same footprint, particularly in urban centers or high-cost real estate markets, conventional air-cooled designs quickly reach practical limits. In these scenarios, the higher rack-level densities achievable with immersion may offer a more scalable path forward than traditional air-cooled designs.
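The footprint effect of density can be sketched the same way. The per-rack density figures below are illustrative assumptions (achievable densities vary widely by design), but they show why rack count, and therefore floor space, collapses as density rises.

```python
# Hedged sketch: rack count needed to hit a fixed compute target at
# different per-rack power densities. Density values are assumptions.
import math

def racks_needed(total_it_kw: float, kw_per_rack: float) -> int:
    """Whole racks required to host a given IT load."""
    return math.ceil(total_it_kw / kw_per_rack)

TARGET_KW = 1_000  # assumed compute target for the site

for label, density in [("air-cooled", 15), ("DLC", 50), ("immersion", 100)]:
    print(f"{label:>10}: {racks_needed(TARGET_KW, density):>3} racks "
          f"at {density} kW/rack")
```

At these assumed densities, the same 1 MW target drops from dozens of air-cooled racks to a handful of immersion tanks, which is the scenario where doubling compute within the same footprint becomes plausible.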
If your AI or GPU deployments are growing quickly, rack power is climbing past 20 kW, and new power is limited, it's time to take a closer look at alternative cooling architectures.
Water Usage Is Now a First-Order Design Constraint
As AI workloads scale, cooling architecture is increasingly shaped by site-specific constraints, most often power and, in many cases, water.
Globally, data centers already consume hundreds of billions of liters of water each year, with projections of over 1 trillion liters annually by 2030. Under traditional evaporative cooling models, a single 100 MW facility can use millions of liters of water per day, and industry-average Water Usage Effectiveness (WUE) remains close to 1.9 liters per kWh.
These pressures are forcing organizations to re-evaluate cooling architectures from a water perspective. Independently, Microsoft-backed research indicates that liquid and immersion cooling approaches can reduce water consumption by approximately 31–52% compared with conventional air-cooled designs.
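The figures above can be turned into a quick back-of-envelope check. This sketch assumes the facility's full 100 MW runs as IT load around the clock at the cited industry-average WUE, a deliberate simplification for illustration.

```python
# Hedged sketch of the daily water math implied by a ~1.9 L/kWh WUE
# for a 100 MW evaporative-cooled facility. Illustrative numbers only.

def daily_water_liters(it_load_mw: float, wue_l_per_kwh: float) -> float:
    """WUE = liters of water consumed per kWh of IT energy."""
    kwh_per_day = it_load_mw * 1_000 * 24  # MW -> kW, times 24 hours
    return kwh_per_day * wue_l_per_kwh

WUE_AIR = 1.9  # industry-average WUE cited above, in L/kWh
baseline = daily_water_liters(100, WUE_AIR)
print(f"Baseline:          {baseline / 1e6:.2f} million liters/day")

# Applying the ~31-52% reduction range reported for liquid/immersion:
for reduction in (0.31, 0.52):
    saved = baseline * (1 - reduction)
    print(f"At {reduction:.0%} reduction: {saved / 1e6:.2f} million liters/day")
```

Even at the conservative end of the reported range, the daily savings for a facility this size run to well over a million liters, which is why water has become a first-order design input rather than a footnote.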
For organizations operating in drought-sensitive regions, under ESG scrutiny, or bound by corporate water-neutrality commitments, cooling architectures that minimize or eliminate water use can offer meaningful compliance and sustainability advantages. In environments where water availability becomes a limiting factor, immersion-based designs are often evaluated alongside other liquid cooling options to better align infrastructure with long-term environmental constraints.
Comparing Cooling Approaches in Practice
A practical cooling decision starts with a side-by-side evaluation of how each approach behaves under your specific power, water, density, and operational constraints.
Cooling characteristics by approach
| Factor | Air-Cooled | Direct Liquid Cooling (DLC) | Single-Phase Immersion Cooling |
|---|---|---|---|
| Power efficiency | Lowest: heavy fans and HVAC loads | Improved: varies by design | Highest: cooling energy can drop by up to ~90% |
| Water usage | High, typically evaporative | Reduced or variable | Low to none; often waterless |
| Noise | High | Reduced | Near-silent (no server fans) |
| Equipment failure drivers | Dust, humidity, thermal cycling | More stable than air | Lowest: sealed, clean, thermally stable |
Operational Simplicity and Reliability
As power densities rise and AI deployments scale, operational complexity and reliability become increasingly important design constraints, particularly in facilities built around aging mechanical infrastructure.
In environments with legacy chillers, raised floors, and extensive air-handling systems, supporting higher-density workloads often means choosing between costly mechanical upgrades or evaluating simpler cooling architectures better aligned with those demands. Where operational simplicity is a priority, approaches that reduce reliance on large air systems can lower the risk of mechanical failures, cut routine maintenance, and significantly reduce noise. These benefits are especially relevant for edge, office-adjacent, and acoustically sensitive environments.
Reliability pressures also intensify in remote or harsh locations. More stable, sealed operating environments help protect servers from dust, moisture, and vibration, reducing failures in limited-access environments and minimizing downtime. For organizations focused on mission-critical uptime and extending hardware lifecycles, these characteristics can support longer refresh cycles and more predictable operations.
In practice, immersion cooling tends to enter the conversation when multiple constraints—power, density, water, space, and long-term operational goals—begin to limit what air-based approaches can reasonably support.
As power, water, and density constraints continue to shape data center design, the most effective cooling decisions start with system-level architecture, not technology. The growing body of industry standards and ecosystem work, including efforts led by the Open Compute Project, is helping organizations evaluate liquid and immersion cooling with greater confidence by reducing integration risk and improving interoperability.
For many teams, the next step is informed assessment: understanding where air, direct liquid, or immersion cooling best aligns with their workloads, facilities, and long-term operational goals. Organizations with deep experience in designing and validating high-density compute environments can play a valuable role in that process, helping turn constraints into architectures ready for what comes next.
