Microsoft, Google, and Meta have borrowed EV tech for the next big thing in data centers: 1MW water-cooled racks

- Liquid cooling isn't optional anymore; it's the only way to survive AI's thermal onslaught
- The jump to 400VDC borrows heavily from electric vehicle supply chains and design logic
- Google’s TPU supercomputers now run at gigawatt scale with 99.999% uptime
As demand for artificial intelligence workloads intensifies, the physical infrastructure of data centers is undergoing rapid and radical transformation.
The likes of Google, Microsoft, and Meta are now drawing on technologies initially developed for electric vehicles (EVs), particularly 400VDC systems, to address the dual challenges of high-density power delivery and thermal management.
The emerging vision is of data center racks capable of delivering up to 1 megawatt of power, paired with liquid cooling systems engineered to manage the resulting heat.
Borrowing EV technology for data center evolution
The shift to 400VDC power distribution marks a decisive break from legacy systems. Google previously championed the industry's move from 12VDC to 48VDC, but the current transition to +/-400VDC is being enabled by EV supply chains and propelled by necessity.
The Mt. Diablo initiative, supported by Meta, Microsoft, and the Open Compute Project (OCP), aims to standardize interfaces at this voltage level.
Google says this architecture is a pragmatic move that frees up valuable rack space for compute resources by decoupling power delivery from IT racks via AC-to-DC sidecar units. It also improves efficiency by approximately 3%.
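To see why the voltage jump matters, consider the current a 1MW rack would draw at different distribution voltages. The sketch below is a back-of-the-envelope Python calculation: the 1MW figure comes from the article, but the busbar resistance is an illustrative assumption rather than a published spec, and the 800V row treats +/-400VDC as the pole-to-pole differential.

```python
# Back-of-the-envelope sketch of why higher distribution voltage helps.
# The 1MW rack target is from the article; the busbar resistance is an
# illustrative assumption, not a published spec.

RACK_POWER_W = 1_000_000        # 1 MW rack target
BUSBAR_RESISTANCE_OHM = 0.0001  # hypothetical distribution-path resistance

for volts in (48, 400, 800):    # 48VDC legacy, 400VDC, +/-400VDC pole-to-pole
    current = RACK_POWER_W / volts                    # I = P / V
    ohmic_loss = current ** 2 * BUSBAR_RESISTANCE_OHM # P_loss = I^2 * R
    print(f"{volts:>4} V: {current:>8,.0f} A, "
          f"I^2R loss ~ {ohmic_loss / 1000:,.2f} kW "
          f"({100 * ohmic_loss / RACK_POWER_W:.2f}% of rack power)")
```

At 48V the same rack would draw more than 20,000 amps, and because conductor losses scale with the square of the current, even this toy resistance figure shows why copper cross-sections and ohmic losses shrink dramatically at 400V and above.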
Cooling, however, has become an equally pressing issue. With next-generation chips consuming upwards of 1,000 watts each, traditional air cooling is rapidly becoming obsolete.
Liquid cooling has emerged as the only scalable solution for managing heat in high-density compute environments.
Google has embraced this approach with full-scale deployments; its liquid-cooled TPU pods now operate at gigawatt scale and have delivered 99.999% uptime over the past seven years, which works out to roughly five minutes of downtime per year.
These systems have replaced large heatsinks with compact cold plates, effectively halving the physical footprint of server hardware and quadrupling compute density compared to previous generations.
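The scale of the cooling problem is easy to quantify with a basic energy balance, Q = m_dot * c_p * dT. The minimal Python sketch below estimates the water flow needed to carry away a rack's heat; the rack heat loads follow the article's figures, but the 10 K inlet-to-outlet temperature rise is an assumed design point, not a vendor number.

```python
# Rough sizing of liquid-cooling flow for a high-density rack.
# Heat loads follow the article; the 10 K coolant temperature rise
# is an assumed design point, not a published spec.

WATER_CP = 4186  # J/(kg*K), specific heat of water
DELTA_T = 10     # K, assumed inlet-to-outlet rise

for heat_load_kw in (100, 500, 1000):  # from today's racks up to the 1MW target
    # Energy balance: Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
    mass_flow = heat_load_kw * 1000 / (WATER_CP * DELTA_T)  # kg/s
    litres_per_min = mass_flow * 60  # water is ~1 kg per litre
    print(f"{heat_load_kw:>5} kW rack: ~{litres_per_min:,.0f} L/min of water")
```

Under those assumptions a full 1MW rack needs on the order of 1,400 litres of water per minute; air, whose volumetric heat capacity is roughly 3,500 times lower than water's, simply cannot move that much heat through a rack-sized enclosure.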
Yet, despite these technical achievements, skepticism is warranted. The push toward 1MW racks is based on the assumption of continuously rising demand, a trend that may not materialize as expected.
While Google's roadmap highlights AI's growing power needs - projecting more than 500 kW per rack by 2030 - it remains uncertain whether these projections will hold across the broader market.
It's also worth noting that integrating EV-derived technology into data centers brings new complexities alongside the efficiency gains, particularly around safety and serviceability at high voltages.
Nonetheless, the collaboration between hyperscalers and the open hardware community signals a shared recognition that existing paradigms are no longer sufficient.
Via StorageReview