Artificial Intelligence (AI) is no longer just a technological trend; it is becoming an integral part of business operations, transforming industries from healthcare to finance. Industry analysts are a good source of guidance on how AI is developing. For example, the channel-focused analyst firm Canalys recently held its 2024 Forum in Berlin, Germany. The event covered topics including how AI technologies and solutions are driving change and introducing unprecedented workloads and computing demands that are reshaping data center infrastructure. According to recent Canalys research, AI will represent a $158bn opportunity for the channel.
These AI challenges and opportunities were debated in the GenAI Build Out panel at the Canalys event. During the session, the speakers, including Christopher Parker, Vertiv’s IT and Edge Offering Director for EMEA, explored how companies are adapting their data centers for AI, highlighting both the advantages and the complexities involved.
“It's rewarding working with the channel community to make Enterprise AI a powerful reality,” Christopher stated. “However, it will need education and training across the whole ecosystem, from the definition of the AI use case to the design of the supporting data center infrastructure.”
Christopher emphasized that the exciting thing for Vertiv is that it can provide the High Performance Computing (HPC)/High Density (HD) power and thermal architecture for any AI deployment, using its innovative liquid-cooling technologies.
How AI is Transforming Data Centers
Dealing with AI workloads requires an evolution in data center architecture, especially in terms of power and cooling requirements. A recent Vertiv study showed that traditional IT racks, which once operated at 5-10 kW, are rapidly evolving to handle workloads exceeding 40 kW per rack, and even surpassing 100 kW. Computing infrastructure accelerated with GPUs, such as NVIDIA’s H100 chips, is significantly more power-hungry and heat-intensive than conventional servers. For example, by the end of 2024, Meta will have deployed 350,000 AI chips.
To accommodate this shift, data centers will need to expand power capacity across the entire infrastructure, from grid to chip. Beyond power, traditional air-cooling systems are often no longer sufficient for AI hardware. AI chips typically require five times more cooling capacity than traditional servers, pushing the industry to adopt liquid-cooling technologies as its key heat-rejection strategy. This shift is essential for effective heat management in these high-density environments.
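As a back-of-the-envelope illustration of the shift described above, the sketch below compares the cooling capacity implied by the rack power figures the article quotes (5-10 kW legacy racks versus 40-100 kW AI racks). It relies on the fact that virtually all electrical power drawn by IT equipment is rejected as heat, so required cooling capacity tracks power draw roughly 1:1; the 10% safety margin is an illustrative assumption, not a Vertiv figure.

```python
# Back-of-the-envelope rack thermal load comparison, using the
# power densities quoted in the article (5-10 kW legacy racks vs.
# 40-100 kW AI racks). Nearly all IT power ends up as heat, so
# cooling capacity must scale with rack power roughly 1:1.

def cooling_capacity_kw(rack_power_kw: float, safety_margin: float = 0.1) -> float:
    """Cooling capacity needed for a rack, with a hypothetical safety margin."""
    return rack_power_kw * (1 + safety_margin)

legacy_rack_kw = 10   # upper end of the traditional 5-10 kW range
ai_rack_kw = 100      # high-density AI rack cited in the article

print(f"Legacy rack cooling: {cooling_capacity_kw(legacy_rack_kw):.0f} kW")
print(f"AI rack cooling:     {cooling_capacity_kw(ai_rack_kw):.0f} kW")
print(f"Capacity multiplier: {ai_rack_kw / legacy_rack_kw:.0f}x")
```

Even this crude arithmetic shows why a facility designed around air-cooled 10 kW racks cannot simply absorb 100 kW racks without rethinking both power distribution and heat rejection.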
Vertiv’s Response to AI Trends: Power and Cooling Innovations
To address the rising power and cooling demands brought on by accelerated computing, innovative solutions are essential. Vertiv™ 360AI includes a complete portfolio of power, cooling and services solutions that solve the complex challenges arising from the AI revolution. It covers everything from simple pilot projects and edge inferencing to large-scale AI factories.
As AI demand continues to surge, organizations are overhauling their operations to adopt new data center builds or retrofit designs featuring environmentally conscious technologies, such as liquid cooling for AI servers and the latest energy-efficient designs.
As AI adoption grows, new cooling technologies are becoming commonplace in data centers:
- Direct-to-chip liquid cooling: Cold plates attached to GPUs and other high-heat components inside a server transfer their heat to a liquid. This liquid, pumped through the IT racks, transfers heat to another fluid loop and out of the data center. This method efficiently captures and removes the majority of the heat generated (typically 75% to 80%, and sometimes up to 95%).
- Rear-door heat exchangers: The residual heat generated by IT equipment still needs to be removed from the data room. Traditional air-cooling technologies are often viable, but as densities rise, higher-capacity systems such as rear-door heat exchangers become an important alternative to consider.
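The two technologies above work in tandem: the direct-to-chip loop captures most of the heat, and the rear door (or room air cooling) handles the remainder. The sketch below splits a rack's heat load using the capture fractions quoted above; the 80 kW rack power and the function name are illustrative assumptions, not figures from the article.

```python
# Hedged sketch: split a rack's heat load between the direct-to-chip
# liquid loop and the residual air path (rear-door heat exchanger or
# room air cooling), using the 75-95% capture range quoted above.

def heat_split_kw(rack_power_kw: float, liquid_capture_fraction: float):
    """Return (liquid-loop heat, residual air-path heat) in kW."""
    liquid_kw = rack_power_kw * liquid_capture_fraction
    residual_kw = rack_power_kw - liquid_kw
    return liquid_kw, residual_kw

rack_kw = 80  # assumed high-density AI rack for illustration
for fraction in (0.75, 0.80, 0.95):  # capture range cited above
    liquid, residual = heat_split_kw(rack_kw, fraction)
    print(f"{fraction:.0%} capture: {liquid:.0f} kW to liquid, "
          f"{residual:.0f} kW left for the rear door / room air")
```

Note that even at 95% capture, an 80 kW rack still leaves several kilowatts of heat for the air path, which is why rear-door heat exchangers remain relevant alongside direct-to-chip cooling.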
As cloud and colocation providers expand to meet increasing AI demand, data centers are diversifying investments by focusing on modular solutions and designs that anticipate future needs. Strategies to scale AI workloads as market demand grows, and to adopt new technologies as they launch, are fundamental to data center operators’ competitiveness.
During the Canalys event panel, Christopher stated that both the data needed to feed AI models and the intelligence behind the models themselves are sensitive information. Many companies are not only doubling down on security investments to keep these assets safe, but are also deploying on-premise AI solutions to avoid moving their data outside their walls.
While the market demand for AI solutions is strong, and the necessary hardware infrastructure is available, what’s often lacking is technical expertise: the consulting capability to analyze different AI use cases and strategically align them with the right solutions.
To address this expertise deficit and speed up AI implementation, Vertiv has developed a series of 30 to 40 reference designs tailored to different AI infrastructure requirements, based on workload demands. These range from straightforward single-rack systems consuming 100 kW to large-scale setups of 500 kW and even 1-2 MW, with more being added.
The standardized reference designs span the full range of IT loads, enabling faster deployment cycles. They cover key elements such as power distribution, liquid cooling, busbars, switchgear, and pre-configured server solutions—including Dell hardware and NVIDIA chips—enabling seamless, efficient deployment.
Navigating the Future of AI-Driven Data Centers
During the GenAI Build Out Canalys session, it was claimed that investments surged in 2024, with AI server shipments more than doubling: 5 million servers powering predictive AI and 1 million dedicated to generative AI.
As AI continues to scale, high-density and high-performance computing (HPC) deployments will become increasingly prevalent. Data centers must adapt to manage growing power and cooling requirements effectively.
The infrastructure buildout is ramping up in preparation for the growth of generative and predictive AI. Partner sentiment on AI remains highly engaged: most partners are optimistic, while others are still cautious about the challenges yet to be overcome.
Christopher emphasized that AI business performance in the hyperscale space is strong and has allowed the industry to build experience and competence in AI solutions, but that he believes unlocking full potential in the AI space requires deeper enterprise maturity aligned with optimized channels, supply chains, and an expanded talent pool.
Currently, enterprise demand for AI solutions is growing from a small base but holds significant promise, providing a powerful incentive for continued investment and development in AI installations.