Artificial intelligence (AI) applications are now incredibly commonplace. Industries ranging from health care to manufacturing use them, as do everyday users who employ smart home apps or play high-tech games.
Data center infrastructure has had to scale up to meet AI’s increasing demands. The good news is that artificial intelligence is well equipped to enhance data centers, too. More specifically, data centers use AI accelerators, specialized microchips that speed up workloads like machine learning and computer vision. Let’s explore options for using this type of chip to improve data center energy efficiency.
Lingjia Tang, a University of Michigan professor, noticed the growing trend of data centers adopting specialized AI hardware, with some chips delivering up to 100 times better efficiency. However, her research examines whether tailored software could generate even greater gains.
Tang’s first goal is to reenvision data center architecture, designing the infrastructure around the workloads these specialized chips handle. Another component of her project is a software interface that estimates the improvements accelerators can deliver, determines how to maximize overall performance and identifies bottlenecks.
The final stage of Tang’s effort involves building a manager that lets multiple chips share available resources. Together, the software gives data center managers insights they wouldn’t otherwise have. Moreover, the statistics on individual chip performance could help compare one piece of AI hardware against another that performs a similar function.
Designing electronics requires choosing the optimal kind of power supply for the task. Computer and industrial applications normally rely on switching power supplies for their efficiency and compact size. Conversely, linear power supplies are more common where low electrical noise matters, such as in aviation and marine equipment, although some network equipment uses them, too.
Beyond those specifics, power efficiency is a topic commonly on the minds of data center managers. Keeping overall usage in check has positive environmental effects and supports the enterprise’s bottom line by reducing utility costs. One company offering accelerators mentioned energy-efficient data processing as one of the features that helped a new chip design stand out from other options.
When data centers can process larger amounts of data in shorter timeframes, overall power usage goes down. Selecting the right chip for the job increases the chances of optimal results, much as electrical engineers must make the right power supply decisions when designing electronics.
LeapMind built a chip called Efficiera that minimizes the power required for convolutional processing by reducing data transfers and the bit width of its computations. How does that tie into data center architecture? The authors of a 2017 research paper pointed out how convolutional neural networks serve various purposes, including lowering a data center’s energy costs.
Those researchers devised a method of predicting the energy a convolutional neural network consumes when making an inference. Knowing the expected cost ahead of time helps teams verify that deploying a model would be worthwhile. The team found their predictive framework estimated energy use with 97.21% accuracy.
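To make the idea of such an estimate concrete, here is an illustrative sketch, not the paper’s actual framework: tally the multiply-accumulate (MAC) operations each convolutional layer performs, then scale by an energy-per-MAC figure. The layer shapes and the energy-per-MAC constant below are made-up placeholders, not measured data.

```python
# Illustrative sketch (not the researchers' actual method): estimating the
# energy cost of one CNN inference from per-layer multiply-accumulate counts.

def conv_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """Multiply-accumulate operations for one convolutional layer."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# Hypothetical 3-layer network: (out_h, out_w, out_ch, in_ch, k_h, k_w)
layers = [
    (112, 112, 64, 3, 7, 7),
    (56, 56, 128, 64, 3, 3),
    (28, 28, 256, 128, 3, 3),
]

PJ_PER_MAC = 4.6  # placeholder energy per MAC in picojoules (assumed value)

total_macs = sum(conv_macs(*layer) for layer in layers)
energy_mj = total_macs * PJ_PER_MAC * 1e-12 * 1e3  # picojoules -> millijoules

print(f"Total MACs: {total_macs:,}")
print(f"Estimated energy per inference: {energy_mj:.2f} mJ")
```

A real framework would also account for memory traffic and hardware-specific costs, which is exactly why measured accuracy figures like the 97.21% above matter.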
There are generally two categories of AI chips in data centers — training and inference. Each can contribute to better data center efficiency. Chips for training handle huge data sets to build processing models. That task takes anywhere from days to weeks. Then, inference chips use those newly created models to process individual time-sensitive inputs, providing results in a matter of milliseconds.
Although you can find chips capable of both training and inference in the data center, they’re typically optimized to do only one of those things. Parties in the market for an AI accelerator should plan to assess both training and inference varieties, ideally while understanding which tasks strain the data center, whether in energy usage or elsewhere.
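The training/inference asymmetry described above can be made concrete with a toy model, assuming nothing about any particular accelerator: training makes many passes over an entire data set, while each inference is a single cheap computation per input.

```python
# Toy illustration of the training/inference split discussed above.
# Training: many passes over a data set to fit model parameters.
# Inference: one cheap computation per incoming input.

def train(data, epochs=1000, lr=0.01):
    """Fit y = w*x + b by gradient descent over the whole data set."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):  # the expensive, batch-oriented phase
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(w, b, x):
    """A single multiply-add: the latency-sensitive phase."""
    return w * x + b

data = [(x, 3 * x + 1) for x in range(10)]  # synthetic data for y = 3x + 1
w, b = train(data)                          # done once, offline
print(infer(w, b, 5.0))                     # close to 16.0, served per request
```

Training chips are built to accelerate loops like the one in `train`; inference chips are built to run the equivalent of `infer` millions of times with minimal latency and energy.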
Qualcomm recently focused on inference hardware when it released a new chip called the Cloud AI 100. It promises to perform 350 trillion operations per second, which is substantially faster than what other chips on the market could do at the time the company launched this new offering. A Facebook representative came to Qualcomm’s launch event and explained the need for power-efficient accelerators as the inference workload grows.
Companies interested in buying accelerators to enhance their data centers should look for real-world case studies that show the hardware in action. Such content puts the product specifications into context and makes it easier to determine if a certain product is a good buy.
Analysts say data centers are turning to liquid cooling more often, with hardware accelerators being one reason behind the trend. Accelerators have a much higher thermal design power than central processing units, often dissipating 200 watts or more that cooling systems must remove.
Paul Finch, CEO at Kao Data, said that direct water-cooled chips could support many of the AI applications data centers manage today. The increased prominence of artificial intelligence made liquid cooling necessary rather than optional.
He clarified, “Many of these new processors, the real Ferraris of chip technology, are now moving towards being water-cooled. If your data center is not able to support water-cooled processors, you are actually going to be excluding yourself from the top end of the market.
“In terms of the data center architecture, we have higher floor-to-ceiling heights — obviously, water is far heavier than air, so it’s not just about the weight of IT, it’s going to be about the weight of the water used to cool the IT systems. All of that has to be factored into the building structure and floor loading. We see immersion cooling as a viable alternative — it just comes with some different challenges,” Finch continued.
As data center brands work to keep AI accelerator chips at the right temperature, they’re more likely to discover different data center energy efficiency methods to keep costs down while getting results.
Implementing accelerators also keeps data centers from reaching their limits. Certain kinds, called field-programmable gate arrays (FPGAs), allow precise reprogramming based on the needs of a particular task or algorithm. Matching the accelerator’s capabilities to in-the-moment needs lowers power consumption and boosts speed.
Data centers may use FPGAs for tasks unrelated to AI, but compelling examples show why this hardware fits into an AI framework. Intel and Microsoft are among the companies using FPGAs for AI, and an enterprise called Myrtle created an FPGA-accelerated deep neural network inference engine for machine-learning applications.
Some of Google’s data center energy efficiency projects have achieved a 40% reduction in cooling energy and a 15% reduction in overall energy overhead. Sensors throughout the data center continually gather information the system uses to determine which adjustments to make. That’s one example of a case where a machine-learning algorithm’s needs may change based on environmental conditions.
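The sensor-driven loop described above can be sketched in simplified form. This is not Google’s actual system; the target temperature, deadband, and step size below are all illustrative, and a production deployment would replace the hand-written rule with a learned model.

```python
# Hedged sketch of a sensor-driven cooling loop (not Google's actual system):
# read sensors, decide an adjustment, apply it, repeat. All values illustrative.

from statistics import mean

def recommend_setpoint(inlet_temps_c, current_setpoint_c,
                       target_c=24.0, deadband_c=0.5, step_c=0.5):
    """Nudge the cooling setpoint toward a target average inlet temperature."""
    avg = mean(inlet_temps_c)
    if avg > target_c + deadband_c:
        return current_setpoint_c - step_c   # racks too warm: cool harder
    if avg < target_c - deadband_c:
        return current_setpoint_c + step_c   # overcooling wastes energy
    return current_setpoint_c                # within the deadband: hold

# Simulated sequence of sensor readings: temperatures drift warm, loop responds.
readings = [23.8, 24.1, 25.2, 26.0, 24.9, 24.3]
setpoint = 22.0
for temp in readings:
    setpoint = recommend_setpoint([temp], setpoint)
print(setpoint)  # -> 20.5: the loop cooled harder as readings climbed
```

Real systems like Google’s feed far richer telemetry into neural networks that predict energy efficiency directly, but the control structure is the same: measure, predict, adjust.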
These examples give an overview of how to combine AI-equipped accelerators to achieve energy efficiency goals in the data center and for power-intensive tasks. As technology continues to improve, so should the chips that help data center applications perform at high levels without ramping up costs.
Article by —
Megan Ray Nichols
Freelance Science Writer
© Divya Media Publications Pvt. Ltd. All rights reserved