
Taking Multiple Measures to Make Artificial Intelligence More Energy-Efficient

Liu Xia | Sat, May 11, 2024, 11:10 AM EST

According to the World Economic Forum's website, for artificial intelligence (AI) to unleash its transformative potential, raise productivity, and improve societal well-being, humanity must ensure that it develops sustainably. The core challenge to this vision is that energy consumption is growing rapidly as computing power and performance continue to climb.

The AI ecosystem, from hardware and training protocols to operational techniques, consumes a significant amount of energy. Scientists are therefore striving to make AI more energy-efficient, through measures that include changing how AI is operated and developing more energy-efficient algorithms and chips.

Energy-Hungry Beast

AI is an energy-intensive technology. Just how much energy does it consume? The data provides an answer.

As reported on the website of the French newspaper Le Figaro, Sasha Luccioni, a researcher and climate lead at the AI platform Hugging Face, said that image generators such as Midjourney or DALL-E consume as much electricity to produce a single image as fully charging a smartphone. The annual electricity consumption of a single NVIDIA H100 graphics processing unit exceeds that of an average American household.

The website of Harvard Magazine points out that large language models excel at generating human-like, coherent, and contextually appropriate text. That improvement comes at a cost: training GPT-3 consumed as much energy as 120 American households use in a year. The New Yorker reported that ChatGPT handles more than 200 million requests every day, consuming over 500,000 kilowatt-hours of electricity in the process.
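Those two daily figures imply a per-request cost that is easy to check with back-of-the-envelope arithmetic. A minimal sketch (the per-request result and the battery comparison are derived here for illustration, not taken from the reporting):

    # Back-of-the-envelope: energy per ChatGPT request, from the reported daily totals.
    daily_energy_kwh = 500_000        # reported daily consumption, kWh
    daily_requests = 200_000_000      # reported daily request count

    wh_per_request = daily_energy_kwh * 1000 / daily_requests
    print(f"~{wh_per_request:.1f} Wh per request")   # ~2.5 Wh

    # For scale: a typical smartphone battery holds roughly 15 Wh,
    # so one request uses on the order of a sixth of a full charge.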

Data from the World Economic Forum indicates that the energy needed to run AI tasks is growing at an annual rate of 26% to 36%. By 2027, the AI industry is projected to consume as much energy each year as a country the size of Iceland or the Netherlands.
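Compounding those growth rates shows how quickly the totals escalate. A minimal sketch, assuming a purely illustrative 2023 baseline (the Forum's data does not specify one here):

    # Compound the reported 26%-36% annual growth in AI energy use over four years.
    baseline_twh = 100.0   # hypothetical 2023 baseline in TWh; placeholder only

    for rate in (0.26, 0.36):
        projected = baseline_twh * (1 + rate) ** 4   # 2023 -> 2027
        print(f"at {rate:.0%}/yr: {projected:.0f} TWh by 2027")
    # At 26%/yr the total roughly 2.5x in four years; at 36%/yr, roughly 3.4x.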

Changing Operational Strategies

Making AI more energy-efficient is imperative.

First, adjusting AI operational strategies is crucial. The World Economic Forum website notes that AI workloads generally consist of two main phases: training, in which models learn by processing vast amounts of data, and inference, in which trained models answer user queries. Limiting energy consumption during these phases can reduce overall AI energy use by 12% to 15%.
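One common mechanism for limiting consumption in both phases is to cap GPU power draw, trading a small amount of speed for a larger energy saving. A minimal sketch using NVIDIA's pynvml bindings (the 70% cap is an illustrative choice, not a figure from the Forum, and changing limits generally requires administrator privileges):

    # Sketch: cap a GPU's power limit to cut energy use during training/inference.
    # Requires the nvidia-ml-py package (pynvml) and typically root privileges.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Query the allowed power-limit range (values are in milliwatts).
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

    # Illustrative choice: cap at 70% of the maximum, clamped to the valid range.
    target_mw = max(min_mw, int(max_mw * 0.70))
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    print(f"power limit set to {target_mw / 1000:.0f} W (max {max_mw / 1000:.0f} W)")

    pynvml.nvmlShutdown()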

Thomas Dietterich, a professor in Oregon State University's School of Electrical Engineering and Computer Science, highlights another effective strategy: optimized scheduling. Running lightweight tasks at night, or larger projects during the colder months, can significantly reduce energy consumption. Shifting AI processing to data centers can also cut carbon emissions, since data centers operate with high efficiency and some run on green energy.
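In code, that scheduling strategy reduces to deferring flexible jobs to cheaper or cleaner hours. A toy sketch, assuming a hypothetical hourly carbon-intensity forecast (all values below are invented for illustration):

    # Toy scheduler: start a deferrable AI job at the hour with the lowest
    # forecast grid carbon intensity (gCO2/kWh). Forecast values are invented.
    forecast = {
        0: 210, 3: 180, 6: 220, 9: 320,
        12: 350, 15: 340, 18: 380, 21: 260,
    }

    def best_start_hour(forecast):
        """Pick the hour whose forecast carbon intensity is lowest."""
        return min(forecast, key=forecast.get)

    hour = best_start_hour(forecast)
    print(f"schedule job at {hour:02d}:00 ({forecast[hour]} gCO2/kWh)")  # 03:00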

In the long term, fostering synergy between AI and emerging quantum technologies is another way to steer AI toward sustainable development. The energy consumption of traditional computing grows exponentially with computational demand, whereas that of quantum computing grows only linearly.

Furthermore, quantum technologies can make AI models more compact, enhance learning efficiency, and improve overall functionality.

New Models and Devices

One driving force behind competition among AI companies is the belief that bigger is better, which has pushed up parameter counts and energy consumption alike. GPT-4, for instance, is reported to have 1.8 trillion parameters, roughly ten times the 175 billion of its predecessor GPT-3. To make AI more energy-efficient, many scientists are therefore exploring algorithms that require fewer parameters.

HawAI.tech uses novel electronic components and probability-based AI to save energy. For the same time and energy budget, its new devices process data 6.4 times faster than an NVIDIA Jetson chip. Co-founder and CEO Raphael Frisch said that by combining probabilistic methods with optimized electronic components, the company's solution requires less data and less energy.

Moreover, neuromorphic chips that mimic how the brain works hold promise for improving AI efficiency. Intel recently unveiled a large-scale neuromorphic system named Hala Point. It comprises 1,152 Loihi 2 processors built on the Intel 4 process node, supports up to 1.15 billion neurons and 128 billion synapses, and can process more than 380 trillion 8-bit synaptic operations and more than 240 trillion neuron operations per second. Its unique capabilities enable real-time continuous learning for future AI applications such as scientific and engineering problem-solving, logistics, smart-city infrastructure management, large language models, and AI agents.