
Put an "Inhibitor" on AI Autonomous Weapons Early

Liu Xia | Sun, May 5, 2024, 11:08 AM EST

According to a recent report on the Nature website, lethal autonomous weapons empowered by artificial intelligence (AI) have arrived, and AI can help such weapons identify and strike specific targets. Researchers, legal experts, and ethicists are working to resolve the issues surrounding the use of these weapons on the battlefield.

In July 2023, the United Nations Security Council discussed the issue of AI-enhanced weapons. (Image source: Nature website)

The US Air Force's X-62A VISTA aircraft has successfully tested AI-flown advanced aerial maneuvers. (Image source: Nature website)

Enhancing Weapon Performance with AI

Autonomous weapons such as heat-seeking missiles have existed for decades, but the development and deployment of AI algorithms is expanding what they can do.

Researchers point out that AI's processing speed and decision-making capabilities could, in theory, offer significant advantages. In annual rapid image-recognition tests over the past decade, algorithms have consistently outperformed human experts; a study last year, for example, showed that AI could spot duplicated images in scientific papers faster than human experts could. And in 2020, an AI model defeated an experienced F-16 fighter pilot in a series of simulated dogfights, a win credited to precise maneuvers beyond what human pilots can execute.
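To make the image-recognition claim concrete, here is a minimal sketch of how a modern pretrained classifier is queried. The specific model (ResNet-50), the torchvision API, and the ImageNet label set are illustrative assumptions, not details taken from the report.

```python
# A minimal sketch of benchmark-style image recognition. Model choice and
# dataset (ResNet-50 on ImageNet) are assumptions for illustration only.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT            # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()             # resize, crop, normalize

def classify(image):
    """Return the top-1 ImageNet label and confidence for a PIL image."""
    batch = preprocess(image).unsqueeze(0)    # add batch dimension
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return weights.meta["categories"][idx.item()], conf.item()
```

A single forward pass of this kind completes in milliseconds on ordinary hardware, which is the sort of processing-speed advantage researchers cite.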

This suggests that lethal autonomous weapons, including AI-equipped drones, are now entering a period of rapid development.

The US Department of Defense has allocated $1 billion to its "Replicator" program, announced on August 28, 2023 by Deputy Secretary of Defense Kathleen Hicks, which aims to produce, deliver, and field thousands of unmanned combat systems within 18 to 24 months. Experimental submarines, tanks, and ships that use AI to drive and shoot autonomously have already been unveiled, and commercially available drones fitted with AI image recognition can home in on and destroy targets.

Zak Kallenborn, a security analyst at the Center for Strategic and International Studies (CSIS), notes that AI weapons can readily lock onto infrared or powerful radar signals and compare them against a database to support targeting decisions. This means that when an AI weapon detects the source of an incoming radar signal on the battlefield, it can fire with relatively little risk of harming civilians, since few civilian objects emit such signals. AI drones can also make highly complex decisions about the distance and angle from which to attack an opponent.
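As a rough illustration of the kind of database comparison Kallenborn describes, the sketch below matches an observed emitter's feature vector against a small library of known signatures and defers anything ambiguous to a human operator. The feature layout, the example values, and the threshold are all hypothetical; none of this comes from a real system.

```python
import numpy as np

# Hypothetical library of emitter signatures: each entry pairs a label with
# a feature vector (center frequency in Hz, pulse width in s, pulse
# repetition interval in s). All values are invented for illustration.
SIGNATURE_DB = {
    "fire-control radar": np.array([9.4e9, 1.0e-6, 1.0e-3]),
    "air-search radar":   np.array([3.0e9, 5.0e-6, 2.5e-3]),
    "weather radar":      np.array([5.6e9, 2.0e-6, 1.0e-3]),
}
MATCH_THRESHOLD = 0.15  # assumed normalized-distance cutoff

def match_signature(observed: np.ndarray):
    """Compare an observed emitter against the database.

    Returns (label, distance, needs_human_review). A close match is only
    a recommendation; ambiguous cases are deferred to a human operator.
    """
    best_label, best_dist = None, float("inf")
    for label, ref in SIGNATURE_DB.items():
        # Normalized Euclidean distance so features on different scales
        # (Hz vs. seconds) contribute comparably.
        dist = np.linalg.norm((observed - ref) / ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    needs_review = best_dist > MATCH_THRESHOLD
    return best_label, best_dist, needs_review
```

The human-review flag in this sketch reflects the kind of human-oversight requirement that the UN proposals discussed below would make mandatory.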

Controversy Surrounding Battlefield Use

AI improves both the speed and the evasiveness of lethal autonomous weapons. Some observers worry that in the future, almost any group could send swarms of cheap AI drones onto the battlefield and use facial recognition to single out and kill specific individuals.

In theory, AI could also take on other wartime tasks, such as compiling lists of potential targets. Reports suggest that Israel has used AI to build a database of tens of thousands of suspected militants, though the Israeli military has denied this.

In 2017, Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons, helped create a video called "Slaughterbots" to highlight the risks of using AI in warfare. It depicts a future in which miniature drones equipped with facial recognition and explosives become terrifying "AI killers," able to identify and kill individual targets or attack an entire group, such as everyone wearing a particular uniform.

The emergence of AI on the battlefield has sparked debates among researchers, legal experts, and ethicists.

Some believe that AI-assisted weapons could be more accurate than human-guided ones, reducing soldiers' deaths and injuries, limiting collateral damage such as civilian casualties and destruction in residential areas, and helping vulnerable countries and groups defend themselves. Others counter that AI-powered autonomous weapons could make catastrophic errors.

Kallenborn argues that the main issue lies not in the technology itself but in how humans use it.

Urgent Need for Relevant Regulations

For years, researchers have been striving to control the threats posed by AI-enhanced weapons, and the United Nations has taken a crucial step.

In December last year, a UN resolution placed lethal autonomous weapons on the agenda of this September's UN General Assembly session. And in July last year, UN Secretary-General António Guterres called for a ban, by 2026, on weapons that can operate without human oversight.

Experts believe this move gives countries a practical path toward action on AI weapons. Implementation will be difficult, however, partly because countries have yet to agree on what such rules should actually say: a 2022 analysis found at least a dozen definitions of "autonomous weapons" in circulation.

Russell notes that these definitions vary widely and share little common ground. The UK, for example, defines lethal autonomous weapons as those "capable of understanding higher-level intent and direction," while Germany treats "self-awareness" as a necessary attribute of autonomous weapons; most researchers believe current AI is far from possessing either capability.

Kallenborn adds that enforcing any ban on lethal autonomous weapons through inspection and observation would be difficult, because the AI software that makes a weapon autonomous is easy to conceal or to alter.

All these issues will be discussed at the UN General Assembly in September this year.