
Combating the Misuse of Artificial Intelligence: Collaboration is Key for the Future

Yang Yifu | Sun, Mar 17, 2024, 10:59 AM EST

In recent years, the rapid advancement of Artificial Intelligence (AI) technology has been nothing short of staggering. Just as 2023 saw significant breakthroughs in language models, 2024 is bringing strides in video generation and the virtual simulation of the physical world. However, amid AI's widespread application across fields such as science, culture, education, healthcare, and transportation, and in daily life, concerns about its potential misuse have begun to surface.

The Era of "Deepfakes": Where Reality Blurs

The creation of highly realistic fake images, audio, and videos using AI algorithms, known as "deepfakes," has become a growing concern since the concept emerged in 2017. Instances of deception or manipulation of public opinion using this technology have become increasingly common globally. For instance, in the United States, AI-related scams have seen a staggering 50% increase in the past year alone.

Yet, despite these efforts, addressing the problem remains challenging. Progress in AI-generated deception far outpaces the development of technology to detect it. Today, anyone can quickly and cheaply generate images, audio, or video that are nearly indistinguishable from reality and difficult to trace. Detection technology, by contrast, tends to be tied to specific subject matter and specific generation software, making it hard to deploy widely. Fact-checking, moreover, requires significant time and effort. According to a survey by Japan's Mizuho Research Institute and a technology company, 70% of Japanese respondents find it difficult to distinguish truth from falsehood on the internet, yet only 26% said they would verify suspicious information to some extent.

In early 2024, the World Economic Forum's Global Risks Report ranked AI-generated misinformation and disinformation as the top risk for the next two years, warning that it could exacerbate an already polarized and conflict-prone global situation. With more than 70 countries or regions holding major elections in 2024, there is concern that AI could be weaponized to mislead voters, defame candidates, or even incite violence, hatred, and terrorism.

"The 2024 U.S. election will witness a tsunami of AI-generated misinformation," as noted by Daryl West, a senior researcher at the Brookings Institution's Center for Technology Innovation. The U.S. primary elections have already seen attempts to manipulate voters using deepfake technology. In January of this year, during the Democratic primary in New Hampshire, many voters reported receiving calls purportedly from President Biden advising them not to participate in the primary and to save their votes for the November elections. These calls were actually orchestrated by a political consultant for Biden's competitor, Dean Phillips. Using AI, the consultant simulated Biden's voice and targeted voters who were most likely to be influenced. The consultant even remarked, "It only costs $500, and anyone can do what I did."

Industry insiders fear that the proliferation of deepfake content could lead the public to completely lose trust in their own senses, thereby questioning even genuine information. For instance, in 2016, Trump vehemently claimed that a scandalous recording related to him was fabricated. If this had occurred today, his denial might have been more convincing.

Similar problems have arisen in many countries, but technological advances have outpaced laws and regulations, leaving the handling of deepfakes heavily dependent on tech companies' "self-regulation." At the 60th Munich Security Conference in February, major global tech companies signed an accord pledging to jointly combat AI misuse aimed at disrupting elections, including by developing detection tools and adding "unreal content" labels and digital watermarks to generated images to clarify their origin. Some companies are even considering banning the generation of images of political candidates. However, some media outlets regard the accord as merely outlining basic principles, without specifying the concrete measures or timelines by which companies must fulfill their commitments, making it look more like a hollow public-relations exercise. Moreover, tech companies' discretionary control over their AI products raises the question of whether they are imposing their own values on users.

Furthermore, digital watermarking technology still has limitations. For example, watermarks provided by the Content Authenticity Initiative can only be applied to static images and are "easily accidentally or intentionally removed." Controlling the generation of false information at its source, limiting its spread on social media, and fostering critical thinking among the public therefore remain significant hurdles for countries confronting deepfakes.
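To illustrate why such provenance labels are so fragile, the following minimal Python sketch (using the Pillow imaging library) attaches a label to an image as metadata and then shows how a routine re-encode, of the kind platforms perform automatically when images are uploaded, silently strips it. The key name "ai_generated" and the file names are hypothetical, and this simplified metadata tag is not the actual Content Authenticity Initiative format; it only demonstrates the general failure mode.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Attach a hypothetical provenance label to an image as PNG text metadata.
    img = Image.new("RGB", (64, 64), color="gray")   # stand-in for an AI-generated image
    meta = PngInfo()
    meta.add_text("ai_generated", "true")            # hypothetical label key, not a real standard
    img.save("labeled.png", pnginfo=meta)

    # The label survives as long as the original file is passed around unchanged.
    print(Image.open("labeled.png").text)            # {'ai_generated': 'true'}

    # A routine re-encode (e.g. JPEG conversion, as platforms often do) drops the metadata.
    Image.open("labeled.png").convert("RGB").save("reencoded.jpg", "JPEG")
    print(getattr(Image.open("reencoded.jpg"), "text", {}))  # {} -- the provenance label is gone

Watermarks embedded in the pixels themselves survive re-encoding better than metadata, but cropping, heavy compression, or deliberate attack can still degrade them, which is the limitation the critics cited above point to.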

The Shadow of an "AI War"

While the public's growing interest in AI primarily stems from the development of generative AI, military forces in many countries have long been focusing on the application of AI technology on the battlefield, particularly in autonomous weapon systems capable of deep learning. The strategic community in the United States has likened the development of AI to the emergence of nuclear weapons in history, and prominent tech entrepreneur Elon Musk believes that the proliferation of AI technology has allowed companies worldwide to "manufacture nuclear weapons in their own backyard."

The application of AI is visible in the world's two most prominent conflicts. A recent article in Time magazine called the Russia-Ukraine conflict the "first AI war," revealing how U.S. tech companies such as Palantir are providing AI software to Ukraine. The software analyzes intelligence from sources such as satellite imagery and drone footage to recommend the most effective strike targets, and it learns and improves with each strike. These "AI arms dealers" have turned Ukraine into their latest technological testing ground.

In the conflict between Israel and Palestine, the Israeli military has deployed AI to shoot down drones, map tunnel networks, and recommend strike targets. According to reports, an AI system named "Gospel" has increased the Israel Defense Forces' rate of target generation several hundredfold. Many media outlets worry that the system identifies not only military facilities but also civilian residences as targets, raising fears that "artificial intelligence may be used to determine the life and death of Gaza residents."

The United States began experimenting with AI-assisted target identification as early as 2020 and has recently used the technology extensively to locate rocket launchers in Yemen and surface vessels in the Red Sea, and to strike targets in Iraq and Syria.

According to U.S. media reports, the U.S. military has been steadily deepening its collaboration with leading companies such as OpenAI. In August 2023, shortly after generative AI took off, the U.S. Department of Defense established a "Generative AI Task Force" within its Office of the Chief Digital and Artificial Intelligence Officer. In January of this year, OpenAI quietly updated its usage policy page, removing restrictions on "military and warfare" applications and replacing them with vaguer language prohibiting the use of its products to "develop or use weapons." Shortly afterward, the company acknowledged that it was working with the Pentagon on multiple projects, including the development of cybersecurity tools. More recently, senior Department of Defense officials again invited American technology companies to a closed-door meeting in hopes of accelerating the exploration and fielding of military applications of AI.

Experts in the field argue that, just as humanity has for decades deterred the use of nuclear weapons through strategic coordination among nations, a multilateral consensus is urgently needed to prevent conflicts or arms races driven by the uncontrolled proliferation of AI technology. UN Secretary-General António Guterres has called on the United Nations to reach a legally binding agreement by 2026 prohibiting the use of AI in autonomous weapons.

International Coordination and Cooperation Are Indispensable

AI holds tremendous potential for both benefit and harm, and timely measures are needed to mitigate its risks. Since 2016, various countries have issued AI policies and regulations, but progress has lagged behind the technology. In October last year, President Biden signed the first U.S. executive order on AI regulation, establishing standards for safety and privacy protection, but it has been criticized for lacking effective enforcement mechanisms. In the EU, although the European Parliament passed the AI Act on March 13, its provisions will be phased in, with some rules not taking effect until 2025. Japan's ruling Liberal Democratic Party only recently announced plans to propose generative AI legislation to the government later this year.

Whether it is the misinformation brought by generative AI or the risks of military AI applications, the impact transcends national borders. The regulation and governance of AI should therefore be pursued through international cooperation, with countries working together to mitigate risks and to establish a broadly accepted governance framework and standards for AI.

Regrettably, however, the United States has from the outset treated China's AI development plans as hostile, regards China as a hypothetical adversary in its deployment of military AI, and has long obstructed China's technological progress in order to preserve its own advantage. According to reports, the U.S. government not only prohibits American companies from exporting the most powerful AI chips to China but also requires U.S. cloud computing companies to disclose the names of foreign clients developing AI applications on their platforms, in an attempt to cut off Chinese companies' access to data centers and servers.

The development and governance of AI are crucial to the fate of all humanity and require collective efforts and coordinated responses. Building "small courtyards with high walls" in the field of AI will only weaken humanity's ability to collectively address the risks and challenges. The United States should genuinely respect the objective laws of technological development, respect the principles of market economy and fair competition, cease maliciously obstructing the technological development of other countries, and create favorable conditions for strengthening international coordination and cooperation in the field of AI.

(Reporter: Yang Yifu)