
Tackling AI Tech Misuse: Collaboration is the Key to the Future

Yang Yifu | Sun, Mar 17, 2024, 11:25 AM EST

In recent years, the rapid evolution of artificial intelligence (AI) has been nothing short of dazzling. 2023 was marked by major breakthroughs in large language models, and 2024 has opened with new advances in video generation and virtual simulation of the physical world. Yet as AI finds its way into science and technology, culture, education, medicine, transportation, and daily life, concerns about its potential misuse have come to the fore.

"Deepfakes" Blurring the Lines Between Real and Fake

Highly realistic fake images, audio, and video generated by AI algorithms, known as "deepfakes," have been causing a stir since the term first appeared in 2017. Incidents of fraud and manipulation using the technology are increasingly common worldwide; in the United States alone, AI-related scams rose by more than 50% over the past year.

Despite this growing concern, effective solutions remain elusive. AI's capacity to generate fake content is advancing far faster than the technology to detect it. Today, anyone can quickly and cheaply produce images, audio, or even video that is hard to distinguish from reality, and tracing the source of a deepfake is difficult. Detection tools, by contrast, are not widely adopted because each tends to work only for specific content types and generation software, and fact-checking consumes considerable time and effort. According to a survey by Japan's Mizuho Research & Technologies and several tech companies, 70% of Japanese respondents find it hard to judge the truth of information on the Internet, yet only 26% verify suspicious information they encounter.

The World Economic Forum's "Global Risks Report 2024," released earlier this year, listed AI-generated misinformation and disinformation among the top ten global risks for the next two years, warning that it could further inflame an already polarized and conflict-ridden world. With more than 70 countries and regions holding major elections in 2024, there are fears that AI will be weaponized to mislead voters, smear candidates, and even incite violence, hatred, and terrorism.

"The 2024 US elections will face a tsunami of AI-generated misinformation," said Darrell West, a senior fellow at the Brookings Institution's Center for Technology Innovation. As the US elections approached the primary phase, there were already attempts to manipulate voters using deepfakes. For example, during the Democratic primary in New Hampshire in January, many voters reported receiving a call "from President Biden" advising them not to participate in the primary and to save their vote for the Democrats in November. In reality, the call was orchestrated by Dean Phillips' political advisor, Kramer, who used AI to mimic Biden's voice and target the most susceptible voters. Kramer boldly stated, "With just $500, anyone can replicate what I did."

Industry insiders worry that the proliferation of deepfakes could thoroughly erode the public's trust in its own senses, while genuine information faces growing skepticism. Trump, for instance, once vehemently claimed that a scandalous recording of him was fabricated; were he to make that claim today, it might be far more persuasive.

Similar problems have arisen in many countries, but technology has outpaced legal and regulatory frameworks, leaving governments to rely on the "self-regulation" of tech companies. At the 60th Munich Security Conference in February, a group of global tech firms signed an accord committing to fight the misuse of AI in election interference, including by developing detection tools and marking generated images with "unreal content" labels and digital watermarks to clarify their origin. Some companies are even considering banning the generation of images of political candidates. Some media outlets, however, argue that the accord only sets out broad principles without specifying how or when companies will fulfill their commitments, making it look more like a public relations exercise. There are also concerns that tech companies could impose their own values on users through subjective control of their AI products.

In addition, the effectiveness of digital watermarking remains in question. The watermarks promoted by the Coalition for Content Provenance and Authenticity (C2PA), for instance, apply only to still images and can be "easily removed, whether accidentally or intentionally." Controlling false information at its source, limiting its spread on social media, and cultivating the public's critical thinking therefore remain major challenges in confronting deepfakes.
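Why such labels are fragile is easy to illustrate. The following is a minimal Python sketch, offered purely as an illustration under assumed conditions: it supposes a hypothetical file generated.png whose provenance record is stored in image metadata, as metadata-based provenance schemes do, and it does not reproduce C2PA's actual mechanism. Copying the pixels into a fresh image and re-saving produces a visually identical file with no metadata, and hence no label.

    # Illustrative only: shows how metadata-carried provenance marks
    # vanish on re-encoding. "generated.png" is a hypothetical file.
    from PIL import Image

    img = Image.open("generated.png")
    print(img.info)  # metadata dict; a provenance record would live here

    # Copy the pixel data alone into a new image and save it.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save("stripped.png")

    # The re-saved file looks identical but carries no metadata,
    # so any metadata-based provenance label is gone.
    print(Image.open("stripped.png").info)

A more robust watermark must be embedded in the pixels themselves, which is harder to implement and harder still to make tamper-proof.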

The Shadow of the "AI War"

While public interest in AI has surged mainly on the back of generative AI, militaries around the world have long focused on battlefield applications of the technology, particularly autonomous weapon systems driven by deep learning. The US strategic community has compared the rise of AI to the advent of nuclear weapons, and tech entrepreneur Elon Musk has argued that the spread of AI technology now lets countries "manufacture nuclear bombs in their backyards."

AI's battlefield role is already evident in the world's two most closely watched conflicts. Time magazine recently described the Russia-Ukraine conflict as "the first AI war," reporting that the US tech company Palantir supplied Ukraine with AI software that identifies the most effective targets by analyzing satellite imagery and drone footage, learning and improving with each strike. These "AI arms dealers" have in effect turned Ukraine into a testing ground for their latest technologies. In the Israel-Palestine conflict, the Israeli military has used AI to down drones, map tunnel networks, and recommend targets for strikes. According to reports, an AI system named "Gospel" has increased the Israeli military's target-acquisition efficiency by several hundred times. Many media outlets worry that this means the system targets not only military facilities but also civilian residences, with "artificial intelligence possibly being used to decide the fate of Gaza residents."

Meanwhile, the United States, which first experimented with AI target recognition as early as 2020, has recently made extensive use of the technology to locate rocket launchers in Yemen, surface vessels in the Red Sea, and strike targets in Iraq and Syria.

According to US media reports, the US military continues to deepen its collaboration with leading companies such as OpenAI. In August 2023, shortly after generative AI took off, the US Department of Defense established a generative AI task force. In January of this year, OpenAI quietly updated its usage policy page, dropping its explicit ban on "military and warfare" applications in favor of a vaguer prohibition on "developing or using weapons." Soon afterward, the company acknowledged that it was working with the Pentagon on several projects, including the development of cybersecurity tools. More recently, senior Defense Department officials again invited American tech companies to a closed-door meeting, hoping to accelerate the exploration and deployment of military AI applications.

Experts in the field point out that while strategic coordination among nations has successfully deterred the use of nuclear weapons for decades, no comparable international governance framework exists for military AI. Technological mishaps could therefore easily escalate conflicts or set off an arms race, making a multilateral consensus urgently needed. UN Secretary-General António Guterres has stressed that a legally binding agreement prohibiting lethal autonomous weapons must be concluded by 2026.

International Coordination and Cooperation Are Indispensable

Artificial intelligence carries immense potential for both good and ill, and timely measures are needed to mitigate its risks. Countries have been issuing relevant policies and regulations since 2016, but progress has not kept pace with the technology. Last October, President Biden signed the United States' first executive order on AI regulation, setting safety and privacy-protection standards for AI, though it was criticized as lacking enforceability. In the European Union, the European Parliament passed the "AI Act" on March 13, but its provisions will be phased in, with some rules not taking effect until 2025. Japan's ruling Liberal Democratic Party only recently announced plans to propose legislation on generative AI within the year.

Whether it is the misinformation unleashed by generative AI or the risks of military AI, the impact transcends borders. The regulation and governance of artificial intelligence should therefore be pursued through international cooperation, with countries working together to mitigate risks and jointly build a broadly accepted framework and set of standards for AI governance.

Unfortunately, the United States has from the outset treated China's AI development plans as hostile and cast China as a hypothetical adversary in its deployment of military AI, while obstructing China's technological progress to preserve its own advantage. According to reports, the US government not only bars American companies from exporting the most powerful AI chips to China but also requires US cloud-computing companies to disclose the names of foreign clients developing AI applications on their platforms, in an attempt to cut Chinese companies off from data centers and servers.

The development and governance of artificial intelligence bear on the fate of all humanity and demand concerted, coordinated responses. Building "small yards with high fences" in the field of AI will only weaken humanity's collective ability to meet these risks. The United States should earnestly respect the objective laws of technological development and the principles of the market economy and fair competition, stop maliciously obstructing other countries' technological development, and create favorable conditions for strengthening international cooperation on AI.

(By Reporter Yang Yifu)