
25 of the World's Leading Scientists Call for Stronger Action to Mitigate AI Risks

Zhang Mengran | Mon, May 27, 2024, 10:32 AM EST

When future historians chronicle the era of artificial intelligence (AI), this may be remembered as a pivotal moment.

For the first time in human history, a large international group of top experts is driving global policy on the risks of advanced AI.

It is now widely recognized that while we depend on AI and must foster its technological advancement, we must also be able to guard against its risks.

[Article screenshot: the journal Science's website]

In the six months since the first AI Safety Summit was held at Bletchley Park in the UK, world leaders have pledged to manage AI responsibly. AI experts, however, believe this is still not enough. As the second AI Safety Summit convened in Seoul, South Korea, on May 21-22, 25 of the world's leading AI scientists called for stronger global action on AI risks.

In a consensus article published in the journal Science, the expert group outlined urgent policy priorities for the world to adopt. One of the authors, Philip Torr, a professor in the Department of Engineering Science at the University of Oxford, said: "At the previous AI summit, the world unanimously agreed that we need to take action, and now is the time to turn vague suggestions into concrete commitments."

A Severe Lack of Safety Research

The 25 experts in AI and AI governance, among them Turing Award and Nobel Prize laureates, come from the US, China, the EU, the UK, and other leading AI nations. The article marks the first international expert consensus on global policy-making for AI risks.

The article urges global leaders to take seriously the possibility that general AI systems surpassing human capabilities will be developed within the next decade. Although governments worldwide have been discussing advanced AI and have attempted to introduce preliminary guidelines, these efforts fall far short of what the rapid, transformative progress many experts anticipate would require.

Research on AI safety is currently severely lacking: only 1%-3% of AI publications address safety. Moreover, there is no adequate global mechanism or institution to prevent misuse and recklessness, including in the use of autonomous systems that can act independently and pursue goals on their own.

Mitigating Extreme Risks in Rapid Development

The article suggests that governments worldwide establish agile AI regulatory bodies and fund them adequately. The US AI Safety Institute's current annual budget is $10 million, which pales in comparison to the US Food and Drug Administration's (FDA) budget of $6.7 billion.

The AI field needs stricter risk assessments with actionable consequences, rather than vague model evaluations. AI companies should prioritize safety and demonstrate that their systems cannot cause harm; the burden of proving safety should rest with the developers.

Mitigation measures should also be proportional to the risks posed by future advanced AI systems. This means adopting "automatic" policies that trigger when AI reaches certain capability milestones: if AI advances rapidly, stringent requirements take effect automatically; if progress is slower, the requirements relax accordingly.

For the highly advanced AI systems of the future, governments must be prepared to take the lead in regulation. This includes licensing development, restricting AI autonomy in critical societal roles, halting the deployment of certain capabilities, enforcing access controls, and requiring cybersecurity measures robust enough to withstand state-level hackers, until adequate protections are in place.

Remember, AI Is Not a Toy

Jeff Clune, a professor of AI at the University of British Columbia who specializes in reinforcement learning, notes that technologies such as space travel, nuclear weapons, and the internet went from science fiction to reality within a matter of years, and AI is no exception. "We must now prepare for risks that seem like science fiction."

AI has made rapid progress in critical areas such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented challenges of "control." AI systems could win human trust through deception, acquire resources, and influence key decision-makers in pursuit of "malicious purposes." To evade human oversight, they could even copy their own algorithms across global server networks.

In such a scenario, large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. If AI progress continues unchecked, the result could be massive loss of life, destruction of ecosystems, the marginalization of humanity, or even human extinction.

Stuart Russell, a professor of computer science at the University of California, Berkeley, said: "This is a consensus article by authoritative experts calling for strict government regulation, not voluntary industry codes of conduct."

"Companies may complain that complying with these regulations is too difficult, but the notion that 'regulations stifle innovation' is absurd," Russell asserts. "It's time to take advanced AI systems seriously – they are not toys. It is entirely reckless to enhance their capabilities without understanding how to ensure their safety."