Home > News > Techscience

The Rise of Large Models: Where is AI Security Heading?

ZhaoAnLi Wed, May 29 2024 10:55 AM EST

In the realm of cybersecurity, every new wave of information technology brings immense opportunities along with significant challenges. This is because new technologies inevitably introduce fresh security issues and demands, allowing "hackers" to exploit vulnerabilities swiftly, while the defense side must respond and adapt even faster.

The same holds true for large models technology. Recently, at the "C3 Security Conference 2024" themed "The Future of Secure AI," Chen Fen, Senior Vice President of Security at AsiaInfo, pointed out that since the release of ChatGPT, globally, "hackers" have begun to revel in the advancements driven by large models in artificial intelligence, leading to escalated cyberattacks and cybercrimes.

Large models technology fuels the frenzy among "hackers"

Chen Fen's words are not mere alarmism.

He explained that in the past, it took hackers months to create a malicious virus, but now, with AI tools, they can generate one in minutes, significantly boosting attack efficiency. Moreover, large models exhibit a strong understanding of programming languages, enabling attackers to swiftly identify software vulnerabilities. Additionally, there is a surge of opportunistic "hackers" using AI algorithms to mass-produce highly deceptive deepfake videos, resulting in a wave of online fraud cases. Furthermore, attackers have set their sights on AI computing infrastructure and large model platforms – these high-value computing clusters hold immense allure.

According to incomplete statistics, in just one year, over ten different types of attacks targeting large models have emerged.

The North Pole Fox Advanced Attack and Defense Lab at AsiaInfo Security discovered a "sponge sample attack" last year. This attack method is not overly complex: it involves sending a large model a special sample – replacing normal prompts with complex language content, then submitting the request. Consequently, the large language model takes over 60 seconds to respond and provide an answer.

Regarding this, Chen Fen stated in an interview, "Many core applications in the future may be driven by large models of AI, and (under such attacks) applications are essentially paralyzed. This is just a relatively basic attack scenario; intensive attacks on large models will soon follow."

At the C3 Security Conference, Zhang Yaqin, Academician of the Chinese Academy of Engineering and Dean of the Institute for Artificial Intelligence Industry Research (AIR) at Tsinghua University, expressed similar concerns.

"Artificial intelligence is becoming increasingly powerful, posing significant risks," he mentioned. Apart from information security concerns, the opacity of large models, black box issues, parameter data risks, all point towards inherent security risks in AI itself. Furthermore, issues related to AI's controllability, trustworthiness, boundaries, compel individuals to scrutinize AI's security risks more closely.

Zhang Yaqin emphasized that setting aside the issue of AI's ability to deceive, the greater risk lies in its impact on the physical world. "Imagine, (in the future) a large model could control thousands of vehicles, drones, robots, and IoT devices – some of which are directly linked to critical national systems like finance. Can we control it? What if it's exploited by malicious actors? Security will then become the most critical issue."

Security Industry Faces Innovation

Given the profound impact of AI technologies, particularly those represented by large models, on information security, technical experts at AsiaInfo Security firmly believe that large model technology will inevitably bring about a "paradigm shift" in the cybersecurity industry.

This shift will manifest in various aspects, including changes in network attack and defense roles, security product design, and the evolution of protected entities.

Chen Fen believes that from now on, network security attack and defense will transition from human-to-human confrontation to AI-to-AI confrontation. "Only AI-driven network security protection and detection technologies can identify hacker attack techniques rooted in AI," he stated, citing deepfake identification as a prime example where "human eyes are definitely inferior to AI."

Similarly, the design of future network security products will require the assistance of large models. Chen Fen mentioned that from risk detection to product experience, from risk control to security operations, large model technology will drive a comprehensive overhaul of network security products. "Without the support of large models, it may be impossible to stand on the most competitive stage."

The entities requiring protection will also undergo changes. As mentioned earlier, if a company or entity's core applications are driven by large models, then future asset maintenance for enterprises will shift from traditional asset protection to safeguarding the enterprise's "AI core."

A similar transformation will occur in smart home security. With the proliferation of smart connected cars, smart homes, etc., household information security will evolve from protecting individual terminal devices to safeguarding the household's "AI core."

"Global Action is Underway"

Against the backdrop of large model technology's paradigm-shifting impact on the cybersecurity industry, no one can afford to remain passive. In 2023, Google developed and launched the proprietary large model for cybersecurity, "Sec-PaLM," integrating it across the Google Cloud. In April of this year, Microsoft announced the official commercialization of Security Copilot, combining large language models and security-specific models to provide enterprises with an information security assistant.

Major players in the security industry are not missing out on this opportunity. Leading companies in the industry, such as Palo Alto and CrowdStrike in the United States, have integrated large model technology capabilities into their security operation platforms and continue to deepen the development of intelligent security products. Riding on the wave of large model technology, Palo Alto has become the first security company globally to surpass a market value of one trillion dollars, while CrowdStrike, not far behind, has seen its market value more than double in less than a year, nearing the trillion-dollar mark. Not only that, at the world's largest cybersecurity event, the RSA Conference, the most influential and popular event - the Innovation Sandbox competition, the champions for two consecutive editions (RSAC 2023, RSAC 2024) were both startups focusing on "AI security".

In 2023, the AI security startup HiddenLayer, which focuses on AI offense and defense and "provides visibility into the health and security of machine learning assets," claimed the top spot, making history by being the first AI security company to win the Innovation Sandbox competition. The champion of the RSAC 2024 Innovation Sandbox competition was Reality Defender, a startup that primarily offers solutions for detecting and combating fake information related to generative artificial intelligence (such as deepfake images, videos, and audio) and cognitive manipulation threats.

AI security is stealing the spotlight, with industry experts believing that this heralds a significant paradigm shift in cybersecurity, where "AI For Security and Security For AI" is becoming the main battleground for cybersecurity.

This trend has quickly made its way into China as well. According to a survey by AsiaInfo Security, over 80% of cybersecurity companies in China are integrating large-scale model technology into their cybersecurity products, with 30% of companies already conducting research on large-scale model security, leading to a wave of AI security startups.

AsiaInfo Security has also positioned "AI For Security and Security For AI" as a strategic layout for the intelligent new era, actively embracing large-scale models. At the C3 Security Conference, AsiaInfo Security launched the cybersecurity domain large-scale model "TrustCube," combining security and AI technologies to achieve intelligent detection of attack surfaces in the cybersecurity field, intelligent analysis of malicious code and threat intelligence, early warning predictions, and other functional applications. Additionally, AsiaInfo Security is recruiting AI talents extensively, collaborating with leading large-scale model companies like "Zhipu AI" to introduce, digest, absorb, and transform top-notch AI capabilities.

AI risk governance is not just a scientific or policy issue

Regarding the security risks brought about by the development of AI technology, Zhang Yaqin stated that he has spent a lot of time over the past two years "thinking and doing things," including establishing a "Special Seminar on the Development and Governance of Artificial Intelligence" with around 20 participants - including three Turing Award winners and two pioneers in the field of deep learning. They regularly gather to discuss topics related to the trends in AI development and risk governance.

"We believe that the security and risk governance brought about by the development of AI technology are not just issues of policy-making, nor are they purely scientific or technical issues. It requires policymakers and researchers to work together to integrate AI development and governance to ensure healthy growth," Zhang Yaqin said.

Based on extensive discussions, Zhang Yaqin proposed five long-term suggestions.

First, there should be hierarchical management of large-scale AI models, with higher risks associated with cutting-edge large models with trillions or more parameters requiring enhanced supervision; large models used in different fields such as intelligent driving and biological systems should also be subject to graded supervision.

Second, AI-generated content should be clearly labeled, including AI-generated digital personas, images, text, audio, etc., similar to prominent "advertisement" labels for easy identification.

Third, a physical mapping mechanism or system should be established. Intelligent entities such as autonomous vehicles and smart robots should clearly map their status as "subordinate entities" to specific managing individuals or entities, with accountability tracing back to the responsible entity in case of issues. He suggested that relevant regulations should be developed promptly.

Fourth, there should be increased investment in research on large model risk governance. He recommended that governments or enterprises allocate 10% to 30% of their investment in the development of cutting-edge large models towards the development of technologies and products related to security risks. Zhang Yaqin said, "Starting with 30% might be challenging, so beginning with 10% for this purpose could be a good start."

Fifth, setting boundaries for the application of artificial intelligence. He proposed that while AI technology itself may not have boundaries, when using large models, certain boundaries need to be established. For example, regulating that intelligent digital personas cannot be replicated without limits, and setting clear boundaries when using large model technology in critical systems like finance and military.

"This is not just a matter for one company or one country, but requires sincere cooperation between global governments and enterprises to face risks and solve problems," Zhang Yaqin concluded.