
The first wave of hackers has already used large models for cyber attacks | Industry Watch on Company B

Sun, May 26 2024 08:10 AM EST
The image is AI-generated.

Throughout the history of technology, security has always been a companion discipline: attack precedes defense, and large models are no exception.

"New technologies inevitably bring new security issues and demands, but the challenge lies in the fact that attackers often discover security risks and launch attacks ahead of us, requiring us to respond and adapt more quickly," said Chen Fen, Senior Vice President of AsiaInfo Security.

Even today, practical applications of large models remain limited. Industry professionals may be held back by the immaturity of the technology, uncertain return on investment, or the lack of clear application directions. A hacker, however, needs only one reason to turn large models into a new weapon against higher-value targets.

The method of attack determines the method of defense, making large models a standard option for cybersecurity companies.

Global security companies are taking swift action. In May, Microsoft officially launched Security Copilot, while Google released a proprietary large model for cybersecurity last year. Cybersecurity giants such as Palo Alto Networks and CrowdStrike have integrated large model capabilities into their security operations platforms.

According to industry data, over 80% of cybersecurity companies in China are integrating large model technology into their cybersecurity products, and 30% have begun researching large model security, fueling a wave of security startups.

The New Security Battle

Following the release of ChatGPT, generative AI technology represented by large models has sparked a frenzy among hackers worldwide and is driving an escalation of cyber attacks and cybercrime. In the past, it took a hacker months to develop a piece of malware; with generative AI tools, it may now take only minutes. The efficiency and scope of hacker attacks have increased dramatically.

Large models also have a strong grasp of programming languages, allowing attackers to discover software vulnerabilities quickly. Meanwhile, opportunistic hackers are using AI algorithms to create deepfake videos, fueling new waves of online fraud.

AsiaInfo Security has also found that attackers are shifting their focus from traditional assets to AI computing infrastructure, particularly the infrastructure supporting large models. High-value GPU clusters are especially tempting targets. In just one year, dozens of distinct types of attacks targeting large models have emerged. At the beginning of the year, a cluster of thousands of servers in the United States was breached and hijacked to mine Bitcoin. Hackers move quickly on high-value assets once they prove profitable, and even large models themselves can be exploited.

AsiaInfo Security also reproduced such an attack by submitting a specially crafted sample to a large model. The sample was not a normal prompt but a piece of deliberately convoluted language. Even after AsiaInfo Security intentionally halted the attack midway, the same request still took the model over 60 seconds to return results, more than 20 times the original response time.

It is conceivable that in the future, if core business applications become AI-native and driven by large models, similar attacks could sharply increase the models' computational consumption, amounting to a denial-of-service attack that grinds critical business functions to a halt. And this is only a relatively basic attack.
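The resource-exhaustion effect described above can be illustrated with a minimal sketch. This is a hypothetical harness, not AsiaInfo Security's actual test setup: a simulated model function stands in for a real LLM endpoint, and a crafted "sponge" input triggers far more internal work than a normal prompt, which the harness detects by comparing latencies.

```python
import time

def simulated_model(prompt: str) -> str:
    """Stand-in for a real LLM endpoint (hypothetical).
    A crafted input triggers ~20x more internal work than a normal one,
    mimicking the latency amplification described in the article."""
    work = 2_000_000 if "CRAFTED" in prompt else 100_000
    total = 0
    for i in range(work):  # burn CPU proportional to the input's cost
        total += i
    return f"answer ({total % 97})"

def latency(prompt: str) -> float:
    """Measure wall-clock time for one request."""
    start = time.perf_counter()
    simulated_model(prompt)
    return time.perf_counter() - start

baseline = latency("What is the capital of France?")
attack = latency("CRAFTED adversarial sample")
ratio = attack / baseline
print(f"attack/baseline latency ratio: {ratio:.1f}x")
```

In a real deployment, the same comparison (per-request latency against a baseline) is one simple signal a defender could monitor to spot this class of attack.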

In the future, households and individual users could also become targets. If AI penetration runs deep enough that computers, phones, home appliances, and cars are all smart, and nearly every household runs an AI system, hackers will not overlook such a lucrative opportunity.

"From now on, cybersecurity defense has evolved from human-to-human confrontation to AI-to-AI confrontation. Only AI-driven cybersecurity detection technologies can identify AI-driven hacking techniques," said Chen Fen.

Is "Secure Large Models" just riding the trend?

Just as various industries are being transformed and reshaped by large models, the future of large model technology will drive a comprehensive innovation in network security products. This could bring about a fundamental shift in the design paradigm of network security products, from threat detection to product experience, and even security operations. Without the support of large models, a security company may struggle to remain competitive.

However, the generalization of the concept of "Secure Large Models" may also fall into the trap of trend-chasing, leading to doubts about whether it's just old wine in a new bottle.

There have been precedents of security companies using AI technology, such as using AI for spam detection, machine learning algorithms for automated vulnerability patching, automated operational analysis, security policy generation, alert assessment, and attack analysis.

A security industry expert noted that many security companies rushed to release GPT-based applications last year, often with incomplete products. Some considered only narrow scenarios, such as quickly bolting GPT onto security operations for language understanding, on the reasoning that security incidents are, in essence, a form of language.

AsiaInfo Security did not immediately release large models but rather pondered whether large model capabilities could become a native product feature. While they may not have reached that point yet, it's part of their development strategy.

Therefore, AsiaInfo Security did not rush to integrate large models into its product line from the start. Instead, it invested more time in building a common foundation: first a robust MaaS service platform, then an agent framework layered on top. With numerous product lines, having each one work out on its own how to integrate large models would waste significant resources.

On this basis, AsiaInfo Security has officially launched "Xinlifang," a self-developed large model for the cybersecurity field. Its capabilities are divided into five levels. The first is the computing power scheduling platform. The second is intelligent hybrid scheduling built on fine-tuned open-source and closed-source large models. The third is a MaaS service layer on top of the models, which lets upper-layer security applications rapidly invoke them; the MaaS API fully follows OpenAI's standard, so the global open-source tool ecosystem can be used directly.

The fourth level is a flexible cybersecurity agent architecture: AsiaInfo Security has rebuilt its agent development framework on open-source foundations, so every product team can quickly tap the full capabilities of the security large model. The fifth level consists of security agent applications for multiple scenarios.
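The practical value of the OpenAI-compatible MaaS API mentioned above is that existing tooling only needs a different base URL. The following sketch constructs an OpenAI-format chat-completion request; the endpoint URL and model name are placeholders, since the article does not publish Xinlifang's actual API details, and no network call is made.

```python
import json

# Hypothetical endpoint for an OpenAI-compatible MaaS layer; the real
# URL and model identifier for "Xinlifang" are not public in this article.
BASE_URL = "https://maas.example.internal/v1"
ENDPOINT = f"{BASE_URL}/chat/completions"

# Because the API follows the OpenAI wire format, any client or tool
# built for that format can target it by swapping the base URL.
request_body = {
    "model": "security-llm",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a security-operations assistant."},
        {"role": "user", "content": "Summarize this alert: repeated failed SSH logins."},
    ],
    "temperature": 0.2,
}

payload = json.dumps(request_body)
print(ENDPOINT)
print(payload[:60], "...")
```

This compatibility is the design choice the article highlights: rather than inventing a proprietary interface, reusing the de facto standard lets security applications adopt the open-source ecosystem of OpenAI-format clients with minimal integration work.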

Additionally, AsiaInfo Security has further introduced the "XPLAN Security Plan," which includes two parts: Security For AI and AI For Security. Security For AI focuses on protecting the infrastructure of computing power security and safeguarding the security of large models, while AI For Security concentrates on the research and development of vertical large models in the cybersecurity industry and the implementation of cybersecurity intelligent agents and security applications on top of them.

AI security requires a systemic perspective.

Zhang Yaqin, academician of the Chinese Academy of Engineering and Dean of the Tsinghua University Institute of Artificial Intelligence (AIR), stated that the arrival of large models signifies the establishment of a new ecosystem. Large models will become the new operating system, just as the PC internet and mobile internet have formed ecosystems to match. The scale of the ecosystem in the era of artificial intelligence will be at least an order of magnitude higher than that of the mobile internet.

Correspondingly, the industry is moving towards a common understanding that focusing solely on securing large models cannot address the security issues related to them. AI security needs to be approached from a systemic and holistic perspective to collectively build the ecosystem.

Wu Xiangning, Senior Vice President and Chief Digital Officer of AsiaInfo Security, noted that the large model industry chain is very long, and the business forms around it will grow more diverse. True makers of large models will remain few, likely concentrated among a handful of leading companies, but the number of enterprises and industries using large models will keep growing, creating new security requirements. The image is AI-generated.

"Securing only the large models themselves is relatively limited. For example, privacy computing originally had very narrow application scenarios, mostly queries in data exchanges among government ministries, financial institutions, and enterprises. But if, in the future, every company needs to send its data to compute platforms for processing, the room for innovation is vast. Once the entire ecosystem opens up, many technology applications may change significantly," he said.

Multiple collaborations and achievements in the security ecosystem are gradually emerging. The Network Ransomware Response and Governance Center, known as the Fox Hunt Program, has officially launched. Jointly initiated by AsiaInfo Security, China Unicom Jiangsu Branch, Sichuan Digital Industry Co., Ltd., and People's Data Management Co., Ltd., it will establish regional and industry-specific ransomware response and governance centers, forming a normalized, integrated ransomware security operation and a typical example of collaborative security ecosystem development.

Furthermore, the East-West Secure Joint Innovation Center has also announced new achievements. AsiaInfo Security, in collaboration with Sichuan Nengtou and West Cloud Computing, has made technological breakthroughs in large model security, privacy computing, network scheduling, and full-stack cloud security. These achievements will drive the secure scheduling and circulation of computing power and data resources between the eastern and western regions, promoting the development of artificial intelligence and data industries.

AsiaInfo Security has also partnered with the HarmonyOS ecosystem. Both parties will deeply engage in HarmonyOS ecosystem construction, combining terminal security capabilities to provide users with a more secure and reliable terminal environment, ensuring user information security.

Through joint efforts in technology, policies, regulations, social consensus, and other aspects, the long-term direction of constructing a secure ecosystem for large models will be established. (This article was first published on the Titanium Media APP, Author: Zhang Shuai, Editor: Gai Hongda)