Ant Group, OpenAI, iFlytek, and Others Jointly Develop and Release International Standards for Large Model Security

Lu Jiao · Wed, Apr 24, 2024, 08:13 PM EST

On April 17, at the 27th United Nations Science and Technology Conference in Geneva, Switzerland, the World Digital Technology Academy (WDTA) unveiled a series of achievements, including two international standards: "Security Testing Standards for Generative Artificial Intelligence Applications" and "Security Testing Methods for Large Language Models."

These international standards were collaboratively developed by experts and scholars from dozens of organizations such as OpenAI, Ant Group, iFlytek, Google, Microsoft, NVIDIA, Baidu, and Tencent. Notably, "Security Testing Methods for Large Language Models" was spearheaded by Ant Group.

The released standards consolidate the expertise of AI-security specialists worldwide and fill a gap in security testing for large language models and generative AI applications. They give the industry a unified testing framework and clear testing methods, with the aim of strengthening the security of AI systems, promoting responsible development of AI technology, and bolstering public trust.

The World Digital Technology Academy (WDTA) is an international non-governmental organization registered in Geneva. It operates under United Nations guiding frameworks and is committed to advancing digital technology worldwide and promoting international cooperation.

The AI STR (Safe, Trusted, Responsible) initiative is WDTA's core advocacy program, aiming to ensure that artificial intelligence systems are safe, trustworthy, and accountable. Its members include Ant Group, Huawei, iFlytek, the International Data Spaces Association (IDSA), the Fraunhofer Institute, and China Electronics.

Publicly available information shows that Ant Group has been engaged in trustworthy AI research since 2015 and has built a comprehensive governance system for large models. The company has also independently developed "Ant Tianjian", the industry's first integrated security solution for large models, which is used for AIGC security and authenticity assessment, intelligent risk control for large models, and testing of AI robustness and interpretability.

The newly released "Security Testing Methods for Large Language Models" draws on the applied practice of the "Ant Tianjian" AI security testing system and was developed in collaboration with global ecosystem partners. In addition, Ant Group has established an internal Technology Ethics Committee and dedicated teams to assess and manage the risks of generative AI; every AI product the company releases must pass a technology-ethics review to ensure that it is safe and trustworthy.