
Offering "Ten Billion Subsidy"! Wuwen Xinquan Launches Large Model Service Platform

Zhao Anli | Tue, Apr 9, 2024, 10:35 AM EST

On March 31, Wuwen Xinquan, an innovative enterprise originating from Tsinghua University, held the "Diverse Computing · Ubiquitous Connection" AI Computing Power Optimization Forum and Product Launch Conference in Shanghai. Wang Yu, a tenured professor in the Department of Electronic Engineering at Tsinghua University and the initiator of Wuwen Xinquan, made his first public appearance alongside the co-founding team and unveiled the "Infini-AI" large model development and service platform.

Wang Yu, professor of electronic engineering at Tsinghua University and initiator of Wuwen Xinquan, introduces the Infini-AI platform. Photo provided by Wuwen Xinquan.

The Infini-AI large model development and service platform is built on a multi-chip computing power base. It aims to integrate and schedule computing resources effectively, provide better utilization methods and tools, and ease the computing power shortage that enterprises face when using large models. At the conference, Wuwen Xinquan announced that the platform would officially open for registration on March 31, offering a free quota of billions of tokens to all individual and corporate users who register under their real names.

Xia Lixue, co-founder and CEO of Wuwen Xinquan, explained that on this platform developers can try out and compare the capabilities of different models and the performance of different chips. By dragging and dropping parameter controls, they can fine-tune models better suited to their own business and deploy them on Infini-AI without difficulty, serving their users at a highly favorable price per thousand tokens.
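For readers who have not used a token-metered model service, the sketch below shows roughly what calling such a platform looks like from code. It assumes an OpenAI-style chat completions API, a convention many model-serving platforms follow; the endpoint URL, API key, and model name here are hypothetical placeholders, not documented Infini-AI values.

    import requests

    # Hypothetical values for illustration; not documented Infini-AI endpoints or models.
    API_URL = "https://api.example-infini-ai.com/v1/chat/completions"
    API_KEY = "your-api-key"  # assumed to be issued after real-name registration

    payload = {
        "model": "chatglm3",  # swap the model name to compare different hosted models
        "messages": [{"role": "user", "content": "Introduce the Infini-AI platform in one sentence."}],
        "max_tokens": 128,
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    print(data["choices"][0]["message"]["content"])
    # Token-metered services typically report usage so callers can track billing.
    print("tokens used:", data["usage"]["total_tokens"])

Comparing model capabilities, as described above, then reduces to repeating the same request with a different model field.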

On the motivation for launching the platform, Wang Yu explained that once large models drew widespread public attention, the team judged that China's overall computing power still lags behind the international state of the art. Relying solely on improvements in chip fabrication processes, or on the iteration of diverse chips, is far from sufficient. What is needed is a large model ecosystem in which different models can be automatically deployed on different hardware, so that diverse computing power can be used effectively.

According to reports, Infini-AI already supports more than 20 models, including the Baichuan2, ChatGLM2, ChatGLM3 (including its closed-source versions), Llama2, Qwen, and Qwen1.5 series, and more than 10 types of computing cards from AMD, Biren, Cambricon, Kuon, Tianshu Zhixin, Muxi, Moore Threads, NVIDIA, and others, supporting joint optimization and unified deployment across multiple models and chips. Models customized on third-party platforms, or through training and fine-tuning, can be migrated seamlessly and hosted on Infini-AI, with a finely customized token-based billing scheme.
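Token-based billing of the kind described here is straightforward to reason about: cost scales with the tokens consumed times a per-thousand-token unit price. The short sketch below illustrates the arithmetic; the model names and rates are invented for the example and are not actual Infini-AI prices.

    def estimate_cost(total_tokens: int, price_per_1k_tokens: float) -> float:
        """Cost = tokens consumed x unit price per 1,000 tokens."""
        return total_tokens / 1000 * price_per_1k_tokens

    # Hypothetical per-model rates (currency units per 1,000 tokens), illustration only.
    rates = {"model-a": 0.008, "model-b": 0.012}
    monthly_tokens = 50_000_000  # a hypothetical monthly workload

    for model_name, rate in rates.items():
        print(f"{model_name}: {estimate_cost(monthly_tokens, rate):,.2f} per month")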

"We will continue to improve the coverage of model brands and chip brands, and over time, the cost-effectiveness advantage of Infini-AI will become more prominent," said Xia Lixue. In the future, Infini-AI will support more models and products from computing power ecosystem partners, allowing more large model developers to "spend small money and use a large pool," continuously reducing the landing costs of AI applications.