
Google Moves to Consolidate Its AI Research Teams, Aiming to Become an "AI Powerhouse"

Sat, Apr 20 2024 07:32 AM EST

According to reports from the Wisdom Financial app, Sundar Pichai, CEO of American tech giant Google (GOOGL.US), has announced a sweeping reorganization of Google's AI workforce, saying the move will help the company develop artificial intelligence products and services faster and more efficiently. In a blog post and a letter to employees on Thursday, Pichai revealed that the Google teams focused on large AI models and on research into AI safety will be folded into the company's flagship AI unit, Google DeepMind.

To accelerate work on Google's large AI models, Gemini and Gemma, the employees spread across Google Research and Google DeepMind who work on this technology will be consolidated into a single, larger team. Google will likewise concentrate the expensive computing power and resources required to train models and build AI supercomputers and other new systems in one department, with the goal of moving large models into applications in a more focused, streamlined, and efficient way, and of speeding up development of Google's new AI technologies.

Pichai also said that the teams across the company responsible for AI safety and related areas will likewise move under Google DeepMind, ensuring that the experts responsible for technical safety and ethical standards can participate directly in building and maintaining large AI models during development.

Pichai also said that a new unified platforms and devices team will bring together Google's hardware, software, and AI efforts, including the teams behind products such as Android, Chrome, Search, and Photos. The team will also include employees working on AI features such as computational photography and on-device AI, including "Circle to Search," the AI search tool Google recently launched in partnership with Samsung Electronics.

Pichai wrote, "These new integrations and changes will help us pursue our mission with greater focus and clarity."

Google has recently stepped up its push into generative AI applications to catch up with Microsoft (MSFT.US) and OpenAI, which many consider ahead in applied AI technology. At the same time, it has been continually shifting its business focus and cutting operating costs through consolidation and restructuring. Over the past few months, Google has announced a series of layoffs, leaving its employees facing painful uncertainty. In January of this year, the company made significant cuts to teams working on its digital assistant, hardware, and engineering.

According to sources cited by the media, the ongoing adjustments will affect some teams in Google's finance and real estate departments, with layoffs described as "quite large" and some positions possibly moving abroad. In January of this year, Pichai warned that Google would continue to steadily trim its workforce, even if future rounds are smaller in scale.

Pichai said the reorganization is necessary so Google can launch AI-based tools and services with greater focus. Earlier this year, the company released Gemini 1.5 Pro, a more powerful version of its large AI model that Google says can process far larger amounts of text, video, and even audio input than competitors such as GPT-4. Google also renamed its chatbot Gemini and released more open large language models (the Gemma family) to court the open-source community.

Google also faces challenges, however, such as the widespread criticism it drew worldwide when its AI image-generation tool produced historically inaccurate depictions of people. On Thursday, Pichai also announced a central trust and safety team that will invest more in testing and evaluating AI systems, a move seemingly aimed at reducing the risks of launching consumer-facing AI products.

Pichai also addressed this week's employee protests against "Project Nimbus," a $1.2 billion contract, shared with Amazon, to provide AI and cloud services to the Israeli government and military. On Wednesday, Google fired 28 employees who took part in the protest.

Pichai wrote, "We have a vibrant, open culture of discussion that allows us to create amazing products and turn great ideas into action. But ultimately we are a workplace, and our policies and expectations are clear: this is a business, not a place to act in a way that disrupts colleagues or makes them feel unsafe, to attempt to use the company as a personal platform, or to fight over disruptive issues or debate politics."

In the era of the AI "arms race," Google, Amazon, and Microsoft are all pouring massive resources into aggressively advancing AI.

At the TED conference in Vancouver on Monday, Google DeepMind CEO Demis Hassabis was asked about the much-discussed $100 billion "Stargate" AI supercomputer being developed by Microsoft and OpenAI. "We don't discuss specific numbers, but I think, over time, our subsequent investments will exceed that number," Hassabis replied, without disclosing details of Google's spending. He also emphasized that Google has more powerful computing capacity than competitors such as Microsoft. Hassabis co-founded DeepMind in 2010; Google acquired the company a decade ago.

The emergence of groundbreaking generative AI in the form of ChatGPT signals a potential shift toward a new AI era for society. Google is not alone in investing heavily in the field: since the AI boom swept the globe in 2023, the cloud computing giants Amazon Web Services (AWS) and Microsoft Azure, which together hold nearly 55% of the market, have been aggressively acquiring NVIDIA's high-performance AI GPUs and continuously expanding their presence in AI.

AWS is strategically expanding its ecosystem for both B2B and B2C software developers, aiming to sharply lower the technical barriers to building AI application software across industries. Amazon plans to invest nearly $150 billion in data centers over the next 15 years to meet the anticipated explosive growth in demand for AI-related digital services. It has deeply integrated generative AI into its AWS cloud services, spanning AI chatbots, libraries of underlying foundation models, compute acceleration, data storage, and underlying computing platforms. Wall Street investment firms cite this integration as a key driver of bullish views on Amazon's stock.

Recently, AWS unveiled an upgraded version of Amazon Bedrock, its comprehensive generative AI service, which lets enterprise customers access and freely combine pre-trained foundation models (FMs) from leading AI companies through a single API. The models come ready to use for core generative AI tasks ranging from search to content creation to drug discovery.
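As an illustration of that single-API design, here is a minimal sketch (not from the article) of how a developer might prepare a Bedrock `InvokeModel` request for the AWS `boto3` SDK. The model ID, prompt, and Anthropic-style message schema are illustrative assumptions; only the request construction runs locally, and the actual call (commented out) requires AWS credentials and model access.

```python
import json

def build_invoke_request(model_id: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build kwargs for a Bedrock InvokeModel call (Anthropic Messages schema assumed)."""
    return {
        "modelId": model_id,  # swapping this ID targets a different foundation model
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# The same request shape works across supported FMs; only modelId changes.
req = build_invoke_request("anthropic.claude-3-sonnet-20240229-v1:0",
                           "Draft a short product description.")

# To actually invoke (requires AWS credentials and access to the model):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(**req)
```

The point of the sketch is the "single API" claim: the calling code stays the same while `modelId` selects among the pre-trained models Bedrock exposes.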

Amazon Q, meanwhile, is a fully integrated productivity assistant from AWS designed for AI application developers. Through a chatbot interface, Amazon Q can access and manage AWS cloud resources, connecting systems, databases, underlying code, and cloud infrastructure. It helps enterprise customers quickly get answers to pressing technical questions, solve problems, generate content and underlying code, optimize task distribution, search enterprise information repositories, and draw on data and expertise from enterprise systems to respond to customer inquiries.

Meanwhile, US tech giant Microsoft and ChatGPT developer OpenAI are engaged in detailed negotiations over a mega-scale global data center project worth up to $100 billion. The project would include an AI supercomputer tentatively named "Stargate," the largest facility in a series of AI supercomputing infrastructure the two AI leaders plan to build over the next six years.

Undoubtedly, this behemoth of an AI supercomputer would be equipped with millions of AI chips, providing powerful computational support for OpenAI's future, more powerful GPT models and for AI products even more disruptive than ChatGPT and the Sora text-to-video model.

In addition to AI supercomputing infrastructure, Microsoft continues to strengthen its ecosystem for both B2B and B2C software application developers. Leveraging its significant stake in OpenAI, Microsoft last year launched the Azure OpenAI Service, built on the technology behind ChatGPT and effectively an AI-enhanced layer on top of Azure. The service lets users call OpenAI's latest and most powerful large language models, including the GPT-4 series, GPT-4 Turbo with Vision, GPT-3.5-Turbo, and the Embeddings model series, through REST APIs.
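To illustrate what such a REST call looks like, here is a minimal sketch; the resource name, deployment name, and api-version values below are placeholder assumptions for illustration, not details from the article, and sending the request (commented out) would require a real Azure OpenAI resource and key.

```python
# Sketch of an Azure OpenAI chat-completions request over REST.
# RESOURCE, DEPLOYMENT, and API_VERSION are placeholder assumptions.
RESOURCE = "my-resource"      # your Azure OpenAI resource name (assumed)
DEPLOYMENT = "gpt-4"          # your model deployment name (assumed)
API_VERSION = "2024-02-01"    # an example api-version string

# Azure routes requests by deployment name rather than raw model name.
url = (f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
       f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}")

payload = {"messages": [{"role": "user", "content": "Hello, Azure OpenAI!"}]}
headers = {"api-key": "<AZURE_OPENAI_KEY>", "Content-Type": "application/json"}

# Sending the request needs a valid key and resource:
# import requests
# r = requests.post(url, headers=headers, json=payload)
# print(r.json()["choices"][0]["message"]["content"])
```

The design choice worth noting is that authentication uses an `api-key` header and the model is selected by the deployment segment of the URL, so switching from GPT-3.5-Turbo to GPT-4 is a matter of pointing at a different deployment.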