According to a February 11th report from Fast Technology, Meta's second-generation in-house AI chip, Artemis, officially enters production this year.
The new chip is set to be utilized for inference tasks in data centers, working in tandem with GPUs from suppliers like NVIDIA.
A spokesperson for Meta had previously stated: "We believe that our independently developed accelerators will complement existing GPUs on the market, providing the best balance of performance and efficiency for Meta's tasks."
In addition to running recommendation models more efficiently, Meta also needs computing power for its own generative AI applications and for training Llama 3, its open-source competitor to GPT-4.
Meta CEO Mark Zuckerberg previously announced plans to deploy 350,000 NVIDIA H100 GPUs by the end of this year, bringing Meta's total GPU count to around 600,000 for running and training AI systems.
Beyond Meta, OpenAI and Microsoft are also developing their own proprietary AI chips and more efficient models to combat spiraling costs.