Meta would build its own massive infrastructure to train its AI models

Meta wants to catch up with OpenAI in the race for language models. While it recently introduced Llama 2, the model is not yet at the level of OpenAI’s GPT-4. Regardless, Mark Zuckerberg’s company plans to complete a new model early next year that it hopes will be at least as capable as its direct competitor. The Wall Street Journal reports that Meta is acquiring large numbers of specialized NVIDIA graphics cards to build a massive data center for this training.

Microsoft’s data center. Image: Microsoft.

The news might come as a surprise. Microsoft already has its own infrastructure, which cost millions of dollars and was used notably to train ChatGPT and the first versions of Llama. The two companies recently teamed up for the launch of Llama 2, so Microsoft’s hardware could have been expected to serve future projects as well. However, people familiar with the matter say Meta wants to train its AI on its own infrastructure this time. The model would then be distributed as open source.

With this new model, Meta aims to “develop AI tools capable of mimicking human expressions.” Chatbots would eventually be offered in WhatsApp and Instagram: Mark Zuckerberg has mentioned the development of “AI characters” able to “help people in different ways”. A first version of the concept could launch as early as this month.

Even if Meta does its best to catch up with OpenAI, it may not be enough. GPT-4 was released in March, and Meta’s alternative is not due out until almost a year later. Google is also planning a new model called Gemini, which could see the light of day soon. The entire industry is in the race: rumors suggest Apple is spending millions of dollars a day to advance its AI technologies, and Amazon is working on a new, smarter version of Alexa.