November 7 (Portal) –
Amazon (AMZN.O) is investing millions to train an ambitious large language model (LLM), hoping it can compete with the top models from OpenAI and Alphabet (GOOGL.O), two people familiar with the matter told Portal.
The model, codenamed “Olympus,” has 2 trillion parameters, the people said, which could make it one of the largest models ever trained. OpenAI’s GPT-4, one of the best models available, is reported to have one trillion parameters.
The people spoke on condition of anonymity because details of the project were not yet public.
Amazon declined to comment. The Information reported on the project’s name earlier on Tuesday.
The team is headed by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy. As Amazon’s lead artificial general intelligence (AGI) scientist, Prasad brought in researchers who had worked on Alexa AI, along with members of Amazon’s science team, to work on training the models, unifying the company’s AI efforts with dedicated resources.
Amazon has already trained smaller models like Titan. It has also partnered with AI model startups such as Anthropic and AI21 Labs, offering their models to Amazon Web Services (AWS) users.
Amazon believes homegrown models could make its offerings more attractive on AWS, where enterprise customers want access to the best-performing models, people familiar with the matter said, adding that there was no specific timeline for releasing the new model.
LLMs, which learn from massive data sets to generate human-like responses, are the technology underlying generative AI tools.
Training larger AI models is more expensive because of the computing power required. On an earnings call in April, Amazon executives said the company would increase its investment in LLMs and generative AI while cutting back on fulfillment and transportation in its retail business.
Reporting by Krystal Hu in San Francisco. Edited by Gerry Doyle