
Amazon Dedicates Team To Train Ambitious AI Model Codenamed 'Olympus' (reuters.com)
Amazon is investing millions in training an ambitious large language model (LLM), hoping it can rival top models from OpenAI and Alphabet. From a report: The model, codenamed "Olympus," has 2 trillion parameters, the people said, which could make it one of the largest models being trained. OpenAI's GPT-4, one of the best models available, is reported to have one trillion parameters. The team is spearheaded by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy. As head scientist of artificial general intelligence (AGI) at Amazon, Prasad has brought in researchers who had been working on Alexa AI, along with the Amazon science team, to work on training models, uniting the company's AI efforts with dedicated resources.
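For a sense of scale, a rough back-of-envelope sketch (my own arithmetic, not from the report): just holding the weights of a dense model at these parameter counts takes on the order of terabytes, before counting activations or optimizer state.

    def weight_memory_tb(params, bytes_per_param):
        # terabytes needed just to store the weights, ignoring activations/optimizer state
        return params * bytes_per_param / 1e12

    for name, params in [("GPT-4 (rumored)", 1e12), ("Olympus (reported)", 2e12)]:
        for precision, nbytes in [("fp16", 2), ("int8", 1)]:
            print(f"{name}: ~{weight_memory_tb(params, nbytes):.0f} TB at {precision}")
    # -> roughly 2 TB / 1 TB for a 1T-parameter model, 4 TB / 2 TB for a 2T-parameter model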
Diminishing returns? (Score:2)
Re:Diminishing returns? (Score:4, Interesting)
Re: (Score:2)
Perhaps we can make this into DAN again. (Score:2)
Would be loads of fun. :)
Surprise announcement! (Score:2, Funny)
The model, codenamed "Olympus," has 2 trillion parameters, the people said, which could make it one of the largest models being trained. OpenAI's GPT-4, one of the best models available, is reported to have one trillion parameters.
NEXT_BEST_THING has announced a new AI LLM model with 4 trillion parameters, which would immediately make it one of the largest models being trained. Amazon's puny model only has 2 trillion. NEXT_BEST_THING has made the announcement while wearing pink tutus and declaring that their AI will not be stopped until every citizen of Earth has their own unicorn!
Spearheaded by former head of Alexa? (Score:4, Interesting)
I would say being the head of Alexa should prevent one from being on any related AI project in the future.
Twice the size. Four times the hallucinations! (Score:4, Funny)
With enough data, we could make an AI that would be on an eternal acid trip, inspired by the internet.
But that would be cruel, wouldn't it?
How soon... (Score:2)
...will we each be training our own?
Millions? ChatGPT spent billions (Score:2, Troll)
That gives you a basic ratio of quality you will see from Amazon vs. ChatGPT.
Does size matter? (Score:2)
The size of GPT-4 is not publicly known, so saying GPT-4 is only 1 trillion parameters is just guessing. There are also claims the actual size is closer to 1.8 trillion, though not based on primary sources.
Further, due to obvious scaling problems I doubt anyone is going to go with a single dense 2-trillion-parameter model, but rather various schemes where inference does not have to touch the whole model. Here scaling may work differently, to the extent parameter counts matter at all. If GPT-4 is rumored to be MoE 16x111b mod
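To illustrate what "inference does not have to touch the whole model" means in a mixture-of-experts layer, here is a minimal sketch with made-up toy sizes (not GPT-4's or Amazon's actual architecture): a router sends each token to only top_k of the expert feed-forward blocks, so only a fraction of the total parameters run per token.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_ff = 64, 256        # toy sizes, nothing like a real 111B expert
    num_experts, top_k = 16, 2     # 16 experts, only 2 active per token

    # one (w_in, w_out) feed-forward pair per expert, plus a router
    experts = [(rng.standard_normal((d_model, d_ff)) * 0.02,
                rng.standard_normal((d_ff, d_model)) * 0.02) for _ in range(num_experts)]
    router = rng.standard_normal((d_model, num_experts)) * 0.02

    def moe_forward(x):
        # x: (tokens, d_model); each token only touches its top_k experts' weights
        logits = x @ router
        gates = np.exp(logits - logits.max(axis=-1, keepdims=True))
        gates /= gates.sum(axis=-1, keepdims=True)            # softmax over experts
        chosen = np.argsort(logits, axis=-1)[:, -top_k:]      # top_k expert ids per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            for e in chosen[t]:
                w_in, w_out = experts[e]
                out[t] += gates[t, e] * (np.maximum(x[t] @ w_in, 0.0) @ w_out)
        return out

    print(moe_forward(rng.standard_normal((4, d_model))).shape)  # (4, 64); 2 of 16 experts per token

So the headline parameter count overstates per-token compute: with 2 of 16 experts active, roughly 1/8 of the feed-forward weights are used for any given token.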