Amazon Announces 'Olympus' LLM to Compete With OpenAI and Google (reuters.com)

Amazon "is investing millions in training an ambitious large language model," reports Reuters, "hoping it could rival top models from OpenAI and Alphabet, two people familiar with the matter told Reuters." The model, codenamed as "Olympus", has 2 trillion parameters, the people said, which could make it one of the largest models being trained. OpenAI's GPT-4 model, one of the best models available, is reported to have one trillion parameters...

The team is spearheaded by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy... Amazon believes having homegrown models could make its offerings more attractive on AWS, where enterprise clients want to access top-performing models, the people familiar with the matter said, adding there is no specific timeline for releasing the new model.

"While the parameter count doesn't automatically mean Olympus will outperform GPT-4, it's probably a good bet that it will, at minimum, be very competitive with its rival from OpenAI," argues a financial writer at the Motley Fool — as well as Googles nascent AI projects. Amazon could have a key advantage over its competition, one that CEO Andy Jassy alluded to in the company's third-quarter earnings call. Jassy said, "Customers want to bring the models to their data, not the other way around. And much of that data resides in AWS [Amazon Web Services] as the clear market segment leader in cloud infrastructure...."

Amazon will likely also leverage Olympus in other ways. For example, the company could make its CodeWhisperer generative AI coding companion more powerful. Jassy noted in the Q3 call that all of Amazon's "significant businesses are working on generative AI applications to transform their customer experiences." Olympus could make those initiatives even more transformative.

The Motley Fool article also points out that Amazon's profits more than tripled in the third quarter of 2023 compared with a year earlier.

And Amazon's stock price has already jumped more than 40% in 2023.
  • RIP Amazon AI (Score:4, Insightful)

    by Cyberax ( 705495 ) on Sunday November 19, 2023 @04:59PM (#64016785)
    Alexa is _the_ most ineptly executed Amazon product, maybe excepting the Fire Phone. The Alexa app is a mess, and they _removed_ the web version (morons). They failed to deliver multilingual support after promising it 6 years ago. The Echo hardware is dated, and the smart home integration just sucks.

    At least we won't have to worry about the AI takeover by the Amazon AI.
    • They said, "shit we have no idea what we are doing and can't seem to hire anyone that can figure it out, better put AI to work on this one."

      It is like tech companies playing chicken, and they are all going full speed ahead off the cliff. I'm not sure whether to be entertained or horrified.

  • Amazon jumps on the enshittification bandwagon.

  • Jeff Bezos' company decided to name their LLM after a Japanese optics company that holds a 70% market share in the endoscope business. What are Amazon trying to say about their clients?
  • by thegarbz ( 1787294 ) on Sunday November 19, 2023 @07:10PM (#64016939)

    Will the extra trillion parameters stop it from hallucinating basic mistakes? The other day I asked (for fun) an AI image generator to generate a logo for a brewery called Hopps.

    It drew a nice logo, complete with the word Hoppps [sic] in it and the word Brewerr [sic] in it. This isn't a case of AI not knowing how many knuckles are in a finger; it's a case of it misspelling words given to it in the brief.

    I don't think the number of parameters is the problem.

    • Will the extra trillion parameters stop it from hallucinating basic mistakes?

      The parameter count of GPT-4 is not public knowledge. Most of the guessing and "leaks" are in the 1.8T range. In any case the answer is no.

      The other day I asked (for fun) an AI image generator to generate a logo for a brewery called Hopps.
      It drew a nice logo, complete with the word Hoppps [sic] in it and the word Brewerr [sic] in it. This isn't a case of AI not knowing how many knuckles are in a finger; it's a case of it misspelling words given to it in the brief.

      AI image generators are not LLMs and don't even pretend to have any high-level understanding of the elements in the scenes they render. It's a bit like an artistic rendition of a rocket motor or circuit board. The artist may superficially draw something that looks like a motor, complete with piping and various doohickeys, yet they have no understanding of what the thing actually does.

  • Amazon owns the entire site purchasing and browsing history of all of its customers, going back a couple of decades. Their review comments and interactions with sellers. Their payment history, returns, music and movie viewing preferences. In total it is a titanic amount of detailed data about almost everyone.

    Call me cynical, but it sure does seem like they will feed, or already have fed, all of that into their LLM. Or maybe not into the model codenamed "Olympus", which will probably offer some form of public access.

  • Presumably they're naming it for the Greek myth and not the Japanese company that is now best known for a ¥376 billion scandal from 2011.

    Since, y'know, Amazon is squeaky clean and LLMs aren't a blight on an information economy that values integrity and accuracy of information. R-right?

  • The name seems an odd choice. In Greek mythology, humans who tried to climb Mount Olympus were punished for their hubris.

    Aside from the poor name choice, the other choice that looks pretty iffy is the size. Many, if not most, of the recent papers on the scaling of LLMs suggest that making them bigger does not necessarily make them better and often makes them worse. Researchers like Emily Bender and Gary Marcus have been saying this for a while; last year DeepMind demonstrated that training smaller models on more data can match or outperform much larger ones [arxiv.org] (a rough sketch of that arithmetic is below).
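    A hedged back-of-the-envelope sketch of that scaling argument, in Python. The assumptions: the "roughly 20 training tokens per parameter" heuristic and the C ≈ 6·N·D training-FLOPs approximation are the rules of thumb popularized by the Chinchilla work, and the 2-trillion-parameter figure is only the rumor reported in the article, so the output is illustrative rather than anything Amazon has confirmed.

      # chinchilla_sketch.py -- illustrative only; the heuristics and the Olympus size are assumptions
      def chinchilla_estimate(params: float, tokens_per_param: float = 20.0):
          """Return (compute-optimal training tokens, approximate training FLOPs) for a model size."""
          tokens = params * tokens_per_param   # Chinchilla heuristic: ~20 tokens per parameter
          flops = 6.0 * params * tokens        # common approximation: C ~ 6 * N * D training FLOPs
          return tokens, flops

      tokens, flops = chinchilla_estimate(2e12)  # 2 trillion parameters, as rumored for Olympus
      print(f"~{tokens:.1e} training tokens, ~{flops:.1e} FLOPs")
      # prints: ~4.0e+13 training tokens, ~4.8e+26 FLOPs

    The point of the arithmetic: a bigger parameter count only pays off if the training-data and compute budgets scale with it, which is exactly what the scaling-law results keep showing.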

  • Quick delivery. Looks good, haven't tried it yet.
