China AI

China's AI 'War of a Hundred Models' Heads For a Shakeout (reuters.com)

An anonymous reader shares a report: China's craze over generative artificial intelligence has triggered a flurry of product announcements from startups and tech giants on an almost daily basis, but investors are warning a shakeout is imminent as cost and profit pressures grow. The buzz in China, first ignited by the success of OpenAI's ChatGPT almost a year ago, has given rise to what a senior Tencent executive described this month as "war of a hundred models", as it and rivals from Baidu to Alibaba to Huawei promote their offerings. China now has at least 130 large language models, accounting for 40% of the global total and just behind the United States' 50% share, according to brokerage CLSA.

Additionally, companies have also announced dozens of "industry-specific LLMs" that link to their core model. However, investors and analysts say that most were yet to find viable business models, were too similar to each other and were now grappling with surging costs. Tensions between Beijing and Washington have also weighed on the sector, as U.S. dollar funds invest less in early-stage projects and difficulties obtaining AI chips made by the likes of Nvidia start to bite. "Only those with the strongest capabilities will survive," said Esme Pau, head of China internet and digital asset research at Macquarie Group, who expects consolidation and a price war as players compete for users.



Comments Filter:
  • So a race to see who will make Skynet and who will have control over it. The first LLM AI models, if I recall correctly, were biased, racist, and had a bunch of undesirable qualities. Let's see which way each country creates a bias
    • Re:Skynet (Score:4, Informative)

      by Bradac_55 ( 729235 ) on Friday September 22, 2023 @11:40PM (#63870713) Journal

      Hardly Skynet; it's not even close to a real AI model. More likely it's the next crypto meltdown.

    • Re:Skynet (Score:4, Insightful)

      by deek ( 22697 ) on Saturday September 23, 2023 @12:15AM (#63870747) Homepage Journal

      LLM is people.

      More appropriately, it's what people have said. These LLMs are trained on what people say on the internet.

      Is it any wonder why initial models were biased, racist, and had a bunch of undesirable qualities? You have to realise, what you're typing right now is also contributing to LLM training.

      So don't worry, be happy! Then perhaps future LLMs will also be happy. Or at least, produce an apparent happy response, because that's the input they've been trained on.

      • Welcome to planet motherfucker you shiny, happy people!

      • So don't worry, be happy! Then perhaps future LLMs will also be happy. Or at least, produce an apparent happy response, because that's the input they've been trained on.

        We have enough fake people thanks to Instagram. Even AI would agree we don't need more.

        Pretending to be happy is like pretending to be human. Quite fucking pointless really. Be honest instead. A planet will thank you for it.

  • by doug141 ( 863552 ) on Saturday September 23, 2023 @12:22AM (#63870753)

    Facebook reportedly shut down two AIs when they started communicating with each other in a language they made up. A fact check of the original viral story says parts of it are true (and Facebook didn't shut them down).
    https://www.usatoday.com/story... [usatoday.com]
    Regardless, as we build ever smarter AIs, let us remember that there are codes that are meant to look like harmless speech or images, and in a contest of intellects, the smarter can always fool the dumber. This "letting them talk" will be OK, until one day it isn't.

    • This "letting them talk" will be OK, until one day it isn't.

      There are times we prevent hardened criminals in prison from "talking" together for a reason, but that certainly isn't the default response in society. It's quite incredible that we assume two AIs communicating (even in code) are plotting the demise of humanity or their 'escape' from our chains. Always.

      The Government response to an alien force, would be to hold out a gun.

      The human response to an alien force, would be to hold out a hand.

      How jaded are you?

    • There's no indication they were communicating in any kind of language at all. The behavior is indistinguishable from two robots spouting gibberish back and forth in an endless receive/acknowledge loop. But that doesn't sound very interesting so someone had to sex it up a bit for the punters.

      You could hook up two different LLMs to have a conversation with each other. Over a long enough time they'd probably degrade into a similar state where mistakes and defects get magnified, repeated, and amplified over time.
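      Purely as an illustration of that amplification loop, here's a toy sketch. It assumes nothing about real LLM APIs: `noisy_echo` is a made-up stand-in that just drops words, standing in for whatever small defects a real model introduces each turn.

```python
import random

def noisy_echo(message: str, rng: random.Random) -> str:
    """Toy stand-in for an LLM reply: echoes the input but
    usually drops one word, so defects accumulate turn by turn.
    (Hypothetical; not a real model.)"""
    words = message.split()
    if words and rng.random() < 0.7:
        del words[rng.randrange(len(words))]
    return " ".join(words)

def converse(opening: str, turns: int, seed: int = 0) -> list[str]:
    """Bounce a message back and forth for `turns` rounds and
    record the transcript."""
    rng = random.Random(seed)
    transcript = [opening]
    msg = opening
    for _ in range(turns):
        msg = noisy_echo(msg, rng)
        transcript.append(msg)
    return transcript

log = converse("the quick brown fox jumps over the lazy dog", 20)
# Later turns are never longer than the opening; the message
# steadily degrades toward gibberish or silence.
```

      The point of the toy is that neither "model" is inventing a secret language; each is just compounding the other's errors, which is enough to produce transcripts that look alien after a few dozen turns.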
  • by VeryFluffyBunny ( 5037285 ) on Saturday September 23, 2023 @04:21AM (#63870883)
    I suspect this may be a good way to go. We've had ChatGPT, et al. sampled from everything & the result is kind of predictable: bland, nondescript, & even with skilful prompting, it produces a "non-voice", i.e. it has no distinguishable personality.

    While it's true that you can prompt LLMs to produce text in a certain style, they don't typically come across as all that authentic, more like a parody. What I reckon would be more convincing to human readers, i.e. "more authentic," would be if the LLMs sampled datasets narrowed down to specific ranges of genres, so you'd have, for example, a chat bot that sounds like a lawyer or a journalist, or a sports celebrity or a Fox News commentator or a lefty intellectual. Because the models would be sourced from a narrower range of genres of language, more like real people, I reckon they'd sound more like real people rather than "confident bullshitters."
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I don't think personality is the issue. It's that ChatGPT's results are often wrong, naive, misleading, or nonsensical, even when ChatGPT uses authoritative language.

  • First you hire the proverbial billion monkeys with a billion typewriters. Then you have to hire monkey editors because the job of sorting is just too big. But then you need monkey supervisors to manage the monkey editors, and then ultimately a human being to oversee the whole thing, whom you end up paying more than it would cost to just hire one decent human to have real ideas in the first place.

    AI is not innovation. It's the exact opposite. It's slave economics.
