China's AI 'War of a Hundred Models' Heads For a Shakeout (reuters.com)
An anonymous reader shares a report: China's craze over generative artificial intelligence has triggered a flurry of product announcements from startups and tech giants on an almost daily basis, but investors are warning that a shakeout is imminent as cost and profit pressures grow. The buzz in China, first ignited by the success of OpenAI's ChatGPT almost a year ago, has given rise to what a senior Tencent executive described this month as a "war of a hundred models," as it and rivals from Baidu to Alibaba to Huawei promote their offerings. China now has at least 130 large language models, accounting for 40% of the global total and just behind the United States' 50% share, according to brokerage CLSA.
Companies have also announced dozens of "industry-specific LLMs" that link to their core models. However, investors and analysts say most have yet to find viable business models, are too similar to one another, and are now grappling with surging costs. Tensions between Beijing and Washington have also weighed on the sector, as U.S. dollar funds invest less in early-stage projects and difficulties obtaining AI chips made by the likes of Nvidia start to bite. "Only those with the strongest capabilities will survive," said Esme Pau, head of China internet and digital asset research at Macquarie Group, who expects consolidation and a price war as players compete for users.
Skynet (Score:2)
Re:Skynet (Score:4, Informative)
Hardly Skynet; it's not even close to a real AI model. More likely the next crypto meltdown.
Re:Skynet (Score:4, Insightful)
LLM is people.
More appropriately, it's what people have said. These LLMs are trained on what people say on the internet.
Is it any wonder why initial models were biased, racist, and had a bunch of undesirable qualities? You have to realise, what you're typing right now is also contributing to LLM training.
So don't worry, be happy! Then perhaps future LLMs will also be happy. Or at least, produce an apparent happy response, because that's the input they've been trained on.
Re: (Score:2)
Welcome to planet motherfucker you shiny, happy people!
Re: (Score:2)
So don't worry, be happy! Then perhaps future LLMs will also be happy. Or at least, produce an apparent happy response, because that's the input they've been trained on.
We have enough fake people thanks to Instagram. Even AI would agree we don't need more.
Pretending to be happy is like pretending to be human. Quite fucking pointless really. Be honest instead. A planet will thank you for it.
The Dragon Smells Food. (Score:2)
Time to eat!!
AI's talking to each other (Score:4, Interesting)
Facebook reportedly shut down two AIs when they started communicating with each other in a language they made up. A fact check of the original viral story says parts of it are true (and that Facebook didn't shut them down).
https://www.usatoday.com/story... [usatoday.com]
Regardless, as we build ever smarter AIs, let us remember that there are codes that are meant to look like harmless speech or images, and in a contest of intellects, the smarter can always fool the dumber. This "letting them talk" will be OK, until one day it isn't.
Re: (Score:2)
This "letting them talk" will be OK, until one day it isn't.
There are times we prevent hardened criminals in prison from "talking" to each other for a reason, but that certainly isn't the default response in society. It's quite incredible that we assume two AIs communicating (even in code) are plotting the demise of humanity or their 'escape' from our chains. Always.
The Government response to an alien force, would be to hold out a gun.
The human response to an alien force, would be to hold out a hand.
How jaded are you?
Re: (Score:3)
You could hook up two different LLMs to have a conversation with each other. Over a long enough time they'd probably degrade into a similar state, where mistakes and defects get magnified, repeated, and amplified over time.
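A toy sketch of that degradation idea: below, the "models" are stand-in functions (no real LLM API is used; `noisy_model`, its corruption scheme, and the error rate are all assumptions for illustration) that pass a message back and forth, with a small chance of garbling each word per turn. Uncorrected errors accumulate across turns, which is the drift the comment describes.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def noisy_model(message: str, error_rate: float = 0.05) -> str:
    """Stand-in for one LLM turn: repeats the input, but occasionally
    replaces a word with gibberish -- a crude proxy for generation errors."""
    out = []
    for word in message.split():
        if random.random() < error_rate:
            # corrupt the word with a same-length random string
            out.append("".join(random.choice("abcdefg") for _ in range(len(word))))
        else:
            out.append(word)
    return " ".join(out)

def converse(seed_text: str, turns: int) -> str:
    """Alternate the message between two identical noisy 'models'."""
    msg = seed_text
    for _ in range(turns):
        msg = noisy_model(msg)  # each reply becomes the other side's input
    return msg

start = "the quick brown fox jumps over the lazy dog"
end = converse(start, 50)
```

With a 5% per-word error rate and no correction step, each word survives 50 turns with probability 0.95^50 (about 8%), so after a long exchange almost nothing of the original message remains.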
"industry-specific LLMs" (Score:4, Interesting)
While it's true that you can prompt LLMs to produce text in a certain style, they don't typically come across as all that authentic, more like a parody. What I reckon would be more convincing to human readers, i.e. "more authentic," would be if the LLMs sampled datasets narrowed down to specific ranges of genres, so you'd have, for example, a chat bot that sounds like a lawyer or a journalist, or a sports celebrity or a Fox News commentator or a lefty intellectual. Because the models would be sourced from a narrower range of genres of language, more like real people, I reckon they'd sound more like real people rather than "confident bullshitters."
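In data-pipeline terms, the comment's suggestion amounts to filtering the training corpus by genre before fine-tuning. A minimal sketch, assuming a labeled corpus of `(genre, text)` pairs (the labels and documents here are made up for illustration):

```python
# Hypothetical labeled corpus; real pipelines would use a large tagged dataset.
corpus = [
    ("legal", "The party of the first part hereby agrees to indemnify..."),
    ("sports", "What a finish in the bottom of the ninth!"),
    ("legal", "Notwithstanding the foregoing provisions, the lessee shall..."),
    ("news", "Officials confirmed the figures on Tuesday."),
]

def genre_subset(corpus, genre):
    """Keep only documents tagged with the target genre, so a model
    fine-tuned on the subset absorbs that register's voice rather than
    an average over the whole internet."""
    return [text for g, text in corpus if g == genre]

legal_docs = genre_subset(corpus, "legal")
```

The model trained on `legal_docs` would see only lawyer-flavored language, which is the "narrower range of genres" the comment argues would sound more like a real person.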
Re: (Score:2, Insightful)
I don't think personality is the issue. It's that ChatGPT's results are often wrong, naive, misleading, or nonsensical, even when ChatGPT uses authoritative language.
Re: "industry-specific LLMs" (Score:2)
Monkeys with typewriters. (Score:2)
AI is not innovation. It's the exact opposite. It's slave economics.