Comment Re: It shows monopolies have already formed (Score 1) 54
Agree, it's a lot of maybe. My point is, it's not easy to attribute such a decision to one root cause (monopoly).
The distinction between growing and building is both irrelevant and false.
The idea that "growing" an invention is brand new is false: engineers have "grown" inventions for a long time. Engineered crops, crystals, self-healing plastics, nanobots, replacement human organs. The fact that AI is "grown" (more precisely, *trained*) doesn't make it any less engineered.
It's irrelevant because AI is not mysterious to those who design AI systems. They can and do solve problems that occur in LLMs, everything from efficiency (DeepSeek) to inaccurate renderings of historical figures (Google Gemini).
As for AI having "their own goals," I say that's pure hogwash and a sci-fi horror plot line, nothing else.
So of the 45% that had problems:
- 31% had attribution errors. Yeah, we know, AI is terrible at attributions.
- 20% had accuracy issues, including outdated information and hallucinated details. The proportion of these two types of errors is important. "Outdated information" is everywhere on the internet, AI or not. I wouldn't blame AI for that problem. Hallucinated details are a lot worse. What portion of the 20% was hallucinated? I'd say that something less than 20% having hallucinated details isn't as bad as I would have guessed.
The details matter.
Were they asking:
- What is today's most important news?
- What is news from my country?
Or were the questions more specific, like:
- What caused the AWS outage Monday?
- Whatever happened to the couple caught on the jumbotron at the Coldplay concert?
I would expect AI to do much better with the latter than the former.
Or maybe, just maybe, the "AI" teams *were* actually bloated.
Maybe they were staffed with people who claimed they knew how to build AI products but couldn't actually deliver.
Or maybe they figured out that the stuff they were promising to accomplish was mostly vapor.
Or maybe it was just politics in a big, bureaucratic organization.
Maybe it was a hot potato at GM, but Rivian seems to be doing just fine with its Electric Delivery Van, used by Amazon.
Remember how hospital beds were going to fold up with patients inside when the Y2K apocalypse hit?
Maybe the doomsayers were just a few years too early.
First we have to figure out what superintelligence *is*. It's just a made-up scary word, nothing more.
First, it has to be defined. What exactly is "super" intelligence? "Super" is nothing but an advertising prefix.
At least we know what guns are.
Banning "superintelligence" is more like trying to ban "superweapons."
There's one big difference. Everything in your list has a definition. We know what fire, the wheel, religion, art, and cryptography are.
There is no definition for "superintelligence." It's entirely made up. It's somehow "more" intelligent than regular AI, I suppose? Whatever, the word is only useful to marketers.
To most laypeople, "regular" software is just as mysterious and powerful as AI, but those of us who practice software engineering know full well what makes it work and what makes it fail. We know how to make it do what we want it to do.
AI isn't different from regular software in this regard. The goals of AI are *always* determined by people. It may seem magical to those who don't actually develop AI systems, but it's not magical at all. There's a reason why AI has gotten so much better over the last couple of years: human engineers still have to build the thing and give it its goals. As mysterious as it may seem to those who haven't engineered LLMs or other AI systems, the engineers themselves do understand and control their creations.
Well, that's what AI wants us to believe, anyway!
And what exactly is "it" again?
We speak of "superintelligence" as if it were a thing with an actual definition.
The prefix "super" is pretty much *always* an advertising term. And that means that it never means what people think it means.
Yes, indeed, it's possible to self-host in a resilient way. It's just highly unusual. Those who complain about the costs of cloud hosting often forget how difficult and expensive it is to get the equivalent disaster preparedness that you get with big cloud providers. As we have seen, though, even the big boys fail sometimes.
That's all good, deal with smaller companies. But that doesn't help you when the whole data center burns, as happened with the smaller EV1 data center in Houston a few years ago. https://www.datacenterknowledg... For situations like that, good customer service isn't going to get your system up and running right now. You need failover capabilities to data centers located in some other region. How many companies that self-host do that?
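To make the failover point concrete, here's a minimal sketch of the core decision a multi-region setup has to make: given an ordered list of regions and their current health, route to the first healthy one. The region names and the health map are made up for illustration; real deployments get this from DNS failover or load-balancer health checks rather than a hand-rolled function.

```python
# Hypothetical sketch of region failover selection. Region names and
# health status are invented; in practice health comes from automated
# health checks, not a static dict.

def pick_region(preferred, healthy):
    """Return the first healthy region in preference order, or None."""
    for region in preferred:
        if healthy.get(region, False):
            return region
    return None

# Example: the primary data center is down, so traffic fails over.
regions = ["us-central", "us-east", "eu-west"]
status = {"us-central": False, "us-east": True, "eu-west": True}
print(pick_region(regions, status))  # us-east
```

The point of self-hosting "resiliently" is that someone has to build and test this path before the fire, not after.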
The trouble with the rat-race is that even if you win, you're still a rat. -- Lily Tomlin