
Comment Re:State of the art (Score 1) 25

Simple: There is no rational reason to think LLMs can do general intelligence at all. At the same time, it clearly is an extraordinary claim and so would need extraordinary proof before anyone should take it seriously. There is not even simple proof for that claim, so it is clearly complete bullshit at this time. Also, LLMs are mature tech, and the only real improvement over, say, IBM Watson (15 years old) is a better natural-language interface and more training data. Hence it is not rational to expect great improvements either.

Oh, and the physicalist approach you push there is not Science, it is pure belief. We do not know how human minds work when they are actually using General Intelligence. Which, to be fair, is somewhere between "rarely" and "never" for many people.

Comment Re:State of the art (Score 2) 25

Well, besides the obvious fact that Altman stands to profit massively from a lie here, I really doubt he knows as much about AGI as I do. He does not strike me as nearly as smart or educated as I am, and he decidedly has not followed AI research for something like 35 years, unlike me. But he does not need to understand what he is promising, as he is just pushing a scam. He just needs to know what people want to hear.

Why he pushes AGI as a (fake) goal is quite clear: his hype AI is quite pathetic and cannot be fixed. So to keep the hype going a bit longer (and make a few more billions), he needs to claim it is just a stepping-stone to something even greater. This uses elements of the "Big Lie" approach (https://en.wikipedia.org/wiki/Big_lie) and plays on the dream of robotic slaves and robotic "friends" that many people apparently entertain. The same baseless claims were used, incidentally, in the last few AI hypes. Yes, I have seen a few. They always come with the same empty promises. So what Altman does is not even original.

Comment Re:Oil companies are scum (Score 0) 68

True. Although "church" is a ridiculous example: churches just have good lies that appeal to many people, but are quite obviously all about accumulating power and controlling what people think. Same for governments.

What makes the oil business special is that it is evil at every size and more openly evil, hence setting a bad example on top of everything else.

Comment Re:Every Day Someone Discovers the Pareto Principl (Score 3, Interesting) 71

You should always go for "good", because that typically means "good value for money" and typically does not include short-term "solutions" that cause long-term problems. "Good enough" is the same as "good" if done right, i.e. when looking at all angles. But fake "good enough", where longer-term effects and often also risks and side-effects are ignored, is actually not "good enough" and not "good" either.

Comment Re:Perfect is not wanted (Score 2) 71

That is also not really the question. The question is "good enough" vs. "good". "Good enough" will often be done in the stupid way, i.e. with short-term thinking only. That way you accumulate technical (and other) debt. If you do not clean that up pretty fast, you end up with a house of cards that will eventually collapse. "Good", on the other hand, typically does not have that problem. Hence selecting "good" (when you can get it) actually solves problems long-term, instead of only for a limited time while making things worse in the long run.

Comment "It is thinking". No, not always. (Score 2) 71

For humans with limited memory, it is. For AI of the LLM variant, with much larger memory, it is not. It is just using statistical correlations without any insight or understanding. That needs to be understood, or people will think that LLMs can do things they decidedly cannot do.
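
To make "statistical correlations" concrete, here is a deliberately minimal sketch: a toy bigram model (invented for illustration; real LLMs use learned transformer weights, but the in-principle objective of predicting the next token from past statistics is the same) that produces plausible continuations with no representation of meaning anywhere:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": pure next-word statistics, no semantics.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Most frequent observed continuation, or None for unseen words.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- the most frequent continuation
print(predict_next("cat"))  # "sat" or "ate" -- a pure frequency tie
```

The model "continues" text that is statistically typical of its corpus while having no notion of what a cat is, which is exactly the distinction made above.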

Comment Re:The enshittification of health care (Score 1) 104

I believe that AI in medicine means the end of progress in medicine UNLESS AI can genuinely make leaps of intuitive discovery, which I doubt. For doing routine stuff, sure, but otherwise? Nah.

That is pretty much what I get from following AI research for > 30 years now and from trying LLMs out on some exams I gave to students. LLMs are pretty good at looking stuff up (although they sometimes invent things), but they have zero insight and zero understanding (they cannot even "understand" slightly non-standard descriptions of things), hence they cannot "discover" anything, except maybe when it is just a matter of looking through masses of data with entirely conventional means. That will bring a bit of progress, as humans cannot handle mass-scale data very well, but after that, that is it. Any discoveries that require understanding or intuition will be human-only for the foreseeable future.
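
As an illustration of the "masses of data with entirely conventional means" point, a hedged sketch (the data and the planted correlation are invented) of the kind of brute-force statistical screening meant here, where the machine only ranks candidates and a human still has to decide what, if anything, a flagged correlation means:

```python
import numpy as np

# Screening mass data with entirely conventional statistics:
# compute all pairwise correlations and flag the strongest one.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 50))  # 1000 samples, 50 variables
data[:, 7] = 0.9 * data[:, 3] + rng.normal(scale=0.1, size=1000)  # planted link

corr = np.corrcoef(data, rowvar=False)  # 50x50 correlation matrix
np.fill_diagonal(corr, 0.0)             # ignore trivial self-correlation

# Flag the variable pair a human should look at more closely.
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest correlation: variables {i} and {j}, r = {corr[i, j]:.2f}")
```

No insight is involved at any step; it merely scales to data volumes no human can read, which is where this kind of approach can still bring a bit of progress.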

I believe that western civilization peaked on 20 July 1969 and it's been in decline ever since, alas.

Probably.
