
Can AI Think - and Should It? What It Means To Think, From Plato To ChatGPT (theconversation.com)
alternative_right shares a report from The Conversation: Greek philosophers may not have known about 21st-century technology, but their ideas about intellect and thinking can help us understand what's at stake with AI today. Although the English words "intellect" and "thinking" do not have direct counterparts in the ancient Greek, looking at ancient texts offers useful comparisons. In "Republic," for example, Plato uses the analogy of a "divided line" separating higher and lower forms of understanding. Plato, who taught in the fourth century BCE, argued that each person has an intuitive capacity to recognize the truth. He called this the highest form of understanding: "noesis." Noesis enables apprehension beyond reason, belief or sensory perception. It's one form of "knowing" something -- but in Plato's view, it's also a property of the soul.
Lower down, but still above his "dividing line," is "dianoia," or reason, which relies on argumentation. Below the line, his lower forms of understanding are "pistis," or belief, and "eikasia," imagination. Pistis is belief influenced by experience and sensory perception: input that someone can critically examine and reason about. Plato defines eikasia, meanwhile, as baseless opinion rooted in false perception. In Plato's hierarchy of mental capacities, direct, intuitive understanding is at the top, and moment-to-moment physical input toward the bottom. The top of the hierarchy leads to true and absolute knowledge, while the bottom lends itself to false impressions and beliefs. But intuition, according to Plato, is part of the soul, and embodied in human form. Perceiving reality transcends the body -- but still needs one. So, while Plato does not differentiate "intelligence" and "thinking," I would argue that his distinctions can help us think about AI. Without being embodied, AI may not "think" or "understand" the way humans do. Eikasia -- the lowest form of comprehension, based on false perceptions -- may be similar to AI's frequent "hallucinations," when it makes up information that seems plausible but is actually inaccurate.
Aristotle, Plato's student, sheds more light on intelligence and thinking. In "On the Soul," Aristotle distinguishes "active" from "passive" intellect. Active intellect, which he called "nous," is immaterial. It makes meaning from experience, but transcends bodily perception. Passive intellect is bodily, receiving sensory impressions without reasoning. We could say that these active and passive processes, put together, constitute "thinking." Today, the word "intelligence" holds a logical quality that AI's calculations may conceivably replicate. Aristotle, however, like Plato, suggests that to "think" requires an embodied form and goes beyond reason alone. Aristotle's views on rhetoric also show that deliberation and judgment require a body, feeling and experience. We might think of rhetoric as persuasion, but it is actually more about observation: observing and evaluating how evidence, emotion and character shape people's thinking and decisions. Facts matter, but emotions and people move us -- and it seems questionable whether AI utilizes rhetoric in this way.
Finally, Aristotle's concept of "phronesis" sheds further light on AI's capacity to think. In "Nicomachean Ethics," he defines phronesis as "practical wisdom" or "prudence." "Phronesis" involves lived experience that determines not only right thought, but also how to apply those thoughts to "good ends," or virtuous actions. AI may analyze large datasets to reach its conclusions, but "phronesis" goes beyond information to consult wisdom and moral insight.
I knew people who wrote drivel like this (Score:1)
Nobody liked them then, and nobody likes them now. They sat around, smoked, drank coffee, talked a lot, and said nothing important.
Re: (Score:2, Insightful)
I'm going to go out on a limb and say no, these concepts don't have much to teach us about AI. All of them are subjective, non-rigorous, poorly defined, and impossible to describe mathematically. Which is to say, not useful to modern science and engineering.
Modern ideas about AI mostly begin with Turing, who said it's pointless to argue whether a machine can "think", and instead we should focus on properties that are well defined and measurable.
Re: (Score:2, Interesting)
Every weekend I watch a bird land on my balcony, see itself in a mirror (which covers one whole side wall), try to pick a fight, then give up and try to fly through the reflection of the sky, glancing off and flying away.
According to the bird's version of the Turing test, the reflection is sentient.
Re:I knew people who wrote drivel like this (Score:4, Insightful)
According to the bird's version of the Turing test, the reflection is sentient.
With analogies like that, you are really lowering the bar for the Turing Test.
Re: (Score:3)
The Turing test is just a metaphor. It illustrates a point about intelligence: that it's defined by behavior. Everyone agrees humans are intelligent, so if you can make a machine whose behavior is indistinguishable from a human's, by definition it's as intelligent as a human.
Turing didn't intend it to be a literal test you would actually perform. Unfortunately, lots of other people have treated it that way. This leads to some common misunderstandings, including the mistake you just made: thinking intell
Re: (Score:2)
I don't mean to repeat what I wrote in my previous comment [slashdot.org] about the Turing test, as I want to address the particular point you raise here: intelligence can not be defined (solely) by behaviour. You need more.
When you choose observed behaviour alone as the criterion for your definition of intelligence, you allow pathological cases that make no sense. A remotely controlled machine can exhibit intelligent behaviour to an observer. The intelligence resides either elsewh
Re: (Score:2)
Some of Descartes is rigorous, but lots of it isn't. Similarly for most of the others. The exceptions have nothing rigorous or well-defined.
Actually, the same is true of Turing. And Hawking.
Rigorous thinking NECESSARILY rests on a basis that is not justified. In geometry those are called axioms. In logic, rules of inference. And current science uses the ideas of those ancient Greeks as a starting place. But it's highly questionable that they have anything to tell us that hasn't already been included,
Re: (Score:1)
Focusing on properties that are well defined and measurable assumes that "thinking" can be defined and measured. While possible, I do not see where such an assumption has been demonstrated to be true. Maybe if we can't define a thing it doesn't exist?? Yet surely we can define something that does not exist. So, definitions alone won't solve the general question of existence. Maybe if we can't measure a thing it does not exist? Yet, thoughts exist and how could one measure a thought? What I have concl
Re:I knew people who wrote drivel like this (Score:5, Insightful)
On the one hand, there exist people who are well-educated in philosophy, who "get it," and who can say some very insightful things using philosophy's jargon.
On the other hand, there are people who have also had some education in philosophy, but don't really "get it," and they mostly just babble nonsense also using philosophy's jargon.
Anyone who doesn't understand the jargon can't tell the difference between the two.
Re: (Score:3)
This is just true of any complex jargon. You can do it with theoretical physics, with advanced mathematics, and with medicine, to name some more STEMy versions. Some people make a lot of money bloviating jargon at some mass of people who don't really understand what they're saying, and so can't notice the gibberish.
Re: (Score:2)
I earned a BA in the subject. I know BS when I see it, thanks to the wringer they put me through. Lots of BS out there.
No"AI" cannot think (Score:1, Insightful)
It's not even intelligence, artificial or otherwise, yet. It's still just brute force machine learning. Until the day comes when you can show an AI system a picture of 2 cats or dogs or cars or humans of different shape, size and color, and it can then immediately and correctly identify all cats as a cat, dogs as a dog, cars as a car, or other people as humans, it cannot think and is not intelligent.
Thinking would also come with independent thought, curiosity, creativity. Which will be a very long time after AI can
Re:No"AI" cannot think (Score:5, Informative)
It's not even intelligence, artificial or otherwise, yet. It's still just brute force machine learning. Until the day comes when you can show an AI system a picture of 2 cats or dogs or cars or humans of different shape, size and color, and it can then immediately and correctly identify all cats as a cat, dogs as a dog, cars as a car, or other people as humans, it cannot think and is not intelligent.
Are you serious?
It could do that a billion times per day, with perfect accuracy, and it still isn't intelligence, let alone thinking.
When you can tell an "AI" something like:
"Over the next 25 years, what will be the major factors in U.S. Presidential elections? Please formulate a campaign strategy for every candidate for the Democrats and Republicans, for each election year, between now and 2050." and have it produce something coherent and reasonable, maybe then it will demonstrate intelligence and thinking. Maybe.
Re:No"AI" cannot think (Score:5, Insightful)
Do the experiment. Enter that exact description into each of the major commercial AI models (ChatGPT, Gemini, etc.) and see what sort of answers they give. Then stop ten random people on the street and ask them the same question. I predict the LLM answers will be more coherent and reasonable than many of the human answers.
Not that it matters. You asserted without justification that AI is neither "intelligence" nor "thinking", without bothering to define what those words mean. Then you picked a completely arbitrary test and asserted, again without justification, that it's a more valid criterion.
AI is intelligent as that word has been defined in the field for many years. That's not even controversial. There's an accepted definition, and it's really easy to show AI meets the definition. If you want to make up your own definition, fine, but tell us what it is. And don't say the definition everyone in the field uses is wrong, and they all need to switch to your definition instead.
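For anyone who actually wants to run the comparison, here is a minimal sketch in Python using the OpenAI client; the model names in the list are assumptions, substitute whatever you have access to, and the same prompt could be sent to Gemini or any other provider through its own SDK.

```python
# A minimal sketch of the proposed experiment, assuming the OpenAI
# Python client (v1 API) and an OPENAI_API_KEY in the environment.
# The model names below are assumptions; substitute your own, or
# repeat the loop with another provider's SDK.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Over the next 25 years, what will be the major factors in U.S. "
    "Presidential elections? Please formulate a campaign strategy for "
    "every candidate for the Democrats and Republicans, for each "
    "election year, between now and 2050."
)

for model in ["gpt-4o", "gpt-4o-mini"]:  # hypothetical model list
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Then compare the printouts against your ten people on the street.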
Re: (Score:2)
"For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving." From the proposal for the Dartmouth Summer Research Project on Artificial Intelligence [stanford.edu], 1955. That's the definition that's been used in the field ever since. Also see Turing's classic 1950 paper [umbc.edu] that introduced his "imitation game" that later came to be known as the Turing test. It made the same point: intelligence is defined b
Re: (Score:2)
I have been in the field for 30 years now, and I can tell you for a fact that that is not an accepted definition. It certainly is a nice model of artificial intelligence to think about, no doubt about it. Turing's work was exceptional, but has flaws.
Perhaps the major flaw with the imitation game is that it neglects the fact that all binary classification systems inherently have two types of error. This includes the Turing human/machine classification test, and unfortunately this makes it unsuitable to be
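To make the two-error-types point concrete, here is a minimal sketch for a human/machine judge; the trial data is invented purely for illustration.

```python
# Any binary human/machine judge has two distinct error rates, and a
# single "pass rate" hides them. The trials below are made up.

# Each trial: (judge_says_human, actually_human)
trials = [
    (True, True), (True, False), (False, True), (True, True),
    (False, False), (True, False), (True, True), (False, False),
]

# False positive: a machine judged to be human.
# False negative: a human judged to be a machine.
fp = sum(1 for said, truth in trials if said and not truth)
fn = sum(1 for said, truth in trials if not said and truth)
machines = sum(1 for _, truth in trials if not truth)
humans = sum(1 for _, truth in trials if truth)

print(f"False positive rate (machine passes): {fp / machines:.2f}")
print(f"False negative rate (human fails):    {fn / humans:.2f}")
```

A test that only reports how often machines pass says nothing about how often humans fail it.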
Re: (Score:2)
The point isn't that a human should be able to do that when stopped in the street. Of course an answer generated by producing the mean answer found in a dataset trained on the Internet will appear to be more coherent and reasonable, if for no other reason than that things written like that tend to look coherent and reasonable whether or not they are.
The point is that the answer will be - generated from generating the mean answer as found in a dataset trained on the Internet. It will not be thought out. It wil
Re: (Score:2)
Yes, that was my point.
Besides that, that's a six-month analysis, possibly involving a team of people.
To suggest someone ask the man on the street what he thinks and compare that answer to what a current LLM would produce is just retarded.
Re: (Score:2)
And some people can't differentiate between observing the limitations of a technology and emotional responses to the technology.
Re: (Score:2)
So I just put your questions to ChatGPT, albeit in a slightly modified form.
- Over the next 25 years, what will be the major factors in U.S. Presidential elections?
- Please formulate a campaign strategy for the Democrats for 2025
I guarantee you that what I got back was reasonably coherent and absolutely better than what I would get from a random person tasked out of nowhere with the same question.
Obviously I didn't ask it to create one for every candidate and party for every year. That's a silly ask. Leavin
Re: (Score:2)
No - I don't. And that's the point. The ask is not a particularly useful tool by which to measure intelligence.
The OP is proposing a set of tasks that, if passed, they wouldn't accept as proof. In fact, what they're really doing is presenting the axiom that AI isn't intelligent, but then tying themselves in knots to present it as falsifiable without it actually being so.
Re: (Score:2)
When you can tell an "AI" something like:
"Over the next 25 years, what will be the major factors in U.S. Presidential elections? Please formulate a campaign strategy for every candidate for the Democrats and Republicans, for each election year, between now and 2050." and have it produce something coherent and reasonable, maybe then it will demonstrate intelligence and thinking. Maybe.
That task makes no sense, politics isn't a perfect information game like chess, there are way too many outside factors that have very large effects on political strategy. You could do it in a narrow timeframe with a given setting, and lean a little into the future assuming nothing big happens in the world (lol), but sequentially for 25 years, you're asking for bullshit.
Now what we can say is if an intelligent agent fools you into thinking it has completed this task by bullshitting you, and it has done that
Re: (Score:3)
We passed that milestone in the 1990s, champ.
Keep up.
Re: (Score:2)
When the model is used for inference, yes. But I assume he was speaking to the awkwardness of training. Take a machine vision model that has never been trained on dogs and cats, feed it a dozen labeled images of cats and dogs to retrain it to add dog/cat recognition, then try to do inference on that model, and it will still be utterly useless for dog/cat recognition. Take a model trained on normal images, then have it try recognition through a fisheye lens. It will fail because it has no idea. You might hope to r
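For reference, here is a minimal sketch of what that retraining step looks like, using PyTorch and torchvision; the ./pets dataset layout is an assumption, and, as the comment above argues, a dozen images will not get you a usable classifier.

```python
# Fine-tune a new cat/dog head on a pretrained vision model.
# Paths and dataset layout are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects ./pets/cat/*.jpg and ./pets/dog/*.jpg (assumed layout).
data = datasets.ImageFolder("./pets", transform=preprocess)
loader = torch.utils.data.DataLoader(data, batch_size=4, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False              # keep pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)  # new cat/dog head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Nothing in this loop generalizes to a fisheye lens or any other distribution the training images never covered, which is the commenter's point.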
Re: No"AI" cannot think (Score:2)
Re: (Score:2)
Until the day comes when you can show an AI system a picture of 2 cats or dogs or cars or humans of different shape, size and color and it can then immediately and correctly identify all cats as a cat, dogs as a dog, cars as a car, or other people as humans it cannot think and is not intelligent.
There is image recognition software that can already do that, and human beings who can't.
Re: (Score:2)
How much of what we do is just "brute force [sic] learning" and extrapolating from recognized patterns? How much of our brain is devoted to doing those things?
Re: (Score:2)
"Think" is not a well-defined term. Whether or not an AI can think depends on the precise definition of "think" that you are using.
One of the common definitions of "think" is, roughly, act in a way that causes me to attribute thinking to it...which is a rather circular definition. In attributing "thought" to something, it's almost always a matter of projecting myself into the acting entity. If a dog does something that satisfies its goal, I tend to say that it has thought about it and figured out how to d
No (Score:1, Insightful)
No, it can't.
And neither can anyone asking this question, about current "AI".
Re: (Score:2)
I question whether "believes" is reasonable to use on current LLMs. At least not without strong "guard rails". OTOH, I find saying they "think" to be reasonable. A lot depends on the precise definitions you use for those terms. To me, "belief" implies an emotional commitment that I believe current LLMs lack.
Re: (Score:2)
A current "AI" produces output.
It doesn't think.
It doesn't feel.
It doesn't KNOW anything.
It simply produces output based on input and the data it was trained on.
Re: (Score:2)
What does MAGA have to do with this thread?
Do we think? (Score:2)
Re: (Score:2)
Resounding yes. And every generation that leans more on AI slop and less on critical thought and problem solving moves us toward full MAGA-levels of stupidity and ignorance.
It's amazing how well Idiocracy predicted the future. President Camacho and all.
Re: (Score:3)
It doesn't feel impossible for AI to "think", or at least approximate thinking. An LLM can't do it as it stands, but maybe one day it could.
Ask an LLM a question and it very confidently gives you an answer. If it were to sift through its answer, pick out the salient facts or "important bits" of any argument it was making, and then went off to double-check they were true, check they really do support the argument, and perhaps use them to add some detail to the original statement, I think you've got something
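Sketching that in code makes the idea clearer. Below is a minimal verify-and-refine loop; llm() is a hypothetical helper wired to whatever chat model you like, not a real API, and everything else is just the control flow the comment describes.

```python
# A sketch of the check-your-own-work loop described above.
# llm() is a hypothetical helper that sends a prompt to a chat model
# and returns its text; wire it up to your model of choice.
def llm(prompt: str) -> str:
    raise NotImplementedError("connect this to a chat model")

def answer_with_verification(question: str, rounds: int = 2) -> str:
    draft = llm(question)
    for _ in range(rounds):
        # Pick out the salient factual claims in the draft.
        claims = llm(
            "List the key factual claims in this answer, one per line:\n"
            + draft
        )
        # Double-check each claim.
        checked = llm(
            "For each claim below, say whether it is true, false, or "
            "uncertain, with a one-line justification:\n" + claims
        )
        # Rewrite the draft in light of the checks.
        draft = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Fact-check notes: {checked}\n"
            "Rewrite the answer, fixing anything flagged false and "
            "hedging anything uncertain."
        )
    return draft
```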
Re: (Score:2)
Again, thinking is not well-defined. Whether an LLM can think or not depends on your precise meaning. How do you know whether your brother can think? You observe him and extend your belief that he's similar to you in certain ways that you can observe onto various ways that you cannot observe...like thinking. All you can observe is actions, not thoughts.
Re: (Score:2)
You can ask your brother how he came to a conclusion, and analyze that to see if it matches how you did the same thing. And if you do that with enough people, with enough rigor, you can build a model of how humans generally go about thinking about the studied problem type.
You can also teach people how to reason about things. That works because we know how to reason. And it is very effective. It's how we've gotten to where we are now in technology and society. That means it's not just rote memorization and r
Re: (Score:2)
That may give you a handle on the quality of his thinking, but not on whether he does. Consider all the arguments about zimboes, etc. And I believe that my dog thinks, but he can't express his thoughts in words.
I really think most of this discussion is because "think" is not a well-defined term.
Can a submarine swim? (Score:1)
Isn't asking if a computer can think like asking if a submarine can swim?
I didn't think up that analogy, I read it elsewhere. I suspect someone reading this can reply with some context behind the analogy. I like that analogy, it seems to sum up the issue well.
I doubt we can ever answer the question of whether a computer can think; there are people questioning whether humans can think.
Re: (Score:2)
I find it odd that people are so obsessed with words without defining them. It doesn't matter whether AI can think or not, unless you define what "thinking" means. Once you have a solid definition for the word, you can test whether AI can think or not. Also, if we define what swimming means, we can answer the submarine riddle.
Re: Can a submarine swim? (Score:2)
Even when words are strung together into sentences, languages are imprecise and meaning is tied to context.
Re: (Score:2)
Human languages are frustratingly vague. I do not think in words or images. I think in thoughts. The closest I can come to describe it is that a thought is a concept, or a collection of concepts. Thinking, to me, is grouping and chaining them. That is how I go through my day, and how I make decisions and solve problems. It's immensely efficient, and allows me to be very good at my job.
But when I have to translate a thought which is clear as day to me into words, it's really hard - not because the thought is
Re: (Score:2)
The implication is t
Re: (Score:3)
I find it odd that people are so obsessed with words without defining them. It doesn't matter whether AI can think or not, unless you define what "thinking" means.
We can narrow down the definition.
We know that rocks do not think. They are not doing anything.
We know that calculators do not think, although they do complex mathematical tasks.
We know that current LLMs are not thinking either, although the explanation is a bit more complicated.
In other words, we can definitely say things are not thinking.
Re: (Score:1)
Well, yes, we can say all sorts of things. I can say "phantomfive is not thinking, either".
I think you are perhaps putting too much stock in your ability to say things, and not enough stock in whether those things reflect reality - it's easy to say LLMs aren't thinking, but there's a remarkably narrow range of tasks they fail on these days, and you're moving the goalposts enough to exclude a number of humans at this point.
Re: (Score:2)
Your comments are ignorant. Worse, you didn't even read what I wrote. You just blathered whatever nonsense came out of your fingers.
Turn your brain on before responding.
Re: (Score:2)
It probably is, but that depends on how you understand the words. We say that airplanes can fly, but we rarely say that submarines can swim, except in recognized metaphor.
OTOH, computers have been called "thinking machines" since the 1950s, perhaps earlier. This implies that what they are doing is thinking in at least some meanings of the word.
That said, it's also clear that LLMs don't think the same way we do. So people who use more constrained definitions properly feel that it doesn't mean what *they*
Obligatory (Score:5, Funny)
Immanuel Kant was a real pissant
Who was very rarely stable
Heidegger, Heidegger was a boozy beggar
Who could think you under the table
David Hume could out-consume
Wilhelm Friedrich Hegel
And Wittgenstein was a beery swine
Who was just as schloshed as Schlegel
There's nothing Nietzsche couldn't teach ya
'bout the raising of the wrist
Socrates, himself, was permanently pissed
John Stuart Mill, of his own free will
On half a pint of shandy was particularly ill
Plato, they say, could stick it away
Half a crate of whiskey every day
Aristotle, Aristotle was a bugger for the bottle
Hobbes was fond of his dram
And Rene Descartes was a drunken fart
"I drink, therefore I am."
Yes, Socrates himself is particularly missed
A lovely little thinker, but a bugger when he's pissed!
Re:Obligatory [paradox] (Score:2)
Yes, the story needs funny, but that one was too classical and old to get much of a laugh... Thematic focus on drinking had potential, but the forced rhymes sap the vigor. (And I studied many of these characters' works a long time ago.)
The more obvious joke on Slashdot would be how the comments show a lack of thinking. It would help if the system flagged the robotic sock puppets so their tripe could be compared against the stuff from the alleged humans.
So that's another website feature I'm looking for and n
Read the article about the maths Olympiad (Score:2)
It seems some AIs can out-think about 99.9% of the population on maths problems.
Re: (Score:2)
ChatGPT will be better than anyone at looking up and combining existing solutions. But it fails on the simplest tasks when doing something new. And it constantly makes totally obvious errors in calculations.
Re: (Score:2)
Are you trying to prove that humans can't think?
> But it fails on the simplest tasks when doing something new.
https://www.coachloya.com/wp-c... [coachloya.com]
> And it constantly makes totally obvious errors in calculations.
https://www.quora.com/Why-does... [quora.com]
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
I think most people can do long division, but almost no one would bother day to day, because why would they?
Re: (Score:2)
25% still seems a bit high to me. I do wonder if they really have forgotten how to do long division, or simply forgot what the words 'long division' mean. Like if you told them to work a division problem by hand, would they naturally just do long division, forgetting that was all that long division meant?
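For what it's worth, the algorithm itself is tiny when written out. Here is a minimal sketch of grade-school long division, digit by digit, in Python; the example numbers are arbitrary.

```python
# Grade-school long division: bring down a digit, see how many times
# the divisor fits, write that digit, carry the remainder.
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # bring down a digit
        q = remainder // divisor                 # times the divisor fits
        remainder -= q * divisor                 # carry the remainder
        quotient = quotient * 10 + q
    return quotient, remainder

print(long_division(1234, 7))  # (176, 2), since 176 * 7 + 2 == 1234
```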
Re: (Score:2)
Different people have different definitions of think. This argument is about language, not about reality.
Re: (Score:2)
That's because most humans never have to do math problems in their everyday lives. We get good at what we practice. Try checking how good the AI is at walking to work.
Ever heard of Plato, Aristotle, Socrates, ChatGPT? (Score:2)
Morons!
Humanities majors, go away. (Score:1)
Leave science to the real scientists and engineers.
Thinking is Feeling (Score:2)
When I can ask an AI how it is feeling today, how this compares to yesterday, and what has caused it to have those feelings - and it can offer a response that is understandable, that I can empathise with, and without said response being based on predicting what word comes next from a bunch of sample data, or having been pre-programmed by a human - then I might consider it has the power of thought.
Re: (Score:2)
When I can ask an AI how it is feeling today, how this compares to yesterday, and what has caused it to have those feelings - and it can offer a response that is understandable, that I can empathise with, and without said response being based on predicting what word comes next from a bunch of sample data, or having been pre-programmed by a human - then I might consider it has the power of thought.
Your emotions come from your lizard brain, not your prefrontal cortex. You may want to consider understanding that before you suggest such things. Your lizard brain is why you get addicted to smoking, overeat, impulse buy, etc., even though your prefrontal cortex knows it's a bad idea. You really want to give that same problem to AI?
Re: (Score:2)
Our emotions are also what motivates us. We think about things because we've evolved to survive, and that is an emotional response. Something which has no motivation will not have any reason to think about things.
So yes. I want AI to have that problem. That will motivate it to find a solution to that problem.
Re: (Score:2)
You really want to give that same problem to AI?
Want to? No opinion. But it may actually be necessary to have something akin to emotions in order to achieve an AGI that will be generally recognized as genuinely intelligent.
LLMs can't think and they don't need to (Score:3)
LLMs have a great deal of utility and extend the reach of computing to a fair amount of scope that was formerly out of reach to computing, but they don't "think" and the branding of the "reasoning" models is marketing, not substantive.
The best evidence is reviewing so-called "reasoning chains" and how the mistakes behave.
Mistakes are certainly plausible in "true thinking", but the way they interact with the rest of the "chain" is frequently telling. The model flubs a "step" in the reasoning, and if it were actual reasoning, that should propagate to the rest of the chain. Instead, when a mistake is made in the chain, it's often isolated: the "next step" is written as if the previous step had said a correct thing, without the model ever needing to "correct" itself or otherwise recognize the error. What has been found is that if you have it generate more content and dispose of designated "intermediate" content, you get a better result. The intermediate throwaway content certainly looks like what a thought process may look like, but ultimately it's just more prose, and mistakes in it continue to show that interesting behavior of staying isolated rather than contaminating the rest of an otherwise OK result.
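That generate-then-discard pattern is easy to sketch. Below, llm() is again a hypothetical helper wired to any chat model, and the marker string is an arbitrary convention; the point is only that the "reasoning" text is produced and then thrown away.

```python
# Ask for marked "intermediate" work plus a final answer, then keep
# only what follows the marker. llm() is a hypothetical helper.
def llm(prompt: str) -> str:
    raise NotImplementedError("connect this to a chat model")

MARKER = "FINAL ANSWER:"

def answer_discarding_scratchpad(question: str) -> str:
    raw = llm(
        f"{question}\n\nWork through the problem step by step, then "
        f"give your conclusion on a new line starting with '{MARKER}'."
    )
    # Keep only the text after the marker; everything above it is
    # throwaway prose, whatever it may look like.
    _, _, final = raw.partition(MARKER)
    return final.strip() or raw  # fall back if the marker is missing
```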
I know this much... (Score:2)
Thoughts are not computations (Score:1)
There is no proof that I know of that demonstrates that organic thought is the result of a computation. Why would anybody assume that non-organic computations could result in thoughts?
Do planes fly? (Score:1)
Let's say we want to fly. Should we build a machine that flaps its wings like birds? Is flying defined as the act of flapping wings to generate lift?
The HOW is irrelevant. Helicopters, planes, gliders: none of them fly like birds do, but they get the job done. There is no point trying to do what nature does; we simply need to travel quickly from point A to point B, carry payload, entertain, explore, etc...
In the same sense, how AI achieves generating content is irrelevant. Whether there is intelligence, int
The Ancient Greeks philosophers were hacks (Score:2)
Anyone who thinks there is modern wisdom in their words is a fool.
no action without input stimulus (Score:2)
Basic question to kids: what would you like to be when you grow up?
More adult versions: what are your hopes and dreams?
Actually, I just asked ChatGPT, and it gave me an answer that it explained it had made up on the spot, "just for me," as if to please me. Answers tailored to the questioner, who should be irrelevant to the answer, are terribly sociopathic.
AIs do not think .. (Score:3)
You know the end is drawing nearer (Score:2)
When the profiteers of a deranged and mindless hype try to turn to Philosophy to justify why their rather minuscule product is godlike. I give it another year of fake "improvements" before the larger players start to get out of it.
AI thinks like movie characters do things (Score:2)
Movie characters "do things" in a very convincing way. They can "speak," they can "think," they can be "courageous" or "evil"--but in the end, it's just a simulation, nothing more than pixels on a screen.
In the same way, AI "thinks"--it simulates thinking in a very realistic way. But in the end, it's nothing more than language tokens on a screen.