
Can AI Think - and Should It? What It Means To Think, From Plato To ChatGPT (theconversation.com)

alternative_right shares a report from The Conversation: Greek philosophers may not have known about 21st-century technology, but their ideas about intellect and thinking can help us understand what's at stake with AI today. Although the English words "intellect" and "thinking" do not have direct counterparts in the ancient Greek, looking at ancient texts offers useful comparisons. In "Republic," for example, Plato uses the analogy of a "divided line" separating higher and lower forms of understanding. Plato, who taught in the fourth century BCE, argued that each person has an intuitive capacity to recognize the truth. He called this the highest form of understanding: "noesis." Noesis enables apprehension beyond reason, belief or sensory perception. It's one form of "knowing" something -- but in Plato's view, it's also a property of the soul.

Lower down, but still above his "divided line," is "dianoia," or reason, which relies on argumentation. Below the line, his lower forms of understanding are "pistis," or belief, and "eikasia," imagination. Pistis is belief influenced by experience and sensory perception: input that someone can critically examine and reason about. Plato defines eikasia, meanwhile, as baseless opinion rooted in false perception. In Plato's hierarchy of mental capacities, direct, intuitive understanding is at the top, and moment-to-moment physical input toward the bottom. The top of the hierarchy leads to true and absolute knowledge, while the bottom lends itself to false impressions and beliefs. But intuition, according to Plato, is part of the soul, and embodied in human form. Perceiving reality transcends the body -- but still needs one. So, while Plato does not differentiate "intelligence" and "thinking," I would argue that his distinctions can help us think about AI. Without being embodied, AI may not "think" or "understand" the way humans do. Eikasia -- the lowest form of comprehension, based on false perceptions -- may be similar to AI's frequent "hallucinations," when it makes up information that seems plausible but is actually inaccurate.

Aristotle, Plato's student, sheds more light on intelligence and thinking. In "On the Soul," Aristotle distinguishes "active" from "passive" intellect. Active intellect, which he called "nous," is immaterial. It makes meaning from experience, but transcends bodily perception. Passive intellect is bodily, receiving sensory impressions without reasoning. We could say that these active and passive processes, put together, constitute "thinking." Today, the word "intelligence" holds a logical quality that AI's calculations may conceivably replicate. Aristotle, however, like Plato, suggests that to "think" requires an embodied form and goes beyond reason alone. Aristotle's views on rhetoric also show that deliberation and judgment require a body, feeling and experience. We might think of rhetoric as persuasion, but it is actually more about observation: observing and evaluating how evidence, emotion and character shape people's thinking and decisions. Facts matter, but emotions and people move us -- and it seems questionable whether AI utilizes rhetoric in this way.

Finally, Aristotle's concept of "phronesis" sheds further light on AI's capacity to think. In "Nicomachean Ethics," he defines phronesis as "practical wisdom" or "prudence." "Phronesis" involves lived experience that determines not only right thought, but also how to apply those thoughts to "good ends," or virtuous actions. AI may analyze large datasets to reach its conclusions, but "phronesis" goes beyond information to consult wisdom and moral insight.

Comments Filter:
  • Nobody liked them then, and nobody likes them now. They sat around, smoked, drank coffee, talked a lot, and said nothing important.

    • Re: (Score:2, Insightful)

      I'm going to go out on a limb and say no, these concepts don't have much to teach us about AI. All of them are subjective, non-rigorous, poorly defined, and impossible to describe mathematically. Which is to say, not useful to modern science and engineering.

      Modern ideas about AI mostly begin with Turing, who said it's pointless to argue whether a machine can "think", and instead we should focus on properties that are well defined and measurable.

      • Re: (Score:2, Interesting)

        Modern ideas about AI mostly begin with Turing, who said it's pointless to argue whether a machine can "think", and instead we should focus on properties that are well defined and measurable.

        I watch a bird every weekend land on my balcony, see itself in a mirror (which covers one whole side wall) and try to pick a fight, then give up and try to fly through the reflection of the sky, glancing off and flying away.

        According to the bird's version of the Turing test, the reflection is sentient.

        • by quenda ( 644621 ) on Tuesday July 22, 2025 @06:31AM (#65536342)

          According to the bird's version of the Turing test, the reflection is sentient.

          With analogies like that, you are really lowering the bar for the Turing Test.

        • The Turing test is just a metaphor. It illustrates a point about intelligence: that it's defined by behavior. Everyone agrees humans are intelligent, so if you can make a machine whose behavior is indistinguishable from a human's, by definition it's as intelligent as a human.

          Turing didn't intend it to be a literal test you would actually perform. Unfortunately, lots of other people have treated it that way. This leads to some common misunderstandings, including the mistake you just made: thinking intell

          • It seems I'm responding to you twice:)

            I don't mean to repeat what I wrote in my previous comment [slashdot.org] about the Turing test, as I want to address the particular point you raise here: intelligence can not be defined (solely) by behaviour. You need more.

            When you choose observed behaviour alone as the criterion for your definition of intelligence, you allow pathological cases that make no sense. A remotely controlled machine can exhibit intelligent behaviour to an observer. The intelligence resides either elsewh

          • In particular, it turned out to be quite a bit easier than we expected to build a machine that pretends to converse. Elizabot proved that (by actually catching people off guard). Our brains are too trusting.
      • Are you familiar with Leibniz, Spinoza, or Descartes? Have you read the works cited in the article? Your claims about a lack of rigor and poor definition suggest you have not. Did you think that centuries of philosophers missed the problem of subjectivity and never worked on it?
        • by HiThere ( 15173 )

          Some of Descartes is rigorous, but lots of it isn't. Similarly for most of the others. The exceptions have nothing rigorous or well-defined.
          Actually, the same is true of Turing. And Hawking.

          Rigorous thinking NECESSARILY rests on a basis that is not justified. In geometry those are called axioms. In logic, rules of inference. And current science uses the ideas of those ancient Greeks as a starting place. But it's highly questionable that they have anything to tell us that hasn't already been included,

          • So, you recognize one of the core issues of epistemology that philosophy deals with, but don't seem to be aware of the work done on the matter. You have identified the reason Descartes (you wanted geometry) wrote his Meditations. Did you read them? Leibniz and Spinoza, also key mathematicians who applied that same rigor to their philosophical work, did what you said hasn't been done. Kant went even further with it, and if you don't think that man was rigorous in his thinking, you are not familiar with
      • Focusing on properties that are well defined and measurable assumes that "thinking" can be defined and measured. While possible, I do not see where such an assumption has been demonstrated to be true. Maybe if we can't define a thing it doesn't exist?? Yet surely we can define something that does not exist. So, definitions alone won't solve the general question of existence. Maybe if we can't measure a thing it does not exist? Yet, thoughts exist and how could one measure a thought? What I have concl

      • by etash ( 1907284 )
        +100
    • by phantomfive ( 622387 ) on Tuesday July 22, 2025 @04:26AM (#65536250) Journal
      LOL if you don't understand it, insult the authors, right? That's easier than understanding.
    • On the one hand, there exist people who are well-educated in philosophy, who "get it," and who can say some very insightful things using philosophy's jargon.
      On the other hand, there are people who have also had some education in philosophy, but don't really "get it," and they mostly just babble nonsense also using philosophy's jargon.

      Anyone who doesn't understand the jargon can't tell the difference between the two.

      • by jp10558 ( 748604 )

        This is just true of any complex jargon. You can do it with theoretical physics, with advanced mathematics and with medicine to name some more STEMy versions. Some people make a lot of money bloviating jargon at some mass of people who don't really understand what they're saying so can't notice the gibberish.

      • I earned a BA in the subject. I know BS when I see it, thanks to the wringer they put me through. Lots of BS out there.

  • It's not even intelligence, artificial or not yet. It's still just brute force machine learning. Until the day comes when you can show an AI system a picture of 2 cats or dogs or cars or humans of different shape, size and color and it can then immediately and correctly identify all cats as a cat, dogs as a dog, cars as a car, or other people as humans it cannot think and is not intelligent.

    Thinking would also come with independent thought, curiosity, creativity. Which will be a very long time after AI can

    • by registrations_suck ( 1075251 ) on Tuesday July 22, 2025 @12:09AM (#65536030)

      It's not even intelligence, artificial or not yet. It's still just brute force machine learning. Until the day comes when you can show an AI system a picture of 2 cats or dogs or cars or humans of different shape, size and color and it can then immediately and correctly identify all cats as a cat, dogs as a dog, cars as a car, or other people as humans it cannot think and is not intelligent.

      Are you serious?

      It could do that a billion times per day, with perfect accuracy, and it still isn't intelligence, let alone thinking.

      When you can tell an "AI" something like:

      "Over the next 25 years, what will be the major factors in U.S. Presidential elections? Please formulate a campaign strategy for every candidate for the Democrats and Republicans, for each election year, between now and 2050." and have it produce something coherent and reasonable, maybe then it will demonstrate intelligence and thinking. Maybe.

      • by SoftwareArtist ( 1472499 ) on Tuesday July 22, 2025 @12:46AM (#65536072)

        Do the experiment. Enter that exact description into each of the major commercial AI models (ChatGPT, Gemini, etc.) and see what sort of answers they give. Then stop ten random people on the street and ask them the same question. I predict the LLM answers will be more coherent and reasonable than many of the human answers.

        Not that it matters. You asserted without justification that AI is neither "intelligence" nor "thinking", without bothering to define what those words mean. Then you picked a completely arbitrary test and asserted, again without justification, that it's a more valid criterion.

        AI is intelligent as that word has been defined in the field for many years. That's not even controversial. There's an accepted definition, and it's really easy to show AI meets the definition. If you want to make up your own definition, fine, but tell us what it is. And don't say the definition everyone in the field uses is wrong, and they all need to switch to your definition instead.

        • I don't think anyone claims that the field defines intelligence. Maybe you're trolling, maybe you're deluded. Probably, you're confused about the meaning of engineering metrics.
          • "For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving." From the proposal for the Dartmouth Summer Research Project on Artificial Intelligence [stanford.edu], 1955. That's the definition that's been used in the field ever since. Also see Turing's classic 1950 paper [umbc.edu] that introduced his "imitation game" that later came to be known as the Turing test. It made the same point: intelligence is defined b

            • I have been in the field for 30 years now, and I can tell you for a fact that that is not an accepted definition. It certainly is a nice model of artificial intelligence to think about, no doubt about it. Turing's work was exceptional, but has flaws.

              Perhaps the major flaw with the imitation game is that it neglects the fact that all binary classification systems have inherently two types of error. This includes the Turing human/machine classification test, and unfortunately this makes it unsuitable to be
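              To make that concrete, here is a minimal sketch of scoring a Turing-style judge as a binary classifier; the trial data below is entirely made up, and the only point is that a single pass/fail verdict hides two distinct error rates.

                truth  = [True, True, False, False, True, False, False, True]   # True = machine, False = human (made-up trials)
                judged = [True, False, False, True, True, False, True, True]    # the judge's verdict for each trial

                machines = sum(truth)
                humans = len(truth) - machines

                misses = sum(t and not j for t, j in zip(truth, judged))        # machines passed off as human
                false_alarms = sum(j and not t for t, j in zip(truth, judged))  # humans mistaken for machines

                print(f"miss rate: {misses / machines:.0%}")
                print(f"false-alarm rate: {false_alarms / humans:.0%}")

              A claimed "pass" says nothing about which of the two error types produced it, which is exactly the objection above.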

        • by Potor ( 658520 )
          The summary itself points out that intelligence or intellect is not the issue, as it is too broad a term. At stake is true knowledge (episteme). Plato and Aristotle (inter alia) agree that true knowledge is immediate non-calculative intellectual intuition. We can call it the aha-experience (Aha-Erlebnis), i.e. when you just intellectually get something (like when you finally understand a calculus problem, beyond simply being able to follow the steps). For Plato, it is what makes truths true. If that sounds
        • The point isn't that a human should be able to do that when stopped in the street. Of course an answer generated from generating the mean answer as found in a dataset trained on the Internet will appear to be more coherent and reasonable, if for no other reason than that things written like that tend to look coherent and reasonable whether or not they are.

          The point is that the answer will be - generated from generating the mean answer as found in a dataset trained on the Internet. It will not be thought out. It wil

          • Yes, that was my point.

            Besides that, that's a six-month analysis, possibly involving a team of people.

            To suggest someone ask the man on the street what he thinks and compare that answer to what a current LLM would produce is just retarded.

      • by migos ( 10321981 )
        Some people get threatened by new tech because they're afraid of becoming obsolete. Instead of embracing and taking advantage of the new tech they go into denial. SOTA LLMs are really useful now. In 5 years AI will be part of everyone's workflow in the tech industry.
        • And some people can't differentiate between observing the limitations of a technology and emotional responses to the technology.

      • So I just put your questions to ChatGPT, albeit in a slightly modified form.

        - Over the next 25 years, what will be the major factors in U.S. Presidential elections?
        - Please formulate a campaign strategy for the Democrats for 2025

        I guarantee you that what I got back was reasonably coherent and absolutely better than what I would get from a random person tasked out of nowhere with the same question.

        Obviously I didn't ask it to create one for every candidate and party for every year. That's a silly ask. Leavin

        • You expect some rando to formulate a coherent campaign strategy? Yeah ok, especially after they've been shown if you just shout loudly and often enough (and are rich enough) then you win. Doesn't matter what nonsense you dribble. Here's your strategy, blame everyone and everything for all the problems, claim you can fix it all with no details except a 3 word slogan then just fuck off and do none of it.
          • No - I don't. And that's the point. The ask is not a particularly useful tool by which to measure intelligence.

            The OP is proposing a set of tasks that, if passed, they wouldn't accept as proof. In fact, what they're really doing is presenting the axiom that AI isn't intelligent, but then tying themselves in knots to present it as falsifiable without it actually being so.

      • When the AI starts asking you questions. That's when it's thinking.
      • When you can tell an "AI" something like:

        "Over the next 25 years, what will be the major factors in U.S. Presidential elections? Please formulate a campaign strategy for every candidate for the Democrats and Republicans, for each election year, between now and 2050." and have it produce something coherent and reasonable, maybe then it will demonstrate intelligence and thinking. Maybe.

        That task makes no sense, politics isn't a perfect information game like chess, there are way too many outside factors that have very large effects on political strategy. You could do it in a narrow timeframe with a given setting, and lean a little into the future assuming nothing big happens in the world (lol), but sequentially for 25 years, you're asking for bullshit.

        Now what we can say is if an intelligent agent fools you into thinking it has completed this task by bullshitting you, and it has done that

    • It's not even intelligence, artificial or not yet. It's still just brute force machine learning. Until the day comes when you can show an AI system a picture of 2 cats or dogs or cars or humans of different shape, size and color and it can then immediately and correctly identify all cats as a cat, dogs as a dog, cars as a car, or other people as humans it cannot think and is not intelligent.

      We passed that milestone in the 1990s, champ

      Keep up.

      • It must've improved recently, because it's been a while since I was required to identify traffic signals or bicycles to read something online.
      • by Junta ( 36770 )

        When the model is used for inference, yes. But I assume he was speaking to the awkwardness of training. Take a machine vision model that has never been trained on dogs and cats, feed it a dozen labeled images of cats and dogs to retrain it to add dog/cat recognition. Then try to do inference on that model and it will still be utterly useless for dog/cat recognition. Take a model trained on normal images. Then have it try recognition on a fisheye lens. It will fail because it has no idea. You might hope to r
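        As a rough sketch of the kind of retraining described above (assuming PyTorch and torchvision; the "images" here are random placeholder tensors, not real photos):

          import torch
          from torch import nn
          from torchvision import models

          # Pretrained backbone, frozen; only a brand-new cat/dog head gets trained.
          model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
          for p in model.parameters():
              p.requires_grad = False
          model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: cat, dog

          images = torch.randn(12, 3, 224, 224)           # stand-in for a dozen labeled photos
          labels = torch.randint(0, 2, (12,))

          opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
          loss_fn = nn.CrossEntropyLoss()
          for _ in range(20):                             # a few passes over the tiny set
              opt.zero_grad()
              loss = loss_fn(model(images), labels)
              loss.backward()
              opt.step()

        The new head will happily memorize the dozen examples; whether the model then recognizes the next cat it has never seen is exactly the open question.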

    • It actually can identify the cats, dogs, and cars now. The other day I took a few pictures of a misbehaving air handler, dropped them into Gemini, and asked it "why isn't this working," and it gave me a bunch of suggestions, including calling out specific bits on the circuit board to test.
    • Until the day comes when you can show an AI system a picture of 2 cats or dogs or cars or humans of different shape, size and color and it can then immediately and correctly identify all cats as a cat, dogs as a dog, cars as a car, or other people as humans it cannot think and is not intelligent.

      There is image recognition software that can already do that, and human beings who can't.

    • How old were you before you could do those things? Did you not have to learn what a dog is and how to recognize them when they vary so wildly in appearance?

      How much of what we do is just "brute force [sic] learning" and extrapolating from recognized patterns? How much of our brain is devoted to doing those things?

    • by HiThere ( 15173 )

      "Think" is not a well-defined term. Whether or not an AI can think depends on the precise definition of "think" that you are using.

      One of the common definitions of "think" is, roughly, act in a way that causes me to attribute thinking to it...which is a rather circular definition. In attributing "thought" to something, it's almost always a matter of projecting myself into the acting entity. If a dog does something that satisfies its goal, I tend to say that it has thought about it and figured out how to d

  • No (Score:1, Insightful)

    No, it can't.

    And neither can anyone asking this question, about current "AI".

  • Clearly AI as currently constructed does not think. But the fact that it gets as close as it does to human behaviour using algorithms so far from thought as we think we know it raises the question: have we overestimated our own abilities?
    • It doesn't feel impossible for AI to "think", or at least approximate thinking. An LLM can't do it as it stands, but maybe one day it could.

      Ask an LLM a question and it very confidently gives you an answer. If it were to sift through its answer, pick out the salient facts or "important bits" of any argument it was making, and then went off to double-check they were true, check they really do support the argument, and perhaps use them to add some detail to the original statement, I think you've got something

      • by HiThere ( 15173 )

        Again, thinking is not well-defined. Whether an LLM can think or not depends on your precise meaning. How do you know whether your brother can think? You observe him and extend your belief that he's similar to you in certain ways that you can observe onto various ways that you cannot observe...like thinking. All you can observe is actions, not thoughts.

        • You can ask your brother how he came to a conclusion, and analyze that to see if it matches how you did the same thing. And if you do that with enough people, with enough rigor, you can build a model of how humans generally go about thinking about the studied problem type.

          You can also teach people how to reason about things. That works because we know how to reason. And it is very effective. It's how we've gotten to where we are now in technology and society. That means it's not just rote memorization and r

          • by HiThere ( 15173 )

            That may give you a handle on the quality of his thinking, but not on whether he does. Consider all the arguments about zimboes, etc. And I believe that my dog thinks, but he can't express his thoughts in words.

            I really think most of this discussion is because "think" is not a well-defined term.

  • Isn't asking if a computer can think like asking if a submarine can swim?

    I didn't think up that analogy, I read it elsewhere. I suspect someone reading this can reply with some context behind the analogy. I like that analogy, it seems to sum up the issue well.

    I doubt we can ever answer the question of whether a computer can think; there are people questioning whether humans can think.

    • Processing data is a subcomponent of thinking.
    • by dvice ( 6309704 )

      I find it odd that people are so obsessed with words without defining them. It doesn't matter whether AI can think or not, unless you define what "thinking" means. Once you have a solid definition for the word, you can test if AI can think or not. Also if we define what swimming means, we can answer the submarine riddle.

      • Excellent observation. This is my hobby horse: language is vague. So define "thinking" now.

        Even when words are strung together into sentences, languages are imprecise and meaning is tied to context.
        • Human languages are frustratingly vague. I do not think in words or images. I think in thoughts. The closest I can come to describe it is that a thought is a concept, or a collection of concepts. Thinking, to me, is grouping and chaining them. That is how I go through my day, and how I make decisions and solve problems. It's immensely efficient, and allows me to be very good at my job.

          But when I have to translate a thought which is clear as day to me into words, it's really hard - not because the thought is

          • What you are saying sounds like a "mental model", if I may interpret. The idea of "mental models" was kicked around ... I don't know who wrote that book... some programmer I was having beers with a few years ago was big on that idea. He basically said, from that book, smarter people construct mental models and that gives them deeper insight... If you are in IT, writing code, or working with structured data, you're going to have to have mental models, for your data, for your logic, afaict.

            The implication is t
      • I find it odd that people are so obsessed with words without defining them. It doesn't matter whether AI can think or not, unless you define what "thinking" means.

        We can narrow down the definition.

        We know that rocks do not think. They are not doing anything.
        We know that calculators do not think, although they do complex mathematical tasks.
        We know that current LLMs are not thinking either, although the explanation is a bit more complicated.

        In other words, we can definitely say things are not thinking.

        • Well, yes, we can say all sorts of things. I can say "phantomfive is not thinking, either".

          I think you are perhaps putting too much stock in your ability to say things, and not enough stock in whether those things reflect reality - it's easy to say LLMs aren't thinking, but there's a remarkably narrow range of tasks they fail on these days, and you're moving the goalposts enough to exclude a number of humans at this point.

          • I am not "saying all sorts of things." I am writing based on extensive educational and academic background, based on millennia of theory.

            Your comments are ignorant. Worse, you didn't even read what I wrote. You just blathered whatever nonsense came out of your fingers.

            Turn your brain on before responding.
    • by HiThere ( 15173 )

      It probably is, but that depends on how you understand the words. We say that airplanes can fly, but we rarely say that submarines can swim, except in recognized metaphor.

      OTOH, computers have been called "thinking machines" since the 1950's, perhaps earlier. This implies that what they are doing is thinking in at least some meanings of the word.

      That said, it's also clear that LLMs don't think the same way we do. So people who use more constrained definitions properly feel that it doesn't mean what *they*

    • Hold on, let me ask ChatGPT...
  • Obligatory (Score:5, Funny)

    by 93 Escort Wagon ( 326346 ) on Tuesday July 22, 2025 @03:44AM (#65536224)

    Immanuel Kant was a real pissant
    Who was very rarely stable

    Heidegger, Heidegger was a boozy beggar
    Who could think you under the table

    David Hume could out-consume
    Wilhelm Friedrich Hegel

    And Wittgenstein was a beery swine
    Who was just as schloshed as Schlegel

    There's nothing Nietzsche couldn't teach ya
    'bout the raising of the wrist
    Socrates, himself, was permanently pissed

    John Stuart Mill, of his own free will
    On half a pint of shandy was particularly ill

    Plato, they say, could stick it away
    Half a crate of whiskey every day

    Aristotle, Aristotle was a bugger for the bottle
    Hobbes was fond of his dram

    And Rene Descartes was a drunken fart
    "I drink, therefore I am."

    Yes, Socrates himself is particularly missed
    A lovely little thinker, but a bugger when he's pissed!

    • Yes, the story needs funny, but that one was too classical and old to get much of a laugh... Thematic focus on drinking had potential, but the forced rhymes sap the vigor. (And I studied many of these characters' works a long time ago.)

      The more obvious joke on Slashdot would be how the comments show a lack of thinking. It would help if the system flagged the robotic sock puppets so their tripe could be compared against the stuff from the alleged humans.

      So that's another website feature I'm looking for and n

  • It seems some AIs can out-think about 99.9% of the population on maths problems.

  • Leave science to the real scientists and engineers.

  • When I can ask an AI how it is feeling today, how this compares to yesterday, and what has caused it to have those feelings - and it can offer a response that is understandable, that I can empathise with, and without said response being based on predicting what word comes next from a bunch of sample data, or having been pre-programmed by a human - then I might consider it has the power of thought.

    • When I can ask an AI how it is feeling today, how this compares to yesterday, and what has caused it to have those feelings - and it can offer a response that is understandable, that I can empathise with, and without said response being based on predicting what word comes next from a bunch of sample data, or having been pre-programmed by a human - then I might consider it has the power of thought.

      Your emotions come from your lizard brain, not your prefrontal cortex. You may want to consider understanding that before you suggest such things. Your lizard brain is why you get addicted to smoking, overeat, impulse buy, etc. even though your prefrontal cortex knows it's a bad idea. You really want to give that same problem to AI?

      • Our emotions are also what motivates us. We think about things because we've evolved to survive, and that is an emotional response. Something which has no motivation will not have any reason to think about things.

        So yes. I want AI to have that problem. That will motivate it to find a solution to that problem.

      • You really want to give that same problem to AI?

        Want to? No opinion. But it may actually be necessary to have something akin to emotions in order to achieve an AGI that will be generally recognized as genuinely intelligent.

  • by Junta ( 36770 ) on Tuesday July 22, 2025 @08:43AM (#65536466)

    LLMs have a great deal of utility and extend computing into a fair amount of scope that was formerly out of its reach, but they don't "think", and the branding of the "reasoning" models is marketing, not substance.

    The best evidence is reviewing so-called "reasoning chains" and how the mistakes behave.

    Mistakes are certainly plausible in "true thinking", but the way they interact with the rest of the "chain" is frequently telling. When it flubs a "step" in the reasoning, and if it were actual reasoning, that error should propagate to the rest of the chain. However, when a mistake is made in the chain, it's often isolated and the "next step" is written as if the previous step said a correct thing, without ever needing to "correct" itself or otherwise recognize the error. What has been found is that if you have it generate more content and dispose of designated "intermediate" content you have a better result, and the intermediate throwaway content certainly looks like what a thought process may look like, but ultimately it's just more prose, and mistakes in the content continue to show an interesting behavior of isolation rather than contaminating the rest of an otherwise ok result.
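    For what it's worth, the "generate intermediate content, then throw it away" pattern looks roughly like this; generate() below is a hypothetical stand-in for whatever completion call your LLM client exposes, not a real API.

      def generate(prompt: str) -> str:
          """Hypothetical LLM completion call; plug in your own client here."""
          raise NotImplementedError

      def answer_with_scratchpad(question: str) -> str:
          draft = generate(
              "Work through this step by step, then give the final answer "
              "on a line starting with 'ANSWER:'.\n\n" + question
          )
          # Keep only the designated final line; everything above it is throwaway.
          for line in reversed(draft.splitlines()):
              if line.startswith("ANSWER:"):
                  return line.removeprefix("ANSWER:").strip()
          return draft.strip()   # no marker emitted: fall back to the whole draft

    Nothing in that loop checks that the discarded "steps" were consistent with the kept answer, which is why an isolated flub in the middle so often leaves the final result untouched.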

  • :s/apprehension/comprehension/g
  • There is no proof that I know of that demonstrates that organic thought is the result of a computation. Why would anybody assume that non-organic computations could result in thoughts?

  • Let's say we want to fly. Should we build a machine that flaps its wings like birds? Is flying defined as the act of flapping wings to generate lift?
    The HOW is irrelevant. Helicopters, planes, gliders, none of them fly like birds do, but they get the job done. There is no point trying to do what nature does, we simply need to travel quickly from point A to point B, carry payload, entertain, explore, etc...

    In the same sense, how AI achieves generating content is irrelevant. Whether there is intelligence, int

  • Anyone who thinks there is modern wisdom in their words is a fool.

  • An AI that gets no stimulus has no activity. Humans think for themselves, not very well usually, but with no stimulus they still do.

    Basic question to kids: what would you like to be when you grow up?

    More adult versions: what are your hopes and dreams?

    Actually, I just asked ChatGPT and it gave me an answer that it explained it had made up on the spot. "Just for me", as if to please me. Answers to questions that are tailored to the questioner, who should be irrelevant to the answer, are terribly sociopathic.

  • by Mirnotoriety ( 10462951 ) on Tuesday July 22, 2025 @01:02PM (#65537028)
    Current AIs, or more accurately Large Language Models (LLMs), generate human-like responses by processing massive text datasets combined with user-supplied inputs to guess appropriate outputs, without any form of actual thinking.
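    As a toy illustration of "guessing appropriate outputs": a bigram table built from a ten-word corpus. It is nothing remotely like a real LLM, but it runs the same predict-the-next-word loop, with a lookup table standing in for a trained network.

      from collections import Counter, defaultdict
      import random

      corpus = "the cat sat on the mat and the cat slept".split()
      bigrams = defaultdict(Counter)
      for a, b in zip(corpus, corpus[1:]):
          bigrams[a][b] += 1                    # count which word follows which

      def next_word(word: str) -> str:
          counts = bigrams[word]
          if not counts:                        # dead end in the tiny corpus
              return random.choice(corpus)
          words, weights = zip(*counts.items())
          return random.choices(words, weights=weights)[0]

      word, output = "the", ["the"]
      for _ in range(6):
          word = next_word(word)
          output.append(word)
      print(" ".join(output))                   # e.g. "the cat sat on the mat and"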
  • When the profiteers of a deranged and mindless hype try to turn to Philosophy to justify why their rather minuscule product is godlike. I give it another year of fake "improvements" before the larger players start to get out of it.

  • Movie characters "do things" in a very convincing way. They can "speak," they can "think," they can be "courageous" or "evil"--but in the end, it's just a simulation, nothing more than pixels on a screen.

    In the same way, AI "thinks"--it simulates thinking in a very realistic way. But in the end, it's nothing more than language tokens on a screen.
