AI

VP and Head Scientist of Alexa at Amazon: 'The Turing Test is Obsolete. It's Time To Build a New Barometer For AI'

Rohit Prasad, Vice President and Head Scientist of Alexa at Amazon, writes: While Turing's original vision continues to be inspiring, interpreting his test as the ultimate mark of AI's progress is limited by the era when it was introduced. For one, the Turing Test all but discounts AI's machine-like attributes of fast computation and information lookup, features that are some of modern AI's most effective. The emphasis on tricking humans means that for an AI to pass Turing's test, it has to inject pauses in responses to questions like, "do you know what is the cube root of 3434756?" or, "how far is Seattle from Boston?" In reality, AI knows these answers instantaneously, and pausing to make its answers sound more human isn't the best use of its skills. Moreover, the Turing Test doesn't take into account AI's increasing ability to use sensors to hear, see, and feel the outside world. Instead, it's limited simply to text.

To make AI more useful today, these systems need to accomplish our everyday tasks efficiently. If you're asking your AI assistant to turn off your garage lights, you aren't looking to have a dialogue. Instead, you'd want it to fulfill that request and notify you with a simple acknowledgment, "ok" or "done." Even when you engage in an extensive dialogue with an AI assistant on a trending topic or have a story read to your child, you'd still like to know it is an AI and not a human. In fact, "fooling" users by pretending to be human poses a real risk. Imagine the dystopian possibilities, as we've already begun to see with bots seeding misinformation and the emergence of deep fakes. Instead of obsessing about making AIs indistinguishable from humans, our ambition should be building AIs that augment human intelligence and improve our daily lives in a way that is equitable and inclusive. A worthy underlying goal is for AIs to exhibit human-like attributes of intelligence -- including common sense, self-supervision, and language proficiency -- and combine machine-like efficiency such as fast searches, memory recall, and accomplishing tasks on your behalf. The end result is learning and completing a variety of tasks and adapting to novel situations, far beyond what a regular person can do.
  • by RelaxedTension ( 914174 ) on Wednesday December 30, 2020 @04:35PM (#60879946)
    Alexa is pretty stupid and would fail a Turing test. They're going to have to show something much better than that before talking about redefining the measuring stick.
    • by phantomfive ( 622387 ) on Wednesday December 30, 2020 @04:46PM (#60879996) Journal

      Ironically, these people dislike the Turing test because it is too hard. "you aren't looking to have a dialogue."

      OK, but the goal is to make a computer that has human-level intelligence. The Turing test is one idea for how to measure that.

      • by narcc ( 412956 ) on Wednesday December 30, 2020 @06:32PM (#60880302) Journal

        Very odd, isn't it? Eliza was thought to demonstrate how foolish the standard was, as it purported to show that even a very simple program could pass the Turing test!

        We live in the future, so I should probably explain. Eliza was an early "chat bot" type program written by Joe Weizenbaum at the MIT AI lab back in the '60s. Eliza simulates a Rogerian therapist, mostly just rephrasing statements you make as questions and mixing in a few canned responses like "I see" and "can you elaborate". Joe's secretary was famously taken in by the program and thought her conversations should be kept private.
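
        The core trick is small enough to sketch in a few lines of Python. This is a toy illustration of the rephrase-and-reflect idea, not Weizenbaum's actual program, which used pattern-matching scripts:

            import random

            # Swap first and second person so a statement can be mirrored back.
            REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
                       "you": "I", "your": "my"}
            CANNED = ["I see.", "Can you elaborate on that?",
                      "How does that make you feel?"]

            def respond(statement):
                words = [REFLECT.get(w, w)
                         for w in statement.lower().rstrip(".!?").split()]
                if "are" in words or "am" in words:
                    # Mirror the reflected statement back as a question.
                    return "Why do you say " + " ".join(words) + "?"
                return random.choice(CANNED)

            print(respond("I am unhappy with my job."))
            # -> Why do you say you are unhappy with your job?

        That's the whole mechanism: no model of the world, just string surgery plus the user's willingness to fill in the gaps.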

        Oh, but we're too sophisticated to be taken in by a computer program like that today, right?

        We're still being fooled into attributing human qualities to computers. Surprisingly, by even simpler tricks than Eliza used! If you've ever installed Windows 10, you'll have seen messages like "Hi" and "We're getting things ready for you" that try to make you feel like your computer is friendly and trying to help you out. Cortana even talks to you to make the setup process feel less frightening to normal users.

        Even though no one is truly fooled by these silly tricks, they still work. That is, even though it doesn't make anyone think the computer is a sentient being with its own thoughts, feelings, and emotions, it's more than enough to put users at ease and let them relate emotionally to their computer.

        Then we have things like computer games [time.com] designed to make players form a strong emotional bond with one of the characters. The illusion is good enough for many people to use it as a substitute for a real-world relationship. That particular game (Love Plus) was even popular enough to get real-world retreats [cnet.com] to cater to users and their virtual companions.

        I'm with Ol' Joe Weizenbaum here. The Turing test isn't too hard -- it's way too easy! Who would have thought that it was trivial to write a program that makes you fall in love with it and makes you feel that it loves and cares for you?

        More importantly, it isn't being too easy or too difficult that matters anyway. Those sorts of tests, no matter how you structure them, are just inadequate to measure what it is they want to measure.

        • by Rademir ( 168324 ) on Wednesday December 30, 2020 @06:51PM (#60880370) Homepage

          I remember the AIs in the early text-based MUDs. I always thought the lesson of so many people easily believing these were real people was that a proper Turing test should not use naive testers who don't even know they are in a test. Eliza works because we are easily drawn into talking about ourselves. Any decent Turing tester is going to ask the other party questions about themselves, invite creativity in novel areas, shift contexts, and refer back to things mentioned earlier in the conversation; there are a bunch of ways that attempts to pass the Turing test can easily fail.

          It's like testing any software: whether or not you have tests written for what you hope is "everything", you also want some people who understand the code and the real-world use case and can intentionally generate possible failure scenarios.

          • Indeed. It is easy to ask questions that require "world knowledge" that a modern AI can't easily answer.

            "The guitar would not fit in the case because it was too big. What was too big?"

            "The drum would not fit in the box because it was too small. What was too small?"

            Winograd Schema Challenge [wikipedia.org]
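
            For anyone curious how that benchmark works mechanically: each schema is a sentence, an ambiguous pronoun, and two candidate referents, and a system is scored on how often it picks the right one. A minimal harness in Python (the choose() function is a stand-in for a real model; random choice scores ~50%, which is why the challenge is hard to game):

                import random

                # Each schema: (sentence, candidate referents, index of correct one).
                SCHEMAS = [
                    ("The guitar would not fit in the case because it was too big.",
                     ("the guitar", "the case"), 0),
                    ("The drum would not fit in the box because it was too small.",
                     ("the drum", "the box"), 1),
                ]

                def choose(sentence, candidates):
                    # Stand-in for a real model's referent resolution.
                    return random.randrange(len(candidates))

                correct = sum(choose(s, c) == answer for s, c, answer in SCHEMAS)
                print(f"accuracy: {correct}/{len(SCHEMAS)}")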

        • by green1 ( 322787 )

          Complete and utter BS.

          I've never seen a chatbot that took more than about 30 seconds to show it was not a real person. You get so frustrated talking to them in about that much time because they obviously can't even remember what they said 2 lines ago, let alone manage to come off as human.

          After all the hype about Eliza, I tried it; I got through about 3 sentences before I gave up, as there was nothing even vaguely human-sounding about it. I've tried many others over the years that all claim to pass the Turing test,

      • Re: (Score:3, Insightful)

        by Riceballsan ( 816702 )
        Is the goal really to have human-level intelligence? That's the problem right there... the bottom line is, it's not apples to apples. There are some things we're better at than existing AI systems, and some things where an AI blows us out of the water. I think the key problem with the Turing test right now is that passing it mostly means going backwards: it causes us to teach the machines to imitate the weaknesses of humans rather than the strengths of AI. Much like when I ask Alexa the time... I don
        • If you want to build something useful, then build something useful. That is fine. Forklifts are useful, but not particularly intelligent.

          If you want to build general AI, then it should have intelligence.

        • Human level does not necessarily mean human-like. The entire article seems to completely misunderstand the point Turing was trying to make about intelligence. The point of the test is not to trick anybody, but to make a system capable of a mutual exchange like two intelligent beings could have... and it's more of a thought experiment asking: if a simulacrum can be a perfect mimic, is it actually different from the real thing?
          • Yes, but in practice the Turing test works by having humans try to differentiate the AIs from humans, and the fact is it's easier to spot AIs by catching what they are too good at than by what they are bad at. IMO the goal for AI shouldn't be trying to be human, but trying to be as good as or better than humans in all worthwhile categories.
        • by dwpro ( 520418 )

          Is the goal really to have human level intelligence?

          Well, yes, that's the goal for now, until we can achieve it, but not in the way you're suggesting. Having human-level intelligence means being able to understand sarcasm and imperfect responses, recognize fakes, perhaps even have some variation on boredom so that it seeks out new places to learn. The failings of current AI don't appear to be that it's too good, as in "The Matrix" sense, where the architect creates a too-perfect response. It's more that current iterations of AI are very impressive calculators vs a

      • by Myrdos ( 5031049 )

        Agreed. It seems that everyone automatically assumes that the Turing test is invalid.

        The emphasis on tricking humans means that for an AI to pass Turing's test

        If the Turing test IS valid, then it should not be possible to trick humans into thinking something's smart when it isn't. The only way would be to build a genuinely intelligent system.

        • by narcc ( 412956 )

          I think most people agree that the Turing test is "invalid". I think you agree:

          If the Turing test IS valid, then it should not be possible to trick humans into thinking something's smart when it isn't.

          It is trivial to trick humans into thinking something is smart when it isn't.

          • by green1 ( 322787 )

            And yet, nobody has ever managed to make a computer program that's capable of tricking people into thinking it's a person. Nobody has ever managed to design a system that can pass the Turing test, so obviously it's not "trivial" to do so.

      • Theoretically an AI could be orders of magnitude 'smarter' than a human at many tasks but dumber at some simple things. If we could have an AI that could help advance human civilization by figuring out problems humans have a hard time at and if that AI can do it without us specifically programming it (by learning from assimilating available knowledge) then that's good enough. Why does a computer have to have human-like intelligence to be considered smart?
    • by Chas ( 5144 ) on Wednesday December 30, 2020 @04:56PM (#60880032) Homepage Journal

      What Amazon is pushing isn't "AI".
      It's simply a smart system running on responses to a limited number of parameters.
      That only works until the dataset for the parameters grows too large and contradictory.
      The current setup ALREADY shows signs of parameter failure.
      It's artificial. But it isn't actually "intelligent".

      • Yeah, current AI is more like "Artificial Stupidity". If a person needed a giant number of examples of a process to learn the process (which is how deep learning works), we would call them "stupid", and they would perform the process in a "stupid" way that follows the examples by rote even when they don't apply.

        I prefer to call what we have now "purpose-optimized semi-arbitrary algorithms" because as far as I can tell that's what they are.

        • Well, current "AI" is extremely stupid. We can even put a number on it: For every neuron in a human, you need a 100 "neurons" in their simulation. Because a neural link is represented as just a weight. Which is the best real-world example of that "simulating a horse race with a perfectly spherical horse on a sinusoidal trajectory" joke I have ever seen.

          Real neurons have so many features that they try to simulate with just more weights. But that doesn't work, just like 9 women cannot make a baby in a month.
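
          For reference, here is the entire artificial "neuron" being discussed, in the standard textbook formulation (a generic sketch, not any particular framework's code). Everything a biological neuron does is collapsed into a weighted sum and a squashing function:

              import math

              def neuron(inputs, weights, bias):
                  # The whole artificial "neuron": multiply, add, squash.
                  # No dendrites, no spike timing, no neurotransmitters.
                  z = sum(x * w for x, w in zip(inputs, weights)) + bias
                  return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

              print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))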

          • That's smart; it's like saying my phone is better than yours because I measure its storage in GB and you measure it in bytes. If a biological neuron is 100x more complex, then artificial neurons just need to scale up, or solve the problem more efficiently. And it's possible to surpass humans in many tasks, even in perception.
        • We benefited from a long and expensive evolutionary process. You should include that.
      • If anything, I think my Alexa has regressed in ability. I now only use it as a hands-free cooking timer.

      • "The" definition for AI keep changing and has never been clear.. Furthermore, I think Amazon has just been going for "voice control" with a slight glint of personality on the edges.

        --Matthew C. Tedder

    • The Turing test never made any sense. It says more about the person interviewing the subject than it does about the subject.

      • It says more about the person interviewing the subject than it does about the subject.

        Primarily because the candidates for testing are so far below the level of "intelligent" that they can only trick very unaware people.

      • You're certainly right that we are not precision instruments and are hard to calibrate for accuracy. But the 'sense' of the test is that the only measure we currently seem to have of 'intelligence' is ourselves. Consciousness remains a difficult thing to measure, and even the most recent tests I'm aware of (the Marcello Massimini test) require a human brain to zap... which doesn't offer much for measuring a digital consciousness. The article is basically saying this isn't good enough, which I certainly
    • Yes! We need to start using the expression Artificial Stupidity just so that we can compel AS developers to aspire to AI. Intelligence is not what they're delivering!

    • by swell ( 195815 )

      Alexa has found a purpose as a timer in my house. "Alexa, set timer for 5 minutes." Something like that happens most every day. Nothing else seems to work. Today I felt hopeful: "Alexa, start stopwatch." But Alexa instead showed me an internet search for 'stopwatch'. I tried some variations to no avail. My Casio watch of 25 years ago understood both timer and stopwatch and it didn't need any intelligence to do that.

  • Loebner Prize (Score:4, Interesting)

    by dpille ( 547949 ) on Wednesday December 30, 2020 @04:35PM (#60879948)
    You just need to be more specific about what you mean by "AI."

    Yes, a chatbot will eventually win the Loebner Prize. But what if, instead of calling that "AI," we all said, that's nice, but your chatbot needs to be able to distinguish between other winners, copies of itself, and humans. By definition, humans can't do that (thus the chatbot(s) winning), so anything able to do this Loebner Prize+ test is actual, real, artificial intelligence, regardless of its ability on whatever other tasks you set before it.
  • by Frobnicator ( 565869 ) on Wednesday December 30, 2020 @04:41PM (#60879972) Journal

    He's right about the obvious truth that the test was written to answer questions from a different age. Everybody who has done significant work with AI knows this.

    He's wrong that we can stop it from driving the popular imagination.

    To scientists and academics the Turing Test is exactly what it is: the first viable proposed method to discover if a machine is exhibiting behavior which is indistinguishable from human behavior as identified by another human.

    To lay people, the Turing Test is the gold standard where a robot can invisibly infiltrate and integrate itself with other humans.

    Good luck getting the average idiot to revise their definitions; they get their facts from pop culture and movie references, not scientific journals.

  • Point well taken, the Turing Test is not a good measure of AI. Simply fooling humans by imitating conversation says little about machine "intelligence."
    It seems to me that to look for a simplistic "test" to measure AI is mistaken. There are already various tests and competitions, from object recognition to language generation. Why assume that one test or metric can encompass all aspects of computing?

    • by flink ( 18449 )

      The problem is that we do not have a good rigorous definition of "intelligence", because we don't really understand what intelligence is. All we have are some empirical examples of what we would describe as intelligent behaviour.

      Given this lack, the best we can do to determine if an entity has human-like intelligence is to point to a behavior that we think is associated with human intelligence and see if the entity can replicate that behaviour. In the case of the Turing test, the proxy for intelligence is

    • I propose a simple replacement. Be able to answer correctly any question that a child can answer.

      We can start with a 1st grader:
      What color is the sky?
      How many letters are in the word cat?
      Does a microwave make things hotter or colder?

      and move up to a 5th grader:
      What is the 3rd letter of the first president's last name?

      We could even make it easier with yes/no questions like:

    • > Why assume that one test or metric can encompass all aspects of computing?

        I'm not sure that "all aspects of computing" is what we're trying to measure here.

      I think the Turing Test was designed as a measurement of a specific concept branded Artificial Intelligence (tm). There is a ton of useful computation that is not AI.

      • by green1 ( 322787 )

        There is a ton of useful computation that is not AI.

        There used to be. Now everything "on a computer" has been re-defined as "AI" even when it bears no resemblance to anything anyone who doesn't have a marketing degree would believe is AI.

    • by narcc ( 412956 )

      Why assume that one test or metric can encompass all aspects of computing?

      The problem is bigger than that. We've known for 40 years that purely computationalist approaches to so-called strong AI are doomed to failure. I don't know that anyone takes that old idea seriously anymore. Certainly no one qualified is doing legitimate work along those lines.

    • Yeah, it's not that useful. GPT-3 can pass Turing tests maybe 50% of the time. But it's mostly just a big old language model, albeit one with an unnervingly human sense of context (well, that's not surprising; it was trained on human text). But is it AGI? Well, no. It still doesn't *really* understand what it's doing; it's just hitting a database of interactions via a neural net.

      So if a Turing-Test-passing GPT-3 isn't true AGI, then we'd better come up with a better test.
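
      The underlying mechanic -- predict the next token from the ones before it -- can be shown with a toy bigram model. GPT-3 is a transformer trained on vastly more context, so this is only the idea, not the architecture:

          import random
          from collections import defaultdict

          corpus = "the test is obsolete the test is easy the test is hard".split()

          # Count which word follows which: a bigram "language model".
          follows = defaultdict(list)
          for a, b in zip(corpus, corpus[1:]):
              follows[a].append(b)

          def generate(word, length=8):
              out = [word]
              for _ in range(length):
                  if not follows[word]:
                      break  # dead end: no observed continuation
                  word = random.choice(follows[word])  # sample the next token
                  out.append(word)
              return " ".join(out)

          print(generate("the"))

      Scale the same predict-the-next-token loop up by many orders of magnitude and you get something that sounds human without any claim to understanding.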

      • by green1 ( 322787 )

        50% of the time? I've seen many examples of GPT-3, and I've never seen one that could even remotely be considered to have passed the Turing test. It's mostly gibberish word salad. It can't carry on a full conversation convincingly.

  • by mhkohne ( 3854 ) on Wednesday December 30, 2020 @04:42PM (#60879976) Homepage

    The fact that we can do useful things with AIs that don't pass the Turing test doesn't mean the test is obsolete. It just means we've found one more class of problems that don't require its solution. We've got LOTS of problems that don't require passing a Turing test.

    Someone please stop giving these bozos airtime.

    • The nice thing about AI is that whatever its level of competence, it's scalable.
      You want a cheap AI minder for everyone on the planet? You got it. It can monitor and escalate whenever it detects signs the human is going to become uppity. It does not have to be all that smart by itself.

      • by narcc ( 412956 )

        The nice thing about AI is that whatever its level of competence, it's scalable.

        What makes you think that?

    • by green1 ( 322787 )

      No, we can't do ANYTHING with AI at all, because nobody has yet come up with AI. Marketing people have redefined all sorts of things to be "AI" but that doesn't actually make them so.

      Just because a computer does something useful doesn't make it AI. Useful is good, but it doesn't need to be AI to do something useful.

  • by JustAnotherOldGuy ( 4145623 ) on Wednesday December 30, 2020 @04:46PM (#60879994) Journal

    To paraphrase a Park Ranger speaking about the difficulty of making a bear-proof container:

    "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."

    So it will be with AI posing "are you human?" questions. Some people won't be able to convince the AI that they're human (I know a couple of likely candidates, to be honest).

    I wanna see battling AIs trying to convince each other that they're human.

  • by plate_o_shrimp ( 948271 ) on Wednesday December 30, 2020 @04:47PM (#60880008)
    My understanding of the Turing test is this: if a computer (AI) is indistinguishable from a human being, by a human being, then it cannot be said that the AI does not think or is not conscious. Has it graduated from being a programmed machine to being a thinking intelligence? You can't say for sure that it /is/, but you also can't rule it out. Prasad seems to see it as: can we build a machine specifically designed to fool humans? I see these as two fundamentally different viewpoints: mine is a test to see if the AI has woken up, his is a test of how clever the engineers are. One of us misunderstands the test.
    • The Turing test has nothing to do with being conscious or the ability to think. It is purely about a computer's ability to imitate human interaction, to the extent that in a blind test, where one party is a computer and the other a human, a person interacting with them cannot reliably identify the computer.
      • > The Turing test has nothing to do with being conscious or the ability to think.

        The very first sentence of Turing's paper is:
        "I propose to consider the question, ‘Can machines think?’"
  • The Turing test has no relationship to AI, and Turing had nothing to say about the topic.

    This sounds like the musings of the tech equivalent of a New Ager. Gullible, accepting, and uncritical.
    • In fact, the phrase "Artificial Intelligence" was coined five years after Turing presented his paper about this test (which he called "the imitation game"), and not by Alan Turing.

      Turing's paper was more philosophical than technical anyway. He started out with a declaration of intent to address the question "Can machines think?" Referring, of course, to computers of his day. This question had no practical or legal value at that time; it was really just philosophical musing.

      He proceeded to reject this f

      • by narcc ( 412956 )

        in much the same way that Daniel Dennett and other modern physicalist philosophers do.

        Dan Dennett is a populist hack. You'd do well to ignore anything he says.

  • Sales pitch (Score:5, Insightful)

    by ChatHuant ( 801522 ) on Wednesday December 30, 2020 @05:44PM (#60880156)

    If you're asking your AI assistant to turn off your garage lights, you aren't looking to have a dialogue. Instead, you'd want it to fulfill that request and notify you with a simple acknowledgment, "ok" or "done."

    Sounds to me like the author is trying to muddy the definition of AI. I wouldn't call speech recognition of a few simple commands AI. By this definition my wall switch is AI: I request lights to be on via a simple mechanical action (flipping the switch), and it acknowledges the command via a mechanical click. Neither the switch nor Alexa understands anything more than that.

    And who in the world would run a Turing test by asking "what is the square root of X"? That's not the point of the test at all - a calculator will find the result much faster than any regular human can, but that doesn't make it artificial intelligence.

    The point of the test is that it can run the gamut of subjects, reasoning, understanding, and context that a regular human is expected to be able to provide. You should be able to tell the machine a joke, and it should be able to explain why it's funny. You should be able to speak metaphorically, and it should understand you. It should be able to interpret and respond to the same words or sentences differently depending on context (with the context encompassing a very wide range: for example, the current conversation, past conversations with you, the local or international situation, and so on). You should be able to discuss an ethical problem with it, and it should provide meaningful reasoning. Can Alexa do any of that? Until it can, declaring the Turing test obsolete is nothing more than a sales pitch for Amazon's spyware.

    • by jythie ( 914043 )
      Eh, TBH you get this a lot in AI research today. Ever since machine learning took over, there has been a bit of a pattern of defining anything that can be solved by a narrow range of matrix equations as 'AI', and anything that can't as not. The whole idea is to frame all problems as things their solution works for, and since money has been pouring into that class of solution, there is a bit of an incentive to frame it as the entirety of AI.
    • by narcc ( 412956 )

      You should be able to discuss an ethical problem with it, and it should provide meaningful reasoning. Can Alexa do any of that?

      Hell, you can't get meaningful reasoning about an ethical problem on slashdot. What hope does my toaster have?

      • Hell, you can't get meaningful reasoning about an ethical problem on slashdot. What hope does my toaster have?

        Fair enough - though I'd say that picking the average slashbot as the reference point for human intelligence does give your toaster a fighting chance.

  • What proven tests does he recommend to replace the Turing test, or is he simply spouting off? In business, you need to know how to fix a problem, or keep your trap shut.
    • by narcc ( 412956 )

      The only "business" here is in keeping the AI "research" money flowing. I'd say he's doing a bang-up job.

    • by green1 ( 322787 )

      His solution is to redefine AI to mean simple speech recognition, database lookup, and command execution. Probably because people are willing to shell out big bucks if they think something is "AI", and it's far easier to redefine "AI" as what they already have than it is to actually develop AI (something nobody has yet succeeded at, despite MANY people trying to redefine it in ways similar to this clown).

    • In the article, he recommends solving logic problems the AI hasn't seen before [kaggle.com]. Somewhat self-servingly, he also recommends Amazon's own chatbot challenge [amazon.com].

  • The basic premise of the Turing Test is still valid. Can you build a program that is indistinguishable from a human?
    Questions that prove it's a computer are kind of cheating. The point isn't to find a question a computer can answer
    that a human can't. The point is to find a question that a human can answer that a computer can't.
    If you want to level the playing field though, let the human have access to a calculator and/or the internet.
    There are still plenty of questions that humans can easily answer that

    • Questions like "What is the third letter of the first president's last name?" are simple for almost any child
      but computers can't answer because computers are not actually understanding what is being said.

      Is that hard? I thought Watson solved that kind of problem.
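
      Worth noting which half of that question is actually hard. Once the world-knowledge step is done ("first president" -> Washington), the rest is trivial string indexing; the difficulty is entirely in the knowledge and composition, not the computation:

          first_president = "Washington"  # the world-knowledge hop a child does effortlessly
          print(first_president[2])       # -> 's': index 2 is the 3rd letter (zero-based)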

  • Call me in 50 years when we have to pass the computer test or we'll be selected for annihilation.

  • by FunkSoulBrother ( 140893 ) on Wednesday December 30, 2020 @06:18PM (#60880250)

    My personal test of whether an AI assistant has value is to compare it to a real-life human assistant, in the sense that a real-life assistant has a memory of past items discussed, can understand pronouns and other placeholder language, and can grok multiple requests in the same query. If I'm being ambiguous, the real-life assistant can ask clarifying questions to get at what I really want.

    For example, I want to be like:

    "Hey Alexa, remember the blonde girl who I had a meeting with on Tuesday Morning? Please schedule a reservation for 7PM tonight at a sushi restaurant near the office in her name for both of us"

    as opposed to:

    "Please list attendees of 2PM meeting"
    (list of names response)
    "Tell me Chinese restaurant nearby"
    (list of restaurants response)
    "Book a table at 7PM for (name from list)"

    etc etc.

    It's fine to ask me back "you had 3 meetings Tuesday morning, which did you mean?"

    (caveat: I haven't used any of these services other than Siri to ask directions while keeping my eyes on the road or maybe to set an alarm. So maybe they are already this good, but I doubt it?)
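
    What separates the two styles of interaction above is reference resolution against stored context before dispatching sub-requests. A sketch of that step in Python -- every name here is made up, a placeholder for whatever backend a real assistant has, not an actual Alexa or Siri API:

        # Hypothetical context store: meetings the assistant already knows about.
        MEETINGS = {
            ("Tuesday", "morning"): [{"attendees": ["Dana"], "location": "office"}],
        }

        def resolve_person(day, time_of_day):
            # "the person I met with Tuesday morning" -> look up attendees.
            meetings = MEETINGS.get((day, time_of_day), [])
            if len(meetings) != 1:
                return None  # ambiguous: ask a clarifying question instead
            return meetings[0]["attendees"][0]

        def handle_request(day, time_of_day, cuisine, hour):
            person = resolve_person(day, time_of_day)
            if person is None:
                return "You had several meetings then -- which one did you mean?"
            return f"Booked a {cuisine} place near the office for you and {person} at {hour}."

        print(handle_request("Tuesday", "morning", "sushi", "7PM"))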

  • None of this commercial glitter is Artificial Intelligence. It is unable to pass a test under the crucial constraint that was handed to me repeatedly since at least 8th grade: "show your work".
  • Call me when I can tell a robot to go fetch me a beer, fix me a sandwich, and take out the trash. That's something useful a human can do. And no, the Boston Dynamics entertainment robots can't do it. They don't have a dexterous hand, for one thing.

  • Translation: (Score:5, Insightful)

    by Chris Mattern ( 191822 ) on Wednesday December 30, 2020 @06:31PM (#60880300)

    "This target is too hard. Let's paint the bulls-eye around the arrow."

  • Sarcasm.

    Call me when it can detect sarcasm like a human.

    (You can only detect sarcasm right if you grew up and lived a life in the society it comes from. Since SWMs [shitty weight matrices, "AI"] didn't do that, they cannot properly detect sarcasm... or understand references, memes, inside jokes, etc., by the way.)

  • The metric by which you could measure an AI's intelligence is the accuracy with which it can identify whether it is communicating with a human or with another AI.

    When an AI is talking to an AI, you can create a feedback loop on the AI being used for input, so that one AI gets better at convincing the other AI that it is human, while the other AI always gets feedback on how it did, to get better at detecting whether it is talking to a human or another AI.

    Ensure that for any given test, whether
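
    What this describes is essentially adversarial training, the setup popularized by GANs: one model generates, the other discriminates, and each one's error signal trains the other. A 1-D numeric toy of that loop (a sketch of the dynamic, not a production GAN; "human" samples here are just draws from N(4, 1)):

        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        mu = 0.0          # generator parameter: mean of the "AI" samples
        a, b = 0.1, 0.0   # discriminator D(x) = sigmoid(a*x + b): "is x human?"
        lr = 0.01

        for step in range(20000):
            real = rng.normal(4.0, 1.0)        # a "human" sample
            fake = mu + rng.normal(0.0, 1.0)   # an "AI" sample

            # Discriminator step: ascend log D(real) + log(1 - D(fake)).
            d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
            a += lr * ((1 - d_real) * real - d_fake * fake)
            b += lr * ((1 - d_real) - d_fake)

            # Generator step: ascend log D(fake), i.e. make fakes look human.
            mu += lr * (1 - sigmoid(a * fake + b)) * a

        # mu should drift toward the "human" mean of 4.0 as the two models spar.
        print(f"generator mean after training: {mu:.2f}")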

  • It is not as if we are short of real humans. They just need better training.

    • by PPH ( 736903 )

      I suspect that a lot of these real humans would struggle to pass the Turing test.

  • He claims: The emphasis on tricking humans means that for an AI to pass Turing's test, it has to inject pauses in responses to questions like, "do you know what is the cube root of 3434756?"

    That would only work with the most gullible humans - anybody with a modicum of savvy, tasked with finding out whether the other party in the dialog is a human or a machine, would probably try exactly the approach above, on the grounds that programs can answer it trivially. No, state-of-the-art chatbots get quickly confused when you

  • > Turing Test all but discounts [misused label]'s machine-like attributes of fast computation and information lookup.

    So what should it be: Computers that talk?

    Amazon wants to market Alexa as AI. It is not.

    Alexa's "intelligence" is barely above that of Eliza. Below the intelligence of a Parrot.

    Does this work:

    My brother's name is Ted.
    Ted's favorite color is Red.
    What is my brother's favorite color?
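
    That little test is just a two-hop lookup: resolve "my brother" to Ted, then Ted to a color. Even a toy symbolic store can chain it, which is exactly why failing it is damning:

        # Toy fact store: the question is trivial once the sentences
        # have been captured as structure rather than left as text.
        facts = {
            ("my brother", "name"): "Ted",
            ("Ted", "favorite color"): "Red",
        }

        brother = facts[("my brother", "name")]    # hop 1: who is my brother?
        print(facts[(brother, "favorite color")])  # hop 2: his color -> Red

    The hard part for Alexa isn't the lookup; it's turning free-form speech into those structured facts in the first place.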

    • by narcc ( 412956 )

      What's wrong with that? AI is all about marketing!

      Real AI work has absolutely nothing to do with what everyone else thinks AI is all about. That confusion is what sells products and secures funding.

      It's been like that for a very, very long time. AI hype has always come and gone in waves. I'm absolutely stunned that the current wave has lasted as long as it has.

  • Reading this, it sounds like what they want to do is what companies like Intel, AMD, and others have been accused of doing for years and years: tailoring performance tests to give them an unfair advantage and/or put the competition at an unfair disadvantage. Moving the goalposts -- for themselves.
    To be more direct about it: they sound like they want to redefine what a 'Turing Test' is so that their crappy, half-assed, non-cognitive excuse for 'AI' looks better than it really is -- so they can se
  • While machine learning has been incredibly profitable, I think this piece is an example of the rising sense of hopelessness it has produced in the AI community. The ML end of AI has pretty much given up on AI; gone are the days of talking about knowledge representation or symbolic reasoning, replaced with 'if we throw enough math into a blender, we can get lucky enough to sell movies and ads!' Which is what I see in this statement: resignation, and perhaps insecurity about the Turing test, so like so much in ML
  • And yet not a single AI comes close to passing the Turing Test. What does that say about your tech, if the Turing test is obsolete?
    • by narcc ( 412956 )

      Depends on your version of "the" Turing test. I, along with many others, have always maintained that Turing tests, no matter the criteria, are completely useless for identifying "true" AI. (That is, the populist conception of AI.) The whole behaviorist approach was foolish from the start.

      Now, if we just want to trick a human into thinking a program has its own thoughts and feelings, well, that's trivial. I gave some examples earlier in this thread.

      I thought about it for a bit, and I'll bet that a lot of us

      • by green1 ( 322787 )

        Now, if we just want to trick a human into thinking a program has its own thoughts and feelings, well, that's trivial. I gave some examples earlier in this thread.

        I've read the entire thread, and I haven't found a single example of an AI that can trick a human into thinking it has its own thoughts and feelings.

        It's "Trivial" as you've said many times, yet nobody has ever managed to do it. You keep using that word, I do not think it means what you think it does.

  • Is the AI human enough to sacrifice itself for the benefit of humanity?
  • 1. Fetch requested drink from fridge
    2. Suck my wanker
    3. Wash all the dishes found in the sink and on the table
    4. Do the laundry
    5. Filter out telemarketers from real calls
    6. F with Mormons & Joho's who knock on my door
    8. Let me know when I mis-number a list
    7. Don't say "Profit!"

  • How good is the VP of AI if he still has a job?

  • The Turing Test has always been nothing more than foolish entertainment... Honestly, it's embarrassed me since I first heard of it, back in 1989. The Turing Test is more nebulous than the constantly changing definition of AI, almost as bad as the definition of "robot", which a remote-controlled toy car now fits. And an "android" is now a phone?? How did any of this happen?

    Deep Learning isn't much beyond inferential statistical methods, and is far, far from a biologically valid simulation of neu

  • by bb_matt ( 5705262 ) on Thursday December 31, 2020 @03:58AM (#60881318)

    say so very, very quietly.

    Very, very quietly, the door murmured, ``I can hear you.''

    ``Good. Now, in a moment, I'm going to ask you to open. When you open I do not want you to say that you enjoyed it, OK?''

    ``OK.''

    ``And I don't want you to say to me that I have made a simple door very happy, or that it is your pleasure to open for me and your satisfaction to close again with the knowledge of a job well done, OK?''

    ``OK.''

    ``And I do not want you to ask me to have a nice day, understand?''

    ``I understand.''

    ``OK,'' said Zaphod, tensing himself, ``open now.''

    The door slid open quietly. Zaphod slipped quietly through. The door closed quietly behind him.

    ``Is that the way you like it, Mr Beeblebrox?'' said the door out loud.

  • by CptJeanLuc ( 1889586 ) on Thursday December 31, 2020 @05:59AM (#60881426)

    I will file this under Amazon VP says things that an intelligent human would not.
