VP and Head Scientist of Alexa at Amazon: 'The Turing Test is Obsolete. It's Time To Build a New Barometer For AI'
Rohit Prasad, Vice President and Head Scientist of Alexa at Amazon, writes: While Turing's original vision continues to be inspiring, interpreting his test as the ultimate mark of AI's progress is limited by the era when it was introduced. For one, the Turing Test all but discounts AI's machine-like attributes of fast computation and information lookup, features that are some of modern AI's most effective. The emphasis on tricking humans means that for an AI to pass Turing's test, it has to inject pauses in responses to questions like, "do you know what is the cube root of 3434756?" or, "how far is Seattle from Boston?" In reality, AI knows these answers instantaneously, and pausing to make its answers sound more human isn't the best use of its skills. Moreover, the Turing Test doesn't take into account AI's increasing ability to use sensors to hear, see, and feel the outside world. Instead, it's limited simply to text.
To make AI more useful today, these systems need to accomplish our everyday tasks efficiently. If you're asking your AI assistant to turn off your garage lights, you aren't looking to have a dialogue. Instead, you'd want it to fulfill that request and notify you with a simple acknowledgment, "ok" or "done." Even when you engage in an extensive dialogue with an AI assistant on a trending topic or have a story read to your child, you'd still like to know it is an AI and not a human. In fact, "fooling" users by pretending to be human poses a real risk. Imagine the dystopian possibilities, as we've already begun to see with bots seeding misinformation and the emergence of deep fakes. Instead of obsessing about making AIs indistinguishable from humans, our ambition should be building AIs that augment human intelligence and improve our daily lives in a way that is equitable and inclusive. A worthy underlying goal is for AIs to exhibit human-like attributes of intelligence -- including common sense, self-supervision, and language proficiency -- and combine machine-like efficiency such as fast searches, memory recall, and accomplishing tasks on your behalf. The end result is learning and completing a variety of tasks and adapting to novel situations, far beyond what a regular person can do.
Not if Alexa is your example (Score:5, Insightful)
Re:Not if Alexa is your example (Score:5, Insightful)
Ironically, these people dislike the Turing test because it is too hard. "you aren't looking to have a dialogue."
OK, but the goal is to make a computer that has human-level intelligence. The Turing test is one idea for how to measure that.
Re:Not if Alexa is your example (Score:5, Interesting)
Very odd, isn't it? Eliza was thought to demonstrate how foolish the standard was, as it purported to show that even a very simple program could pass the Turing test!
We live in the future, so I should probably explain. Eliza was an early "chat bot" type program written by Joe Weizenbaum at the MIT AI lab back in the '60s. Eliza simulates a Rogerian therapist, mostly just rephrasing statements you make as questions and mixing in a few canned responses like "I see" and "can you elaborate". Joe's secretary was famously taken in by the program and thought her conversations should be kept private.
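For anyone who has never seen it, the rephrasing trick is simple enough to sketch in a few lines of Python. This is an illustrative toy, not Weizenbaum's actual program; the patterns and canned responses here are made up:

```python
import random
import re

# A minimal Eliza-style sketch: rephrase first-person statements as
# questions, and fall back to canned therapist prompts otherwise.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}
CANNED = ["I see.", "Can you elaborate?", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    text = statement.strip().rstrip(".")
    m = re.match(r"i am (.*)", text, re.IGNORECASE)
    if m:
        return f"Why do you say you are {reflect(m.group(1))}?"
    m = re.match(r"i feel (.*)", text, re.IGNORECASE)
    if m:
        return f"How long have you felt {reflect(m.group(1))}?"
    return random.choice(CANNED)

print(respond("I am worried about my exams"))
# -> Why do you say you are worried about your exams?
```

A handful of patterns like these, plus the human tendency to fill in the gaps, is the whole trick.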
Oh, but we're too sophisticated to be taken in by a computer program like that today, right?
We're still being fooled into attributing human qualities to computers, surprisingly, by even simpler tricks than Eliza used! If you've ever installed Windows 10, you'll have seen messages like "Hi" and "We're getting things ready for you" that try to make you feel like your computer is friendly and is trying to help you out. Cortana even talks to you to try to make the setup process feel less frightening to normal users.
Even though no one is truly fooled by these silly tricks, they still work. That is, even though it doesn't make anyone think the computer is a sentient being with its own thoughts, feelings, and emotions, it's more than enough to put users at ease and let them relate emotionally to their computer.
Then we have things like computer games [time.com] designed to make players form a strong emotional bond with one of the characters. The illusion is good enough for many people to use it as a substitute for a real-world relationship. That particular game (Love Plus) was even popular enough to get real-world retreats [cnet.com] to cater to users and their virtual companions.
I'm with Ol' Joe Weizenbaum here. The Turing test isn't too hard -- it's way too easy! Who would have thought that it was trivial to write a program that makes you fall in love with it and makes you feel that it loves and cares for you?
More importantly, it isn't being too easy or too difficult that matters anyway. Those sorts of tests, no matter how you structure them, are just inadequate to measure what they are meant to measure.
Re:Not if Alexa is your example (Score:4, Insightful)
I remember AIs in the early text-based MUDs. I always thought the lesson of so many people readily believing these were real people was that a proper Turing test should not use naive testers who don't even know they are in a test. Eliza works because we are easily drawn into talking about ourselves. Any decent Turing tester is going to ask the other party questions about themselves, invite creativity in novel areas, shift contexts, and refer back to things mentioned earlier in the conversation; there are plenty of ways for an attempt to pass the Turing test to fail easily.
It's like testing any software: whether or not you have tests written for what you hope is "everything," you also want some people who understand the code and the real-world use case and can intentionally generate possible failure scenarios.
Re: (Score:2)
Indeed. It is easy to ask questions that require "world knowledge" that a modern AI can't easily answer.
"The guitar would not fit in the case because it was too big. What was too big?"
"The drum would not fit in the box because it was too small. What was too small?"
Winograd Schema Challenge [wikipedia.org]
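The trick in these pairs is that swapping a single word flips which noun the pronoun refers to, so shallow heuristics score no better than chance. A toy harness makes that concrete (the data layout and the naive heuristic here are hypothetical, not part of the actual challenge):

```python
# A mini Winograd-style schema pair: the same sentence frame with one
# word swapped flips the referent of "it".
schemas = [
    {"sentence": "The guitar would not fit in the case because it was too big.",
     "question": "What was too big?", "answer": "the guitar"},
    {"sentence": "The guitar would not fit in the case because it was too small.",
     "question": "What was too small?", "answer": "the case"},
]

def naive_resolver(sentence: str) -> str:
    # Naive heuristic: resolve "it" to the most recent noun phrase.
    # It answers "the case" both times, so it scores chance level.
    return "the case"

score = sum(naive_resolver(s["sentence"]) == s["answer"] for s in schemas)
print(f"naive heuristic: {score}/{len(schemas)} correct")
# -> naive heuristic: 1/2 correct
```

Answering both correctly requires world knowledge (guitars go inside cases, big things don't fit in small ones), which is exactly what the challenge is probing for.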
Re: (Score:2)
Complete and utter BS.
I've never seen a chatbot that took more than about 30 seconds to expose as not a real person. You get so frustrated talking to them in about that much time because they obviously can't even remember what they said two lines ago, let alone manage to come off as human.
After all the hype about Eliza, I tried it; I got through about three sentences before I gave up, as there was nothing even vaguely human-sounding about it. I've tried many others over the years that all claim to pass the Turing test,
Re: (Score:3, Insightful)
Re: (Score:3)
If you want to build something useful, then build something useful. That is fine. Forklifts are useful, but not particularly intelligent.
If you want to build general AI, then it should have intelligence.
Re: Not if Alexa is your example (Score:2)
Re: (Score:2)
Re: (Score:2)
Is the goal really to have human level intelligence?
Well, yes, that's the goal for now until we can achieve it, but not in the way you're suggesting. Having human-level intelligence means being able to understand sarcasm and imperfect responses, recognize fakes, perhaps even experience some variation on boredom so that it seeks out new places to learn. The failing of current AI doesn't appear to be that it's too good, as in "The Matrix" sense where the architect creates a too-perfect response. It's more that current iterations of AI are very impressive calculators vs a
Re: (Score:2)
Agreed. It seems that everyone automatically assumes that the Turing test is invalid.
The emphasis on tricking humans means that for an AI to pass Turing's test
If the Turing test IS valid, then it should not be possible to trick humans into thinking something's smart when it isn't. The only way would be to build a genuinely intelligent system.
Re: (Score:2)
I think most people agree that the Turing test is "invalid". I think you agree:
If the Turing test IS valid, then it should not be possible to trick humans into thinking something's smart when it isn't.
It is trivial to trick humans into thinking something is smart when it isn't.
Re: (Score:2)
And yet, nobody has ever managed to make a computer program that's capable of tricking people into thinking it's a person. Nobody has ever managed to design a system that can pass the Turing test, so obviously it's not "trivial" to do so.
AI Human Intelligence (Score:2)
Re: (Score:2)
Same answer as this comment [slashdot.org]. Useful is fine.
Re: (Score:2)
It's worse than that. I've never seen one of these that can even remember the context of its own conversation more than one or two replies further down. They haven't even managed the "smart" part, let alone the context part.
Re:Not if Alexa is your example (Score:5, Insightful)
What Amazon is pushing isn't "AI".
It's simply a smart system running on responses to a limited number of parameters.
That only works until the dataset for the parameters grows too large and contradictory.
The current setup ALREADY shows signs of parameter failure.
It's artificial. But it isn't actually "intelligent".
Re: (Score:2)
Yeah, current AI is more like "Artificial Stupidity". If a person needed a giant number of examples of a process to learn it (which is how deep learning works), we would call them "stupid", and they would perform the process in a "stupid" way that follows the examples by rote even when they don't apply.
I prefer to call what we have now "purpose-optimized semi-arbitrary algorithms" because as far as I can tell that's what they are.
Re: Not if Alexa is your example (Score:2)
Well, current "AI" is extremely stupid. We can even put a number on it: for every neuron in a human, you need 100 "neurons" in the simulation, because a neural link is represented as just a weight. It's the best real-world example of the "simulating a horse race with a perfectly spherical horse on a sinusoidal trajectory" joke I have ever seen.
Real neurons have so many features, which these systems try to simulate with just more weights. That doesn't work, just like nine women cannot make a baby in a month.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
If anything I think my Alexa has regressed in ability. I now only use it as a hands free cooking timer.
AI has a Nebulous Definition (Score:2)
"The" definition of AI keeps changing and has never been clear. Furthermore, I think Amazon has just been going for "voice control" with a slight glint of personality around the edges.
--Matthew C. Tedder
Re: (Score:3)
The Turing test never made any sense. It says more about the person interviewing the subject than it does about the subject.
Re: (Score:2)
It says more about the person interviewing the subject than it does about the subject.
Primarily because the candidates for testing are so far below the level of "intelligent" that they can only trick very unaware people.
Re: (Score:2)
Re: (Score:2)
Yes! We need to start using the expression Artificial Stupidity just so that we can compel AS developers to aspire to AI. Intelligence is not what they're delivering!
Re: (Score:2)
Alexa has found a purpose as a timer in my house. "Alexa, set timer for 5 minutes." Something like that happens most every day. Nothing else seems to work. Today I felt hopeful: "Alexa, start stopwatch." But Alexa instead showed me an internet search for 'stopwatch'. I tried some variations to no avail. My Casio watch of 25 years ago understood both timer and stopwatch and it didn't need any intelligence to do that.
Re: (Score:2)
Loebner Prize (Score:4, Interesting)
Yes, a chatbot will eventually win the Loebner Prize. But what if, instead of calling that "AI," we all said, that's nice, but your chatbot needs to be able to distinguish between other winners, copies of itself, and humans. By definition, humans can't do that (thus the chatbot(s) winning), so anything able to do this Loebner Prize+ test is actual, real, artificial intelligence, regardless of its ability on whatever other tasks you set before it.
Karma on slashdot (Score:2)
How about counting the number of fans and freaks the AI userID gets on slashdot. And also how often the AI fans another AI.
Re: (Score:2)
I'm with you there. The whole idea that we can use a behaviorist test to detect "true AI" is just silly.
He's right as far as it goes, but it won't work. (Score:4, Insightful)
He's right about the obvious truth that the test was written to answer questions from a different age. Everybody who has done significant work with AI knows this.
He's wrong about stopping it driving the popular imagination.
To scientists and academics the Turing Test is exactly what it is: the first viable proposed method to discover if a machine is exhibiting behavior which is indistinguishable from human behavior as identified by another human.
To lay people, the Turing Test is the gold standard where a robot can invisibly infiltrate and integrate itself with other humans.
Good luck getting the average idiot to revise their definitions; they get their facts from pop culture and movie references, not scientific journals.
Re:He's right as far as it goes, but it won't work (Score:5, Funny)
To be fair, I know people who probably couldn't pass a Turing Test.
Re: (Score:2)
I spent a lot of time reading transcripts of Turing test trials a while back, especially from candidates that managed to trick people. They are often scenarios where one chatter is a person, and one is a computer, and the tester has to determine which one is a computer and which one is a human.
I found that often, the human chatter will act like a computer.
Replaced by what? (Score:2)
Point well taken, the Turing Test is not a good measure of AI. Simply fooling humans by imitating conversation says little about machine "intelligence."
It seems to me that to look for a simplistic "test" to measure AI is mistaken. There are already various tests and competitions, from object recognition to language generation. Why assume that one test or metric can encompass all aspects of computing?
Re: (Score:2)
The problem is that we do not have a good rigorous definition of "intelligence", because we don't really understand what intelligence is. All we have are some empirical examples of what we would describe as intelligent behaviour.
Given this lack, the best we can do to determine if an entity has human-like intelligence is to point to a behavior that we think is associated with human intelligence and see if the entity can replicate that behaviour. In the case of the Turing test, the proxy for intelligence is
are you smarter than a 5th grader? (Score:2)
I propose a simple replacement. Be able to answer correctly any question that a child can answer.
We can start with a 1st grader:
What color is the sky?
How many letters are in the word cat?
Does a microwave make things hotter or colder?
and move up to a 5th grader:
What is the 3rd letter of the first president's last name?
We could even make it easier with yes/no questions like:
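The point of the 5th-grader question is that the computation is trivial once the language is understood; the understanding is the hard part. A sketch (the lookup table here is illustrative):

```python
# Once "the first president's last name" is resolved to a string, the
# "3rd letter" part is a one-character index. Parsing the question into
# this lookup is the genuinely hard step for a machine.
presidents = ["Washington", "Adams", "Jefferson"]  # ordinal -> last name

def third_letter_of_first_presidents_name() -> str:
    return presidents[0][2]  # "Washington"[2]

print(third_letter_of_first_presidents_name())  # -> s
```
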
Re: (Score:2)
> Why assume that one test or metric can encompass all aspects of computing?
I'm not sure that "all aspects of computing" is what we're trying to measure here.
I think the Turing Test was designed as a measurement of a specific concept branded Artificial Intelligence (tm). There is a ton of useful computation that is not AI.
Re: (Score:2)
There is a ton of useful computation that is not AI.
There used to be. Now everything "on a computer" has been re-defined as "AI" even when it bears no resemblance to anything anyone who doesn't have a marketing degree would believe is AI.
Re: (Score:2)
Why assume that one test or metric can encompass all aspects of computing?
The problem is bigger than that. We've known for 40 years that purely computationalist approaches to so-called strong AI are doomed to failure. I don't know that anyone takes that old idea seriously anymore. Certainly no one qualified is doing legitimate work along those lines.
Re: (Score:2)
Yeah, it's not that useful. GPT-3 can pass Turing tests maybe 50% of the time. But it's mostly just a big old language model, albeit one with an unnervingly human sense of context (well, it's not that surprising: it was trained on human text). But is it AGI? Well, no. It still doesn't *really* understand what it's doing; it's just hitting a database of interactions via a neural net.
So if a Turing-test-passing GPT-3 isn't true AGI, then we better come up with a better test.
Re: (Score:2)
50% of the time? I've seen many examples of GPT-3, and I've never seen one that could even remotely be considered to have passed the Turing test. It's mostly gibberish word salad. It can't carry on a full conversation convincingly.
Says the guy whose AI can't pass the Turing Test (Score:5, Insightful)
The fact that we can do useful things with AIs that don't pass the Turing test doesn't mean the test is obsolete. It just means we've found one more class of problems that don't require its solution. We've got LOTS of problems that don't require passing a Turing test.
Someone please stop giving these bozos airtime.
Re: (Score:2)
The nice thing about AI is that whatever its level of competence, it's scalable.
You want a cheap AI minder for everyone on the planet? You've got it. It can monitor and escalate whenever it detects signs the human is going to become uppity. It does not have to be all that smart by itself.
Re: (Score:2)
The nice thing about AI is that whatever its level of competence, it's scalable.
What makes you think that?
Re: (Score:2)
No, we can't do ANYTHING with AI at all, because nobody has yet come up with AI. Marketing people have redefined all sorts of things to be "AI" but that doesn't actually make them so.
Just because a computer does something useful, doesn't make it AI. Useful is good, it doesn't need to be AI to do something useful.
To paraphrase (Score:5, Funny)
To paraphrase a Park Ranger speaking about the difficulty of making a bear-proof container:
"There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
So it will be with AI posing "are you human?" questions. Some people won't be able to convince the AI that they're human (I know a couple of likely candidates, to be honest).
I wanna see battling AIs trying convince each other that they're human.
Re: (Score:3)
I wanna see battling AIs trying convince each other that they're human.
Did you miss the debates this year?
Re: To paraphrase (Score:2)
Thankfully, yes. ;)
I prefer to keep all my hair, thank you very much.
Misunderstanding the Turing test (Score:5, Interesting)
Re: (Score:2)
Re: (Score:2)
The very first sentence of Turing's paper is:
"I propose to consider the question, ‘Can machines think?’"
AI? (Score:2)
This sounds like the musings of the tech equivalent of a New Ager: gullible, accepting, and uncritical.
Re: (Score:3)
In fact, the phrase "Artificial Intelligence" was coined five years after Turing presented his paper about this test (which he called "The Imitation Game.") And not by Alan Turing.
Turing's paper was more philosophical than technical anyway. He started out with a declaration of intent to address the question "Do computers think?", referring, of course, to the computers of his day. This question had no practical or legal value at the time; it was really just philosophical musing.
He proceeded to reject this f
Re: (Score:2)
in much the same way that Daniel Dennett and other modern physicalist philosophers do.
Dan Dennett is a populist hack. You'd do well to ignore anything he says.
Sales pitch (Score:5, Insightful)
If you're asking your AI assistant to turn off your garage lights, you aren't looking to have a dialogue. Instead, you'd want it to fulfill that request and notify you with a simple acknowledgment, "ok" or "done."
Sounds to me like the author is trying to muddy the definition of AI. I wouldn't call speech recognition of a few simple commands AI. By this definition my wall switch is AI: I request the lights on via a simple mechanical action (flipping the switch), and it acknowledges the command with a mechanical click. Neither the switch nor Alexa understands anything more than that.
And who in the world would run a Turing test by asking "what is the square root of X"? That's not the point of the test at all: a calculator will find the result much faster than any regular human can, but that doesn't make it artificial intelligence.
The point of the test is that it can run the gamut of subjects, reasoning, understanding, context, that a regular human is expected to be able to provide. You should be able to tell the machine a joke, and it could explain why it's funny. You should be able to speak metaphorically, and it should understand you. It should be able to interpret and respond to the same words or sentences differently, depending on context (and with the context encompassing a very wide range: for example, the current conversation, past conversations with you, the local or international situation, and so on). You should be able to discuss an ethical problem with it, and it should provide meaningful reasoning. Can Alexa do any of that? Until it can, declaring the Turing test obsolete is nothing more than a sales pitch for Amazon's spyware.
Re: (Score:2)
Re: (Score:2)
You should be able to discuss an ethical problem with it, and it should provide meaningful reasoning. Can Alexa do any of that?
Hell, you can't get meaningful reasoning about an ethical problem on slashdot. What hope does my toaster have?
Re: (Score:2)
Hell, you can't get meaningful reasoning about an ethical problem on slashdot. What hope does my toaster have?
Fair enough - though I'd say that picking the average slashbot as the reference point for human intelligence does give your toaster a fighting chance.
If you bring up a problem, offer a solution (Score:2)
Re: (Score:2)
The only "business" here is in keeping the AI "research" money flowing. I'd say he's doing a bang-up job.
Re: (Score:2)
His solution is to redefine AI to mean simple speech recognition, database lookup, and command execution. Probably because people are willing to shell out big bucks if they think something is "AI" and it's far easier to redefine "AI" to what they already have, than it is to actually develop AI (Something that nobody has yet succeeded at, despite MANY people trying to redefine it in similar ways to this clown)
Re: (Score:2)
In the article, he recommends solving logic problems the AI hasn't seen before [kaggle.com]. Somewhat hypocritically, he also recommends Amazon's own chatbot challenge [amazon.com].
The 3rd letter of the 1st president's last name? (Score:2)
The basic premise of the Turing Test is still valid. Can you build a program that is indistinguishable from a human?
Questions that prove it's a computer are kind of cheating. The point isn't to find a question a computer can answer that a human can't. The point is to find a question that a human can answer that a computer can't.
If you want to level the playing field though, let the human have access to a calculator and/or the internet.
There are still plenty of questions that humans can easily answer that
Re: (Score:2)
Questions like "What is the third letter of the first president's last name?" are simple for almost any child, but computers can't answer them because they do not actually understand what is being said.
Is that hard? I thought Watson solved that kind of problem.
Computer test (Score:2)
Call me in 50 years when we have to pass the computer test or we'll be selected for annihilation.
My threshold for AI usefulness (Score:4, Interesting)
My personal test of whether an AI assistant has value is to compare it to a real-life human assistant: the real-life assistant has a memory of past items discussed, can understand pronouns and other placeholder language, and can grok multiple requests in the same query. If I'm being ambiguous, the real-life assistant can ask clarifying questions to get at what I really want.
For example, I want to be like:
"Hey Alexa, remember the blonde girl who I had a meeting with on Tuesday Morning? Please schedule a reservation for 7PM tonight at a sushi restaurant near the office in her name for both of us"
as opposed to:
"Please list attendees of 2PM meeting"
(list of names response)
"Tell me Chinese restaurant nearby"
(list of restaurants response)
"Book a table at 7PM for (name from list)"
etc etc.
It's fine to ask me back "you had 3 meetings Tuesday morning, which did you mean?"
(caveat: I haven't used any of these services other than Siri to ask directions while keeping my eyes on the road or maybe to set an alarm. So maybe they are already this good, but I doubt it?)
Sophistry (Score:2)
Beer from fridge test (Score:2)
Call me when I can tell a robot to go fetch me a beer, fix me a sandwich, and take out the trash. That's something useful a human can do. And no the Boston Dynamics entertainment robots can't do it. They don't have a dextrous hand for one thing.
Translation: (Score:5, Insightful)
"This target is too hard. Let's paint the bulls-eye around the arrow."
That's easy! (Score:2)
Sarcasm.
Call me when it can detect sarcasm like a human.
(You can only detect sarcasm right if you grew up and lived a life in the society it comes from. Since SWM [shitty weight matrices, "AI"] didn't do that, they cannot properly detect sarcasm. ... Or understand references, memes, inside jokes, etc., by the way.)
I suggest an arms race (Score:2)
The metric by which you could measure an AI's intelligence is the accuracy with which it can identify whether it is communicating with a human or with another AI.
When an AI is talking to an AI, you can create a feedback loop on the input, so that one AI gets better at convincing the other that it is human, while the other always gets feedback on how it did, to get better at detecting whether it is talking to a human or another AI.
Ensure that for any given test, whether
Why do you want artificial humans? (Score:2)
It is not as if we are short of real humans. They just need better training.
Re: (Score:2)
I suspect that a lot of these real humans would struggle to pass the Turing test.
How naive is this fellow? (Score:2)
He claims: The emphasis on tricking humans means that for an AI to pass Turing's test, it has to inject pauses in responses to questions like, "do you know what is the cube root of 3434756?"
That would only work with the most gullible humans. Anybody with a modicum of savvy, tasked to find out whether the other party in the dialog is a human or a machine, would probably try exactly that approach, on the grounds that programs can trivially answer such questions. No, state-of-the-art chatbots get quickly confused when you
Re: (Score:2)
Re: (Score:2)
Hot tip: never buy anything from a place selling carrot gold.
Alexa, Eliza, and Parrots (Score:2)
> Turing Test all but discounts [misused label]'s machine-like attributes of fast computation and information lookup.
So what should it be: Computers that talk?
Amazon wants to market Alexa as AI. It is not.
Alexa's "intelligence" is barely above that of Eliza, and below that of a parrot.
Does this work:
My brother's name is Ted.
Ted's favorite color is Red.
What is my brother's favorite color?
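The test above amounts to a two-hop lookup: resolve "my brother" to Ted, then fetch Ted's color. A toy sketch of the chaining involved (the fact-store layout is hypothetical):

```python
# A tiny fact store of (entity, attribute) -> value pairs, sketching the
# two-hop chaining the parent comment asks the assistant to perform.
facts = {
    ("my brother", "name"): "Ted",
    ("Ted", "favorite color"): "Red",
}

def brothers_favorite_color() -> str:
    name = facts[("my brother", "name")]    # hop 1: who is my brother?
    return facts[(name, "favorite color")]  # hop 2: his favorite color

print(brothers_favorite_color())  # -> Red
```

The lookup itself is trivial; the open problem is reliably parsing free-form statements into those facts and the question into that chain of hops.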
Re: (Score:2)
What's wrong with that? AI is all about marketing!
Real AI work has absolutely nothing to do with what everyone else thinks AI is all about. That confusion is what sells products and secures funding.
It's been like that for a very, very long time. AI hype has always come and gone in waves. I'm absolutely stunned that the current wave has lasted as long as it has.
Moving the goalposts (Score:2)
To be more direct about it: they sound like what they want to do is redefine what a 'Turing Test' is so that their crappy, half-assed, non-cognitive excuse for 'AI' looks better than it really is -- so they can se
ML Ennui (Score:2)
nothing yet passes the turing test. (Score:2)
Re: (Score:2)
Depends on your version of "the" Turing test. I, along with many others, have always maintained that Turing tests, no matter the criteria, are completely useless for identifying "true" AI. (That is, the populist conception of AI.) The whole behaviorist approach was foolish from the start.
Now, if we just want to trick a human into thinking a program has its own thoughts and feelings, well, that's trivial. I gave some examples earlier in this thread.
I thought about it for a bit, and I'll bet that a lot us
Re: (Score:2)
Now, if we just want to trick a human into thinking a program has its own thoughts and feelings, well, that's trivial. I gave some examples earlier in this thread.
I've read the entire thread, I haven't found a single example of an AI that can trick a human into thinking a program has its own thoughts and feelings.
It's "Trivial" as you've said many times, yet nobody has ever managed to do it. You keep using that word, I do not think it means what you think it does.
The New Turing Test (Score:2)
TRLT: Tablizer's Real Life Test: (Score:2)
1. Fetch requested drink from fridge
2. Suck my wanker
3. Wash all the dishes found in the sink and on the table
4. Do the laundry
5. Filter out telemarketers from real calls
6. F with Mormons & Joho's who knock on my door
8. Let me know when I mis-number a list
7. Don't say "Profit!"
VP of AI (Score:2)
How good is the VP of AI if he still has a job?
Re: (Score:2)
Better Tests (Score:2)
The Turing Test has always been nothing more than foolish entertainment... Honestly, it has embarrassed me since I first heard of it, back in 1989. The Turing Test is more nebulous than the constantly changing definition of AI, almost as bad as the definition of "robot", which a remote-controlled toy car now fits. And an "android" is now a phone?? How did any of this happen?
Deep Learning isn't much beyond inferential statistical methods, and far, far from a biologically valid simulation of neu
Door, if you can hear me... (Score:3)
say so very, very quietly.
Very, very quietly, the door murmured, ``I can hear you.''
``Good. Now, in a moment, I'm going to ask you to open. When you open I do not want you to say that you enjoyed it, OK?''
``OK.''
``And I don't want you to say to me that I have made a simple door very happy, or that it is your pleasure to open for me and your satisfaction to close again with the knowledge of a job well done, OK?''
``OK.''
``And I do not want you to ask me to have a nice day, understand?''
``I understand.''
``OK,'' said Zaphod, tensing himself, ``open now.''
The door slid open quietly. Zaphod slipped quietly through. The door closed quietly behind him.
``Is that the way you like it, Mr Beeblebrox?'' said the door out loud.
Amazon VP does not pass Turing test (Score:3)
I will file this under "Amazon VP says things that an intelligent human would not."
Re:This guy makes no sense (Score:5, Interesting)
why would I want to define "artificial intelligence" to include them?
Imagine you were an Amazon executive whose bonus depended on achieving AI. Now imagine, after a few years of hype, you suddenly realised that deep learning is not actually likely to deliver general AI within the next few years, and your bonus is really at risk. Wouldn't it be really useful to redefine "AI" so that instead of meaning general AI, it means something quite difficult that few other people on the planet have done, but that your team has already achieved? Think about how easily you could get yourself a new Tesla S-Series with the benefits of defining AI to include those sorts of things.
Amazon can't get rid of fake reviews. (Score:2)
Re:Time for a [humanity] test for Amazon execs (Score:5, Informative)
Well, for what little it's apparently worth, I think your Subject was quite promising. Presumably the brevity of your body was due to the race for FP. So the discussion went elsewhere?
I did go ahead and modify your Subject to focus slightly differently. Largely a reaction to Talk to Me: Amazon, Google, Apple and the Race for Voice-Controlled AI by James Vlahos, as amplified by Zucked by Roger McNamee. Flipping it sideways, I think certain people would fail Turing tests. Such people, including some executives at Amazon and Facebook, could not prove they are not computers. Some monkeys would do better at acting human.
Do you know about the cucumber experiments? I'll try to recap briefly. Some monkeys were trained to do a task with cucumbers as rewards. Another group of monkeys was trained with grapes for the same tasks. Both groups of monkeys were quite content to perform their tasks for their respective rewards, even though monkeys love grapes much more than cucumbers. Then the cucumber monkeys were allowed to watch the grape monkeys and they went apeshit. Started throwing the cucumbers at the scientists! They wanted grapes! 'Equal pay for equal work! NOT FAIR.' Can't actually ask the monkeys to explain why, but it seems that fairness among monkeys is a genetic thing.
But we can ask the Libertarians. They'll explain that the cucumber monkeys deserve the lousy cucumbers for being lazy, worthless monkeys!
Re: (Score:3)
And that bothers you why?
Personally, I feel that the closer we get to my being able to meet all my purchasing needs of any type at one online spot, with as little time spent as possible, the better. The only way I'd approve of not heading toward one place with everything I could ever possibly want, from a can of green beans to a house, is to move toward an AI sophisticated enough that I can tell it to get me something and trust that it will get the best price available in a reasonable time.
Re: (Score:2)
So how about an AI agent that imitates how you think? One that could do your shopping for you, doing lots of research to find the products that you really want to buy? Including negotiating a cut of the profits (disguised as discount sales) to be made in the deals? But can you even run a Vickrey auction in reverse?
(Just my final conclusion from the book I cited earlier. However I think Apple is the least harmful of the major corporate cancers, so I'd rather see that brand on it. Can't imagine ever trusting a
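For anyone puzzling over the "Vickrey auction in reverse" remark: that is a second-price procurement auction, where the lowest bidder wins the contract but is paid the second-lowest bid, which makes truthful bidding each seller's dominant strategy. A minimal sketch (the seller names and bid amounts are made up for illustration):

```python
def reverse_vickrey(bids):
    """Reverse Vickrey auction: the lowest bidder wins,
    but is paid the second-lowest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    # Sort sellers by their quoted price, cheapest first.
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    winner = ordered[0][0]
    price = ordered[1][1]  # second-lowest bid sets the payment
    return winner, price

# Hypothetical example: three sellers quote prices for the same item.
bids = {"seller_a": 9.50, "seller_b": 8.75, "seller_c": 10.25}
winner, price = reverse_vickrey(bids)
# seller_b wins the sale and is paid 9.50 (seller_a's bid)
```

Since a seller's payment is set by the runner-up's bid rather than their own, quoting their true cost never hurts them, which is the property that makes the mechanism interesting for an AI shopping agent.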
Re: (Score:2)
To me, artificial intelligence means that something is self-aware and can interpret and understand its environment and contexts
Yes, that is exactly what most people think. That's precisely what the term should be exclusively used to describe.
Unfortunately, that is not what AI means in either the business or academic worlds. AI today is squarely in the boring old data science and machine learning realm. No one is working on what you, and everyone else, considers to be AI. (Trust me, I have the necessary grad credits to say that with authority.)
I've never cared for the Turing test. I think it's trivial to pass, at least for a li
Re: (Score:2)
If the Turing Test is "trivial to pass," how come nobody has yet managed to develop a system that can actually do so?