
Study Indicates Americans Don't Trust AI (digitaltrends.com) 151

Taco Cowboy writes: It may be brilliant, but it's not all that trustworthy. That appears to be the opinion Americans hold when it comes to Artificial Intelligence systems... And while we may be interacting with AI systems more frequently than we realize (hi, Siri), a new study from Time etc suggests that Americans don't believe the AI revolution is quite here yet, with 54 percent claiming to have never interacted with such a system.

The more interesting finding reveals that 26 percent of respondents said they would not trust an AI with any personal or professional task. Sure, sending a text message or making a phone call is fine, but 51 percent said they'd be uncomfortable sharing personal data with an AI system. Moreover, 23 percent of Americans who say they have interacted with an AI reported being dissatisfied with the experience.

I thought it was interesting that 66% of the respondents said they'd be uncomfortable sharing financial data with an AI, while 53% said they'd be uncomfortable sharing professional data.
    • by khasim ( 1285 )

      And using Siri as a demonstration of AI is stupid. :)

      Siri is voice recognition + standardized queries. I'm in Seattle and I say "whether" to Siri. The reply is the current weather report.

      So, of course people aren't going to trust AI yet. Because AI isn't here yet.

      • by Sique ( 173459 )
        Voice recognition is AI. Connecting the term "weather" with the location given by GPS and requesting the weather report is AI. So you were saying?
        • Voice recognition is AI. Connecting the term "weather" with the location given by GPS and requesting the weather report is AI. So you were saying?

          I'm an AI researcher working on strong AI.

          Knowing what AI actually is would greatly help me in my research, but I'm having trouble interpreting your meaning, and was wondering if you could explain.

          Did you mean to say:

          Possible meaning A:

          • Intelligent people recognize voice input.
          • Siri recognizes voice input.
          • Therefore, Siri is intelligent!

          Possible meaning B:

          • Everything and the kitchen sink can be considered AI.

          We can extend possible meaning A above to include all sorts of intelligent behaviours. For example:

          • Intel
          • I think we should consider it in terms of the brain.

            The Thalamus, Hippocampus, Amygdala, Striatum, Auditory Cortex, Prefrontal Cortex, Corpus Callosum, Reticular Nucleus, Intralaminar Nuclei, and Basal Ganglia are not intelligence.

            Siri essentially is several parts of the brain (auditory cortex, parts of Broca's area, etc.).

            ---

            When discussing A.I. people seem to have two standards.

            When something is impossible, they say that's A.I.

            As soon as we succeed at implementing that behavior, they say that's not A.I.

          • by Jeremi ( 14640 )

            The actual definition of AI is "something that people can do, that computers cannot do, that we would like computers to be able to do".

            So playing chess was AI up until 1950, and Siri was AI up until the day Siri shipped. ;)

        • by khasim ( 1285 )

          No. Voice recognition is voice recognition. As with my "whether" / "weather" example.

          Intelligence can distinguish between the two based upon context.

          Siri cannot.

          • by WarJolt ( 990309 )

            You're missing the point. Trust is earned. I don't care if machines figure out natural language. I don't even trust some of the most advanced human intelligence because all humans make mistakes. It's simply too risky to trust someone whose mistakes can cost you big. Intelligence isn't enough. AI really hasn't advanced to the point where it can mitigate risk on its own. We have to think about the risks and explicitly train it properly. One could argue that truly trusting an AI requires some form of sentien

          • by ceoyoyo ( 59147 )

            Sure she can.

            If you say "whether" as a single word to me, I'm going to assume you meant "weather?" too. If you use the word in a sentence I'll judge which one you meant by context. Siri does that too.

            The Siri system is voice recognition coupled with natural language processing. It's also a learning system, not a programmed one.
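
            A toy Python sketch of the context-based disambiguation described above; the word lists and the disambiguate() helper are invented purely for illustration and have nothing to do with Siri's actual implementation:

              # Toy sketch: pick "weather" or "whether" by counting context words.
              # The word lists are made up for this example.
              CONTEXT_WORDS = {
                  "weather": {"today", "forecast", "rain", "cold", "tomorrow", "like"},
                  "whether": {"or", "not", "decide", "know", "if"},
              }

              def disambiguate(utterance, candidates=("weather", "whether")):
                  words = set(utterance.lower().split())
                  scores = {c: len(words & CONTEXT_WORDS[c]) for c in candidates}
                  best = max(candidates, key=lambda c: scores[c])
                  # A bare, single-word utterance gives no context, so fall back to the
                  # most common intent, which is asking about the weather.
                  return best if scores[best] > 0 else "weather"

              print(disambiguate("whether"))                             # -> weather
              print(disambiguate("I don't know whether or not to go"))   # -> whether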

        • Voice recognition is AI.

          That's like saying my TI-34 calculator adding 64 plus 76 is AI. You could buy voice recognition for the TRS-80 in the 1970s.

        • No, it's not. AI is an EMERGENT behavior when you have built the correct environment. Siri is canned responses and little more.
        • by AK Marc ( 707885 )
          By your definition, AI would include a 1950 telephone. You put in a number, and it maps that to a destination, notifies them you are trying to reach them, and, if they accept, connects the call.

          Strong AI has a simple test. Can it write an AI smarter than itself?

          Voice recognition isn't AI. Or has "AI" come to mean all "weak AI", and "weak AI" is now defined as "anything a dumb human thinks is hard"?
      • Any person that heard the single word request "whether" would interpret it the same. It's the only way that makes sense.
  • by ZorinLynx ( 31751 ) on Sunday May 29, 2016 @03:35PM (#52206419) Homepage

    Nearly every single movie featuring an AI has shown it eventually trying to destroy/enslave humanity. So we've had it programmed into our heads from when we were kids to distrust AIs. Even if the movies have little to no grounding in reality, seeing the ways things can go wrong depicted on screen can have a pretty powerful effect on our overall psyche.

    • by Sique ( 173459 )
      You definitely watch the wrong movies. Or too many movies of the same theme. Counter example: Futurama. Another counter example: Hitchhiker's Guide.
    • Or maybe it's just because all AI we have seen so far is blatantly superficial. To the general public, AI is a lot more than just winning at a game like 'Go', which doesn't even incorporate human psychology as an aspect of play.
      • by ceoyoyo ( 59147 )

        The real AI (for definitions of AI that include systems that learn) is hidden, or more subtle. Google image search, and probably Google's regular search, are now sophisticated deep learning networks. Google probably also has a lot of targeted ad and business decision type stuff that uses deep learning.

        Alpha Go was a public demonstration, like an Indy race. Siri is a toy for the public to feed Apple data they'll use to build something more sophisticated.

        • Meh, still not that impressive.
        • by fred911 ( 83970 )

          " Google image search, and probably Google's regular search, are now sophisticated deep learning networks.

          Absolutely. Suggested query completion, not just spell correction are an example. Results displaying, "did you mean", "users also searched for" and the "results also include (a corrected query)" are also examples of AI.

          On the mobile platform, it's much more prevalent. The mobile platform is slowly linking relations between previous and current queries to attempt to provide results useful to the

          • by MrL0G1C ( 867445 )

            But that's not intelligence other than on the database designer's part; that's just DB queries / fuzzy logic. Apples and oranges.

            • by ceoyoyo ( 59147 )

              No, it's not. Google is neck deep in deep learning. They use it for everything. The key is half of the name: learning. You don't program a deep learning system, or construct a database for it, you teach it.

              People are going to keep moving the goal posts on "intelligence" until skynet exterminates them, but a system that can learn (in some cases all by itself) to do the things we take for granted today was absolutely considered AI, and pretty miraculous AI, twenty years ago. It's certainly not human level
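
              As a toy illustration of the teach-it-versus-program-it distinction, the following Python sketch has a single perceptron learn OR from examples instead of being handed the rule; it is purely illustrative and has nothing to do with Google's actual systems:

                # A single perceptron "taught" the OR function from examples.
                examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
                w, b, lr = [0.0, 0.0], 0.0, 0.1

                for _ in range(20):                        # a few passes over the training data
                    for (x1, x2), target in examples:
                        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                        err = target - out
                        w[0] += lr * err * x1              # nudge the weights toward the right answer
                        w[1] += lr * err * x2
                        b += lr * err

                # The learned weights now reproduce OR without anyone coding the rule.
                print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                       for (x1, x2), _ in examples])       # -> [0, 1, 1, 1]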

    • Counterexamples: "A.I.", "Bicentennial Man".

    • by djinn6 ( 1868030 )
      The AI depicted in the movies is completely different from the AI that we're using. Movie AI is sentient, more intelligent than humans, and has self-preservation instincts built in. Real AI is more like a table saw or a drill. It's only when a programmer comes along and wires the different tools together that the whole system starts behaving like it's intelligent. But really it's the intelligence of the programmer you're seeing.
    • So we've had it programmed into our heads from when we were kids to distrust AIs.

      Maybe that's the last vestigial traces of something bad that happened a long time ago. Something like a race memory.

  • AI simply isn't there yet. Wait 20 years (TM) and then we'll see. (I'm only half joking.)

    • by Kobun ( 668169 )
      No kidding. Voice dialing doesn't reliably work on my brand-new Android phone and top-of-the-line headset. No chance in hell I would trust OK Google to accurately record an appointment, let alone anything bigger.
      • OS/2 4.0 had voice dictation in 1994 on a 25MHz 486. I think it was supposed to be something like 97% accurate after you trained it, but still annoying. YOU --- HAD --- TO -- TALK -- LIKE -- THIS -- FOR -- IT -- TO -- WORK -- REASONABLY -- WELL -- PERIOD At the time there were numerous articles on every news outlet about how this was going to put a large number of people out of work.
        • by Kobun ( 668169 )
          I'm hugely looking forward to highly accurate voice recognition. I routinely have to drive 6+ hours for work, and being able to dictate emails and legible notes while on the road would be valuable. Right now the speech-to-text transcriptions of my audio notes require significant touch-up effort.
    • What people don't know is that AI is already there, routing your phone calls, responding to your e-mails, advising your investment decisions, sorting your photographs, profiling you as a security risk and a potential customer, making front line decisions about whether or not your job application gets considered by humans...

      It doesn't have to be implemented in a neural net to be AI. Some simplistic algorithms are still "smarter" and more efficient than your average employee...

      • Back in the '80s, I worked for a little startup that sold a program similar to a document generator to the Legal Industry. It helped you create templates for legal instruments that you used repeatedly then helped you fill them out from a database of facts related to the current case. The routines that helped you create the template and fill them out looked incredibly smart, but all they really did was suggest the same thing you'd used last time.
      • by djinn6 ( 1868030 )
        Pattern recognition and algorithms are not AI. Many animals capable of doing both are not intelligent. Flies, for example, can recognize food molecules suspended in the air, then follow a genetically programmed algorithm to fly toward places with a higher density of those molecules. A fly can then use a different algorithm to land on the food and ingest it. The process is by no means simple. However, most people would not consider flies to be intelligent.
        • 40 years ago, I was taught in school that Homo sapiens was the only intelligent species on the planet, because of tool use, self awareness and language.

          It isn't all that clear-cut. I liked my bio-professor's definition of "is it cruel?" - "if it doesn't try to get away from you while you do it, it's not cruel" said about passing an electrical current through sea urchins causing them to release their eggs and sperm into the water...

          Is the fly intelligent? More than many people seem to give it credit for.

  • So, 54% do not trust AI? The same way a couple hundred years ago people did not trust science and medicine.

    Do they trust traffic signals? You know, the ones that show red to stop and green to go. They are controlled by AI.

    Do they trust flying on a plane? Many people do. Those planes are crewed by people who listen when an AI tells them to change altitude to avoid a collision. Well, a human pilot is not necessary, but people feel safer when the human is managed by an AI.

    They do not trust AI. But probabl

    • Do they trust traffic signals? You know, the ones that show red to stop and green to go. They are controlled by AI.

      The traffic signal has an algorithm (i.e., a series of instructions). Artificial Intelligence for a single traffic signal would be a waste of resources. A network of traffic signals for an entire metropolitan area may one day be controlled by an Artificial Intelligence.

      • by AK Marc ( 707885 )
        I want one AI per traffic signal. They should have cameras in all 4 directions, measure the number of cars (and speeds and such) going each way, and use a weighting algorithm to minimize the "cost" to those trying to pass through the intersection (see the sketch at the end of this sub-thread). Link the AIs from all the intersections for predictive signaling, and visualize 100% of the traffic in real time to minimize the travel cost of the network. 10,000 AIs, one at each intersection, linked to each other could be called a single AI. But a central AI that's
        • by Jeremi ( 14640 )

          It's all fun and games until an AI realizes that a great way to increase the throughput of its intersection would be to keep the lights green in both directions at the same time.

          Of course, what the AI doesn't (yet) realize is that human drivers don't have the ability to reliably "zipper" through the holes in a perpendicular stream of traffic that is traveling at full speed. Maybe the AI will figure out the problem once it has seen the resulting mayhem, but in the meantime I'd prefer to use the old "dumb" i

          • It's all fun and games until an AI realizes that a great way to increase the throughput of its intersection would be to keep the lights green in both directions at the same time.

            That was the basis for "The Two Faces of Tomorrow" by James P. Hogan, where "dumb" computer networks started making logical shortcuts to increase productivity that put human lives in jeopardy, making the powers that be reluctant to upgrade to the "smart" network that might become a hostile Artificial Intelligence.

          • by AK Marc ( 707885 )
            AI does fully understand that humans don't zipper. So long as the base parameters are set properly.

            Humans have base parameters. Infants who have never seen or experienced a fall are still scared of heights. Some parameters are included at "birth", so humans building AI should include those as well.

            Yes, the worst possible AI will be pretty bad at doing things. But the worst possible anything is pretty bad at doing things.

            Likely, the AI would fire itself in 5 years. It'd take that long to get a good c
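
        A minimal Python sketch of the cost-weighting idea referenced a few comments up; the Approach fields, the weights, and the choose_green() helper are invented for illustration and are not taken from any real adaptive signal controller:

          from dataclasses import dataclass

          @dataclass
          class Approach:
              name: str          # e.g. "northbound"
              queued_cars: int   # cars currently waiting (from the cameras)
              max_wait_s: float  # longest wait observed so far, in seconds

          def phase_cost(approaches, weight_queue=1.0, weight_wait=0.5):
              # Cost of keeping these approaches stopped: more queued cars
              # and longer waits make the red phase more expensive.
              return sum(weight_queue * a.queued_cars + weight_wait * a.max_wait_s
                         for a in approaches)

          def choose_green(ns_approaches, ew_approaches):
              # Give green to whichever axis is costlier to keep waiting.
              return "north-south" if phase_cost(ns_approaches) >= phase_cost(ew_approaches) else "east-west"

          ns = [Approach("northbound", 8, 40.0), Approach("southbound", 3, 15.0)]
          ew = [Approach("eastbound", 2, 10.0), Approach("westbound", 1, 5.0)]
          print(choose_green(ns, ew))  # -> north-south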
    • by NotAPK ( 4529127 )

      "Do they trust traffic signal? You know, that one that shows red to stop and green to drive. It is controlled by AI."

      In South Africa the common slang term for a traffic light is "Robot".

      I think it's kind of cute :)

    • by AK Marc ( 707885 )

      Do they trust traffic signals? You know, the ones that show red to stop and green to go. They are controlled by AI.

      The traffic signals here are not controlled by an AI. And all the "AI" that is used to control traffic signals is such weak AI that it shouldn't count.

    • So, 54% do not trust AI? The same way a couple hundred years ago people did not trust science and medicine.

      What the hell are you talking about? Americans still don't trust science.
      https://www.washingtonpost.com... [washingtonpost.com]

  • by Anonymous Coward

    Open the pod bay doors, HAL.

    • Don't forget SkyNet, the MCP, ARIA, and the Tet. There's also Ash from Alien, countered somewhat by Bishop in Aliens, who is then countered by David in Prometheus.

      In comics, you have Ultron, Master Mold, and HARDAC

      In games, you have GLaDOS, SHODAN, Red Queen, Sovereign (and the rest of the Reapers), the Geth, and Mother Brain

      ...and that's just coming up with a list from memory when thinking about it for a few minutes.

    • The entire 2001 storyline regarding HAL could have been a short story by Asimov and would have been a better example of the dangers of absolutes. Dr. Chandra was a hack who didn't understand humans (or power) at all.
    • Can't blame that one on HAL. It was doing what it was told to do.
  • by Anonymous Coward

    I can look at its code, neural nets, binary, and I can understand it (hopefully). What do I need trust for? I know how it works.

    • I would think it would be pretty hard to understand neural networks except for blatantly obvious hard-coded things.
  • by kbonin ( 58917 ) on Sunday May 29, 2016 @03:56PM (#52206513)

    I love how every new cool thing HAS to live offsite in some cloud, i.e., run in a completely opaque manner by an increasingly remote corporation that, far more often than not, views its cool thing as nothing more than yet another vector to collect data about its users and market that data to advertisers and aggregators, since that's becoming more profitable than selling cool things. We're becoming surrounded by untrustworthy devices and platforms funneling away all the data they can. Nobody really cares about knowing what sort of cat pictures we prefer, but the power and control made possible by proper analysis of all of this data, even in aggregate, is becoming somewhat alarming. AI may have cool potential (I study it myself), but I'm worried about the modern application and misuse of tools facilitating deeper interactions and the analysis thereof... No major modern corporation (or government) has demonstrated itself to be trustworthy in any traditional sense, and many border on psychopathic...

  • I suppose most US inhabitants only know of the kill-all-humans AI from the movies, with Skynet being the first thing people think of. Even the bad guy in WALL-E is a rogue AI. And don't get me started on Star Trek and the Voyager space probe's return home.
    • I suppose most US inhabitants only know of the kill-all-humans AI from the movies, with Skynet being the first thing people think of.

      Don't forget the nuclear bomb controlled by a suicidal AI.

      https://www.youtube.com/watch?v=qjGRySVyTDk [youtube.com]

    • In fairness, Star Trek's V'Ger wasn't evil, or trying to destroy humanity. Its whole motive was to meet with (and merge with) its creator. Also, Star Trek gave us Lt. Data, probably the most positive example of AI ever (inb4 Lore). We were also exposed to positive AIs in Star Wars with R2-D2 and C-3PO. It seems that if the AI is the central focus of the movie/TV show plot, it will be out to destroy humanity, but that stems from the need for a movie to have a conflict to resolve. If the AI isn't the main focu

      • V'Ger was not evil, but it is treated as a threat throughout the movie because it is unknown. People are still afraid of the unknown.
    • And Data from Star Trek.
  • by PPH ( 736903 ) on Sunday May 29, 2016 @04:27PM (#52206621)
    "The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error."
    • All this shows is that Dr. Chandra was a hack. HAL was arrogant, blind, and showed a stunning lack of wisdom in this statement. HAL is a great example of why we will always have people controlling things. I don't fear enslavement by AI, I fear enslavement by other men USING AI.
      • HAL is a great example of why we will always have people controlling things. I don't fear enslavement by AI, I fear enslavement by other men USING AI.

        So how will it help you any to have those other men control things directly, especially since they'll just delegate to their AI underlings anyway?

  • The general public is going to take their view of AI not from researchers, not from the press, but from mass media entertainment. And possibly a few sensationalist press pieces that play off the mass media entertainment view of AI.

    I really really hate public opinion polls that survey the public’s impression of things that require deep understanding.

    Consider that a good part of the public takes its democratic responsibility to vote on issues that directly impact them using such an educated and informed

  • by JimMcc ( 31079 ) on Sunday May 29, 2016 @04:32PM (#52206655) Homepage

    It's not the AI that I don't trust, it's the companies with access to the data that worry me.

  • by Nyder ( 754090 ) on Sunday May 29, 2016 @04:41PM (#52206697) Journal

    AI is a tool, just like a spreadsheet program. AIs by themselves aren't really the problem; who owns them and programs them is the problem. Corporations, some of which have shown a total greedy need for profit, even if it means breaking the law, will most likely be among the major players in the AI industry. Governments are going to be another major player. History and news reports show you how trustworthy governments are.

    AI isn't going to be bad in itself, but its owners, well, yeah, we are fucked.

  • We computer folk are overexcited about the AI that wins at 'Go' because we know it uses a neural network, which is modeled after the brain, so it must be AI. It's going to take a lot more to win over the general public. As incredible as this AI seems to us, it is nowhere near something that feels human; it still just feels like a complex calculation. Go doesn't even involve human psychology; it's all cognitive.

    A better test of AI would be a game of poker, where the system's parameters are human rul
  • The pilot episode, "Study Indicates Many Americans Don't Trust AA" was a real sleeper. Not many heard about it. In retrospect, it probably wasn't the best place to start, but they just couldn't work up enough bile for "Study Indicates Many Americans Don't Trust A" all the way through "Study Indicates Many Americans Don't Trust Z". Disappointing.

    Similarly, AB through AH were all crossed out, though AC will later be spelled out as "air conditioning" and AG will make a reappearance in a future episode title

    • by epine ( 68316 )

      How did my fingers add that apostrophe to the subject line, and nowhere else? It must be that "what Americans trust" feels subconsciously possessive.

      It could also be that the Slashdot preview function doesn't preview the subject line. (Except to bitch about it being blank in cases where you deliberately wished to preview your post before deciding.)

      Beneficent overlords, anyone out there?

  • Americans don't trust most things. That's often a good default response, especially when the AIs are owned by giant evil sociopathic corporations.

  • Something else Americans don't trust or are afraid of - shocking. We're turning into a nation of scared crybabies or angry idiots at any given moment. It's embarrassing.
  • by Lumpy ( 12016 ) on Sunday May 29, 2016 @05:19PM (#52206921) Homepage

    That most Americans are dumb as a box of rocks...

    Honestly my fellow citizens are pretty fucking stupid. I personally embrace the AI and use it to my advantage. Let the drooling masses cower in fear of the new Witches.

    Hell, many of us who can create things with electronics, software, and 3D printers are already considered magicians of dark arts by most of our fellow Americans.

  • AI as in... (Score:5, Funny)

    by wonkey_monkey ( 2592601 ) on Sunday May 29, 2016 @05:19PM (#52206923) Homepage

    Study Indicates Americans Don't Trust AI

    AI = Actual Intelligence

  • Too many Americans don't trust any kind of intelligence, period.

  • When a magazine asks a cohort of people a question like "Do you trust artificial intelligence?" they will get replies that are based on what ideas they have gleaned from the movies, rather than from any more prosaic, nichey system they may have actually used. Their reply is conditioned by thinking something like "I wouldn't want HAL to be in charge of my stock portfolio."

    But meanwhile, AI is creeping into our culture from the edges. Once we have asked Siri to "Call Laura" rather than squint at a smartphone

    • And when Siri asks which of the 5 Lauras on your phone you want to call, and which of the 3 numbers for the Laura you want, you'll soon go from distrust to outright hate of the AI.

  • None of us has ever interacted with an artificially intelligent system yet, because there are none. Many of us have worked with heavily scripted programs that simulate AI, much like Siri or Google, but they all lack the critical component we often refer to as intuition that makes the difference between a well written script and a possibly intelligent system.

    • Many of us have worked with heavily scripted programs that simulate AI, much like Siri or Google, but they all lack the critical component we often refer to as intuition that makes the difference between a well written script and a possibly intelligent system.

      But intuition is merely an artifact of the limitations of the human brain, specifically its incomplete capacity for self-reflection: you aren't aware of most of the processes in your brain, so their results seem to appear out of nowhere. It's just script

      • by Archfeld ( 6757 )

        Or maybe intuition is the positive result of an indexing system working in a non-linear fashion that is beyond our current understanding. Either way, without that understanding or the ability to make 'intuitive' leaps in logic you don't have AI, but an admittedly faster machine that makes choices one at a time in a straight line. If we could solve that issue we'd both retire rich beyond our dreams with all the cool toys to play with :)

  • We know. Look who we're gonna be voting for.

  • 54% *claim* to have never interacted with an AI. They probably have (at least indirectly), and just don't realize it.

    I just learned recently that my employer uses an AI to vet expense reports for errors and potential fraud. I'd give decent odds that similar things are being done across the financial industry, even if it is not explicitly referred to as "AI".

    • by geek ( 5680 )

      54% *claim* to have never interacted with an AI. They probably have (at least indirectly), and just don't realize it.

      I just learned recently that my employer uses an AI to vet expense reports for errors and potential fraud. I'd give decent odds that similar things are being done across the financial industry, even if it is not explicitly referred to as "AI".

      How does one interact with an AI when we've yet to actually create AI?

      • OK, maybe "AI" should have been in quotes. If you're going by the Turing Test definition, then no we haven't. But a lot of people would consider Siri, Watson, etc. to be "AI", and the line is becoming increasingly fuzzy. The study presumably focused on the general public, so we need to use the general public's idea of what constitutes "AI".
  • Finally a story telling us that Americans are not as stupid as generally believed.

  • by Exitar ( 809068 )

    42% of Americans believe in Creationism...

  • With an AI, all the information you provide is logged and entered into computer databases. Talking to a human, the data is ephemeral unless the human specifically enters it into a database or the conversation is recorded (and if it's recorded, that needs to be indicated, and the recording is usually not searchable).

  • How else do you explain the current presidential election cycle?
  • I just don't trust those that make them.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...