Stop Calling Everything AI, Machine-Learning Pioneer Says

An anonymous reader shares a report: Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley. He notes that the imitation of human thinking is not the sole goal of machine learning -- the engineering field that underlies recent progress in AI -- or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.

"People are getting confused about the meaning of AI in discussions of technology trends -- that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans," he says. "We don't have that, but people are talking as if we do." Jordan should know the difference, after all. The IEEE Fellow is one of the world's leading authorities on machine learning. In 2016 he was ranked as the most influential computer scientist by a program that analyzed research publications, Science reported. Jordan helped transform unsupervised machine learning, which can find structure in data without preexisting labels, from a collection of unrelated algorithms to an intellectually coherent field, the Engineering and Technology History Wiki explains. Unsupervised learning plays an important role in scientific applications where there is an absence of established theory that can provide labeled training data.
  • by Quakeulf ( 2650167 ) on Wednesday March 31, 2021 @05:33PM (#61222262)

    It's mostly aimed at investors, because it seems they like the terminology even though it serves the exact same purpose as any other marketing-slogan superlative.

    • It's mostly aimed at investors, because it seems they like the terminology even though it serves the exact same purpose as any other marketing-slogan superlative.

      Agreed, but let's not be too surprised when Stupid falls for the same marketing scheme again and again. Stupid is far too profitable, and you don't fix what isn't broken.

    • "People are getting confused about the meaning of AI in discussions of technology trends"

      The confusion is intentional.

    • Do a start-up company specializing in Blockchain AI, and you'll capture billions in venture capital!

    • That's what marketing is, in practice.

  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Wednesday March 31, 2021 @05:35PM (#61222266) Journal

    ... if not the ability to process information in some kind of coherent manner?

    Everything might not be A.I. but a lot of computer processes are.

    • ... if not the ability to process information in some kind of coherent manner?

      Everything might not be A.I. but a lot of computer processes are.

      Sorry, I disagree. Take a look at a recycling plant, with a series of machines which physically separate and sort physical input into streams of useful output. To assign "intelligence" to the data equivalent of a recycling plant is abusing the word. We say dogs and dolphins are smart because they sometimes do clever things, but we don't consider them intelligent. It seems the litmus test most of us use is general problem-solving.

      When so-called AI software routines recognize what they're tasked with and do

      • by fazig ( 2909523 )
        Depends on the definition you're using. A list that I found:

        Herbert Simon: We call programs intelligent if they exhibit behaviors that would be regarded intelligent if they were exhibited by human beings.
        Elaine Rich: AI is the study of techniques for solving exponentially hard problems in polynomial time by exploiting knowledge about the problem domain.
        Elaine Rich and Kevin Knight: AI is the study of how to make computers do things at which, at the moment, people are better.
        Stuart Russell and Peter N

        • by nadass ( 3963991 )
          What's funny-sad is how each of those unreferenced quotes is obviously taken wholly out of context; none is meant to serve as a standard-bearer or guiding definition for "What is A.I.?"
          • How is it "obviously taken out of context"? For example, "the study of how to make computers do things at which, at the moment, people are better" and "We call programs intelligent if they exhibit behaviors that would be regarded intelligent if they were exhibited by human beings" are clearly two of the major classical definitions of AI.
            • by nadass ( 3963991 )

              How is it "obviously taken out of context"? For example, "the study of how to make computers do things at which, at the moment, people are better" and "We call programs intelligent if they exhibit behaviors that would be regarded intelligent if they were exhibited by human beings" are clearly two of the major classical definitions of AI.

              Yes, they are... and science fiction novelist Isaac Asimov's Laws of Robotics are guiding rules for ethical robotics today... and science fiction writer L. Ron Hubbard's Dianetics is the foundation for the Church of Scientology. Just because they are classical doesn't make them true, right, accurate, or deserving of getting chiseled in stone (not that it matters).

              Of the 50-odd AI-centric books I've read over the past 25 years (plus the new ones getting published near monthly, so I have a lot of reading

          • by mark-t ( 151149 )

            Coming up with a definition for A.I. first requires a definition of what intelligence is in the first place.

            Are infants intelligent? What about adult whales? Standards vary.... and the answers are almost invariably subjective.

            But define intelligence, and by direct extension, you will have a clear definition of what A.I. is.

            • by nadass ( 3963991 )

              Coming up with a definition for A.I. first requires a definition of what intelligence is in the first place.

              Are infants intelligent? What about adult whales? Standards vary.... and the answers are almost invariably subjective.

              But define intelligence, and by direct extension, you will have a clear definition of what A.I. is.

              Philosophers have resolved the "intelligence" debate -- but nobody listens to them, because doing so would impede their ability to develop their own careers as "original thinkers" in whatever domain of expertise they've chosen to pursue. I get it, and I accept it. But "intelligence" isn't something that's up for grabs.

              However, what constitutes "artificial" (and by extension, "artificial intelligence") is nowhere near certain. The expression is merely a century old, although various types of devices have be

              • by mark-t ( 151149 )

                However, what constitutes "artificial" (and by extension, "artificial intelligence") is nowhere near certain

                "Artificial" simply refers to the domain of things that are themselves products of thought, and could not otherwise exist except for the mind that created them. For most practical purposes and for the simplification of discussion, this can usually simply refer to anything man-made. Beaver dams and birds' nests do technically fit the criteria as well, however.

        • Not on the list (and not entirely about AI):

          Edsger W. Dijkstra: The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

          • "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

            Exactly. I needed a good laugh and that quote did it. Thanks.

      • by UnknownSoldier ( 67820 ) on Wednesday March 31, 2021 @06:21PM (#61222456)

        > We say dogs and dolphins are smart because they sometimes do clever things, but we don't consider them intelligent. It seems the litmus test most of us use is general problem-solving.

        Humans: Animals aren't intelligent
        Crows: Hold [youtube.com] my beer [youtube.com].

      • but we don't consider them intelligent.

        Who precisely is included in this royal we of yours?

    • Matrix decomposition (SVD, Cholesky, whatever), Bayesian updating, and gradient descent are at the heart of most machine-learning applications. These techniques have been known for decades if not centuries.

      Sometimes the secret sauce behind the curtain of a gizmo that claims to do machine learning is literally a few lines of MATLAB. Referring to that as "machine learning" is dumb when marketing types do it, and dishonest when technical people do it to make their work sound cooler and hipper than it actually is.
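
      For the record, here is roughly what "a few lines" often amounts to: a minimal sketch of a least-squares fit by gradient descent in NumPy. The data, step size, and iteration count are made up purely for illustration:

        import numpy as np

        # Toy data: y = 2*x + 1 plus noise (invented numbers, purely illustrative).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 1))
        y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=100)

        # Append a bias column, then fit by plain gradient descent on squared error.
        A = np.hstack([X, np.ones((100, 1))])
        w = np.zeros(2)
        lr = 0.1  # step size, chosen arbitrarily
        for _ in range(500):
            grad = 2.0 / len(y) * A.T @ (A @ w - y)
            w -= lr * grad

        print(w)  # approaches [2.0, 1.0]; np.linalg.lstsq(A, y, rcond=None) agrees

      The same fit is a single call with the normal equations, which is rather the poster's point.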

    • Generally, intelligence is about understanding, not processing. You could create a computer that could fluently converse with a person, given sufficient time and processing power; that doesn't make it intelligent.
    • You're describing an algorithm. Intelligence would be the capability to create an algorithm. Bonus points if the conditions and specifications for the algorithm are also developed, as well as tests to determine if the algorithm works as desired.
      • You're going to have to constrain it further than that.
        Trained networks can create algorithms.
        • Also if they're not trained to do so? BTW I'm going to have to look up trained networks.
          • Well obviously an untrained network can't do shit.
            But neither can an untrained human neural network.
            Of course that leads us to wonder about self-training...
            Of course, an artificial network can't truly self-train itself (it lacks that kind of control over its environment)...
            But when you give it that ability to do so, it can.

            We've got a limited amount of built-in evolutionary programming, but largely speaking, what we call our "intelligence" is cultural learning.
            There's a reason it took us 250,000 year
  • by LatencyKills ( 1213908 ) on Wednesday March 31, 2021 @05:36PM (#61222270)
    This post brought to you by AI, Machine-Learning
  • and how many times can you write that stupid thing in one paragraph anyway? Machines don't "learn" anything, even though it may seem to you puny human that some of them do. I understand that people love trying to "humanize" things, and that marketing departments love capitalizing on dumb ass terms, but enough is enough. Or is it? This all reminds me of the boy who cried "Wolf!". One day, a machine will become so human-like, and "decide" to use its "intelligence" to hurt or kill some people, and so will its
  • Whodunit (Score:4, Funny)

    by Sloppy ( 14984 ) on Wednesday March 31, 2021 @05:37PM (#61222276) Homepage Journal

    If you're wondering who is responsible for mislabeling this (admittedly cool!) stuff, I can tell you: hackers.

  • <img src="Image d'une pipe.jpg" alt="Ceci n'est pas une AI.">
  • by haemish ( 28576 ) on Wednesday March 31, 2021 @05:44PM (#61222298)

    Too many people believe that ML is somehow related to human learning. A better term would be straightforward and honest: something like "advanced statistical methods".

    • by Bengie ( 1121981 )
      Data-driven statistical programming. The real question is how the AI handles inputs that fall outside of the training set. Undefined behavior is known to happen. As a human, I can generally tell when a request doesn't quite fit the abstract concept of the mental model in my head. We need some way to run sanity checks on AI results to prevent "obviously" wrong results, as sketched below.
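
      One cheap sanity check along those lines is to flag inputs that sit far from the training data before trusting the model's output. A minimal sketch, assuming roughly unimodal numeric features; the threshold and toy data are invented:

        import numpy as np

        # Fit a crude "typical input" envelope from the training set:
        # per-feature mean and standard deviation.
        def fit_envelope(X_train):
            return X_train.mean(axis=0), X_train.std(axis=0) + 1e-9

        def looks_out_of_distribution(x, mean, std, max_z=4.0):
            # Flag the input if any feature is more than max_z standard
            # deviations from what the model saw during training.
            z = np.abs((x - mean) / std)
            return bool(np.any(z > max_z))

        rng = np.random.default_rng(1)
        X_train = rng.normal(0.0, 1.0, size=(1000, 3))
        mean, std = fit_envelope(X_train)

        print(looks_out_of_distribution(np.array([0.5, -1.0, 0.2]), mean, std))  # False
        print(looks_out_of_distribution(np.array([0.5, 25.0, 0.2]), mean, std))  # True

      It will not catch everything, but it turns "undefined behavior" into an explicit refusal on inputs unlike anything the model has seen.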
      • It's not data driven. It's an algorithm choice driven by the human programmer's intuition about the structure that may exist in a particular kind of data and the best method to tease out the parameters of that structure for some actionable purpose in an autonomous or partially autonomous way.

        Multi-dimensional model-based parameter estimation in the presence of noise is a more correct but less sexy way to describe this stuff.

    • Most of them aren't even that advanced; they are just regular old decision trees/expert systems. Plug a pretty interface or a voice on the front of them and suddenly people think they are intelligent. The truly laughable part is that there are quite a lot of IT people who are also amazed by this and think that true AI is just a few years away.
    • by jythie ( 914043 )
      Something I have always found ironic is that machine learning was always kinda looked down on in AI, because you don't learn anything from it. If it were not so profitable for marketing and other consumer-oriented activities, it would probably still be in the 'why would anyone use this, it produces crap answers AND you cannot examine it' dustbin.
  • by cjellibebi ( 645568 ) on Wednesday March 31, 2021 @05:48PM (#61222312)
    Neural Networks work by recognising patterns rather than following a fixed set of rules. For example, while writing a physics simulation means programming in the laws of physics as an exact set of rules (or an approximation thereof, e.g. using Newtonian physics without taking relativity into account), training a Neural Network to distinguish cat-pictures from dog-pictures involves showing it pictures of cats and dogs, letting it guess, and telling it whether it was right or wrong. None of this involves explicitly telling the Neural Network anything about the essence of cat-ness or dog-ness. It's just pattern-recognition based on reinforcement, as sketched below. Pattern-recognition is how intuition works in humans, who use a brain -- hence why I believe Neural Networks should be classified as Artificial Intuition rather than Artificial Intelligence.
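
    That training loop really is this simple in miniature. A toy sketch, assuming two invented numeric "image features" standing in for cat/dog pictures and a single-neuron network nudged by its own errors:

      import numpy as np

      # Toy stand-in for cat/dog pictures: 2 features per "image" (made up).
      rng = np.random.default_rng(42)
      cats = rng.normal([-1.0, -1.0], 0.5, size=(50, 2))  # label 0
      dogs = rng.normal([+1.0, +1.0], 0.5, size=(50, 2))  # label 1
      X = np.vstack([cats, dogs])
      y = np.array([0] * 50 + [1] * 50)

      # A single "neuron": weighted sum squashed through a sigmoid.
      w, b = np.zeros(2), 0.0
      for _ in range(1000):
          p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # the network's guesses
          err = p - y                              # "telling it right or wrong"
          w -= 0.1 * X.T @ err / len(y)
          b -= 0.1 * err.mean()

      acc = (((X @ w + b) > 0) == y).mean()
      print(f"training accuracy: {acc:.0%}")  # cat-ness was never coded anywhere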
    • Well, we have come far. Fuzzy Logic used to be the cool thing that kids did. Now it's filtering and pattern matching.

      • by glitch! ( 57276 )

        Yes, "fuzzy logic" had its day. Along with the "Taguchi method", which seemed to be "fiddle with the inputs to get the best output". I thought that approach was abhorrent. That made sense a hundred years ago, but now, when we have far better information and engineering? No. Wrong.

        Going back to the topic, I see so many examples of ML. It is a part of statistics that I missed. Good for them. But AI is the wrong term. As far as I know, AI does not yet exist. I think AI should work for ANY practical trade. Is t

    • None of this involves explicitly telling the Neural Network anything about the essence of cat-ness or dog-ness.

      Uh except for the training step where you feed it labeled sample sets. The training is where you "program" dog-ness or cat-ness into the network. The main difference between training and programming is a neural network can find signals (by chance and then reinforcement) that you as a programmer might not have ever identified.

      Neural networks are definitely "programmed", it's just different terminolo

    • by jythie ( 914043 )
      Well, yes and no. There are some similarities between how neural networks and brains work, but they are very surface-level, and missing a rather critical part: neural networks are unidirectional. The human brain seems to use neural networks for some of its filtering, but still has a core of symbolic reasoning... something neural networks are not capable of doing
  • A Manny Coto series starring Peter Weller, so I guess it's no surprise that someone with no real expertise on the subject, such as myself, was impressed. Seeing those "bugs" evolve in the computer simulation was pretty darn cool, and well done, especially for episodes from 2002.
  • so meta (Score:4, Funny)

    by algaeman ( 600564 ) on Wednesday March 31, 2021 @05:59PM (#61222356)
    In 2016 he was ranked as the most influential computer scientist by a program that analyzed research publications, Science reported.

    Sooo, an AI?
  • My resume (Score:5, Funny)

    by backslashdot ( 95548 ) on Wednesday March 31, 2021 @06:01PM (#61222366)

    I put down on my resume that I am a machine learning expert. I immediately got hired to my dream job at McDonald's where I got to learn how to use the soft serve ice cream machine.

  • And doesn't score well with marketing and shareholder interest.
  • Stop Calling Everything AI, Machine-Learning Pioneer Says

    We need some sort of AI to determine what should be called AI ... Hopefully, the results won't be paradoxical.

  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Wednesday March 31, 2021 @06:08PM (#61222402) Homepage

    Artificial Synthetic Stupidity -- seems closer to what it is.

  • by drkshadow ( 6277460 ) on Wednesday March 31, 2021 @06:10PM (#61222412)

    I was at a company recently that hired an AI research scientist. He legitimately had a Ph.D. on the subject (ML or AI, modeling, or similar).

    Ok, it was plausible. We were doing research with vision, modeling of real-world objects, how to interact with real-world objects, etc. It was certainly plausible.

    In the end, I was told, "The AI that we're working on is a decision tree. It's a form of AI." Of course we needed a Ph.D holder for that task. Of course. The decision tree really became the core of the project, with items that could be added and queried within a step of the decision tree, but... A decision tree.

    In fact, where his Ph.D. came in was probably the standing to argue that we didn't need a neural network. Perhaps it worked out for the best.
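
    For what it's worth, the kind of decision tree being described can be absurdly small. A hypothetical sketch -- the features, thresholds, and labels are invented, and scikit-learn is just one common way to grow such a tree from data:

      from sklearn.tree import DecisionTreeClassifier, export_text

      # Invented toy task: decide "grasp" vs "push" from two object features.
      # Columns: [width_cm, weight_kg]; labels are made up for illustration.
      X = [[5, 0.2], [6, 0.3], [30, 2.0], [40, 3.5], [4, 0.1], [35, 2.8]]
      y = ["grasp", "grasp", "push", "push", "grasp", "push"]

      tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
      print(export_text(tree, feature_names=["width_cm", "weight_kg"]))
      print(tree.predict([[7, 0.25]]))  # -> ['grasp']

    No Ph.D. required to read the printed rules, which is arguably the tree's best feature.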

    • by nadass ( 3963991 )
      That's similar to how some job descriptions used to require 5-7 years of experience with technology which was only 1-2 years old. And they only wanted Master's or PhD recipients! HAHAHA... when the job description's reqs don't align with reality, that's when you know the marketing buzzwords will wreak havoc on expectations.
      • Hate to tell ya, but AI... the real kind... has been a research topic since the 60s. In fact, it already died, decades ago, due to not getting anywhere.

        This new shit is, of course, almost, but not quite, entirely unrelated.

        • by nadass ( 3963991 )

          Hate to tell ya, but AI... the real kind... has been a research topic since the 60s. In fact, it already died, decades ago, due to not getting anywhere.

          This new shit is, of course, almost, but not quite, entirely unrelated.

          These "AI Winters" are real, and now they've rebranded/separated them into AGI and Robotics... and the government seems more focused on ROI than generational innovation. Sigh... that's why I believe other entities (non-US-funded research) will outpace US AI efforts... [Hmm, unless the US is following a Microsoft investment model, which is to acquire the innovators and champion them as one's own.]

    • by cowdung ( 702933 )

      It doesn't really matter if it's a decision tree, or a neural network, or k-NN, or an SVM. Most models perform about the same if you are data-constrained.

      Where you really start to get amazing performance is with TONS of data on certain NN architectures.

      But I consider that often in a complex "AI" project it's not so much the classifiers or the estimators that are the magic, but the glue that holds them together.

      In complex projects I've seen it's not one thing, but many.

    • Isn't the job of the expert to know when a simple solution is good enough and when you need to bring out "the big guns"?

      Maybe they didn't bring anything to the table, but it may take an expert to know that "the simple thing costs X and gets the job done to 95%, and the complex thing costs 100X and gets the job done to 98%". And then it may be the sound thing to go with the simple solution.

    • If a decision tree does what's needed...
  • But what about Michael B. Jordan [wikipedia.org] or Michael J. Jordan [wikipedia.org] - do they have any thoughts?

  • I have a calendar in my office, one of those small ones where each day gets its own page. On each day it has a little bit of life advice, some of which is pretty thought provoking, and I joke with my coworkers that my calendar is the wisest person in the office.

    But of course the calendar was made by a group of people, and it is their wisdom that I am receiving. The calendar itself is literally as dumb as a rock.

    Which is exactly how I would describe modern AI. Learning machines are far more advanced than m

  • "I'm not that Michael Jordan!!!"
  • Was clearly Dr. Nim [youtube.com], the 1960s board "game" toy. It had all the elements of an expert system, and in current parlance that qualifies it as AI; its entire strategy is sketched below. Honorable mention to the toy 20Q [wikipedia.org]; it's a great 20-questions toy and you need to get pretty obscure in your selection in order to beat it.
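
    Dr. Nim's whole "expert system" boils down to one arithmetic rule. A minimal sketch, assuming the common single-pile rules (take 1-3 marbles per turn; whoever takes the last marble wins):

      def nim_move(marbles: int) -> int:
          # Winning rule: always leave the opponent a multiple of 4.
          # From a multiple of 4 there is no winning move, so take 1 and hope.
          take = marbles % 4
          return take if take != 0 else 1

      # Play out a game from 12 marbles with both sides using the rule:
      pile, player = 12, 0
      while pile > 0:
          take = nim_move(pile)
          pile -= take
          print(f"player {player} takes {take}, {pile} left")
          player ^= 1

    That single modulo is the entire "intelligence" of the machine, which rather supports the point of the article.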

  • Since I first began studying AI myself in the mid-1980s, this has caused me to roll my eyes. I am glad someone is saying it out loud.

    At first it was "Expert Systems". They were just deductive logic programs, using decision trees. At best, they might have had a chatbot interface, referred to as a "natural language processor". Joke.. joke.. and more joke..

    Later, I remember "Robot Wars". Now suddenly what was a remote controlled toy is a "Robot". What??? I learned that altering definitions is a cheap and

  • People want to do magical things with computers. Things that go beyond just some deterministic well defined algorithm.

    So I'd call the current "AI" field: dealing with uncertainty

    And ML is programming with data.

  • by dohzer ( 867770 )

    What's next? Are we going to prevent people from calling those sideways wheely skateboard things "hoverboards"?

  • Intelligence presumes sentience. It's not AI unless it's sentient, and protects itself.

  • Usually when I hear people talk about AI, it is in terms of fudging a solution, or "some magic will happen, but I have no idea how to implement it, so it must need AI". It generally equates to a lack of competence or buzzword-driven developers.

  • Look at my comment history.
    Not a single week went by without me saying that. And explaining why.

    And I conclude: It's all about scamming. That is why they will not stop unless legally forced to. It's literally one big blatant scam. And BizX is in on it. Or wants to be.

  • And there will be no "artificial intelligence" until they create a theory of machine motivation that mimics the Human Motivation Array (HMA). Intelligence exists to build and execute a behavior-space to satisfy the 4.5 billion years evolved HMA. I suggest an n-dimensional motivation array that creates a motivations vector in n-dimensional behavior space, which is connected to sensor inputs to swing the vector through behavior-space. We are born with an almost empty behavior-space and spend our lives (mostly
  • But can I still say I work with big data?

  • When radios first came out in the first decades of the 20th century, everything was called "wireless." These days, the press has a much greater percentage of idiots. This is as hopeless as trying to rein in the use of the term "hacker" back in... when was it, the 80s? 90s?

    • Or as hopeless as getting people to pronounce "GIF" correctly, or record video or take photos in landscape mode.

  • We call it AI when we do not know how it works. It's not AI when we know how to program it.
