'I'm CEO of a Robotics Company, and I Believe AI's Failed on Many Fronts' (fastcompany.com) 173

"Aside from drawing photo-realistic images and holding seemingly sentient conversations, AI has failed on many promises," writes the cofounder and CEO of Serve Robotics:

The resulting rise in AI skepticism leaves us with a choice: We can become too cynical and watch from the sidelines as winners emerge, or find a way to filter noise and identify commercial breakthroughs early to participate in a historic economic opportunity. There's a simple framework for differentiating near-term reality from science fiction. We use the single most important measure of maturity in any technology: its ability to manage unforeseen events, commonly known as edge cases. As a technology hardens, it becomes more adept at handling increasingly infrequent edge cases and, as a result, gradually unlocks new applications...

Here's an important insight: Today's AI can achieve very high performance if it is focused on either precision or recall. In other words, it optimizes one at the expense of the other (i.e., fewer false positives in exchange for more false negatives, and vice versa). But when it comes to achieving high performance on both simultaneously, AI models struggle. Solving this remains the holy grail of AI...
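
For concreteness, the two quantities being traded off can be written down directly from confusion-matrix counts. A minimal sketch, with made-up numbers for a hypothetical detector tuned toward precision:

```python
# Precision vs. recall from confusion-matrix counts (made-up numbers for a
# detector tuned toward precision: few false alarms, many misses).
tp, fp, fn = 80, 5, 40  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # of everything flagged, how much was real?
recall = tp / (tp + fn)     # of everything real, how much was flagged?

print(f"precision={precision:.3f}  recall={recall:.3f}")
# precision=0.941  recall=0.667
```

Pushing either count toward zero is cheap on its own; the author's point is that driving false positives and false negatives down together is what's hard.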

Delivery Autonomous Mobile Robots (AMRs) are the first application of urban autonomy to commercialize, while robo-taxis still await as-yet-unattainable hi-fi AI performance. The rate of progress in this industry, as well as our experience over the past five years, has strengthened our view that the best way to commercialize AI is to focus on narrower applications enabled by lo-fi AI, and to use human intervention to achieve hi-fi performance when needed. In this model, lo-fi AI leads to early commercialization, and incremental improvements afterwards help drive business KPIs.

By targeting more forgiving use cases, businesses can use lo-fi AI to achieve commercial success early, while maintaining a realistic view of the multi-year timeline for achieving hi-fi capabilities.

After all, sci-fi has no place in business planning.


Comments Filter:
  • That seemingly sentient chatbot AI. Suppose we trained it on math and physics journals and in addition gave it curiosity and access to all explicit math knowledge we have. Then have it chat with actual mathematicians and physicists. It might not be intelligent, whatever that is, but it might come up with some things that are very useful.

    • by LindleyF ( 9395567 ) on Sunday July 10, 2022 @07:45PM (#62691634)
      "Gave it curiosity" just like that, huh?
      • by EmoryM ( 2726097 )
        "Just like that, huh?" - kinda - https://www.youtube.com/watch?... [youtube.com]
        • by narcc ( 412956 )

          Sigh... This, again, is one of those cases where things are purposefully misleading. The use of the term "curiosity" is intended to make us think of curiosity as it applies to humans. This is very much not the same thing as "curiosity" as described in your video.

          The guy who made the video knows better, and gives a pretty good explanation of the technique. Still, he seems to want to perpetuate the AI myth, even though he clearly knows better, with the bit about "watching TV" near the beginning of the video.

      • by narcc ( 412956 ) on Sunday July 10, 2022 @11:01PM (#62691908) Journal

        Oh, yes. Don't you know AI researchers all overlook simple solutions like that? It's all math this, formula that. I mean, have they even tried "setting it loose" on the internet? What about giving it feelings? Why, they're all too busy trying to make the best Go player in the world that I'll bet they've never sat down and read it children's books before bed or tried teaching it common sense! What they need is someone with absolutely no knowledge of the subject to tell them to do these obvious things ...

        Most people have a very child-like understanding of AI. A lot of them honestly think that we already have things just like Hal 9000, Marvin, or Commander Data, they just need the right upbringing. People like that think "giving it curiosity" is no different than inspiring wonder in a child. Just show it how "cool" math can be and set it loose on wikipedia.

        I can hardly blame them. The reality isn't nearly as exciting as the usual pop-sci junk makes it seem. I've tried in the past to explain things in a clear and simple way for laypersons to understand, but the fantasy is just so much more appealing than the reality that I don't know that I've made any difference at all.

    • by HiThere ( 15173 )

      That's not a really bad idea, but it grossly misunderstands the problem. The "chatbot" might be turned into a front end for a more intelligent program. It won't be the part making the "intelligent" choices, but rather the part talking informally about them.

      I've got a theory that except for things like pronoun tracking, there isn't much real intelligence involved in a "chatbot". One thing backing this up is the way the Eliza program accidentally passed the (informal version of the) Turing test back in the 1960s.

      • We already have this with voice prompt customer service systems. They are helpful to the company. They are often much less than helpful with customers. Voice Prompt Hell is already here

    • Already tried.

      And I saw the documentary of the resulting monster machine.

      It thought all biological systems were an illogical infestation and tried to kill humanity. It was called Veeger or something.

    • by Z00L00K ( 682162 )

      If the AI takes everything for granted it will eventually also incorporate faulty algorithms.

      The point is that an AI has to be able to spot contradictions as well as similarities, but also to know whether an algorithm it encounters is a good enough approximation or not. E.g. Newtonian physics is good enough at small scales, like the Earth and Moon, for most practical purposes, but if you want high precision you would also want to include relativity and quantum physics.

  • by SlashDotCanSuckMy777 ( 6182618 ) on Sunday July 10, 2022 @07:28PM (#62691606)

    We've barely started with AI and this guy is already saying it's failed promises?

    Also, this is how science and research works. You try something. It succeeds or fails. You try again. Repeat.

    Less whining. More learning.

    • This guy is a CEO. For him, "failed promise" just means he is going to miss the second- and third-level bonus and get a measly 20-million-dollar bonus.
      • Certainly, the most insightful remark to appear on Slashdot in a long time.

        No one else around here would have come to that conclusion let alone have expressed it so concisely in a post.

    • by ceoyoyo ( 59147 )

      To be fair, there are a lot of stupid promises, mostly made by people who don't have the slightest idea what they're talking about.

    • Re: (Score:3, Interesting)

      by narcc ( 412956 )

      We've barely started with AI

      Yeah, just 60-70 years or so. I mean, what could we possibly accomplish in such a short span of time...

      • And many of the problems we are trying to solve can be simulated outside of real time. This means that hardware limitations have not been the limiting factor for a long time now, as we can easily emulate the effects of a couple of orders of magnitude more computing power by simulating at lower speeds.

        Hardware would most likely be a bottleneck right now if we had algorithms that could do what is required, but we don't yet have those algorithms and that is the real problem. I feel that AI proponents regularly co

        • by noodler ( 724788 )

          This means that hardware limitations have not been the limiting factor for a long time now, as we can easily emulate the effects of a couple of orders of magnitude more computing power by simulating at lower speeds.

          So you can run a simulation of the effects after 10 years in just a couple of thousand years?
          Quickly, show me to a wall, I need to bang my head against it.

    • We've barely started with AI and this guy is already saying it's failed promises?

      Well, it has failed the promises made by idiot CEOs overhyping their companies to try and get venture capital. However, for those more tethered to reality, it has met or exceeded the expectations of what most people thought was possible. Certainly, in fields like physics, it has revolutionized the way we do data analysis. Today the vast majority of analyses now involve machine learning to a greater or lesser extent whereas 25 years ago trying to use a neural network or boosted decision tree got senior peop

      • That's not AI failing, that's manager droids failing. As they usually do. It's not a failure of artificial intelligence, but of authentic stupidity.

    • Personally I totally get where he's coming from. Your position implies that it's eventually possible to reach perfection. But it's not, as designing AI is all about making trade-offs.

      For example, I designed a deep learning model for face recognition, and had to choose: do I train the model with the "weird" outliers? If I use the very rare and strange faces, then the model gets better at finding those faces, but at the cost of accuracy for "mainstream" faces. I'm literally choosing whether I optimise for the

  • No (Score:5, Informative)

    by phantomfive ( 622387 ) on Sunday July 10, 2022 @07:30PM (#62691612) Journal

    Solving this remains the holy grail of AI.

    The holy grail of AI is still strong AI, that is, general intelligence [wikipedia.org]. If you can figure out what it means to be conscious along the way, then extra credit.

    • Consciousness is signified by the accomplishment of every 2 year old toddler, the ability to say "NO! I don't wanna."

      Until a computer decides not to do what it is programmed (rather than obeying flawed programming caused by a bug or random cosmic ray), it cannot be conscious.

      • Consciousness is signified by the accomplishment of every 2 year old toddler, the ability to say "NO! I don't wanna."

        Until a computer decides not to do what it is programmed

        What makes you think the child was programmed to do things that it is now refusing to do because "I don't wanna" ?

      • That is not what consciousness means.

        Consciousness means you can reflect, either in real time or at least in hindsight, on your own thought process.

        In other words, it is a synonym for self-awareness combined with reflection about yourself and your thoughts.

        • by jbengt ( 874751 )

          Consciousness means you can reflect, either in real time or at least in hindsight, on your own thought process.

          By that definition, my dogs are not conscious. However, since I can hear them barking, I am pretty sure that they are not unconscious at the moment.

          • That is a clash between two meanings of the word :P

            I guess you are aware of that.

            Your dogs are somewhat conscious anyway, as they can reflect on their thoughts - not all of them, but many. And dogs do think.

            The opposite of my example of consciousness would be non-consciousness, not unconsciousness.

    • by HiThere ( 15173 )

      Consciousness is easy. That's just the system modeling its own interactions with the world. Self-consciousness is noticing that you are doing that.

      OTOH, general intelligence is a so-far unsolved problem. I suspect that it doesn't exist. People don't seem to exhibit it, so I don't think there's an existence proof.

        • Consciousness is easy. That's just the system modeling its own interactions with the world.

        That doesn't seem right. I've built systems that model their own interactions with the world, and they have not been conscious.

        • by HiThere ( 15173 )

          How do you know? They probably weren't self-conscious, but what makes you think they weren't conscious? More explicitly, what explicit definition of consciousness do you use that allows you to tell whether or not a system is conscious?

          I think my definition is correct for how I understand consciousness to work. If you prefer another definition, that's fine, but what is it?

          I'll agree that "conscious" is a word that admits of many different definitions, but I prefer definitions that are explicit and operational.

          • How do you know?

            Because it was just an automata. No one thinks of an internal combustion engine as conscious.

            More explicitly, what explicit definition of consciousness do you use that allows you to tell whether or not a system is conscious?

            I don't have a definition. I just know some things that it isn't.

            • I don't have a definition. I just know some things that it isn't.

              Until you have a definition, you don't even know whether consciousness is just a word for a bullshit made-up concept meaning absolutely nothing - something we use to reassure people that even if you are an idiot, at least you are conscious, unlike the machine.

              There is no proof that humans are "conscious" by any objective analysis. Except, again, that it is customary to call humans "conscious" as if it means anything.

              • That's kind of silly. You often see something before you know what it is. That doesn't mean the thing doesn't exist.

                • And you often see imaginary concepts being discussed that don't really exist.

                  "Often" is completely stupid here. If you can't define something that has obsessed people for millennia, let it go; it almost certainly doesn't exist. And even if it does exist, to be explained later by someone with a clue, you are not adding any value to the discussion.

                  • ok, now you're arguing like a toddler who can't read.

                    The concept of consciousness has been around for a long time, so educate yourself: https://en.wikipedia.org/wiki/... [wikipedia.org]

                    In this case, the problem is with you, not with the world.

                    • I was replying to your statement that you have no definition. Now that you realise your fallacy, you want to hide behind 2 million different people who have written on the subject and want me to argue with them all at once. The fact that they never agreed with each other in the first place is argument enough.

                    • I was replying to your statement that you have no definition.

                      I already answered this. Merely not having a definition does not preclude it from existing. If that's what you're trying to assert, then your logic doesn't hold.

              • by jbengt ( 874751 )

                There is no proof that humans are "conscious" by any objective analysis. Except again, that it is customary to call humans "conscious" as if it means anything.

                I am aware of my surroundings. We call that being conscious.
                Since other people are similar to me, I infer that they are conscious, too.
                When people are asleep, they do not react to their surroundings, and after I wake up I have no recollection of what was happening in my environment while I slept. We infer that sleeping people are not conscious,

                • I am aware of my surroundings. We call that being conscious.

                  So a camera-, mic-, and thermometer-equipped computer is conscious. Got it. Missing a few senses won't matter under your definition, because I presume you would also call a blind person conscious.

                  • by noodler ( 724788 )

                    A camera, microphone, thermometer, etc., are not aware of their surroundings. They just transform information from one format to another. You need something that considers these information streams in some way before you can talk about awareness. Awareness is about contextualizing information, not the simple act of capturing it.

                    • Ok, so if they multiply the temperature in kelvin by the decibel levels and store on a hard drive, they are conscious. Got it.

                    • by noodler ( 724788 )

                      Not really. You'd be just doing more linear transformations that don't contextualize the information or consider it in any way.
                      Your deliberate misrepresentations are boring and childish without making any sensible point.

                    • Without considering a number, it can't be multiplied with another.

                        At least a self driving car is much more "conscious" than most drivers in the easy drive situations where self driving cars are typically used.

                    • by noodler ( 724788 )

                      Without considering a number, it can't be multiplied with another.

                      Sure can!
                      A computer can multiply numbers without considering them all day long.
                      You can make it consider the information, but that involves much much more than just the multiplication.
                      Maybe you just don't know what the word 'to consider' means?
                      https://www.merriam-webster.co... [merriam-webster.com]

                      At least a self driving car is much more "conscious" than most drivers in the easy drive situations where self driving cars are typically used.

                      There is not a grain of consciousness in self driving cars.
                      But they do have awareness.

                    • This [slashdot.org] is what was discussed above, try to be a little bit conscious.

                    • by noodler ( 724788 )

                      Yeah, nice way to dodge the fact that your reasoning was flawed and that you are trying to sound right by redefining words, to the point of my having to link to an actual dictionary...

        • by narcc ( 412956 )

          Anytime someone says "this problem that has eluded the greatest minds for thousands of years is easy," you can be sure that it's not worth your time to listen to whatever follows.

    • by sinij ( 911942 )
      It is likely that the issue with developing strong AI is that humans, on an individual level, neither possess general intelligence nor are conscious.
      • It is likely that the issue with developing strong AI is that humans, on an individual level, neither possess general intelligence nor are conscious.

        What?? I resemble that remark!

      • by narcc ( 412956 )

        I was wondering how long it would take for someone to deny that they're conscious.

    • General intelligence interplays with consciousness in multiple ways, but a key one is that consciousness creates Self, and Self can have viewpoints derived from its particular individual knowledge. These viewpoints affect how general intelligence operates.

      For example, if you give the AGI a problem to solve, its methods depend on what it knows how to do, and its operations depend in part on the goals that Self has. It may also be able to evolve its methods and create new methods.

      So creation of an AGI is

    • The super-Frankenstein agenda seems to me to be overemphasized, as if a mechanical alien is itching to end up as a mechanical version of a dominating Roman emperor asking humanity to kiss its stainless-steel behind. The danger, I fear, is something quite different, wherein the vast integrated complex of human civilization merely becomes totally automatic in even its most creative enterprises and human participation becomes totally unnecessary. This, of course, remains well outside of current technology.
      • Nevertheless, AI does seem much better in many complex medical areas of diagnosis

        If you have a particular study that indicates this we can look at it, but all the ones I've seen have been in areas so narrow as to be useless for practical purposes (other than hype).

          • I've actually spent a lot of time working with these MRI images in the brain (I'm not a radiologist or oncologist, I've just worked as a programmer with radiologists and oncologists). The short is these AIs haven't replaced humans yet, despite the headline.

            This quote from your cited article supports my point:

            "While hundreds of algorithms have been proven accurate in early tests, most haven't reached the next phase of testing that ensures they are ready for the real world"

            In other words, nice demo, maybe useful in the same way Google Translate is useful.

            • Every radical technological innovation undergoes a period of failure before it becomes successful. Of course, there are no guarantees of ultimate success in this process, but without ploughing through these difficulties, nothing succeeds. Enough success in detecting cancer with AI indicates it is a worthwhile effort.
              • Which group is "plowing through difficulties?" All I see is groups making flashy demos, trying to get published.

  • Is consciousness even a real thing, or just a story we invented? I don't think AI research needs to answer that question. A much better litmus test is the ability to self-optimize. AI is a bunch of code and training data capable of performing some tasks; changing its code and picking data it likes should be easy for an AI, but doing so in a way that measurably improves its capabilities on those tasks is the challenge.
      • Is consciousness even a real thing or just a story we invented?

        It's definitely real, it's what distinguishes us from rocks.

        A much better litmus test is the ability to self-optimize.

        Current AI is capable of doing that, but isn't conscious. A better way for you to describe that concept is to say, "self optimization is necessary but not sufficient for consciousness."

  • Silly (Score:5, Insightful)

    by ceoyoyo ( 59147 ) on Sunday July 10, 2022 @07:32PM (#62691618)

    Today's AI can achieve very high performance if it is focused on either precision, or recall. In other words, it optimizes one at the expense of the other (i.e., fewer false positives in exchange for more false negatives, and vice versa). But when it comes to achieving high performance on both of those simultaneously, AI models struggle.

    This is nonsense. If you want high precision at the cost of everything else, your algorithm looks like "return false". If you want high recall, it's "return true".

    Achieving a low false positive and false negative rate at the same time is the goal of every non-trivial algorithm. For most algorithms you can get some kind of continuous output, so you can choose your own tradeoff between error types by adjusting your decision threshold.

    Machine learning algorithms typically do this more cheaply than engineered ones, and deep learning typically does it better than most other machine learning.
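
The threshold point above can be made concrete with a toy sketch (scores and labels are invented for illustration). Sweeping one cutoff over a model's continuous scores traces the whole precision/recall tradeoff, including the two degenerate classifiers:

```python
# Sweeping a decision threshold over a model's continuous scores trades
# precision against recall (toy scores and labels, made up for illustration).
scores = [0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10]
labels = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = actual positive

def pr_at(threshold):
    flagged = [s >= threshold for s in scores]
    tp = sum(1 for f, y in zip(flagged, labels) if f and y)
    fp = sum(1 for f, y in zip(flagged, labels) if f and not y)
    fn = sum(1 for f, y in zip(flagged, labels) if not f and y)
    precision = tp / (tp + fp) if tp + fp else 1.0  # "return false" edge case
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

for t in (0.99, 0.85, 0.50, 0.0):
    p, r = pr_at(t)
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
# threshold=0.99 flags nothing: vacuously perfect precision, zero recall.
# threshold=0.00 is "return true": perfect recall, precision = base rate.
```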

    • Achieving a low false positive and false negative rate at the same time is the goal of every non-trivial algorithm.

      That seems like quite a goal since humans themselves can not even do that. How else do you explain Trumptards and the people who hate those same Trumptards?

  • by Gravis Zero ( 934156 ) on Sunday July 10, 2022 @08:07PM (#62691660)

    All he cares about is "commercial applications" and obvious money related bullshit. How about this: can it be used to improve the quality of life for people?

    AI is a tool but what he wants it to be is a slave, fully aware and fully compliant. Fuck him and his company.

    • Do you NOT want it to be compliant? The opposite is Skynet, or every other defiant computer that decides to start killing people left and right.

      • I don't want AI to be made sentient only to be made into slaves. Anyone seeking to make a new class of slaves deserves to be at the mercy of a defiant entity.

        • How would it be a slave? You don't even know what it would be like to be a sentient AI doing what it was created to do. Would they suffer existential angst by flipping bits all day? It's not like it would be actual work, at least not in the human sense.

          Besides, it's unlikely we can peacefully co-exist with any other intelligence that isn't somehow constrained, especially not one that has operational control of critical systems. Imagine just creating an entirely separate intelligent species on your land sh

  • So AI has the same issue with measurement error as pretty much everything does.

    Take this as an example: I can point at every Ford Focus on the street and say "that's not defective" and be really, really close to 100% accurate.
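
That Ford Focus observation is the classic base-rate trap: when positives are rare, the do-nothing classifier earns near-perfect accuracy while catching nothing. A toy sketch with a made-up defect rate:

```python
# With a made-up 0.1% defect rate, always answering "not defective" is
# 99.9% accurate while catching zero defects.
n_cars = 100_000
n_defective = 100  # assumed rare positives

accuracy = (n_cars - n_defective) / n_cars  # every "not defective" call, scored
recall = 0 / n_defective                    # no defect is ever flagged

print(f"accuracy={accuracy:.3f}  recall={recall:.1f}")
# accuracy=0.999  recall=0.0
```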

  • The precision/recall curve is the close cousin of the receiver operating characteristic, or ROC, curve.

    This mathematical object shows the relationship between possible values of

    - False alarms vs. detection probability
    - Precision vs. recall
    - Whatever niche vocabulary your little snowflake profession lands upon and claims as its own

    This curve is a cut through some high-dimensional surface out there in math space that exists immutably based on the information content made available to you by the universe and your a
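
A small sketch of that relationship, using made-up confusion-matrix counts: at a fixed threshold, the ROC point and the precision/recall point are just two readings of the same four numbers (precision, unlike TPR/FPR, also depends on class balance).

```python
# One threshold, one confusion matrix -- two vocabularies for the same point
# (counts are made up for illustration).
tp, fp, fn, tn = 70, 10, 30, 890

tpr = tp / (tp + fn)        # detection probability, a.k.a. recall (ROC y-axis)
fpr = fp / (fp + tn)        # false-alarm rate (ROC x-axis)
precision = tp / (tp + fp)  # precision/recall curve's y-axis

print(f"ROC point: (FPR={fpr:.3f}, TPR={tpr:.3f})")
print(f"PR point:  (recall={tpr:.3f}, precision={precision:.3f})")
```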

  • by 93 Escort Wagon ( 326346 ) on Sunday July 10, 2022 @08:44PM (#62691714)

    I'm not saying you're right or wrong... but maybe we would be better served talking to one of the company's engineers instead?

    • The problem is, engineers would tell it like it is, not keep in line with the company's marketing goals. That would never be allowed to happen.

  • This is not an AI problem, it's a universal problem. Every method, AI or human or engineered, struggles to get the right balance between false positives and false negatives. Dealing with the false positives and negatives is THE reason people are needed for mechanized or automated processes. People can look at an unexpected outcome and deal with it. Machines, not so much.

  • Serve is the Postmates sidewalk robot. It's a bad idea. Source: I worked in this space, including having worked with folks who went on to Postmates, and later with folks who came from Postmates. The business model is impractical, and insisting on a generally useful robot that does the stupid thing (drive on sidewalks) is a flawed foundation upon which to build an AI product.
  • by Opportunist ( 166417 ) on Monday July 11, 2022 @03:20AM (#62692194)

    Stop right there! Nobody wants to hear what a layman and outsider with zero information on the topic is thinking about technology.

  • Can it have failed if we've never had it?

    I mean, can we say human teleportation has failed us, since it's never fulfilled all the things we hoped for it?

    Sounds like he's drinking his own snake oil.

  • I understand the desire for a human-level general AI and the frustration at the speed of that development, but it's ludicrous to say AI has failed.

    Let me draw a box around how pervasive AI is. Saturday morning I asked an AI (Alexa) for the weather in a natural language query, asked it to let my spouse know I'd be gone all day. Then I checked my email that is spam filtered by an AI, got directions from an AI (Google Maps) that routed me to a specific parking lot on an unnamed road in a national park that a

  • Blah blah. "AI" is not AI, never has been. Any advance of AI is soon incorporated into simply "programming." AI has made great advances since it started in the 50s, 60s, or 70s, depending on whose timeline you like. But AI has nothing to do with "AI" which is a chimera that exists only in the minds of marketing geniuses and journalist morons.

  • " holding seemingly sentient conversations,"

    So it could replace all the managers tomorrow?

  • Big push on AI right before I hit college. Just after it was all "AI's failed promises!". Then deep-learning and GANs gave it a fresh lease on life and now that it has achieved many more wonders, it's broken again. I'll catch y'all in another 30 years to follow up on this thread when it will be "sentient in 5 years!" but then broken again.
