'I'm CEO of a Robotics Company, and I Believe AI's Failed on Many Fronts' (fastcompany.com) 173
"Aside from drawing photo-realistic images and holding seemingly sentient conversations, AI has failed on many promises," writes the cofounder and CEO of Serve Robotics:
The resulting rise in AI skepticism leaves us with a choice: We can become too cynical and watch from the sidelines as winners emerge, or find a way to filter noise and identify commercial breakthroughs early to participate in a historic economic opportunity. There's a simple framework for differentiating near-term reality from science fiction. We use the single most important measure of maturity in any technology: its ability to manage unforeseen events commonly known as edge cases. As a technology hardens, it becomes more adept at handling increasingly infrequent edge cases and, as a result, gradually unlocking new applications...
Here's an important insight: Today's AI can achieve very high performance if it is focused on either precision, or recall. In other words, it optimizes one at the expense of the other (i.e., fewer false positives in exchange for more false negatives, and vice versa). But when it comes to achieving high performance on both of those simultaneously, AI models struggle. Solving this remains the holy grail of AI....
Delivery Autonomous Mobile Robots (AMRs) are the first application of urban autonomy to commercialize, while robo-taxis still await an unattainable hi-fi AI performance. The rate of progress in this industry, as well as our experience over the past five years, has strengthened our view that the best way to commercialize AI is to focus on narrower applications enabled by lo-fi AI, and use human intervention to achieve hi-fi performance when needed. In this model, lo-fi AI leads to early commercialization, and incremental improvements afterwards help drive business KPIs.
By targeting more forgiving use cases, businesses can use lo-fi AI to achieve commercial success early, while maintaining a realistic view of the multi-year timeline for achieving hi-fi capabilities.
After all, sci-fi has no place in business planning.
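For readers who don't live in this jargon: precision is the fraction of items a detector flags that are actually positive, and recall is the fraction of actual positives it manages to flag. A minimal sketch in Python with made-up counts (my illustration, not numbers from the article) shows why tuning a detector tends to buy one at the expense of the other:

    # Hypothetical counts, chosen only to illustrate the trade-off described above.
    def precision(tp, fp):
        return tp / (tp + fp) if (tp + fp) else 0.0

    def recall(tp, fn):
        return tp / (tp + fn) if (tp + fn) else 0.0

    # A cautious detector: few false alarms, but it misses a lot.
    print(precision(tp=80, fp=5), recall(tp=80, fn=40))    # ~0.94 precision, ~0.67 recall

    # An aggressive detector: catches almost everything, with junk mixed in.
    print(precision(tp=115, fp=60), recall(tp=115, fn=5))  # ~0.66 precision, ~0.96 recall

Pushing both numbers toward 1.0 at the same time is the "hi-fi" performance the excerpt says current models struggle with.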
Forget intelligence. Ask if it's useful instead. (Score:2)
That seemingly sentient chatbot AI. Suppose we trained it on math and physics journals and in addition gave it curiosity and access to all explicit math knowledge we have. Then have it chat with actual mathematicians and physicists. It might not be intelligent, whatever that is, but it might come up with some things that are very useful.
Re: Forget intelligence. Ask if it's useful instea (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Sigh... This, again, is one of those cases where things are purposefully misleading. The use of the term "curiosity" is intended to make us think of curiosity as it applies to humans. This is very much not the same thing as "curiosity" as described in your video.
The guy who made the video knows better, and gives a pretty good explanation of the technique. Still, he seems to want to perpetuate the AI myth, even though he clearly knows better, with the bit about "watching TV" near the beginning of the vid
Re: Forget intelligence. Ask if it's useful instea (Score:4, Interesting)
Oh, yes. Don't you know AI researchers all overlook simple solutions like that? It's all math this, formula that. I mean, have they even tried "setting it loose" on the internet? What about giving it feelings? Why, they're all too busy trying to make the best Go player in the world that I'll bet they've never sat down and read it children's books before bed or tried teaching it common sense! What they need is someone with absolutely no knowledge of the subject to tell them to do these obvious things ...
Most people have a very child-like understanding of AI. A lot of them honestly think that we already have things just like Hal 9000, Marvin, or Commander Data, they just need the right upbringing. People like that think "giving it curiosity" is no different than inspiring wonder in a child. Just show it how "cool" math can be and set it loose on wikipedia.
I can hardly blame them. The reality isn't nearly as exciting as the usual pop sci junk makes it seem. I've tried in the past to explain things in a clear and simple way for laypersons to understand, but the fantasy is just so much more appealing than the reality that I don't know that I've made any difference at all.
Re: (Score:3)
That's not a really bad idea, but it grossly misunderstands the problem. The "chatbot" might be turned into a front end for a more intelligent program. It won't be the part making the "intelligent" choices, but rather the part talking informally about them.
I've got a theory that except for things like pronoun tracking, there isn't much real intelligence involved in a "chatbot". One thing backing this up is the way the Eliza program accidentally passed the (informal version of the) Turing test back in the
Re: (Score:2)
We already have this with voice prompt customer service systems. They are helpful to the company. They are often much less than helpful with customers. Voice Prompt Hell is already here
Re: (Score:2)
And I saw the documentary of the resulting monster machine.
It thought all biological systems were an illogical infestation and tried to kill humanity. It was called Veeger or something.
Re: (Score:2)
If the AI takes everything for granted it will eventually also incorporate faulty algorithms.
The point is that an AI has to be able to spot contradictions as well as similarities, but also to know whether an algorithm it encounters is a good enough approximation or not. E.g., Newtonian physics is good enough at small scales like the Earth-Moon system for most practical purposes, but if you want high precision you also need to include relativity and quantum physics.
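As a rough back-of-the-envelope illustration of "good enough" (my numbers, not the parent's): the leading relativistic correction to Newtonian kinetic energy is roughly (3/4)*(v/c)^2 of the Newtonian value, which at the Moon's orbital speed is vanishingly small:

    c = 2.998e8   # speed of light, m/s
    v = 1022.0    # approximate mean orbital speed of the Moon, m/s
    print(0.75 * (v / c) ** 2)   # ~8.7e-12: Newton is plenty at Earth-Moon scales

An AI that could make that kind of error estimate for itself would be doing exactly the "is this approximation good enough" judgment the parent describes.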
This seems like an overreaction. (Score:5, Insightful)
We've barely started with AI and this guy is already saying it's failed promises?
Also, this is how science and research works. You try something. It succeeds or fails. You try again. Repeat.
Less whining. More learning.
Re: (Score:3)
Moderators: upvote this (Score:2)
Certainly, the most insightful remark to appear on Slashdot in a long time.
No one else around here would have come to that conclusion let alone have expressed it so concisely in a post.
Re: (Score:3)
To be fair, there are a lot of stupid promises, mostly made by people who don't have the slightest idea what they're talking about.
Re: (Score:3, Interesting)
We've barely started with AI
Yeah, just 60-70 years or so. I mean, what could we possibly accomplish in such a short span of time...
Re: (Score:3)
And many of the problems we are trying to solve can be simulated outside of real time. This means that hardware limitations have not been the limiting factor for a long time now, as we can easily emulate the effects of a couple of orders of magnitude more computing power by simulating at lower speeds.
Hardware would most likely be a bottleneck right now if we had algorithms that could do what is required, but we don't yet have those algorithms, and that is the real problem. I feel that AI proponents regularly co
Re: (Score:3)
This means that hardware limitations have not been the limiting factor for a long time now, as we can easily emulate the effects of a couple of orders of magnitude more computing power by simulating at lower speeds.
So you can run a simulation of the effects after 10 years in just a couple of thousand years?
Quickly, show me to a wall, I need to bang my head against it.
Re: This seems like an overreaction. (Score:2)
Re: (Score:2)
This is a useless semantics argument.
Actually it is not.
Calling any program whose behaviour is defined by a set of training data, as opposed to hard-coded logic, an AI is perfectly fine.
No, it is not perfectly fine. It is completely wrong. Neither is AI. The first one is a trained ANN, which is not AI; the second one is a single clever algorithm that can do only one thing cleverly.
Intelligence of an ant or an ape is a question of scope and magnitude, not nature.
And Ant is not intelligent. They are biologica
Re: (Score:3)
And Ant is not intelligent.
You seem to operate on the basis of a very narrow definition of intelligence.
Can you maybe explain what you think intelligence is?
Re: (Score:2)
Can you maybe explain what you think intelligence is?
No.
But I can explain to you why Ants - or Bees for that matter - are not intelligent.
However, the conclusions would probably be shocking; perhaps you would even suddenly start believing in gods - otherwise how ants and bees evolved can hardly be explained by "simple evolution" :P
Re: (Score:2)
But I can explain to you why Ants - or Bees for that matter - are not intelligent.
Please do.
Re: (Score:2)
Says this:
What AI is or should be is not well defined.
...then goes on to tell people what is or isn't AI.
Re: (Score:2)
I only told people: what is not AI. Simple.
Hm, did you not somehow challenge "reading comprehension" in another post?
Hint: I studied that stuff.
Hint *number two*: you did not.
Regarding reading comprehension: I made it pretty clear that I know what AI is not, as I got the best grades in the examinations about it :P - perhaps I should have mentioned that?
Re: (Score:2)
I only told people: what is not AI. Simple.
By defining what something is not you actually attempt to define what it is.
Hint: I studied that stuff.
Ooh, I'm so scared of your Uberintellekt now.
Just for your information, you have justified nothing you said.
Your position here is basically "because it was told to me in uni".
There is no better way to show stupidity than to refer to your education.
Regarding reading comprehension: I made it pretty clear that I know what AI is not, as I got the best grades in the examinations about it :P
So you were good at repeating the things told to you? Nice for you. You're a big boy now!
But maybe you should try thinking instead of just repeating.
Failed to meet unrealistic hype (Score:3)
We've barely started with AI and this guy is already saying it's failed promises?
Well, it has failed the promises made by idiot CEOs overhyping their companies to try and get venture capital. However, for those more tethered to reality, it has met or exceeded the expectations of what most people thought was possible. Certainly, in fields like physics, it has revolutionized the way we do data analysis. Today the vast majority of analyses involve machine learning to a greater or lesser extent, whereas 25 years ago trying to use a neural network or boosted decision tree got senior peop
Re: (Score:2)
That's not AI failing, that's manager droids failing. As they usually do. It's not a failure of artificial intelligence, but of authentic stupidity.
Re: (Score:2)
Personally I totally get where he's coming from. Your position implies that it's eventually possible to reach perfection. But it's not, as designing AI is all about making trade-offs.
For example, I designed a deep learning model for face recognition, and had to choose: do I train the model with the "weird" outliers? If I use the very rare and strange faces, then the model gets better at finding these faces, but at the cost of accuracy for "mainstream" faces. I'm literally choosing whether I optimise for the
Re: (Score:3)
Trial and error is a big part of science. Once you get working results, then the theoreticians explain them. That's another part. The two parts almost always appear in separate people, and quite often in separate organizations.
Re: (Score:2)
Trial and error is a big part of science.
Sure, but that bit won't work by itself. You also need to make intelligent propositions about what to try. And you also need to actually learn something from the failures, so it's not just going "Hey, this didn't work, let's throw some other shit against the wall and see if it sticks".
It's kind of like saying "Hey, you buy four wheels and you've got a car!". You don't. You still need a frame, a steering mechanism, brakes, an engine, gas, etc, etc, etc. And all these things need to work together before you can even re
Re: (Score:2)
Re: This seems like an overreaction. (Score:2)
No (Score:5, Informative)
Solving this remains the holy grail of AI.
The holy grail of AI is still strong AI, that is, general intelligence [wikipedia.org]. If you can figure out what it means to be conscious along the way, then extra credit.
Re: (Score:2)
Consciousness is signified by the accomplishment of every 2 year old toddler, the ability to say "NO! I don't wanna."
Until a computer decides not to do what it is programmed (rather than obeying flawed programming caused by a bug or random cosmic ray), it cannot be conscious.
Re: (Score:2)
Consciousness is signified by the accomplishment of every 2 year old toddler, the ability to say "NO! I don't wanna."
Until a computer decides not to do what it is programmed
What makes you think the child was programmed to do things that it is now refusing to do because "I don't wanna" ?
Re: (Score:2)
That is not what consciousness means.
Consciousness means you can reflect on your own thought process, either in real time or at least in hindsight.
In other words, it is a synonym for self-awareness combined with reflection about yourself and your thoughts.
Re: (Score:2)
By that definition, my dogs are not conscious. However, since I can hear them barking, I am pretty sure that they are not unconscious at the moment.
Re: (Score:2)
That is a meaning clash of the words :P
I guess you are aware of that.
Your dogs are somewhat conscious anyway, as they can reflect on their thoughts - not all of them, but many. And they do: dogs do think.
The opposite of my example of consciousness would be non-consciousness, not unconsciousness.
Re: (Score:2)
My programs often do the unexpected. No need for ANNs. :(
Re: (Score:2)
Consciousness is easy. That's just the system modeling its own interactions with the world. Self-consciousness is noticing that you are doing that.
OTOH, general intelligence is a so-far unsolved problem. I suspect that it doesn't exist. People don't seem to exhibit it, so I don't think there's an existence proof.
Re: (Score:2)
Consciousness is easy. That's just the system modeling its own interactions with the world.
That doesn't seem right. I've built systems that model their own interactions with the world, and they have not been conscious.
Re: (Score:2)
How do you know? They probably weren't self-conscious, but what makes you think they weren't conscious? More explicitly, what explicit definition of consciousness do you use that allows you to tell whether or not a system is conscious?
I think my definition is correct for how I understand consciousness to work. If you prefer another definition, that's fine, but what is it?
I'll agree that "conscious" is a word that admits to many different definitions, but I prefer definitions that are explicit and operati
Re: (Score:2)
How do you know?
Because it was just an automaton. No one thinks of an internal combustion engine as conscious.
More explicitly, what explicit definition of consciousness do you use that allows you to tell whether or not a system is conscious?
I don't have a definition. I just know some things that it isn't.
Re: (Score:2)
I don't have a definition. I just know some things that it isn't.
Until you have a definition, you don't even know whether consciousness is just a word for a bullshit made-up concept meaning absolutely nothing - something we use to make people happy that, even if you are an idiot, at least you are conscious, unlike the machine.
There is no proof that humans are "conscious" by any objective analysis. Except again, that it is customary to call humans "conscious" as if it means anything.
Re: (Score:2)
That's kind of silly. You often see something before you know what it is. That doesn't mean the thing doesn't exist.
Re: (Score:2)
And you often see imaginary concepts being discussed that don't really exist.
"Often" is completely stupid here. If you can't define something obsessing people for millenia, let it go, it almost certainly doesn't exist. And even if it does to be explained later by someone with a clue, you are not adding any value to the discussion.
Re: (Score:2)
ok, now you're arguing like a toddler who can't read.
The concept of consciousness has been around for a long time, so educate yourself: https://en.wikipedia.org/wiki/... [wikipedia.org]
In this case, the problem is with you, not with the world.
Re: No (Score:2)
I was replying to your statement that you have no definition. Now that you realise your fallacy, you want to hide behind 2 million different people who have written on the subject and want me to argue with them all at once. The fact that they all never agreed with each other in the first place is argument enough for them.
Re: (Score:2)
I was replying to your statement that you have no definition.
I already answered this. Merely not having a definition does not preclude it from existing. If that's what you're trying to assert, then your logic doesn't hold.
Re: (Score:2)
I am aware of my surroundings. We call that being conscious.
Since other people are similar to me, I infer that they are conscious, too.
When people are asleep, they do not react to their surroundings, and after I wake up I have no recollection of what was happening in my environment while I slept. We infer that sleeping people are not conscious,
Re: (Score:2)
I am aware of my surroundings. We call that being conscious.
So a camera-, mic-, and thermometer-equipped computer is conscious. Got it. Missing a few senses won't matter in my reading of your definition, because I presume you would also call a blind person conscious.
Re: (Score:2)
A camera, microphone, thermometer, etc., are not aware of their surroundings. They just transform information from one format to another. You need something that considers these information streams in some way before you can talk about awareness. Awareness is about contextualizing information. It is not the simple act of capturing this information.
Re: (Score:3)
Ok, so if they multiply the temperature in kelvin by the decibel levels and store on a hard drive, they are conscious. Got it.
Re: (Score:2)
Not really. You'd be just doing more linear transformations that don't contextualize the information or consider it in any way.
Your deliberate misrepresentations are boring and childish without making any sensible point.
Re: (Score:2)
Without considering a number, it can't be multiplied with another.
At least a self driving car is much more "conscious" than most drivers in the easy drive situations where self driving cars are typically used.
Re: (Score:2)
Without considering a number, it can't be multiplied with another.
Sure can!
A computer can multiply numbers without considering them all day long.
You can make it consider the information, but that involves much much more than just the multiplication.
Maybe you just don't know what the word 'to consider' means?
https://www.merriam-webster.co... [merriam-webster.com]
At least a self driving car is much more "conscious" than most drivers in the easy drive situations where self driving cars are typically used.
There is not a grain of consciousness in self driving cars.
But they do have awareness.
Re: (Score:2)
This [slashdot.org] is what was discussed above, try to be a little bit conscious.
Re: (Score:2)
Yeah, nice way to dodge the fact that your reasoning was flawed and that you are trying to sound right by redefining words, to the point of me having to link to an actual dictionary...
Re: (Score:2)
Anytime someone says "this problem that has eluded the greatest minds for thousands of year is easy" you can be sure that it's not worth your time to listen to whatever follows.
Re: (Score:2)
Re: (Score:2)
It is likely that the issue with developing strong AI is that humans at the individual level neither possess general intelligence nor are conscious.
What?? I resemble that remark!
Re: (Score:2)
I was wondering how long it would take for someone to deny that they're conscious.
Re: No (Score:2)
Re: (Score:2)
For example, if you give the AGI a problem to solve, its methods depend on what it knows how to do, and its operations depend in part on the goals that Self has. It may also be able to evolve its methods and create new methods.
So creation of an AGI is
Re: (Score:2)
Re: (Score:2)
Nevertheless, AI does seem much better in many complex medical areas of diagnosis
If you have a particular study that indicates this we can look at it, but all the ones I've seen have been in areas so narrow as to be useless for practical purposes (other than hype).
Re: (Score:2)
Re: (Score:2)
I've actually spent a lot of time working with these MRI images in the brain (I'm not a radiologist or oncologist, I've just worked as a programmer with radiologists and oncologists). The short is these AIs haven't replaced humans yet, despite the headline.
This quote from your cited article supports my point:
"While hundreds of algorithms have been proven accurate in early tests, most haven’t reached the next phase of testingExit Disclaimer that ensures they are ready for the real world"
In other words, nice demo, maybe useful in the same way Google Translate is useful.
Re: (Score:2)
Re: (Score:2)
Which group is "plowing through difficulties?" All I see is groups making flashy demos, trying to get published.
Re: No (Score:2)
Re: (Score:2)
Is consciousness even a real thing or just a story we invented?
It's definitely real, it's what distinguishes us from rocks.
A much better litmus test is the ability to self-optimize.
Current AI is capable of doing that, but isn't conscious. A better way for you to describe that concept is to say, "self optimization is necessary but not sufficient for consciousness."
Silly (Score:5, Insightful)
This is nonsense. If you want high precision at the cost of everything else your algorithm looks like "return false". If you want high recall, it's "return true."
Achieving a low false positive and false negative rate at the same time is the goal of every non-trivial algorithm. For most algorithms you can get some kind of continuous output, so you can choose your own tradeoff between error types by adjusting your decision threshold.
Machine learning algorithms typically do this more cheaply than engineered ones, and deep learning typically does it better than most other machine learning.
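A toy sketch of that threshold point, with synthetic scores rather than a real model: positives tend to score higher than negatives, and sliding a single decision threshold trades recall for precision along the same underlying curve.

    import numpy as np

    rng = np.random.default_rng(0)
    # Fake classifier scores: positives tend to score higher than negatives, with overlap.
    scores = np.concatenate([rng.normal(0.7, 0.15, 1000), rng.normal(0.4, 0.15, 4000)])
    labels = np.concatenate([np.ones(1000, bool), np.zeros(4000, bool)])

    for threshold in (0.3, 0.5, 0.7):
        predicted = scores >= threshold
        tp = np.sum(predicted & labels)
        fp = np.sum(predicted & ~labels)
        fn = np.sum(~predicted & labels)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        print(f"threshold={threshold:.1f}  precision={precision:.2f}  recall={recall:.2f}")

Raising the threshold trims false positives (precision goes up) while missing more real positives (recall goes down); the ROC and precision/recall curves are just this sweep plotted out.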
Re: (Score:2)
Achieving a low false positive and false negative rate at the same time is the goal of every non-trivial algorithm.
That seems like quite a goal since humans themselves can not even do that. How else do you explain Trumptards and the people who hate those same Trumptards?
How myopic. (Score:3)
All he cares about is "commercial applications" and obvious money related bullshit. How about this: can it be used to improve the quality of life for people?
AI is a tool but what he wants it to be is a slave, fully aware and fully compliant. Fuck him and his company.
Re: (Score:2)
Do you NOT want it to be compliant? The opposite is Skynet, or every other defiant computer that decides to start killing people left and right.
Re: (Score:2)
I don't want AI to be made sentient only to be made into slaves. Anyone seeking to make a new class of slaves deserves to be at the mercy of a defiant entity.
Re: (Score:2)
How would it be a slave? You don't even know what it would be like to be a sentient AI doing what it was created to do. Would they suffer existential angst by flipping bits all day? It's not like it would be actual work, at least not in the human sense.
Besides, it's unlikely we can peacefully co-exist with any other intelligence that isn't somehow constrained, especially not one that has operational control of critical systems. Imagine just creating an entirely separate intelligent species on your land sh
Re: (Score:2)
as long as the benefits trickle down.
Umm... when has that ever been the case?
Re: (Score:2)
That's the tricky part under our current system. Bots will eventually be able to do most of the grunt work, but the benefits are mostly going to the bot owners, not regular people.
Measurement Error (Score:2)
So AI has the same issue with measurement error as pretty much everything does.
Take this as an example: I can point at every Ford Focus on the street and say "that's not defective" and be really, really close to 100% accurate.
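Putting made-up numbers on that (the defect rate here is purely illustrative): when defects are rare, the "never flag anything" strategy gets near-perfect accuracy and zero recall, which is why accuracy alone is a poor yardstick.

    n_cars = 100_000
    n_defective = 50                        # hypothetical defect rate of 0.05%

    true_negatives = n_cars - n_defective   # every good car we correctly called fine
    accuracy = true_negatives / n_cars      # 0.9995
    recall_of_defects = 0 / n_defective     # 0.0 -- we never flagged a single defect

    print(accuracy, recall_of_defects)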
Precision and recall. Typical phb talk (Score:2)
The precision/recall curve is the close cousin of the receiver operating characteristic, or ROC, curve.
This mathematical object shows the relationship between possible values of
-False alarms vs detection probability
-Precision vs recall
-Whatever niche vocabulary your little snowflake profession lands upon and claims as its own
This curve is a cut through some high-dimensional surface out there in math space that exists immutably based on the information content made available to you by the universe and your a
You're the CEO (Score:3)
I'm not saying you're right or wrong... but maybe we would be better served talking to one of the company's engineers instead?
Re: (Score:2)
The problem is, engineers would tell it like it is, not keep in line with the company's marketing goals. That would never be allowed to happen.
False positives vs. false negatives (Score:2)
This is not an AI problem, it's a universal problem. Every method, AI or human or engineered, struggles to get the right balance between false positives and false negatives. Dealing with the false positives or negatives is THE reason people are needed for mechanized or automated processes. People can look at an unexpected outcome and deal with it. Machines, not so much.
Maybe their business model is just shit? (Score:2)
I'm a CEO.... (Score:3)
Stop right there! Nobody wants to hear what a layman and outsider with zero information on the topic is thinking about technology.
huh? (Score:2)
Can it have failed if we've never had it?
I mean, can we say human teleportation has failed us, since it's never fulfilled all the things we hoped for it?
Sounds like he's drinking his own snake oil.
Slow down. (Score:2)
I understand the desire for a human-level general AI and the frustration at the speed of that development, but it's ludicrous to say AI has failed.
Let me draw a box around how pervasive AI is. Saturday morning I asked an AI (Alexa) for the weather in a natural language query, asked it to let my spouse know I'd be gone all day. Then I checked my email that is spam filtered by an AI, got directions from an AI (Google Maps) that routed me to a specific parking lot on an unnamed road in a national park that a
blah blah blah (Score:2)
Blah blah. "AI" is not AI, never has been. Any advance of AI is soon incorporated into simply "programming." AI has made great advances since it started in the 50s, 60s, or 70s, depending on whose timeline you like. But AI has nothing to do with "AI" which is a chimera that exists only in the minds of marketing geniuses and journalist morons.
Indeed (Score:2)
" holding seemingly sentient conversations,"
So it could replace all the managers tomorrow?
/sigh. AI's broken again? (Score:2)
Big push on AI right before I hit college. Just after it was all "AI's failed promises!". Then deep-learning and GANs gave it a fresh lease on life and now that it has achieved many more wonders, it's broken again. I'll catch y'all in another 30 years to follow up on this thread when it will be "sentient in 5 years!" but then broken again.
Re: (Score:3)
Yes, notice how the CEO even admits that it is "...a historic economic opportunity". He doesn't say a historic scientific or computing opportunity. He doesn't say it will be something that makes people's lives better. No, it is about the money.
Re: slashvertisement (Score:2)
Re: (Score:2)
This comment is getting really old. Go look up what AI means. Hint: it does not mean what you think it means.
Re: (Score:2)
for which nobody has a solid understanding of their operating mechanisms.
What? We have a complete understanding. There is absolutely no mystery at all.
Re: AI hasn't failed to deliver anything yet (Score:3)
Re: (Score:2)
This is misleading, and it drives me crazy. We can give a complete and comprehensive explanation for the results. What we can't do, is offer an explanation in human terms, because it's not operating at that level of description.
We want to be able to say things like "the classifier mistook the image for a cat because this looks like whiskers and that looks like ears" but that's just not how these things work.
Re: (Score:2)
It's been a marketing scam since the term was coined. Pamela McCorduck, who was there at the time, talks about this at length in her book Machines Who Think.
Re: what is AI? (Score:2)
Re: He is correct (Score:2)