MIT Robotics Pioneer Rodney Brooks On Generative AI

An anonymous reader quotes a report from TechCrunch: When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies, including Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997. In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he's doing. He knows what he's talking about, and he thinks maybe it's time to put the brakes on the screaming hype that is generative AI. Brooks thinks it's impressive technology, but maybe not quite as capable as many are suggesting. "I'm not saying LLMs are not important, but we have to be careful [with] how we evaluate them," he told TechCrunch.

He says the trouble with generative AI is that, while it's perfectly capable of performing a certain set of tasks, it can't do everything a human can, and humans tend to overestimate its capabilities. "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task." He added that the problem is that generative AI is not human or even human-like, and it's flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don't make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It's instead much simpler to connect the robots to a stream of data coming from the warehouse management software. "When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it's just going to slow things down," he said. "We have massive data processing and massive AI optimization techniques and planning. And that's how we get the orders completed fast."
"People say, 'Oh, the large language models are gonna make robots be able to do things they couldn't do.' That's not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization," he said.

"It's not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots," he said.
  • by ihadafivedigituid ( 8391795 ) on Wednesday July 03, 2024 @07:23PM (#64599467)

    "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,"

    "AI" doesn't have to be HAL9000 or Skynet to be super disruptive. It only has to speed up and/or improve a human's performance.

    We are already seeing job losses due to the massive efficiencies gained by savvy people with LLMs helping them. Copywriting is being devastated as I post this, for just one example.

    • How many copywriters have lost their job to AI? Do you know?
    • by gweihir ( 88907 ) on Wednesday July 03, 2024 @09:58PM (#64599681)

      He was actually just not commenting on your question.

      Incidentally, I agree with you. What we will see in a ton of low-level white-collar jobs is the "Amazon Warehouse" model: replace 10 workers with 2 workers and 10 robots. And that can be done. But it is customer service, sales, administration, and so on. Probably no engineers or scientists will lose their jobs, but a lot of people will, and it will reach a degree that threatens the stability of society. And this time, the technology does not make any new product or any better product. It just makes bureaucracy require fewer people to do it, so no replacement jobs.

      The problem is that, in terms of actual productivity, society needs maybe one day of work per person per week. The other days are filled with bureaucracy and administration. And AI can make those massively more efficient. But with that, the wealth-distribution mechanisms of society break down and a lot of "work" becomes obsolete, again with no replacement. A UBI is just one thing that is needed, and it has no alternative. It is not enough though; people need to be helped to do something with their time or society will burn.

      • I generally agree with what you said, and this is something I have been preaching for quite a while:

        A UBI is just one thing that is needed, and it has no alternative. It is not enough though; people need to be helped to do something with their time or society will burn.

        Idle hands do the devil's work.

        • by gweihir ( 88907 )

          Idle hands do the devil's work.

          Not universally, but many definitely will. And being able to deal with that may well become a survival-critical thing for society. It is completely anathema to the "work or you are a bad person" idea many people have, though, and just generating work artificially will not cut it. I do hope this will not result in a Great Big Evil solution like locking everybody up or some synthetic religion or something. But we never had that problem on the level of a whole society before. Small groups, like the Aristocracy,

    • Jeez... you missed his point!
    • Comment removed based on user account deletion
  • It's not efficient for the stocking system to translate instructions into a voice command and issue it to a human either. It IS a long-held dream to be able to sit on the couch and inefficiently yell things like "Robot, get me a beer!" That's what language models are for: translating inefficient human communication into efficient commands.
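
    For concreteness, a hypothetical sketch of the kind of structured command such a translation layer might target; the schema, field names, and values are invented for illustration and not taken from any real robot API.

        from dataclasses import dataclass

        @dataclass
        class FetchCommand:
            item: str          # what to fetch
            source: str        # where to look for it
            deliver_to: str    # where to bring it

        # The language model's only job here is to map the fuzzy request
        # "Robot, get me a beer!" onto this unambiguous structure; the robot's
        # planner then works with the structure, not the words.
        cmd = FetchCommand(item="beer", source="kitchen fridge", deliver_to="couch")
        print(cmd)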

    • So why don't we have that kind of robot then? Why doesn't even Alexa do a good job playing the music I want?
      • Brooks, in his full comments, explains why: if you want the robot to get you a beer, the hard part isn't understanding what you want, it's actually performing the task. Written language is the easiest part of AI, and that's why it's the first to show some great results.

      • by ceoyoyo ( 59147 )

        Natural language processing, up until recently, has been very hard. Human language manages to be highly redundant and ambiguous at the same time. That's this guy's point: natural language is shit for communication, so you definitely don't want LLMs involved in your warehouse robotics.

        If you're starting with a human, you don't have much choice. Everyone (well, not Amazon) is racing to stick LLMs and neural-network speech recognition into their Siris and Googles and whatever else. When they do, those things a

    • The thing about LLMs is that they do impressive things with sloppy, fuzzy input, but by the same token produce sloppy, fuzzy output. This can be tremendously useful, but not in all cases. It might accelerate production of more precisely deterministic code with a human auditing, correcting, testing, etc., but even if it were equally cheap, you'd be using the deterministic code over trying to directly use an LLM for a lot of scenarios that can't really accept very fuzzy input.

      • by ceoyoyo ( 59147 )

        LLMs != chat apps.

        An LLM is a language model. You can use one to translate natural language into numerical vectors representing meaning, or vice versa. That vector represents as precise a meaning as the model can derive from the imprecision of natural language.

        Chat apps decode natural language input (using an LLM encoder), do some processing of that to formulate a response, add noise, then convert that into natural language again (using an LLM decoder). Note the "add noise" part. The sloppy, fuzzy output is
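
        For concreteness, a minimal sketch of that "add noise" step as it is commonly implemented: sampling from the model's output distribution with a temperature, rather than always taking the most likely token. The numbers are made up; this is an illustration, not any particular vendor's code.

            import numpy as np

            def sample_next_token(logits, temperature=0.8, rng=None):
                # Temperature > 0 injects randomness: higher values flatten the
                # distribution, so less likely tokens get picked more often.
                # Temperature near 0 approaches greedy (deterministic) decoding.
                rng = rng or np.random.default_rng()
                scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
                probs = np.exp(scaled - scaled.max())   # numerically stable softmax
                probs /= probs.sum()
                return rng.choice(len(probs), p=probs)

            # Toy example: three candidate tokens with raw scores.
            logits = [2.0, 1.0, 0.5]
            print(sample_next_token(logits, temperature=0.2))   # almost always token 0
            print(sample_next_token(logits, temperature=1.5))   # noticeably more varied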

  • It's kind of weird to hear someone talking rationally about the issue in terms of benefits/limitations when everyone else has lost their mind in the hype. It's become an emotional topic (because every topic is now).
    • by gweihir ( 88907 )

      This is (among other factors) because society has stopped making it clear to people how smart (or not) they are. Cannot tell the fragile little egos they may not be Einstein-level after all. People failing maths and science and thinking they are very smart. People that have zero on-target education and no scientific education at all and think they can competently evaluate complex scientific questions. People that do not even notice their conclusions are inconsistent in the most obvious ways. All of them convinced they have truth. These days everybody believes they are an expert on everything.

      • by ghoul ( 157158 )
        Smart people like you fall for the fallacy of comparing AI to your intelligence. Most people are dumb. AI is perfectly capable of replacing people who could not pass science and math. So AI is more of a threat to their livelihood. You may say "not that great at all". But neither are most humans.
        • by gweihir ( 88907 )

          Most people are dumb. AI is perfectly capable of replacing people who could not pass science and math. So AI is more of a threat to their livelihood. You may say "not that great at all". But neither are most humans.

          Well, yes. And no. Even dumb people (and I agree most people are dumb) are way ahead of LLMs, and by a very long shot. What may not be way ahead is what they actually do in their jobs. And that _is_ a threat, I agree, and I have said so here numerous times. It is not that an LLM could replace a low-level white collar worker. It generally cannot. But it does not need to. If it can replace that worker 80% of the time, then 4 of 5 of them will lose their jobs. This was possible before with expert systems, for exam

      • This is (among other factors) because society has stopped making it clear to people how smart (or not) they are.

        Thank goodness for that. I was getting awfully tired of asshats with a god complex running around constantly commenting about how smart they think they are and how dumb they believe everyone else is.

        Cannot tell the fragile little egos they may not be Einstein-level after all.

        Yes absolutely, we all wake up every morning thinking we are little Einsteins.

        People failing maths and science and thinking they are very smart.

        Yea like totally... I don't know about anyone else yet my first reaction to failing is certainly damn I'm too smart for this class. This must be a completely normal reaction.

        People that have zero on-target education and no scientific education at all and think they can competently evaluate complex scientific questions. People that do not even notice their conclusions are inconsistent in the most obvious ways. All of them convinced they have truth. These days everybody believes they are an expert on everything.

        Absolutely... totally agree... everybody thinks they are ex

        • by gweihir ( 88907 )

          Fascinating. And now explain why you bothered posting that. You may notice something when you think about it. Or not.

  • by gweihir ( 88907 ) on Wednesday July 03, 2024 @09:44PM (#64599669)

    Seeing that really does not require some highly qualified expert. A person with working rationality who can fact-check (a minority, to be sure, at 20% or so of the general population) is quite enough. For us, this is exceptionally obvious.

    My latest test: Let students write a bogo-sort, but with shuffling first. What does Artificial Ignorance deliver? Check first, because that is what it has seen, never mind the exceptionally clear requirement in the task description. This was on an exam to boot (open Internet, AI allowed). About 90% handed in the wrong solution. This tech is really as dumb as a rock. It has seen a lot, true, but it has zero insight and understanding and cannot identify what matters and what does not. All it can do is correlations. Good luck having it do anything where the spec actually matters.

    • Let students write a bogo-sort, but with shuffling first.

      What do you mean shuffling first? As in "do { shuffle } while not sorted", as opposed to "while not sorted { shuffle }"?

      • by gweihir ( 88907 )

        Yes. The shuffle step was a bit more elaborate, but clearly before the check step, in a "1. shuffle, 2. check whether sorted and repeat if not" kind of way. No idea which AIs my students used, but it looks like none of the AIs got it. There were a few non-AI or clearly marked partial-AI solutions and these were fine. Oh, and I did not give them the name but called it something else.
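
        For concreteness, a minimal sketch of the shuffle-first variant as described here (not the actual exam task, whose shuffle step was more elaborate): the shuffle happens unconditionally before each sortedness check, rather than only after a failed check.

            import random

            def bogo_sort_shuffle_first(items):
                # Shuffle-first bogosort: 1. shuffle, 2. check whether sorted,
                # repeat if not. Even an already-sorted input gets shuffled at
                # least once, which is what distinguishes this variant from the
                # usual check-first form.
                items = list(items)
                while True:
                    random.shuffle(items)                     # step 1: shuffle
                    if all(items[i] <= items[i + 1]           # step 2: check
                           for i in range(len(items) - 1)):
                        return items

            print(bogo_sort_shuffle_first([3, 1, 2]))   # e.g. [1, 2, 3]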

        • Yeah it's funny what they don't get, or rather interesting what they do and don't get.

          They are remarkably good for things close in some sense to the training set. Obviously they produce things not exactly in their training set, but they have real trouble with certain kinds of variations. I'm mildly surprised that it can't reorder bits, I would have thought that would be straightforward enough. Though perhaps the combination of bogosort (which is not that common) and an unusual variation on shuffling pushed it

          • by gweihir ( 88907 )

            Exactly. If they have it in the training set or can get there via strong enough correlation (but _not_ implication), then they generally do fine. If the tiniest derivation step is needed (i.e. implication), then they are completely lost. In humans, understanding implications corresponds to rational thought and the ability to fact-check. Correlation, on the other hand, basically only gives you conformity and the ability to repeat what others said. Now look at how many people do not get how implications, i.e.

            • Well, I have about 25 instances of proof that AI can not do it.

              You'd think that matters. It does not appear to.

              • by gweihir ( 88907 )

                I think what we see here is a clear division between people that can see a thing for what it is and can fact-check on one side, and on the other side people that believe they can do these things but really cannot and instead just select something to believe. At this time it is safe to say that anybody that expects actual intelligence in an LLM or thinks LLMs are the path to AGI is not rational and cannot fact-check at all. The severe limits of LLMs are blatantly obvious and can be verified by anybody, no sp

                • There's also this weird binary thing you get which is either an LLM can do something or it can't. But of course it's not binary. They can't do logic in a general sense, but they can do some sorts of logical reasoning.

                  I've not investigated fully, but I suspect as long as the logical reasoning is similar enough in shape somehow to existing ones, then it can manage. For less common ones, or ones with counterintuitive conclusions that are poorly represented in training, it falls over.

                  But either way, "can do s

                  • by gweihir ( 88907 )

                    I've not investigated fully, but I suspect as long as the logical reasoning is similar enough in shape somehow to existing ones, then it can manage.

                    That would be my observation and it would be consistent with the theory. In fact, LLMs cannot do logical reasoning at all. But they can adapt a logical reasoning step they have seen in their training data often enough, as long as that adaption only requires correlation and no reasoning in itself. Basically, they can bend a puzzle-piece they already have a bit to make it fit a hole. This is not proper logical reasoning, but it can substitute for easy and often done and published logical steps. It can also ca

  • As with any tool, people will naturally develop a sense for the capabilities of these systems over time as they gain experience. Some initial misunderstanding of the capabilities of unfamiliar technology is to be expected.

    • Normally I'd agree with you.

      But the use case of the LLM is "this is too much text to parse manually, make the robot do it for me".

  • CSAIL wasn't created until 2004, so Rod Brooks couldn't have directed it in 1997. He directed the AI Lab from 1997-2003, and then when the AI Lab and LCS merged to form CSAIL, he was director of CSAIL for three years, from 2004-2007. Though CSAIL didn't exist before that, if you want to simplify and claim there was a "director of CSAIL" from 1997-2003, then Mike Dertouzos and Victor Zue (who ran LCS) would have as much claim to that title as Brooks.

  • Panasonic Professor of Robotics Emeritus at MIT

    What the fuck does that even mean?
