Comment Re:It's not the year of robotic AI. (Score 1) 69
I doubt that our mutual experience levels are going to allow us to agree on these points.
Consider the architecture of an RTOS vs., say, Linux (which has been re-architected in an RTOS release). Let's map the RTOS to a standard kernel for the sake of this example.
An RTOS is entirely reactive. Inputs control it at all times. This reactivity is like the self-driving application that must make snap judgments about numerous conditions and rapid input changes to alter the path of what it's controlling. It needs enormous computational strength to make snap judgments that direct projected control.
Oh, no question that it's a hard problem. The difference is that self-driving can be broken down easily into a bunch of smaller hard problems, and each problem has, to some extent, a right answer, or at least relatively straightforward ways to objectively verify that an answer is not wrong. For example, you can fairly objectively define what constitutes a reasonable driving path, and provide guard rails when running your tests against newly trained models that fail if the output goes outside those parameters. So this is more at the NP-complete level of complexity, where computing a solution is hard, but verifying it is of polynomial complexity.
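To make the "verifying is polynomial" point concrete, a guard-rail check of the kind described can be sketched in a few lines. Everything here is invented for illustration (the function name, the thresholds, the representation of a path as (x, y) points); real driving stacks use far richer constraints, but the shape of the check is the same:

```python
import math

# Hypothetical sketch of a "guard rail" test: given a candidate path emitted
# by a newly trained model, objectively verify that it stays near the lane
# center and never turns sharply. All names and thresholds are assumptions.

def path_is_reasonable(path, lane_center, max_offset_m=0.5, max_heading_delta=0.2):
    # Lateral check: every path point must stay within max_offset_m
    # of the corresponding lane-center point.
    for (_, y), (_, cy) in zip(path, lane_center):
        if abs(y - cy) > max_offset_m:
            return False
    # Smoothness check: the heading change between successive segments
    # must stay below max_heading_delta radians (no snap turns).
    for a, b, c in zip(path, path[1:], path[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        if abs(h2 - h1) > max_heading_delta:
            return False
    return True
```

Computing a good path is the hard part; a check like this runs in linear time over the path, which is exactly the asymmetry the NP-complete analogy points at.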
What constitutes a reasonable architecture for a piece of software is either entirely subjective, or an intractably large set of objective constraints, or some combination thereof, because maintainability considerations play a role, along with data backwards/forwards compatibility, etc. I'm not even sure where you would begin trying to define adequate guard rails. This is more like the broader NP-hard level of complexity, where computing a solution is hard, and verifying it may well be even harder.
Its state-machine logic has to be incredibly unerring, all whilst moving down the road at speed with humans as cargo.
Without that onus, a kernel tries to systematically deliver interactive response so quickly that users see no pause. There is no human payload, only the satisfaction that screen and device updates are acceptable, perhaps a few pauses now and then as one rotates a vector 3D model through onscreen space.
The coding model for the RTOS behind realtime transport navigation is a different one than, say, CRM, web, or pub/sub models with messaging reactivity.
No disagreement on any of those points. The tolerances for self-driving car tech are indeed higher than the tolerances for tools that write software. But part of the reason for that is that there's a human in the loop in the latter situation. You aren't trying to write software that can design a word processor from scratch. You're writing software that can design a single function or maybe a small class that performs a specific behavior from scratch, and all of the hard work happened before you even asked — specifically, coming up with the specifications.
I know that's true for self-driving as well, but the difference is that the specifications for working self-driving behavior are largely consistent across platforms, with the exception of some specific rules of the road being different in different countries, whereas the specifications for a word processor are entirely unrelated to the specifications for an image editor or a web server.
So at that general level, being able to drive a car is at best like AI being able to write a word processor, and AI being able to write any arbitrary piece of software is by definition a much broader problem.
In your robotics example, it's connected to a pub/sub network that delivers it largely realtime information about the characteristics of what it navigates while sucking your floor. The pub/sub model micro-rewards various participants in its revenue model while cleaning up your litter and dead skin.
Probably not. The robotic vacs can be used entirely offline; you just lose the ability to control them remotely if you do. And if you believe their privacy policy, no data is stored remotely except for aggregated data.
Tesla and others' nav firmware isn't finished. It's not provable, only a sum of projections;
I mean, that's the very definition of AI.
the realtime driverless taxis clog the veins in SF, as an example, where people uniformly vilify their stupidity.
That would be Cruise, I suspect. The general perception of Waymo in SF seems to be pretty positive.
Clearly, they're not ready. It doesn't matter if you badge them with a Jaguar leaper on their hood or not-- they're working only under very highly confined circumstances.
I wouldn't call it "highly confined". The biggest constraint was lack of support for driving on the freeway. They just started testing that in early 2024, and got regulatory approval in California in mid-2024. Without freeway driving, self-driving cars wouldn't be feasible in a lot of cities. Now that they've started doing freeway driving, I suspect you'll find that the number of situations and environments that they can't handle is remarkably small, bounded largely by the need to do high-definition mapping drives first.
I agree that it is imperfect, particularly when construction is involved, but the difference between the modern Waymo cars and the hesitant Waymo cars from a decade ago is night and day, speaking as someone who periodically ends up driving near one.
No, it's not ready for prime time, and the Chinese robotics meme isn't so much a sham as it is wishful thinking, a hope and a prayer.
I disagree on the first part, but I agree with the second. I have little faith in any mass-manufactured humanoid robots being usable right now. But if they bring the cost of the hardware down enough through mass manufacturing, then as long as they run some sufficiently open operating system, other folks will find interesting ways to use them, and will figure out how to make the software work.
For example, electronics PCB manufacturing is already highly automated, with human workers loading tape reels of components into pick-and-place machines, and the machines doing all the rest. It would be hard to replace all of that hardware with new hardware designed for any sort of automated loading, but it seems obvious that a humanoid robot with the right programming could fully automate the loading of components. That's well within the realm of what robotics can do today.
Similarly, Amazon warehouses have non-humanoid robots that can already pick a lot of things off of warehouse shelves. A humanoid robot could probably do a better job, and they actually have the software engineering resources to make that happen. Whether any of that software would ever become available outside of Amazon is, of course, a different question.