Comment Re:It's not the year of robotic AI. (Score 1) 69

I doubt that our mutual experience levels are going to allow us to agree on these points.

Consider the architecture of an RTOS vs., say, Linux (which has been re-architected in an RTOS release). Let's map the RTOS to a standard kernel for the sake of this example.

An RTOS is entirely reactive. Inputs control it at all times. This reactance is like the self-driving application that must make snap judgments of numerous conditions and rapid input changes to alter the path of what it's controlling. It needs excruciating calculative strength to make snap judgments that direct projected control.

Oh, no question that it's a hard problem. The difference is that self-driving can be broken down easily into a bunch of smaller hard problems, and each problem has, to some extent, a right answer, or at least relatively straightforward ways to objectively verify that an answer is not wrong. For example, you can fairly objectively define what constitutes a reasonable driving path, and provide guard rails in running your tests against newly trained models that fail if it goes outside of those parameters. So this is more in the NP-complete level of complexity, where computing it is hard, but verifying the solution is of polynomial complexity.
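To make that asymmetry concrete, here's a toy sketch of a guard-rail check. The constraint names and thresholds are entirely invented; the point is only that verifying a candidate path against objective limits is a cheap linear scan, even if producing the path was expensive.

```python
# Hypothetical guard-rail check: verifying a planned path is cheap,
# even when generating it was expensive. Numbers are made up.

def path_within_guard_rails(path, lane_center, max_offset_m=1.5, max_curvature=0.2):
    """Return True if every point stays near the lane center and
    no step turns more sharply than the curvature limit."""
    for i, x in enumerate(path):
        if abs(x - lane_center) > max_offset_m:
            return False  # drifted out of the lane envelope
        if i >= 2:
            # discrete second difference as a crude curvature proxy
            if abs(path[i] - 2 * path[i - 1] + path[i - 2]) > max_curvature:
                return False
    return True

good = [0.0, 0.05, 0.1, 0.1, 0.05]
bad = [0.0, 0.1, 2.0, 0.1, 0.0]   # swerves out of the lane
print(path_within_guard_rails(good, lane_center=0.0))  # True
print(path_within_guard_rails(bad, lane_center=0.0))   # False
```

That's the whole "verify" side; the "compute" side (actually planning the path) is where all the hard work lives.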

What constitutes a reasonable architecture for a piece of software is either entirely subjective or an intractably large set of objective constraints, or some combination thereof, because maintainability considerations play a role, along with data backwards/forwards compatibility, etc. I'm not even sure where you would begin trying to define adequate guard rails. This is more like the broader NP-hard level of complexity, where computing a solution is hard, and verifying it may well be even harder.

Its state-machine logic has to be incredibly unerring, all whilst moving down the road at speed with humans as cargo.

Without that onus, a kernel tries to systematically deliver interactive response so quickly that users see no pause. There is no human payload, only the satisfaction that screen and device updates are acceptable, perhaps a few pauses now and then as one rotates a vector 3D model through onscreen space.

The coding model for the RTOS behind realtime transport navigation is a different one than say, CRM, web, or pub/sub models with messaging reactance.

No disagreement on any of those points. The tolerances for self-driving car tech are indeed higher than the tolerances for tools that write software. But part of the reason for that is that there's a human in the loop in the latter situation. You aren't trying to write software that can design a word processor from scratch. You're writing software that can design a single function or maybe a small class that performs a specific behavior from scratch, and all of the hard work happened before you even asked — specifically, coming up with the specifications.

I know that's true for self-driving as well, but the difference is that the specifications for working self-driving behavior are largely consistent across platforms, with the exception of some specific rules of the road being different in different countries, whereas the specifications for a word processor are entirely unrelated to the specifications for an image editor or a web server.

So at that general level, being able to drive a car is at best like AI being able to write a word processor, and AI being able to write any arbitrary piece of software is by definition a much broader problem.

In your robotics example, it's connected to a pub/sub network to deliver it largely realtime information about the qualities of characteristics that it navigates, sucking your floor. The pub/sub model micro-rewards various participants in its revenue model, while cleaning your litter and dead skin.

Probably not. The robotic vacs can be used entirely offline; you just lose the ability to control them remotely if you do. And if you believe their privacy policy, no data is stored remotely except for aggregated data.

Tesla and others' nav firmware isn't finished. It's not provable, only a sum of projections;

I mean, that's the very definition of AI.

the realtime driverless taxis clog the veins in SF, as an example, where people uniformly vilify their stupidity.

That would be Cruise, I suspect. The general perception of Waymo in SF seems to be pretty positive.

Clearly, they're not ready. It doesn't matter if you badge them with a Jaguar leaper on the hood or not; they're working only under very highly confined circumstances.

I wouldn't call it "highly confined". The biggest constraint was lack of support for driving on the freeway. They just started testing that in early 2024, and got regulatory approval in California in mid-2024. Without freeway driving, self-driving cars wouldn't be feasible in a lot of cities. Now that they've started doing freeway driving, I suspect you'll find that the number of situations and environments that they can't handle is remarkably small, bounded largely by the need to do high-definition mapping drives first.

I agree that it is imperfect, particularly when construction is involved, but the difference between the modern Waymo cars and the hesitant Waymo cars from a decade ago is night and day, speaking as someone who periodically ends up driving near one.

No, it's not ready for prime time, and the Chinese robotics meme isn't so much a sham, as it's wishful thinking, a hope and prayer.

I disagree on the first part, but I agree with the second. I have little faith in any mass-manufactured humanoid robots being usable right now. But if they bring the cost of the hardware down enough through mass manufacturing, then as long as they run some sufficiently open operating system, other folks will find interesting ways to use them, and will figure out how to make the software work.

For example, electronics PCB manufacturing is already highly automated, with human workers loading tape reels of components into pick-and-place machines, and the machines doing all the rest. It would be hard to replace all of that hardware with new hardware designed for any sort of automated loading, but it seems obvious that a humanoid robot with the right programming could fully automate the loading of components. That's well within the realm of what robotics can do today.

Similarly, Amazon warehouses have non-humanoid robots that can already pick a lot of things off of warehouse shelves. A humanoid robot could probably do a better job, and they actually have the software engineering resources to make that happen. Whether any of that software would ever become available outside of Amazon is, of course, a different question.

Comment Re:It's not the year of robotic AI. (Score 1) 69

The transient nature of navigating transportation obstacles requires knowing many concepts, and avoiding the ones that lead to bad outcomes. Driving automation and coding intersect at many junctures.

Code is not static, and neither is driving. On a good day, easily summoned choices can be made, and on a bad day, dependencies require astute and rapid choices to be made productively.

Making a choice at its simplest is an if statement. It's boolean logic. Making lots of rapid choices that take into account the data coming in can be measured objectively. Creative efforts can only be measured subjectively. That by itself makes the two fundamentally different in terms of designing training systems.

The timing of transportation doesn't wait; conclusions of many inputs have to render the right choice in an action. Deftly done, all is good, rider arrives at a destination, money earned, no harm no foul.

Let's be realistic here. On 99% of drives, nothing interesting happens. You just have to pick the correct lane, stop for stop signs and traffic lights, and obey the speed limit, and you get there safely. For 1% of drives, somebody cuts in front of you or tries to sideswipe you or steps out in front of you. And for the 95% case, you just have to recognize that this is about to happen by computing the speed and direction of each vehicle, pedestrian, animal, or other large object, and hit the brakes soon enough. By the time you get into situations where you have to steer to avoid something, you're at more like 0.0001% of drives.

This, of course, ignores the path planning headaches of some drives involving driving through safety cones for temporary lane shifts, or on rare occasions, having to deal with a human directing traffic, but again, these are relatively rare edge cases.
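The common case described above (compute closing speed, brake in time) is simple enough to sketch in a few lines. The thresholds here are invented; a real system fuses many sensors and a far richer motion model.

```python
# Hypothetical time-to-collision check: brake when an object's closing
# speed would bring it to us too soon. Thresholds are made up.

def should_brake(distance_m, closing_speed_mps, reaction_s=0.5, threshold_s=3.0):
    """Brake if the object will reach us within threshold_s seconds,
    after accounting for system reaction time."""
    if closing_speed_mps <= 0:
        return False  # object is holding distance or pulling away
    time_to_collision = distance_m / closing_speed_mps
    return (time_to_collision - reaction_s) < threshold_s

print(should_brake(distance_m=60.0, closing_speed_mps=5.0))  # False: 12 s away
print(should_brake(distance_m=10.0, closing_speed_mps=8.0))  # True: 1.25 s away
```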

A similar sequence of events occurs in programming. The only item ostensibly removed is a split-second life/death choice.

Which means it's not a similar sequence at all from a complexity perspective.

In driving, once you have passed a particular spot in the road, what happened back there no longer matters. You can forget about it. And anything that isn't about to happen within the next double-digit seconds also doesn't matter. You don't need to think about it yet. There's a very narrow window of temporal data that matters.

In programming, every single one of those decisions has to take into account at some level every future decision that will lead to successfully building the app, not only immediately, but also making it maintainable for features that you might want to add later. You can't look at one part of the project in isolation, do your job, and walk away unless you are a very junior programmer working on a very well-defined task. And if you are a very junior programmer, for you to even be able to get that well-defined task in the first place, someone else had to think about all of those things; it just wasn't you.

So no, these are not similar. Not at all. The decisions made by a self-driving car are expressible as a simple state machine, with simple outputs (steer X% left or right, brake x%). All of the complexity is in the data gathering (get the location and motion vector for all interesting objects) and path planning. The decisions made by a programmer are not. The inputs are vague English descriptions of what you want to build. The outputs are large, complex software systems.
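As a minimal illustration of that control-side simplicity (with invented states and numbers): however complex the perception and planning layers are, the per-tick output of the driving loop reduces to a state transition and a couple of scalars.

```python
# Toy driving-control state machine. States, gains, and numbers are
# invented for illustration; the point is the shape of the output.

def control_step(state, hazard_ahead, lane_offset_m):
    """Return (next_state, steer_pct, brake_pct) for one tick."""
    if hazard_ahead:
        return ("BRAKING", 0.0, 80.0)
    if state == "BRAKING":
        return ("CRUISING", 0.0, 0.0)  # hazard cleared, resume cruising
    # steer proportionally back toward lane center, capped at 10%
    steer = max(-10.0, min(10.0, -5.0 * lane_offset_m))
    return ("CRUISING", steer, 0.0)

state, steer, brake = control_step("CRUISING", hazard_ahead=False, lane_offset_m=0.5)
print(state, steer, brake)  # CRUISING -2.5 0.0
state, steer, brake = control_step(state, hazard_ahead=True, lane_offset_m=0.0)
print(state, steer, brake)  # BRAKING 0.0 80.0
```

There is no comparably small representation of "the output of a programmer," which is the asymmetry being argued here.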

That under highly confined circumstances, driving after millions of miles of training in limited geography, AI can drive some cars is just a toe-dip in the real world. Phoenix, SF, LA: they get little snow. They have minimal random objects invading spaces.

I'll grant you that snow is a problem. It's also a problem for human drivers, though. Random objects? They're about as likely in the Bay Area as in any other city.

The template you cite is highly-confined, somewhat to maximally arid circumstance and environment. The real world is but a fraction of that.

Actually, quite the opposite. Driving usefully in a dense city with pedestrians and cyclists is worst-case for self-driving tech. Driving out in the middle of nowhere means you're orders of magnitude less likely to encounter other cars, pedestrians, and cyclists, so the decisions tend to be way less critical. Obviously, the lack of updated maps can be a problem, but that's more of a process issue, not a technology issue. The only other real difference is the potential for higher road speeds, but a car's ability to drive at higher speeds is mostly bounded by CPU/GPU performance and camera resolution for seeing things at longer distances, i.e. it is the sort of complexity difference that will go away on its own with a few years of hardware improvements.

Your new robot vacuum doesn't make any money, it just phones home and rats out your living quarters geometry for profits. Look it up. And you know how your Tesla knows your every move. There is no privacy in a Tesla. You're part of the product. You charge at Tesla chargers, use the screen for nav and looking up restaurants. You're part of the product. You're no longer autonomous as a driver, and not really in control. Hope that works for you.

Yes, it does. Thanks for asking. I've read Tesla's privacy policy, and it seems entirely reasonable. Same with Roborock. If that changes, I'll let you know.

Comment One word: embedded (Score 1) 191

If you think companies are going to keep using Windows for their embedded devices when the company has to create an online account before they ship the box to customers, you have another thing coming. Letting the manufacturer of a product have permanent control over that hardware and permanent ability to take control over someone else's account on a device that they own would be a showstopper for literally every company that preinstalls Windows for embedded use.

Comment Re:Oye Vey (Score 1) 27

So what fresh new hell have they cooked up for my domain certs now?

... that they will no longer issue certificates for your domain to someone else? Sounds like a good thing, on the whole!

The thing is, they could shut down and guarantee that with 100% certainty. Making it harder for someone else to get certs for your domain is nice until it prevents you from getting certs for your own domain. Also, it does nothing for the gaping hole that lookalike domains represent.

As far as I am concerned, TLS certificates stopped being beneficial when they stopped costing thousands of dollars. Expanding TLS to every random person who wants to run a web server destroyed their value in recognizing a site's legitimacy. For everybody else, we should have used a separate system, like public keys in DNS records with key pinning a la SSH. Instead, they basically broke TLS completely, so the security icon in Safari for Amazon.com is indistinguishable from the security icon in Safari for my personal website with a free certificate from LetsEncrypt.

From here on, there's nothing that they can realistically do to make TLS trustworthy, because it just isn't. They could make it impossible for other people to get certs for your sites, and that still won't make it more trustworthy. For 99.9999% of use cases, we'd be better off if they stopped trying, let anybody get a cert who wants one, and used key pinning to guarantee that fake certs are useless except at sites you haven't visited. If you have pinning, the only remaining benefit for actually validating the ownership of the domain is for extended validation certs, and again, if you can't tell which ones are which, then the industry has decided that they don't matter, so why bother with that?

In other words, the whole industry is basically a useless tax on the web at this point, and should go away.
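For what it's worth, the pinning scheme suggested above is tiny to sketch: trust-on-first-use, as SSH does it. Details (key encoding, storage, rotation) are deliberately simplified here.

```python
# Trust-on-first-use key pinning, SSH-style: remember the fingerprint
# of each host's public key, and reject any later change.
import hashlib

pins = {}  # hostname -> pinned key fingerprint

def check_pin(hostname, public_key_bytes):
    """Pin on first contact; afterwards, accept only the pinned key."""
    fingerprint = hashlib.sha256(public_key_bytes).hexdigest()
    if hostname not in pins:
        pins[hostname] = fingerprint  # first visit: trust and remember
        return True
    return pins[hostname] == fingerprint

print(check_pin("example.com", b"key-A"))  # True  (first use, pinned)
print(check_pin("example.com", b"key-A"))  # True  (same key as pinned)
print(check_pin("example.com", b"key-B"))  # False (different key rejected)
```

With this model, a fraudulently issued cert is useless against anyone who has visited the site before, which is the claim being made about sites you've already visited.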

Comment Re:It's not the year of robotic AI. (Score 2) 69

No. The same myriad inputs needed for safety in navigating an auto for a passenger is quite similar to the variety of skills needed to be a good coder.

No, not really. Being a good programmer is a creative process involving design aspects, low-level coding aspects, naming things (one of the two^H^H^Hthree hard problems in computer science), and generally figuring out how to express a vague general concept as a series of strict rules.

Driving safely is just combining GPS routing (which is a long-solved problem) with obeying a bunch of fairly well-defined road rules and recognizing hazards and stopping or steering when you see one: identification, path planning, and so on.
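The routing half really is textbook. A minimal Dijkstra over an invented road graph (road names and weights are made up for illustration):

```python
# GPS-style routing reduces to shortest path on a weighted road graph.
# heapq-based Dijkstra; the graph below is invented for illustration.
import heapq

def shortest_route(graph, start, goal):
    """Return (total_cost, node_list) for the cheapest path, or None."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(shortest_route(roads, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```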

There are several orders of magnitude difference in the difficulty of those two tasks. That's why we have generative AI art with six fingers and two thumbs, while self-driving cars are operating commercially in multiple cities.

AI isn't going to replace either, it's a labor-replacement wet dream.

AI is already replacing drivers. Right now, you can take a ride in a self-driving car from Waymo in Phoenix, much of the San Francisco Bay Area, Los Angeles, and Austin, with 10 more cities rolling out this year. Their at-fault accident rate blows human drivers out of the water. I mean sure, you still have the occasional bizarre story about someone circling a parking lot for twenty minutes or the cars honking at each other all night in a parking lot, but arguing that self-driving car tech isn't mostly a solved problem is disingenuous at best. Tesla mostly has it working on highways. Waymo mostly has it working in cities. Put the two together, and you could at least ostensibly have end-to-end autonomous driving from coast to coast.

Doing autonomous driving well enough and cheaply enough to have it in a vehicle that individuals own is, of course, still not fully solved, and doing it generally in every environment without having to block off certain areas because of construction, etc. is also still not entirely solved, but that's like arguing that robots can't replace vacuum cleaners because they can't get into corners. For the 95% case, it is solved, and that's actually pretty amazing, IMO.

Posting this as I eagerly await the HW4 upgrade for my Tesla and the arrival of my new robotic vacuum cleaner in May.

Comment Re:It's not the year of robotic AI. (Score 1) 69

It's the drooling wet dream of capitalists to cut out labor. That's you and I. First, AI will replace all coders. Yeah, sure, go ahead with that and reap the rewards. It'll take 10x the costs to unravel those bugs.

Put into self-driving vehicles? Wasn't that supposed to happen a few years ago? How many deaths will it take until the lessons are learned? How much money will get burned on the attempts? How many will die in bad crashes in the meantime, boiled in burning lithium battery fires?

I think there's a big difference between self-driving cars and using AI to replace programmers. There's no feasible way to have enough cab drivers and Uber drivers for everyone to stop driving themselves, nor will public transit ever get good enough to be a good alternative to a car outside of large cities. So self-driving car tech is doing way more than just replacing the small number of people who drive for a living. It is also giving mobility to the elderly, giving several hours per week of commute time back to the sorts of workers who can work remotely during the commute, driving down the cost of package delivery and reducing delivery times, and massively transforming probably a lot of other markets. And that is likely to result in more jobs, rather than fewer overall, albeit fewer in a few narrow areas.

The same can't be said for using AI to replace workers in most other areas.

Comment Re:Any who cares (Score 1) 81

Or find someone with the same model and borrow a laptop.

If it turns out to be a bug that's reproducible on the same model of laptop with that monitor, consider getting a free Apple developer account and filing a bug with Feedback Assistant, which will capture logs and submit them to Apple.

Comment Re:Long road (Score 1) 81

As a longtime Mac user, I've been feeling and talking about exactly this with colleagues for some time now. Since "El Capitan", the OS started to suffer greatly from visible lack of direction and feature creeping.

You misspelled Snow Leopard, or maybe Mountain Lion. :-D

But seriously, I mostly blame the decision to merge Mac and iOS development. It took three years for the iOS side of the house to drag the Mac side down enough for you to start noticing, but having them under the same team resulted in constant attempts to merge these very different platforms, always to disastrous effect. The downhill slide for macOS began very clearly when Forstall left Apple and iOS got merged under Craig. Nothing against Craig here — I think he does a great job, or at least he did when I worked under him — but the teams should never have been combined.

Mavericks had some nice changes at a low level; the UI changes were questionable, but not objectionable.

Yosemite went too far, IMO, though users seemed to like it, I guess?

By the time you got to El Capitan, we were calling it "El Crapitan".

I haven't really seen anything of value added since then other than Apple Silicon Support. And that's okay. It's an OS. It doesn't have to constantly change and add new features. At some point, all you can really do is make it worse, and I think they passed that point a while back for the most part. :-)

Comment Re:Any who cares (Score 1) 81

My biggest gripe with Mac OS is that the desktop glitches (with garbage drawn over many of the "tiles" that I guess different GPU cores update), and fullscreen video playback will crash the whole OS within 5 to 15 minutes, on my main monitor.

This is not normal. You almost certainly have either defective VRAM or a defective GPU. For the M*-series, those are the same thing.

Chances are, an address line is marginal, so sometimes data is getting written to the wrong part of VRAM.

I mean, there's a very, very tiny chance that it could be a software bug where VRAM is getting allocated to multiple processes at the same time, and given that you're right on the threshold of running out, that's slightly more likely than it otherwise would be, but my money is on hardware.

What I would do is bring your computer and monitor to an Apple store and show them what's happening and then try it with the same monitor on a different Mac and see if the problem reproduces. When it doesn't, tell them you need a new logic board.

Comment Re: Canada needs to jump on this (Score 2, Insightful) 284

And yet now the Administration is floating the claim that China and Russia want Greenland as an obvious cover for the US seizing an ally's sovereign territory.

This is a lawless administration running a country filled with morons, fascists and cowards. Threatening allies' sovereignty, even their existence, fabricating crises to get their way. You think that Trump and his heirs are going to let a stupid little thing like a constitution get in the way of destroying their rivals?

Comment Re:Canada needs to jump on this (Score 4, Informative) 284

America already shit the bed on this one. The Insurrection Act is going to be invoked soon enough to seize military control of the blue states, you'll start seeing Congressional and state Democrats arrested. Sane people will get the hell out of the US before the border closes.

Good luck with that. We'll happily take your academics and scientists, and leave you to sink into the right-wing, religious, fanatical shithole the country has always wanted to be.

Comment Re: We are running out of work (Score 1) 56

And AI is going to effectively destroy

Yeah, like everything else so far did.

If you're being sarcastic, know that this very much has happened.

  • It used to be the case that you could make high salaries doing manufacturing in the U.S.; now, most manufacturing is highly automated and has fewer workers (and is overseas).
  • Doctors have always had some of the best-paying jobs out there, but insurance companies (both health and malpractice) have been squeezing them from both sides. Meanwhile, technology has been pushing up from the bottom, making it easier for nurse practitioners with less training to do more of what doctors do, under supervision that keeps shrinking. The result is that most doctors now are imported from other countries, partly because not enough people are going to medical school and partly because those who do increasingly choose research over actually practicing medicine. Being a doctor still pays well, but the relative pay has dropped, from 3.47x an average college professor's salary in 1980 to just 2.81x today.
  • Speaking of medicine, the entire field of medical transcription has basically been replaced by tech.
  • We're about to see truck driving go the same way as self-driving tech replaces that previously moderately high-paying career.
  • We're seeing hints that generative AI may start to impact tech as well.

What makes this different right now is that the rate of job destruction seems to be accelerating, probably at a faster rate than the market can adapt to.

Comment Re: We are running out of work (Score 1) 56

How do you see AI reducing the training required for jobs? Perhaps transiently, for example a call center worker who needs less training, but I think in most of those cases the job goes away entirely.

But now, those folks can use generative AI to become crappy low-end programmers writing tests for a software company, or to become staff writers for Associated Press, or whatever. They move up to higher-level jobs because they can, and the lowest-end jobs go away and are *maybe* replaced by higher-end jobs, though that's the giant open question.

Yes the jobs that it helps the most with are the less skilled ones, but that means I need less of those people. The higher level design and architecture my senior team does just gets more valuable by contrast.

Except that the AI also gets better at helping people with less training to do that design and architecture work, which means more people become capable of doing that work, which means the value to the firm declines, and the salary they pay declines with it.

As the overall productivity of the people increases, there are more people who can do the work at each level, so perversely, it encourages companies to pay them less.
