Comment Re:It's not the year of robotic AI. (Score 1) 69

I doubt that our mutual experience levels are going to allow us to agree on these points.

Consider the architecture of an RTOS vs., say, Linux (which has been re-architected in an RTOS release). Let's map the RTOS to a standard kernel for the sake of this example.

An RTOS is entirely reactive. Inputs control it at all times. That reactivity is like the self-driving application that must make snap judgments about numerous conditions and rapid input changes to alter the path of what it's controlling. It needs excruciating computational strength to make the snap judgments that direct projected control.

Oh, no question that it's a hard problem. The difference is that self-driving can be broken down easily into a bunch of smaller hard problems, and each problem has, to some extent, a right answer, or at least relatively straightforward ways to objectively verify that an answer is not wrong. For example, you can fairly objectively define what constitutes a reasonable driving path and add guard rails to the tests you run against newly trained models, failing the test if the planned path goes outside those parameters. So this is more at the NP-complete level of complexity, where computing a solution is hard but verifying one is of polynomial complexity.
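To make that concrete, here's a toy version of the kind of guard rail I have in mind. The function names, the fake path data, and the 1.5 m tolerance are all made up for illustration; a real test harness would check far more than lateral deviation.

```python
# Hypothetical sketch of a "guard rail" test for a newly trained driving model.
import math

def max_lateral_deviation(planned_path, lane_centerline):
    """Worst-case distance from any planned point to its nearest centerline point."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return max(min(dist(p, c) for c in lane_centerline) for p in planned_path)

def test_path_stays_in_lane(planned_path, lane_centerline, lane_half_width=1.5):
    # Objective check: fail the candidate model if its plan leaves the corridor.
    deviation = max_lateral_deviation(planned_path, lane_centerline)
    assert deviation <= lane_half_width, f"path strays {deviation:.2f} m from centerline"

# Toy usage: a straight centerline and a plan that drifts at most 0.4 m, so it passes.
centerline = [(float(x), 0.0) for x in range(50)]
plan = [(float(x), 0.4 * math.sin(x / 10)) for x in range(50)]
test_path_stays_in_lane(plan, centerline)
```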

What constitutes a reasonable architecture for a piece of software is either entirely subjective, an intractably large set of objective constraints, or some combination of the two, because maintainability considerations play a role, along with data backwards/forwards compatibility, and so on. I'm not even sure where you would begin trying to define adequate guard rails. This is more like the broader NP-hard level of complexity, where computing a solution is hard and verifying it may well be even harder.

Its state-machine logic has to be incredibly unerring, all whilst moving down the road at speed with humans as cargo.

Without that onus, a kernel tries to systematically deliver interactive response so quickly that users see no pause. There is no human payload, only the satisfaction that screen and device updates are acceptable, perhaps a few pauses now and then as one rotates a vector 3D model through onscreen space.

The coding model for the RTOS behind realtime transport navigation is a different one than, say, CRM, web, or pub/sub models with messaging reactance.

No disagreement on any of those points. The tolerances for self-driving car tech are indeed higher than the tolerances for tools that write software. But part of the reason for that is that there's a human in the loop in the latter situation. You aren't trying to write software that can design a word processor from scratch. You're writing software that can design a single function or maybe a small class that performs a specific behavior from scratch, and all of the hard work happened before you even asked — specifically, coming up with the specifications.

I know that's true for self-driving as well, but the difference is that the specifications for working self-driving behavior are largely consistent across platforms, with the exception of some specific rules of the road being different in different countries, whereas the specifications for a word processor are entirely unrelated to the specifications for an image editor or a web server.

So at that general level, being able to drive a car is at best like AI being able to write a word processor, and AI being able to write any arbitrary piece of software is by definition a much broader problem.

In your robotics example, it's connected to a pub/sub network to deliver it largely realtime information about the characteristics of what it navigates, sucking your floor. The pub/sub model micro-rewards various participants in its revenue model, while cleaning your litter and dead skin.

Probably not. The robotic vacs can be used entirely offline; you just lose the ability to control them remotely if you do. And if you believe their privacy policy, no data is stored remotely except for aggregated data.

Tesla's and others' nav firmware isn't finished. It's not provable, only a sum of projections;

I mean, that's the very definition of AI.

the realtime driverless taxis clog the veins in SF, as an example, where people uniformly vilify their stupidity.

That would be Cruise, I suspect. The general perception of Waymo in SF seems to be pretty positive.

Clearly, they're not ready. It doesn't matter if you badge them with a Jaguar leaper on their hood or not; they're working only under very highly confined circumstances.

I wouldn't call it "highly confined". The biggest constraint was lack of support for driving on the freeway. They just started testing that in early 2024, and got regulatory approval in California in mid-2024. Without freeway driving, self-driving cars wouldn't be feasible in a lot of cities. Now that they've started doing freeway driving, I suspect you'll find that the number of situations and environments that they can't handle is remarkably small, bounded largely by the need to do high-definition mapping drives first.

I agree that it is imperfect, particularly when construction is involved, but the difference between the modern Waymo cars and the hesitant Waymo cars from a decade ago is night and day, speaking as someone who periodically ends up driving near one.

No, it's not ready for prime time, and the Chinese robotics meme isn't so much a sham as it is wishful thinking, a hope and a prayer.

I disagree on the first part, but I agree with the second. I have little faith in any mass-manufactured humanoid robots being usable right now. But if they bring the cost of the hardware down enough through mass manufacturing, then as long as they run some sufficiently open operating system, other folks will find interesting ways to use them, and will figure out how to make the software work.

For example, electronics PCB manufacturing is already highly automated, with human workers loading tape reels of components into pick-and-place machines, and the machines doing all the rest. It would be hard to replace all of that hardware with new hardware designed for any sort of automated loading, but it seems obvious that a humanoid robot with the right programming could fully automate the loading of components. That's well within the realm of what robotics can do today.

Similarly, Amazon warehouses have non-humanoid robots that can already pick a lot of things off of warehouse shelves. A humanoid robot could probably do a better job, and they actually have the software engineering resources to make that happen. Whether any of that software would ever become available outside of Amazon is, of course, a different question.

Comment You make good points about Efail and MDC (Score 1) 36

You make good points in the linked critique. Regarding Efail and reference implementations:

If someone, while trying to sell you some high security mechanical system, told you that the system had remained unbreached for the last 20 years, you would take that as a compelling argument.
[...]
CFB (Cipher Feed Back) is actually sort of awesome.

CFB mentioned; I'm eager to see how you address Efail. (I'm also curious about what makes CFB better than counter mode, but that might be a separate discussion.)
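For anyone following along, here's a toy demonstration of the malleability difference between the two modes, using the Python cryptography package (the key, IVs, and message are invented): flipping one ciphertext bit under CTR surgically flips the matching plaintext bit, while under CFB it also reduces the following block to garbage, which is much harder for an attacker to hide.

```python
# Toy demonstration of ciphertext tampering under CTR vs. CFB.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
pt = b"pay 00000100 dollars to mallory!"  # 32 bytes = two AES blocks

def tamper_and_decrypt(mode):
    ct = bytearray(Cipher(algorithms.AES(key), mode).encryptor().update(pt))
    ct[4] ^= 0x01  # attacker flips one bit in the first ciphertext block
    return Cipher(algorithms.AES(key), mode).decryptor().update(bytes(ct))

print(tamper_and_decrypt(modes.CTR(os.urandom(16))))  # only byte 4 changes: '0' -> '1'
print(tamper_and_decrypt(modes.CFB(os.urandom(16))))  # byte 4 changes AND the second block is garbage
```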

There is no such “reference PGP implementation”.

In 2002, when Trevor Perrin found the IETF OpenPGP spec to be wrong about the strength of MDC, the spec was changed to reflect what implementations actually do. At the time, this made GnuPG the "reference implementation" of MDC in fact, even if not formally. RNP didn't exist until a decade and a half later.

The much beleaguered OpenPGP modification detection code (MDC) reliably detected the EFAIL attack.

This is reassuring.

The popular Thunderbird email client switched from GnuPG to RNP. Is RNP “effectively the reference implementation for PGP” now?

Now I'm curious how many people use OpenPGP mail in Thunderbird compared to APT in Debian, Ubuntu, and Linux Mint. APT depends on gpgv, a subset of GnuPG. So it might turn out that GnuPG is the reference implementation of OpenPGP in general, or at least for package signing (unless they switch to Sigstore), while RNP is the reference implementation for OpenPGP in email.
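For reference, the check APT hands off to gpgv amounts to something like this (the keyring path and filenames are illustrative; APT manages its own keyring layout and invokes gpgv internally):

```python
# Illustration of the kind of check APT delegates to gpgv: verify a repository
# Release file against its detached signature using a pinned keyring.
import subprocess

subprocess.run(
    ["gpgv",
     "--keyring", "/usr/share/keyrings/ubuntu-archive-keyring.gpg",
     "Release.gpg",   # detached signature downloaded from the repository
     "Release"],      # the signed index of package checksums
    check=True,        # raises CalledProcessError if verification fails
)
```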

Regarding RSA:

RSA with 2048 bit keys is a perfectly reasonable and conservative default

"Seriously, stop using RSA" by Trail of Bits (2019-07-08) mentions numerous footguns in the implementation: poor selection of prime numbers making a key easy to factor, temptation to choose a vulnerable small exponent, padding oracle attacks, and more. How does the OpenPGP spec require implementations to mitigate this?

And are there mitigations for the plaintext subject and risk of accidental forwarding of quoted plaintext to users on the CC list who haven't exchanged keys?

Comment Re:It's Called Greed! (Score 1) 102

There is no federally mandated maximum interest rate for credit cards.

I never said that there was. I said that there were legal limits. State law limits, among others.

See the article that you linked to and its references to state usury laws for examples of some such limits.

Slightly over half of states have usury laws that limit credit card interest rates, BUT federal law specifies that the rates a bank can charge are limited by the state where the bank's headquarters is located, not where its customers are. This is why most credit card issuers are incorporated in a small number of states (e.g. Delaware) that don't have any limits. As a result, credit card rates are limited by competitive and similar factors, not regulations.

Comment Re:More Google f*ckery (Score 1) 36

If Google would license its technology at no cost, then I'd have less of a problem with it.

I doubt there's any technology to license here. I'm sure it's just leveraging ownership of a widely-used platform to provide a feature on that platform. Any other email platform with both servers and clients could provide the same, within its garden. Crossing those garden boundaries is where this problem gets impossible to solve.

As to why Google should be broken apart, the answer is because...

So, nothing to do with email encryption, i.e. just confirmation bias.

Comment Re:"according to a new study" (Score 1) 107

I think global warming has a good chance of collapsing Western societies. I call that a large threat to mankind. I did not say "existential threat".

You did say "biggest", and it can't be bigger than existential threats with even moderate probability.

Also, I disagree that climate change might collapse Western societies. Western societies are actually the ones best equipped to protect themselves from it... and from the waves of refugees from regions that aren't so well off.

Trump and Musk are playing crazy games that could end in World War 3.

Agreed. However, I think nuclear war is less likely to end humanity than AI, though civilization probably wouldn't survive one. Einstein's quote about WW IV comes to mind.

Comment Re:"according to a new study" (Score 1) 107

While I agree that asteroids, AI, pandemics, nuclear war, etc. all loom large, climate change is the only one that is here right now, that we can see, and that has a roadmap.

AI has a roadmap; we just don't know the timeframe (could be months, is more likely at least a few years, almost certainly isn't more than a decade or three), and we don't know whether some deus ex machina might save us. Though I think that last possibility is very unlikely.

Nuclear war, sadly, is looking dramatically more likely. With Trump making threatening noises against NATO allies, it's clear that Europe can no longer count on the US nuclear umbrella. That means France and the UK will need to shift the strategic focus of their nuclear forces from invasion deterrence to regional defense, which means increasing their weapons stockpiles and developing their own delivery systems. It also means they'll begin helping other EU states acquire nuclear weapons. That will break the non-proliferation detente that has mostly held, almost certainly encouraging lots of non-NATO countries to acquire and build up their own nuclear forces.

Comment The Optimism of Uncertainty (Howard Zinn) followup (Score 1) 137

https://www.thenation.com/arti...
"In this awful world where the efforts of caring people often pale in comparison to what is done by those who have power, how do I manage to stay involved and seemingly happy?
I am totally confident not that the world will get better, but that we should not give up the game before all the cards have been played. The metaphor is deliberate; life is a gamble. Not to play is to foreclose any chance of winning. To play, to act, is to create at least a possibility of changing the world.
There is a tendency to think that what we see in the present moment will continue. We forget how often we have been astonished by the sudden crumbling of institutions, by extraordinary changes in people's thoughts, by unexpected eruptions of rebellion against tyrannies, by the quick collapse of systems of power that seemed invincible.
What leaps out from the history of the past hundred years is its utter unpredictability. ..."

Tangential social perspective examples -- to remember that there are compassionate, insightful people out there who made all of these:
https://ratfactor.com/tech-nop...
https://aeon.co/essays/thought...
https://whorulesamerica.ucsc.e...
https://maggieappleton.com/for...
https://www.wheresyoured.at/lo...
https://www.wheresyoured.at/ne...
https://www.salon.com/2010/08/...
https://caucus99percent.com/co...
https://www.youtube.com/watch?... "On whether the USA is greatest country in the world"
https://www.youtube.com/watch?... "Why I stopped watching the news"

Tech examples -- not even scratching the surface of everything out there:
https://ftp.squeak.org/docs/OO...
https://en.wikipedia.org/wiki/...
https://squeak.js.org/
https://mithril.js.org/
https://en.wikipedia.org/wiki/...
https://en.wikipedia.org/wiki/...
https://www.amazon.com/Reinven...

And so on...

Consider what the 1950s were like in the USA. People smoking on airplanes. Jim Crow laws. McCarthyism. Rivers caught fire. Lobotomies. Not saying everything is better now (some things are worse) -- but some things are indeed better.
