Comment Re:More to come (Score 1) 953

What do you know: Wikipedia has statistics on risk factors such as drunk driving: https://en.wikipedia.org/wiki/... -- even at 4 times the legal limit (beyond which there are no stats), a drunk human would be unlikely to cause a fatality in as few miles as this.

So yeah, evidence suggests autonomous vehicles aren't all that safe yet.

Comment Re:More to come (Score 1) 953

Going by the numbers, and considering how widespread the idea is that self-driving cars are safer, it would appear most people overestimate the driving ability of autonomous vehicles by at least a factor of 10.

So far, the record isn't good, and that's with backup drivers for the tricky bits, cherry-picked circumstances, and not counting safety interventions that prevented what would otherwise have been accidents, so let's not get too exuberant about how great this tech is - at this point, anyhow.

Comment Re:More to come (Score 1) 953

This is simply factually incorrect. Current statistics suggest autonomous vehicles are *orders of magnitude* more dangerous, though autonomous vehicles have so few miles driven that the sample size is small.

Consider that human drivers cause a little more than 1 fatality per 100 million miles travelled (ref: https://en.wikipedia.org/wiki/...), and that includes all those drunk, distracted meatbags that happen to be using their phone too.

Uber just had their first fatality, and they're nowhere near 100 million miles. And even that is being unreasonably charitable, because those cars have backup drivers that deal with complicated situations the car can't handle, and those safety drivers can and do intervene if the car appears to be making a mistake; if the cars had to drive regardless of the circumstances (the way a human does), and without someone to correct mistakes, the safety record may well be much, much worse. For some perspective: if people caused fatalities at the same rate and traffic levels remained unchanged, traffic fatalities would account for more than half of all deaths; it would cause a massive reduction of life expectancy by many *decades* (!)
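A back-of-envelope version of that comparison, where the Uber mileage figure is a rough illustrative assumption (the real total isn't in the stats linked above):

```python
# Back-of-envelope comparison of fatality rates per mile.
# The Uber mileage here is an assumed order of magnitude, not a sourced figure.

HUMAN_FATALITIES_PER_MILE = 1 / 100_000_000  # ~1 fatality per 100M miles

uber_fatalities = 1
uber_miles_assumed = 3_000_000  # illustrative assumption

uber_rate = uber_fatalities / uber_miles_assumed
ratio = uber_rate / HUMAN_FATALITIES_PER_MILE

print(f"Implied fatality rate vs. human baseline: ~{ratio:.0f}x")
```

Even if the assumed mileage is off by a factor of a few, the implied rate stays an order of magnitude or more above the human baseline.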

Tesla's record too is poor - although the accident rate for Autopilot is similar to that of a human, the system simply doesn't work in complicated situations at all. And in the highway-style traffic where the system *is* used and caused its first fatality, human error is even rarer; and again, consider that the human driver is there and supposed to intervene, so this too likely underestimates the actual risk posed by the Autopilot.

Waymo has had no serious accidents, but with so few miles driven (not much more than Uber), it's too early to tell. If they drive at *least* 100 times the total they have so far without serious error, you might cautiously venture a hope that they really are safer than human drivers, but even that wouldn't be statistically sound.
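One way to see why zero fatalities isn't yet reassuring: with zero events observed over N miles, the "rule of three" puts the 95% upper confidence bound on the rate at roughly 3/N. The Waymo mileage below is an illustrative assumption, not a sourced figure:

```python
# Rule of three: with 0 events observed over n trials, the 95% upper
# confidence bound on the event rate is approximately 3 / n.
# The mileage figure is an illustrative assumption.

human_rate = 1 / 100_000_000          # ~1 fatality per 100M miles
waymo_miles_assumed = 5_000_000       # illustrative assumption

upper_bound = 3 / waymo_miles_assumed  # 95% upper bound given 0 fatalities
print(f"Consistent with up to ~{upper_bound / human_rate:.0f}x the human rate")
```

In other words, a clean record over a few million miles is still statistically consistent with being tens of times more dangerous than a human driver.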

It's totally reasonable to expect autonomous vehicles to become safer than drunk-meatbag vehicles at some point, but they clearly are not yet. I'm not even sure they're safer than an actual drunk driver!

Comment Re:So the worker did their job (Score 1) 221

Unfortunately, it's not unthinkable for language to be misused to the point of becoming meaningless. It may well be that the phrase "this is not a drill" is headed that way. This certainly isn't the first time I've heard an obvious drill begin with "this is not a drill" - usually followed immediately by a sheepish announcement that, eh, sorry, it kind of was a drill.

I don't think you're going to be able to avoid the need for people to simply use common sense, and *not* follow instructions sometimes, if there's reason not to. (No idea if that was the case here!)

Comment Re:Fines (Score 1) 72

If you think EU fines have dubious beneficiaries (not unjustifiably so), consider that thanks to the existence of punitive damages in the US, the US fines far more heavily overall. Banks, for example, have been fined 321 billion (!) dollars, mostly by the US, over the financial shenanigans in the crash (see e.g. https://www.bloomberg.com/news...). Similarly, VW is likely to pay a lot more in US fines than a US firm would in the EU. (Not that it's weird for VW to be fined so heavily; it's just that the law isn't symmetric.)

Frankly, I think the EU fines are absurdly low, especially for behavior such as this, which undermines the whole point of capitalism in the first place. Firms have grown absurdly large, causing competition to cease in significant portions of the economy - particularly in large, homogeneous markets such as the US. And as you might expect, such firms engage in rent-seeking behavior: their profits soar while their customers stagnate (again, as economics 101 dictates).

I do agree that it's problematic that there is this perverse incentive for a prosecutor to "capture" as many spoils of war as they can (on a somewhat related note: WTF, asset forfeiture). Part of the problem here is the voting public - the very sentiment you're now feeling, where you're probably quietly relieved that VW is fined a lot in the US (hey, it's foreign!) but indignant that Intel is fined elsewhere. You'd want that to be fixed, but how? There is at least some solace in that anticompetitive behavior is much, much more harmful to the US than a relatively piddling fine. Just be happy that the far more questionably fair punitive damages haven't (yet) arrived in the EU; even if the concept is fair, the distribution of the "loot" surely is not.

Comment Re:We already have an optimal swarm intelligence (Score 1) 83

A prediction market is a prediction method that runs on a bunch of humans, not a computer. It must obey the same convergence laws as all other such processes, from other markets, to evolution, to human learning, and indeed machine learning.

It definitely does need to make assumptions about smoothness to be able to find even a local optimum - in the infinite space of possible prediction methods the market is exploring, if the optimal method is nearly identical to a bunch of terrible methods, then the market isn't likely to find it. If, by contrast, the nearby methods show promise but aren't *quite* as good, then normal market action works: people see the winners getting rich and try to beat them at their own game by trying variations. Some of those variations might be better than the original; and so on. This process is obviously highly complicated in practice; "nearby" isn't even a clearly defined term, and yet it still applies.

Philosophically, I doubt a prediction market will ever find a global optimum - and there's no way we could tell, I suspect - but I'm not sure that matters.

The point isn't that a market "is" a prediction method - sure, from some perspective it is, yet there are lots of interesting differences too. The point is that it's risky to assume a prediction market will always beat any other prediction method; i.e. that it is an "optimal swarm intelligence". It's one possible and known effective way to leverage a diversity of other prediction methods and ongoing research into new ones. But it may not be optimal. We don't know that. In several ways, we *know* it's not optimal: not only are there known issues with markets in general, but more specifically, markets necessarily react only when some of their actors already have: markets are slow. It can be worth a lot of money to be faster, and other methods that don't rely on actors to interpret information for them can have an edge there.

Comment Re:We already have an optimal swarm intelligence (Score 1) 83

What you call meta-prediction is really just a variation of https://en.wikipedia.org/wiki/... - a prediction market isn't radically different. Indeed, even within one model, combining separate predictions is useful - low-level density-estimation layers in deep learning have some similarity.
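The core effect behind combining predictions can be sketched in a few lines: averaging many independent, unbiased but noisy guesses shrinks the error (the noise level and counts below are arbitrary illustrative choices):

```python
# Averaging independent, unbiased noisy predictions reduces error:
# variance shrinks roughly as 1/n. All parameters are illustrative.
import random

random.seed(42)
truth = 10.0

def predictor():
    # One noisy but unbiased guess at the true value.
    return truth + random.gauss(0, 2.0)

single_errors = [abs(predictor() - truth) for _ in range(1000)]
ensemble_errors = [
    abs(sum(predictor() for _ in range(100)) / 100 - truth)
    for _ in range(1000)
]

print(sum(single_errors) / 1000 > sum(ensemble_errors) / 1000)  # True
```

The catch, of course, is the independence assumption - correlated errors (herding, shared data sources) average away far more slowly, in markets as in ensembles.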

The overlap between markets and ML (and e.g. evolution) is that they're both complex optimization problems, where finding the "true" solution is generally infeasible. There are various approaches to come up with a best guess, but they all need to make certain assumptions about the problem space to work - for one, that the problem space is "smooth" in some sense (so by exploring the current solution and nearby choices, you have a chance of going in the right direction), and e.g. that there aren't too many local minima.

Not all problems are like that, and sometimes a problem takes some massaging until it is a candidate. And these techniques - including markets - do *not* generally converge to the global optimum; they converge to some local optimum. If you're lucky, or under some non-obvious preconditions, that local optimum happens to be global.
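A minimal sketch of that convergence behavior, using plain hill climbing on a function with two peaks (the function and step size are arbitrary illustrations, not a model of any real market):

```python
# Hill climbing on a bumpy 1-D function: it converges to the nearest
# local optimum, not necessarily the global one. Function and step
# size are arbitrary illustrative choices.
import math

def f(x):
    # Local maximum near x ~ 1.67; higher (global) maximum near x ~ 7.95.
    return math.sin(x) + 0.1 * x

def hill_climb(x, step=0.01, iters=10_000):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break  # no neighbor improves: stuck at a local optimum
        x = best
    return x

print(round(hill_climb(0.0), 2))  # stops at the lower peak near 1.67
print(round(hill_climb(7.0), 2))  # stops at the higher peak near 7.95
```

Which optimum you end up at depends entirely on where the search starts - exactly the "lucky, or under some non-obvious preconditions" caveat above.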

Comment Re:We already have an optimal swarm intelligence (Score 2) 83

Note that a prediction market is not particularly more likely to be accurate than any other machine learning technique. If there's been one thing that's been demonstrated time and time again over the years, it's that there are many techniques that can work, but that to get truly excellent results, appropriate data collection, selection, filtering etc. is critical. It's easy to get charmed by techniques that have a great story and convincing argument they'll work - but that doesn't mean they're the best.
