Comment Re:Baby steps (Score 1) 289

I'm equally sure that there will be exponentially more situations where standard automation will make better decisions, and produce better outcomes, than average (or even well above-average) human drivers.

I absolutely agree with you that there are probably already "exponentially more situations where standard automation will make better decisions." Human drivers make stupid decisions all the time -- driving too fast, following too close, changing lanes abruptly without signaling, etc. But thankfully, humans are also adaptable enough to deal with a lot of bad unexpected things that come about because of those bad decisions.

I'm less certain whether I agree with you that AI will "produce better outcomes" in "exponentially more situations" anytime soon, mostly because of articles like this one. It sounds like AI is great for dealing with the expected, and it probably survives well by having detailed information about the route along with pointedly NOT making all those poor decisions that human drivers make (i.e., actually using a safe following distance, not weaving between lanes, etc.).

But the question is -- in real life, where significant adaptability is required -- which factor will win out? Will AI perform better because all of those "better decisions" prevent more accidents, or will AI's lack of adaptability cause more accidents than the "better decisions" prevent? What really matters is the number of serious and fatal accidents per X miles -- an AI may make "better decisions" than a human 99% of the time, but it's in that 1% of cases, where accident avoidance is critical, that adaptability matters... and if AI doesn't have it, AI's stats may not beat humans' in terms of outcomes for a while.

I tend to agree with GP on this: it will be decades before AI achieves the adaptability to handle ALL roadway conditions on unknown roads (or at least roads with novel hazards) well enough to outperform GOOD human drivers (not stupid humans who drive like maniacs).

That doesn't mean that AI won't be able to perform well under controlled conditions on well-known routes -- the question is just when that limited functionality becomes good enough for drivers, safe enough that regulatory agencies will allow such cars to be sold to anyone, and safe enough that the legal problems that could arise (liability issues, insurance issues, etc.) can be adequately resolved.

I'm sorry, but "there will always be situations where a human performs better than AI" sounds an awful lot like "I won't wear a seat belt because it might trap me in a burning car".

I really don't mean to be a jerk about this, but didn't you actually just utter pretty much those exact words?! -- from earlier in your post:

I'm sure that there will always be a few situations where a skilled human driver will make better decisions, and produce better outcomes, than standard automation.

So, given that you said that and that you were "sure" of that statement, does that mean you also don't wear a seat belt because you're afraid of dying in a car fire? Just wonderin'. :)

Comment Re:Stop being so impatient.... (Score 1) 289

Until the vehicle can classify what a person is doing on the side of the road it is not a viable solution. That person could be a statue, a child who could dart into the road, an person standing safely on the side,

None of those cases matter; the vehicle would ensure it was traveling at a speed from which it could stop if whatever it was darted into the road, and if it DID, the car could brake much faster than a person.

a police officer pulling the car over

That's really the only difficult bit.

Comment Re:Remote management (Score 1) 155

You've seen PAR files presumably? The same could easily be done on a filesystem-level basis (and I imagine, somewhere, already is for some specialist niche).

While all hard drives now do their own Hamming error correction (or something better), RAID2 is the same idea for "raw" storage that doesn't: you write explicit ECCs to redundant volumes to allow recovery from both drive loss and bad sectors.

RAID5 with modern drives gives you all the same resiliency, since the drives do block-level ECC themselves -- which is why you never see RAID2 anymore. For a pile of flash memory, though, this kind of scheme would be the filesystem-level equivalent of PAR files.
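The core idea behind PAR files, RAID2's ECC volumes, and RAID5 parity is the same: store redundant information so that a lost piece can be recomputed from the survivors. A minimal sketch of RAID5-style XOR parity in Python (the helper names are illustrative, not any real filesystem's API):

```python
from functools import reduce

def parity(blocks):
    """Compute a parity block as the bytewise XOR of equal-length data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def recover(surviving_blocks, parity_block):
    """Rebuild the single missing block: XOR of the parity and the survivors."""
    return parity(surviving_blocks + [parity_block])

data = [b"hello!", b"world!", b"foobar"]
p = parity(data)

# Lose data[1]; recover it from the other blocks plus the parity block.
rebuilt = recover([data[0], data[2]], p)
assert rebuilt == b"world!"
```

Single-parity XOR like this survives losing any one block; real PAR2 uses Reed-Solomon coding so it can survive losing several.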

Comment Jane/Lonny Eachus goes Sky Dragon Slayer (Score 1) 708

No, I'm not wrong. You calculated the outside temperature from the inside temperature, saying it's LOWER because of its greater area. This much is correct. THEN you try to say that with a thermal superconductor, the inner temperature would be the same as outside. Except you just calculated that outside temperature from a WARMER interior. You quite literally can't have it both ways. EITHER you're claiming a superconductor has a different temperature on both sides, or you're claiming that the inside has 2 different temperatures simultaneously. [Jane Q. Public, 2014-08-30]

Remember that the inner surface of the enclosing shell is different than the surface of the heated plate. The inner and outer surfaces of the enclosing shell are at exactly the same temperature because it's a thermal superconductor. That's what I've always been saying, despite your attempts to pretend otherwise.

The surface of the heated plate at equilibrium, however, is warmer than the inner surface of the enclosing shell. It has to be.

Here is an excellent example of this (19.3.2), which illustrates why it is a straw-man argument that is not relevant to the problem at hand. In this case the walls are warmer, not cooler, and the radiation shield is blocking the thermocouple from the radiation inward from the chamber walls, so that it can get an accurate temperature reading of the air without interference from the walls. In your case, it is the opposite: the walls are cooler than the thermocouple. But in neither case is the situation a representation of equilibrium (for example in this case, air is convecting away some of the heat of the thermocouple). The shield is absorbing and emitting radiation, too, it's just that it is isolated from the chamber walls, and so is closer to the ambient temperature of the medium being measured. This is in no way related to our experiment at all. It is in a vacuum. There is no "medium" to measure, with an ambient temperature. Not even remotely. [Jane Q. Public, 2014-08-30]

I've repeatedly linked to that excellent example. Despite your incoherent protests, it's a relevant example where a passive plate reduces radiative heat loss from a warmer source, warming it to a higher equilibrium temperature. It's a real world example which shows Jane and the Sky Dragon Slayers are wrong.

See? Same shit different day. You won't sit down and do the calculations start-to-finish, instead you do one small part, then start indulging in your hallmark game of out-of-context he-said, she-said, toss in a straw-man, then claim it's all proved. ... It's simply another illustration of the depths of hand-waving you will go to, rather than actually doing all the calculations on the actual experiment from start to finish. All you're doing is tossing in more straw-men and irrelevancies. You won't do the actual experiment. The only reasonable conclusion to be drawn here is that you won't do it because you know you're wrong. [Jane Q. Public, 2014-08-30]

Don't you see the irony here? I've repeatedly done the calculations "start-to-finish" by deriving and solving equations describing the final equilibrium temperature of the enclosed plate using increasingly realistic scenarios. I've repeatedly told you that you'd only be able to understand this thought experiment if you did the same. But you still haven't. Haven't you noticed that I'm the only one here deriving equations and doing calculations?

Is the only reasonable conclusion to be drawn here that you won't even attempt to solve this problem because you know you're wrong?

And I want to be clear about this: I'm not demanding anything from you. YOU are the one who proclaimed Latour wrong, therefore it is your burden to demonstrate that he actually is, by showing exactly where he is incorrect. ... The whole point: You claimed Latour was wrong. But you refuse to back up your claim by showing WHERE in his calculations he was incorrect. That's your burden and you haven't been meeting it. Until you do, you have no argument to make. You can throw all the ad-hominem and straw-man arguments and irrelevancies in that you want, but none of it proves you correct. Until you actually show where Latour made a mistake, in his actual calculations related to this experiment, you're wrong by default. [Jane Q. Public, 2014-08-30]

Once again, Dr. Latour and Jane claim that enclosing the heated plate wouldn't warm it. I've shown that this would violate conservation of energy.

In physics, violating conservation of energy is a pretty big mistake.
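For concreteness, here is the textbook parallel-plate version of that conservation-of-energy argument, sketched in Python (an idealization with black surfaces, vacuum, and steady state; the flux and wall-temperature numbers are my own illustrative assumptions, not values from this thread). A plate dissipating a fixed power per unit area must run hotter when a passive shield is interposed, because the same flux now has to cross two radiative gaps:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def plate_temp(flux, t_cold, shields=0):
    """Equilibrium temperature of a plate radiating a fixed flux toward
    cold walls at t_cold across `shields` passive layers. For ideal
    black parallel plates each gap carries the full flux, so
    T_plate^4 = T_cold^4 + (shields + 1) * flux / SIGMA."""
    return (t_cold**4 + (shields + 1) * flux / SIGMA) ** 0.25

flux = 400.0   # W/m^2, assumed electrical heating of the plate
t_cold = 3.0   # K, assumed chamber wall temperature

bare = plate_temp(flux, t_cold)         # no enclosing shell
shielded = plate_temp(flux, t_cold, 1)  # one passive shield

# The enclosed plate equilibrates warmer -- by a factor of about 2**0.25
# in the ideal case -- while still radiating the same power away.
assert shielded > bare
```

If the shield did NOT warm the plate, the plate would be radiating its full input power into the shield while the shield radiated the same power onward from a lower temperature difference, and energy would not balance -- which is the violation described above.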

Comment Re:Baby steps (Score 1) 289

Stop or swerve to avoid doesn't resolve driving on snow covered roads
The car will know way better than you ever could how well the car is gripping at any particular moment.

People keep missing that the cars don't need to "know" things or "improvise"; they will have far better data than the human driver in most circumstances, and far better reaction times. "Improvise" is a misnomer here anyway; computers don't "improvise", they follow a structured set of rules, and will always do so until we create a strong AI (which will never happen, IMO). The thing is, if you come up with a good ruleset, there's no need to improvise.
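A toy illustration of that "good ruleset" point (entirely hypothetical numbers and function names, not anything from a real autonomous-driving stack): given a measured grip coefficient, a rule can derive the stopping distance, or the maximum safe speed, straight from physics -- no improvisation required:

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_mps, grip):
    """Minimum braking distance d = v^2 / (2 * mu * g) for a measured
    tire-road friction coefficient `grip` (mu)."""
    return speed_mps**2 / (2 * grip * G)

def max_safe_speed(clear_distance, grip):
    """Invert the rule: the fastest speed from which the car can stop
    within `clear_distance` meters at the measured grip."""
    return (2 * grip * G * clear_distance) ** 0.5

dry, icy = 0.9, 0.15  # assumed friction coefficients
v = 25.0              # m/s (90 km/h)

# Lower measured grip -> longer stopping distance -> the rule says slow down.
assert stopping_distance(v, icy) > stopping_distance(v, dry)
```

The human driver has to guess mu from glare and feel; a car that measures it just evaluates the rule.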

Comment Re:columns of pixels? wrong. (Score 3, Insightful) 289

Uh, yes.

That sort of thing is trivial for computers, as it's basically a simple physics question; what's not trivial is predicting behavior. The point is that a GoogleCar probably wouldn't need to predict behavior in the same sort of way.

People are acting like a GoogleCar needs to have the exact same senses and responses as a human driver, which is not true; it doesn't have the same limitations (field of view, ~200 ms minimum reaction time, distractions, imperfect data from the car), so it can operate differently.

For instance, a person driving a car on an icy winter night has all sorts of unknowns to deal with: limited vision, glare from ice and oncoming traffic, not knowing how slippery the roads are, etc. An automated car will have much better vision, a better sense of how well the tires are gripping, and won't be affected by glare. Saying "how will the car know if there's snow in the forecast" is completely missing the point.

Comment Agreed; incremental versions can be useful (Score 1) 289

I agree (article submitter here). I submitted the article mostly not to complain about lack of progress but because the article covered a lot of interesting details about how the Google technology worked in discussing the limits of the current system. I have little doubt such systems will continue to rapidly improve.

I was involved briefly on a project for self-driving cars in the late 1980s at Princeton involving neural network ideas for image processing, and I suggested we could just train the cars to drive specific routes. However, that suggestion was scoffed at (and I did not try hard to push it). My argument was that most driving is stuff like daily commutes or runs to well-known stores, so the car could drive exactly the same way every time, seeing the exact same sights. That might make it feasible to train the neural networks from just a few video recordings of drives over the same stretch of roadway. Granted, lighting conditions, weather, other cars, pedestrians, and possible lane changes make that harder -- but it seemed like a good place to start, rather than trying to create a car-driving system that could handle arbitrary new circumstances where it has never seen the road before.

Solar panels have succeeded in much the same way -- the early ones were niche (like in calculators or satellites), but sales drove more R&D that led to better and cheaper panels in more and more applications. A self-driving car that could only drive me from home to a few local towns and back on fixed routes (safely, while, say, I surfed the web) would still be tremendously valuable to me. Think of how many people commute the same routes every day for years and could use that commuting time more productively via the internet. If people with an hour commute could use that time to answer email, maybe they could work one hour less in the office? Also, a car that just knew how to park itself in a standard location and come back to pick you up in front of the building you work at or the apartment you live in would be very useful in cities.

Another idea I had several years ago is that we could have an open source software effort to drive cars in various simulated racing games like "Gran Turismo" or other free-play driving games like "Driver: San Francisco" or various off-road sims. That would be an inexpensive and safe challenge for college students. Those driving simulators go to great lengths to make realistic-looking images (including things like dust clouds and vehicle dynamics), and they continue to improve. You just feed the first-person video generated by the game into the car-driving visual processing algorithms, and have the software control the game via USB outputs. As the software gets better, you can fuzz up the image more and more by adding white noise or whatever other distortions you want (like big white blotches over parts of the image) to challenge the algorithms. Or you could introduce delays and noise in how steering commands are processed. Such an approach makes writing such software feasible for the average software developer without a special car. Granted, the software would have to focus on processing 2D images instead of 3D laser ranging data. Even Google has talked about testing their software in simulations for certification. Ideally, the simulations used for testing would be open source too, like Rigs of Rods (or something even more realistic), and if so, things like 3D ranging data could probably be extracted as well: http://www.rigsofrods.com/
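The fuzzing step in that idea is easy to sketch. A minimal Python/NumPy version (all names and parameters are illustrative assumptions) that degrades a simulated camera frame with white noise and opaque blotches before handing it to a vision algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def fuzz_frame(frame, noise_std=10.0, n_blotches=3, blotch_size=32):
    """Degrade an HxWx3 uint8 frame: add Gaussian white noise, then
    paint a few opaque white blotches, simulating sensor faults."""
    out = frame.astype(np.float64)
    out += rng.normal(0.0, noise_std, size=out.shape)
    h, w = out.shape[:2]
    for _ in range(n_blotches):
        y = rng.integers(0, max(1, h - blotch_size))
        x = rng.integers(0, max(1, w - blotch_size))
        out[y:y + blotch_size, x:x + blotch_size] = 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in for a game frame
fuzzed = fuzz_frame(frame)

assert fuzzed.shape == frame.shape and fuzzed.dtype == np.uint8
```

As the driving algorithm improves, `noise_std` and `n_blotches` can be ratcheted up to keep the challenge hard, exactly as described above.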

Comment Re:*drool* (Score 1) 181

Thanks - a real example! Wow, to me "300" does not sound like a large number for a computer. My mind boggles at how anyone could write code that bad - the AI must be written in some wildly inappropriate language? Or the developer just didn't care about perf and never got a bug assigned as they didn't QA at that scale? Nah, he got the bug and the game shipped with it, of course.

Comment Re:They could start by not using civilians as shie (Score 1) 369

The Palestinian demands are to end the blockade and recognize them as a state.

No, the demands of Hamas (who runs Palestine now) are that every Jew be killed (it's in the written charter, that is not hyperbole). They include the right to keep all arms, including rockets, so they can continue to fire them at Israel.

Also in the demands is that Israel not be recognized as a state (they do not currently recognize it). If it's reasonable to demand recognition as a state, then Hamas not extending that to Israel is obviously unreasonable -- by your own definition.

It's just a shame that people like you do not actually look deeper to see what is going on. In no way is allowing Hamas to gather far more arms to attack Israel reasonable, and that is the only thing lifting the blockade would accomplish. Food, medicine, and other relief are already allowed to enter the country freely.
