
Comment Re:It probably can. (Score 1) 289

So they just drove over the same "few thousand miles of roadway" again and again and again and again? Until they got to 700,000 miles?

I think you meant this as sarcasm, but that one is mostly correct. These cars are not doing cross-country trips, so "a few thousand miles of roadway" is probably an overestimate of the unique roads covered. They drive the same roads and areas over and over and over again.

As it should. Because you don't know if that piece of paper is covering a rock or a pothole or whatever.

I have been tempted to carry a bucket of chaff and just see how well a Google car handles it, but then again rain and snow are problems so the experiment is really not needed.

The point here is that a human can notice things that a current self-driving car cannot. Not all humans pay attention, but the percentage that do can tell when a paper bag is blowing around on the freeway. Human reactions to those things are generally far more measured and controlled than a Google car's. In time, I am sure it will get better, but you need to discuss what is there today, not what we wish it had and are working toward.

So they cannot deal with new stop LIGHTS but they can deal with new stop SIGNS. WTF?

I'm not sure how much you drive around California, but if you ever do you will see why this one is an issue. Many traffic lights in Mountain View, for example, are angled downward, so you have to be within a certain distance to see the color. There is one near Shoreline and Central that you can't see until you are about 40-50 feet away (for those interested, eastbound traffic at the fire station).

Compare that issue with scanning for a red octagon pattern, and it should become obvious why stop signs are much easier to handle. Traffic lights would be easy if they broadcast a signal, but they don't.
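Out of curiosity, here's roughly what "scanning for a red octagon pattern" looks like in code -- a minimal Python/OpenCV sketch, and obviously not what Google actually runs (their pipeline is proprietary, and the HSV thresholds here are ballpark values you'd have to tune):

    import cv2

    def find_stop_sign_candidates(bgr_image):
        """Return bounding boxes of red, roughly octagonal blobs."""
        # Red wraps around 0 on the HSV hue axis, so threshold two bands.
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        low_reds = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
        high_reds = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
        mask = cv2.bitwise_or(low_reds, high_reds)

        candidates = []
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if cv2.contourArea(contour) < 500:   # skip tiny specks
                continue
            # Approximate the contour with a polygon; a stop sign has 8 sides.
            epsilon = 0.02 * cv2.arcLength(contour, True)
            approx = cv2.approxPolyDP(contour, epsilon, True)
            if len(approx) == 8:
                candidates.append(cv2.boundingRect(approx))
        return candidates

A fixed color and a fixed shape make that tractable from a long way off; a downward-angled traffic light gives you neither until you're nearly under it.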

Overall, I'm not against self-driving cars as long as we can choose between modes of operation. I think we are a long way off from the technology needed to make them safe in all environments, and I won't guess whether that means years or decades. I am mostly concerned with the health impact of all those radars and sensors broadcasting everywhere, but that's mostly due to my own ignorance (I have not taken any time to study them, since the cars are still extremely rare).

Comment Re:Congressional Pharmaceutical Complex (Score 1) 217

I won't argue that the war on drugs is a huge failure, but that's a different argument in my opinion. The primary argument here is whether or not marijuana legalization has reduced deaths from prescription drugs.

Given how new legalization is, the conclusion of the article and study is grossly premature. Making matters worse, in my opinion, the study only looks at a single facet of drug policy, not the complete impact.

As I said in my opening paragraph, I'm not pro-drug-war or anti-marijuana. I simply think these types of studies would do better to include other impacts, because in 3 years the stats may show something completely different. Studies should include things like crime reduction and the law-enforcement savings that come with it, local economic impact (Dorito sales!!), the overall health of patients receiving and using medical marijuana, etc...

The war on drugs is a failure for many reasons, and single impact studies won't flesh all of those out.

Comment Wouldn't edibles have the same effect (Score 3, Insightful) 217

If pot becomes legal in all states, I hope there are warnings on the marijuana cigarettes like there are on tobacco cigarettes.

Is that as likely to cause cancer? It does seem like smoking anything is a bad idea, but perhaps tobacco has something that makes it more likely to develop issues...

However, there's another way to get MJ into your system: edibles. If you were using it for medical purposes, a medicinal brownie seems like a more appealing delivery method than smoking...

Comment Re:Mod parent up. (Score 1) 289

Google has logged over 700,000 miles in those vehicles. Without a single robot-controlled accident.

Yeah, when Google started coming out with these stats a few years back (maybe when they were at 250k or 300k miles), I actually polled my immediate family members about number of accidents and estimate of total miles driven in their lives.

Basically, among the family members I asked, there were a total of 3 accidents over at least 1.5 million miles driven (probably closer to 2 million), and all of them were situations where the other driver was at fault (in one case the real culprit was poorly designed road signage, but the other driver was still deemed at fault). So my family seems to average about 500k miles between accidents. The stats I've read put the average for all drivers at around 150k.

But that latter number is not a reasonable comparison to Google, since most of Google's logged miles were highway miles. So if we instead compare good drivers on highways, we might look at trucker stats. There, the stats suggest at least 250k miles per accident, with the trucker at fault in only 20% of those cases. So in real-world conditions, a professional driver who drives mostly highway miles will easily go over a million miles before causing an accident.
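For the curious, here's the back-of-the-envelope arithmetic in one place (the family numbers are my rough estimates above, not hard data):

    # Rough miles-per-accident figures from the estimates above.
    family = 1_500_000 / 3              # ~500k miles per accident
    average_driver = 150_000            # commonly cited all-driver figure
    google_logged = 700_000             # zero robot-caused accidents so far,
                                        # so this is only a lower bound
    trucker_at_fault = 250_000 / 0.20   # ~1.25M miles per AT-FAULT accident

    for label, miles in [("family", family),
                         ("average driver", average_driver),
                         ("Google (lower bound)", google_logged),
                         ("trucker, at fault", trucker_at_fault)]:
        print(f"{label}: {miles:,.0f} miles per accident")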

My point is: so far, Google's car has barely outperformed my average family member in terms of safety, and it has done so mostly in predictable highway situations, whereas most of my family doesn't log a lot of predictable highway miles. And I'm actually more interested in situations where an AI's lack of adaptability ends up CAUSING accidents than in the AI's ability to avoid them -- probably a harder number to pin down, since my understanding is that much of Google's driving takes place in scenarios where a human driver will take over in complex situations or places where the AI can't do as well. It's easy for most people to avoid most accidents if they drive reasonably, which is why my family averages 500k miles between accidents and truckers can go over a million miles without causing one.

I'm not saying Google's stats aren't an achievement. I'm just saying that I'm going to wait for an accident rate over at least a few tens of millions of miles logged in a greater variety of scenarios before I think we have enough data to assess safety -- and enough to say whether Google's AI actually drives better than my average family member.

Comment Jane/Lonny Eachus goes Sky Dragon Slayer (Score 1) 708

Don't you see that you threw in this whole "thermal superconductor" schtick without considering what properties a thermal superconductor must actually have? In order to superconduct, it must be the same temperature everywhere, always. The only way this would be even remotely possible were if it were a perfect radiator... [Jane Q. Public, 2014-08-30]

Superconductors are distinguished from aluminum by internal properties, not radiative surface properties. That's because conduction happens inside materials, whereas radiation is emitted and absorbed on surfaces.

... The only way this would be even remotely possible were if it were a perfect radiator, with emissivity of 1. It would also be a perfect absorber, absorptivity of 1. Regardless of wavelength. So while this might not technically be true, for all practical purposes it is: a thermal superconductor would be completely transparent to all radiation... [Jane Q. Public, 2014-08-30]

No. As I've explained, emissivity = 1 and absorptivity = 1 is the definition of a blackbody. A completely transparent material would have transmittance = 1 and absorptivity = 0. Blackbodies can't be transparent.
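For reference, the textbook bookkeeping (Kirchhoff's law plus conservation of radiant energy at a surface) makes the contradiction explicit:

    \alpha + \rho + \tau = 1   (absorbed + reflected + transmitted)
    \epsilon = \alpha          (Kirchhoff's law, at thermal equilibrium)

    Blackbody:    \epsilon = \alpha = 1  =>  \rho = \tau = 0
    Transparent:  \tau = 1  =>  \alpha = 0  =>  \epsilon = 0

A perfect radiator and a perfectly transparent body are mutually exclusive by definition.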

... a thermal superconductor ... has no "thermal mass". So it would have absolutely no effect on anything in this experiment. For practical purposes, it would not exist. Your idea that you can get around this by placing some kind of thin lining on its interior doesn't work. It's still as though it weren't there at all... all you have left for practical purposes is the thin shell, nothing else. ... [Jane Q. Public, 2014-08-30]

I've already solved this problem with an aluminum enclosing shell rather than a thermal superconductor shell. Both shells warm the heated plate to ~233.8F.

... That's why I say: no more prevarication. No more beating about the bush. Take Spencer's original challenge, apply Latour's thermodynamic treatment of it, and show where it is wrong. Anything else constitutes failure to back up your claim that Latour is wrong and -- as you have said more than once -- some kind of nutcase. You've had more than 2 years. That is plenty. [Jane Q. Public, 2014-08-30]

Dr. Spencer's original challenge included the possibility of a fully-enclosing passive plate. And so did Dr. Latour. Note that Dr. Latour never specifies the dimensions of the plates (as Jane began to) before wrongly concluding that T remains 150. This means his incorrect conclusion must apply to all geometries, including a fully-enclosing passive plate. In fact, notice that Dr. Latour explicitly allows for K = 1 and k = 1, which describes a fully-enclosing blackbody passive plate.

So Dr. Latour wrongly claimed that a fully-enclosing passive plate wouldn't warm the heated plate. I've shown that his claim violates conservation of energy. As long as the shell is warmer than the chamber walls (which it is), the net radiative heat loss from the heated plate is reduced. So power in > power out, which means the heated plate either warms or energy isn't conserved -- just as a bathtub's level rises when water flows in faster than it drains.
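For anyone who wants to check the arithmetic, here's a minimal steady-state energy balance, assuming blackbody surfaces and equal areas for plate, shell, and chamber walls (the real view-factor geometry shifts the answer slightly, which is presumably why this idealization gives ~234F rather than ~233.8F):

    SIGMA = 5.670374e-8                # Stefan-Boltzmann constant, W/(m^2 K^4)

    def f_to_k(t): return (t - 32.0) / 1.8 + 273.15
    def k_to_f(t): return (t - 273.15) * 1.8 + 32.0

    t_plate0 = f_to_k(150.0)           # heated plate, no-shell steady state
    t_walls = f_to_k(0.0)              # chamber walls held at 0F

    # Electrical input is fixed by the no-shell balance: P = s*(Tp^4 - Tw^4)
    p_in = SIGMA * (t_plate0**4 - t_walls**4)

    # With a fully-enclosing passive shell, the same P flows plate -> shell
    # and shell -> walls at steady state:
    #   P = s*(Tp^4 - Ts^4) = s*(Ts^4 - Tw^4)
    t_shell4 = t_walls**4 + p_in / SIGMA   # shell ends up at the old 150F
    t_plate4 = t_shell4 + p_in / SIGMA     # plate must warm to keep P flowing

    print(f"shell: {k_to_f(t_shell4**0.25):.1f} F")   # 150.0 F
    print(f"plate: {k_to_f(t_plate4**0.25):.1f} F")   # ~234 F, not 150 F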

Since you just linked to this excellent example, did you notice that MIT solved this problem at the very top and got a completely different answer from Dr. Latour's?

Again, note that MIT's final expression reduces to my Eq. 1 for blackbodies, and is consistent with these equations and Eq. 1 in Goodman 1957.

Comment Re:And individual is not going to own a Google car (Score 1) 289

Because there will obviously be an infinite number of cars driving around just waiting for you to call one.

In the real world, rather than Self-Driving Cars Are Magic! Utopia, if you have enough cars to handle all requests at peak times without making customers wait, you have a metric fsck-ton of cars that will be doing nothing the rest of the day.

Comment Re:Baby steps (Score 1) 289

The thing is, if you come up with a good ruleset, there's no need to improvise.

So, who's going to write the rule that tells it what to do if faced with a choice between running over a baby, or swerving and running over an old granny? On an icy road. In a snow storm. With a crowd of screaming schoolkids that it will run over if it miscalculates and goes out of control on the ice?

Comment Re:Baby steps (Score 1) 289

I'm equally sure that there will be exponentially more situations where standard automation will make better decisions, and produce better outcomes, than average (or even well above-average) human drivers.

I absolutely agree with you that there are probably already "exponentially more situations where standard automation will make better decisions." Human drivers make stupid decisions all the time -- driving too fast, following too close, changing lanes abruptly without signaling, etc. But thankfully, humans are also adaptable enough to deal with a lot of the bad, unexpected things that arise from those bad decisions.

I'm less certain whether I agree with you that AI will "produce better outcomes" in "exponentially more situations" anytime soon, mostly because of articles like this one. It sounds like AI is great at dealing with the expected, and it probably does well by having detailed information about the route and by pointedly NOT making all those poor decisions that human drivers make (e.g., actually keeping a safe following distance, not weaving between lanes, etc.).

But the question is -- in real life, where significant adaptability is required -- which factor will win out? Will AI perform better because all of those "better decisions" prevent more accidents, or will AI's lack of adaptability cause more accidents than all the "better decisions" prevent? What really matters is the number of serious and fatal accidents per X miles. An AI may make "better decisions" than a human 99% of the time, but it's the 1% of cases where accident avoidance is critical that adaptability matters most... and if AI doesn't have it, AI's stats may not beat humans' in terms of outcomes for a while.
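As a toy illustration of how the rare cases can dominate (completely made-up numbers, just to show the structure of the tradeoff):

    # Hypothetical rates per million miles -- illustrative only.
    critical_events = 50.0                  # situations where a crash is plausible
    p_human_fail = 0.03                     # humans botch 3% of them

    novel_fraction = 0.05                   # 5% of critical events are novel
    p_ai_fail_routine = p_human_fail / 100  # AI is 100x better on routine events
    p_ai_fail_novel = 0.7                   # ...but handles novel ones badly

    human_crashes = critical_events * p_human_fail
    ai_crashes = critical_events * ((1 - novel_fraction) * p_ai_fail_routine
                                    + novel_fraction * p_ai_fail_novel)

    print(f"human: {human_crashes:.2f} crashes per million miles")  # 1.50
    print(f"AI:    {ai_crashes:.2f} crashes per million miles")     # ~1.76

Even an AI that's 100x better in 95% of critical situations loses if it badly fumbles the novel 5%.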

I tend to agree with GP on this: it will be decades before AI achieves enough adaptability to ALL roadway conditions on unknown roads (or at least roads with unknown, novel hazards) to outperform GOOD human drivers (not stupid humans who drive like maniacs).

That doesn't mean AI won't be able to perform well under controlled conditions on well-known routes -- the question is just when that limited functionality becomes good enough for drivers, safe enough that regulatory agencies will allow it to be sold to anyone, and settled enough that the legal problems that could arise (liability issues, insurance issues, etc.) can be adequately resolved.

I'm sorry, but "there will always be situations where a human performs better than AI" sounds an awful lot like "I won't wear a seat belt because it might trap me in a burning car".

I really don't mean to be a jerk about this, but didn't you actually just utter pretty much those exact words?! -- from earlier in your post:

I'm sure that there will always be a few situations where a skilled human driver will make better decisions, and produce better outcomes, than standard automation.

So, given that you said that and that you were "sure" of that statement, does that mean you also don't wear a seat belt because you're afraid of dying in a car fire? Just wonderin'. :)

Comment Re:Stop being so impatient.... (Score 1) 289

Until the vehicle can classify what a person is doing on the side of the road, it is not a viable solution. That person could be a statue, a child who could dart into the road, a person standing safely on the side,

None of those matter: the vehicle would make sure it was traveling at a speed from which it could stop if whatever it was began to dart into the road, and if it DID, the car could stop much faster than a person could.
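The "stop much faster" claim is mostly about reaction time, not brakes -- a quick sketch with illustrative numbers (the 0.1 s sensor-to-brake latency is my assumption, not a published spec):

    MU, G = 0.7, 9.81                  # dry-asphalt friction, gravity (m/s^2)

    def stopping_distance(speed_mps, reaction_s):
        # reaction distance + braking distance (v^2 / (2*mu*g))
        return speed_mps * reaction_s + speed_mps**2 / (2 * MU * G)

    v = 13.4                           # ~30 mph, in m/s
    print(f"human: {stopping_distance(v, 1.5):.1f} m")   # ~33 m
    print(f"robot: {stopping_distance(v, 0.1):.1f} m")   # ~14 m

Same brakes, same tires; the machine just starts braking about 1.4 seconds sooner.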

a police officer pulling the car over

That's really the only difficult bit.
