Comment Re:wait, what? (Score 1) 169

Congrats on winning a strawman argument.

It's not a strawman argument. Earlier in the conversation, you explicitly wrote:

And while this debate continues, those cardboard boxes containing small children are being run over by drunk or inattentive humans on a regular basis.

The implication there is that, because drunk or inattentive humans regularly run over the hypothetical (occasionally real) cardboard boxes containing small children, it is OK if self-driving cars do it too. It's not a strawman argument; it's a direct response to what _you_ wrote.

That said, did you even win? Clearly we allow babies to be on or near a road, and we still allow cars to drive around those same roads. If we as a society is not ok with babies being run over, then a lot more effort would have been put into preventative measures. We could for example put ankle monitors on all babies and make loud noises when they crawl near a road. The fact that that's not even being discussed shows that we're actually ok with babies being run over.

It wasn't really about trying to "win"; it was mostly just to clarify your position. Honestly, though, the rest of your paragraph is just wacky. The solution is to put ankle monitors on babies? The whole conversation is about whether self-driving cars should and do avoid objects on the road. There's an even more detailed argument about the technical problems involved that we're not having (one that includes the problems of identifying objects on the road and weighting them by how important it is to avoid them: leaves vs. icy patches vs. gum wrappers vs. fallen trees, etc.). So, once again, when a human runs over a dog on the road, we can chalk that up to known factors, because we are human and we have a reasonable understanding of human nature and behavior in such situations. When a self-driving car does it, however, there are questions we logically have to ask, at this stage in the technology, about why. Those questions include the ones I have mentioned numerous times: Was the car even aware the object was there beforehand, and did it realize it had hit an object afterward? If it was aware, did it take any action to avoid the object, and if not, why not? Was it because the object fell under a list of objects the car considered OK to hit? Was it because the object fell under a list of objects considered not OK to hit, but a flaw in the programming didn't actually allow braking or avoidance as options? Was it because the object was unidentified, and anything unidentified is considered OK to hit? Etc.

The point is that we have a good idea of how a good human driver will react to these things, what they will be experiencing and understanding about the situation, and the various ways they might react. Example: I remember driving along a country road late on a winter night when, ahead of me, an animal started coming over a snow bank. At first I thought it might be a dog, but I had not fully classified it before I braked and came to a stop. That worked out for me, because it was a deer, and not just one: it was the lead deer of a whole herd that raced across the road in front of me. If I had hit them, my car would have been totaled, not to mention that injury or death to myself was a possibility. So, the question is, why did I brake as soon as I saw the head of an animal breach the top of the snowbank? I didn't know what kind of animal it was. I am not even 100% sure my mind had actually identified it as truly an animal at that point, or whether it was just a pareidolia heuristic telling me: probably an animal, probably about to cross the road. I didn't know the actual size yet, and I definitely did not know it was a herd (although, once again, I do know that animals of various kinds travel in groups). I knew there were no closely following cars, which is something I maintain awareness of while driving, so there was no obstacle to braking, and I braked on the vague possibility of a collision. The gamble paid off massively: a moment of inconvenience stopping, versus a car-destroying, possibly life-ending collision.

So, for me, the important question for self-driving cars is: what will they do in such situations? What is the current state of the art? What does the car see and recognize? What assumptions does it make about other objects on the road and how they may act and move? What about objects that are not on the road yet? Does it know what a snowbank is? What a deer is? Does it need to recognize it as a deer before it will act as if it may be a deer? You may be uninterested in the details, but I am not.

Let me give you some other examples using humans. My girlfriend once upon a time had a best friend who gave a restaurant a drive-through by, well, driving through it. This was with my girlfriend in the passenger seat. She managed to just miss the diners and the main gas line. On another occasion, years later, my girlfriend and I were sitting in the back seat while she was driving and her boyfriend was in the passenger seat. It was night, and we reached a red light. When it changed to green, the light change freaked her out and she took her hands off the wheel and refused to put them back on, even though she kept driving. Her boyfriend steered from the passenger seat for the next few miles. We have not seen her in ages, but it is a miracle she is still alive. I don't recall whether that was the first time I was in a car with her driving, but it was definitely the last. I would not trust her to drive me around.

Then there's my father. He is, generally speaking, good at driving, but I would not consider him all that good a driver, mainly because he has always had a tendency to drive a bit like he's James Bond in the middle of a car chase. This is especially annoying when you're supposed to be following him somewhere and he drives like you're a tail he's trying to lose. Generally, though, while his driving takes more risks than I would like, he is in control of the vehicle and does not do weird, unpredictable things. I generally have no problem being driven places by him, although I will note that I cannot sleep when he is driving.

Then there's my girlfriend. She is definitely not as skilled at driving as my father, but from a safety perspective, she is a better driver. In some sort of competitive driving situation, my father could probably beat her even at his current age, but the way she drives is statistically safer.

So, let's compare AI driving to those examples. On average, AI drivers may be better than either my father or my girlfriend. They have to be better than my girlfriend's friend. They won't take many of the risky maneuvers that my father would, and they would probably do better at the plain driving style my girlfriend uses, because they would have better situational awareness and could handle some emergency situations better than she can. However, the question for the AI driver is whether it is completely free of the kind of bizarre things my girlfriend's friend would do while driving. Will it drive through a restaurant? Tests have shown that some self-driving systems actually will if, for example, a convincing road is painted on it. Will it do things like ignoring school bus stop signs? We know that it may, because there's a story up on the main page about Waymos doing exactly that. Will they exhibit bizarre emergent behavior during close maneuvers in parking lots? Yes, we know that will happen, because we hear the complaints about them honking their horns all night at depots. The reason for the honking is that the cars automatically warn other cars by honking and also act defensively by backing up when another car moves into the danger zone in front of them without stopping. Except that, when backing up, the cars don't respect the same danger zone behind them. So, when one of them backs up with another behind it, that one will honk its horn and back up... right into the danger zone of the car behind it, which will then honk its horn and back up... you see where this is going, I hope. The point is that self-driving cars are at the point where they are good enough for certain uses, but they may still handle edge cases in bizarre, inhuman, unreasoning ways.
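That feedback loop is simple enough to sketch. Here is a minimal, hypothetical simulation of the described behavior (the `DANGER_ZONE` and `BACKUP_STEP` values are made-up illustrations, not Waymo parameters): each car honks and reverses when its front zone is violated, while ignoring the zone behind it, so a single intrusion propagates down the whole line.

```python
# Hypothetical sketch of the honk/back-up cascade described above.
# Assumption: a car honks and reverses a fixed step whenever another car
# is inside the "danger zone" ahead of it, but never checks behind itself.

DANGER_ZONE = 2.0   # metres a car tries to keep clear in front of itself
BACKUP_STEP = 1.0   # metres a car reverses when its front zone is violated

def simulate(positions, max_rounds=10):
    """Cars sit nose-to-tail along one axis, listed front to back.
    Returns the number of honks before the cascade dies out."""
    honks = 0
    for _ in range(max_rounds):
        moved = False
        for i in range(1, len(positions)):
            gap = positions[i - 1] - positions[i]   # space to the car ahead
            if gap < DANGER_ZONE:
                honks += 1                  # warn the intruder...
                positions[i] -= BACKUP_STEP  # ...and reverse, blindly,
                moved = True                 # into the next car's zone
        if not moved:
            break
    return honks

# One car creeping into its neighbour's zone (gap 1.5 m) starts the chain:
cars = [10.0, 8.5, 6.0, 3.5, 1.0]
print(simulate(cars))  # -> 4: every car down the line honks and retreats
```

One front gap of 1.5 m produces four honks, one per car behind it, which is the depot-at-night chorus in miniature; a symmetric rear check would stop the chain at the first car.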
Without true general AI (and probably even then), the way to handle bizarre edge cases is generally to identify them, come up with a plan for handling them, and then develop a way for the system to recognize them and apply that plan. This is clearly an ongoing process for self-driving cars. So when something that may be an unhandled edge case comes up, it makes sense not to simply assume that what happened was unavoidable, but to verify whether it actually was and, if not, to see whether it could have been handled better. This is not an unreasonable position.

So how well do you understand the human brain? The behavior of that piece of safety-critical autonomous equipment is also unknown and varies widely across the population. And in a few cases where the behavior is known, it acts in fail-dangerous ways.

That is missing the point. For starters, we actually do hold humans to certain standards before letting them perform certain jobs. Additionally, holding automated systems to merely the same standards as opaque human brains is ridiculous: unlike brains, automated systems can actually be inspected, debugged, and improved.

Even if it were the case that the average self-driving car may be worse than a tiny fraction of human drivers, so what? Nobody should be making policies to cater to the extraordinary. Yeah, some humans can pull a commercial jet. That doesn't mean you should replace aircraft tugs with people.

You keep getting hung up on the same problem: confusing general performance metrics with focused performance metrics. There are specific situations that occur while driving that the majority of humans can handle quite well but that self-driving cars are either bad at or completely incapable of. The honk/backup behavior I mentioned above is an example. So is the school bus example. There's the time a self-driving car confused a turning truck for a billboard, for example, and cut off the top of the car (and the passenger). My point is simply that self-driving is a work in progress, so incidents need to be scrutinized, while your argument is... honestly, I am still not really sure. Are you actually arguing that self-driving is now a solved problem that no longer needs any development? I don't see how you could be arguing from that position, but it seems like the only position from which you could be complaining about my desire for incidents involving self-driving cars to be evaluated and publicly disclosed in detail.

Self-driving cars will probably not beat Max Verstappen on a good day for many decades. If that takes 50 years, are you okay with 2,000,000 people dying as a result of your demand for perfection?

This statement is clear evidence that you do not understand my position. So that we can hopefully reach a mutual understanding, can you please clearly state what you think my position actually is?

No, those are rational reasons to replace the accident-prone humans with self-driving cars. Once we have that, then we can issue a patch that recognizes deer. Then a bit later we can patch more animals.

See, this is the thing: you are not understanding my position. My position is that the process that leads to those patches should be happening now, as an ongoing process, and that it should be done publicly, so that people understand clearly what the current state of the art in self-driving is and what it isn't. I am actually all for self-driving cars; I am just not blind to the flaws of current self-driving systems or the flaws of the corporations producing them.

There's nothing you can do with human drivers to improve the situation in any way. And yes, we've been trying for many decades.

Yes, there is, and we're doing those things. They primarily include self-driving and self-driving-adjacent features like auto-braking, lane following, etc. Once again, for the umpteenth time, I am all for self-driving. My entire point is that there are areas that need improvement, so we should scrutinize incidents to understand what happened and what needs to be improved.

Waymo engineers should have the information and they should work on it at some point. But from a policy perspective, it doesn't matter. Humans are already worse on average and even the best drivers have bad days.

Waymo engineers are a corporate black box. I think I see the fundamental problem here: you don't seem to know that Slashdot is supposed to be a nerd site. People here are supposed to care about the details, not just declare them someone else's problem and not sweat them. Unlike you, I actually want to know what the real state of the art in self-driving is.

Comment Re:So you've clearly never spent time on it? (Score 1) 46

I'm going to have to say that this is one of the occasions where Somervillain is making sense and is not stuck on some right-wing echo-chamber notion. If there were, for example, a binary choice between kids using Roblox and, say, 4chan, I know which one I would choose. Roblox or Andrew Tate? Etc., etc. This is not to say that the corporation behind it is not a scummy, exploitative monster; that's pretty much a given these days. The odds of children falling prey to some sort of child sex predator there don't seem any higher than in most Internet locales, however, and are almost certainly lower than in some.

Comment Re:Can we please cut Russia off the entire Interne (Score 1) 46

Because it's not only that Russia doesn't want sexual minorities to have an expression in public, reserving sexual behaviours to the private space; Russia thinks gays are so bad (and Russia so great) that there can't possibly be gays in Russia, other than through the corruption of the West and their propaganda.

It wasn't Russia, but I remember a perfect example of this kind of attitude with regard to some lions that were caught on camera engaged in activity that was interpreted as homosexual (it may have been, but it may also have been more along the lines of a dominance display, sexual or otherwise): "These animals need counselling, because probably they have been influenced by gays who have gone to the national parks and behaved badly," was the quote from Ezekiel Mutua, the head of the Kenya Film Classification Board. He also said that the "crazy gay lions" needed to be isolated to prevent the behavior from spreading. I should note that he also thought it might be demonic possession, so there's that. Of course, the Film Classification Board is a censorship organization, and he is a moralizing zealot.

I think that's an example of this same attitude where being gay is seen as some sort of communicable disease or slippery slope moral collapse that is seeded externally. It's nuts, and the people who promote these ideas are nuts, but there you have it.

Comment Re:Life is extremely improbable (Score 4, Insightful) 31

Bro, that sound plausible to you? Think about what you're saying .. LUCA's descendants were able to go to every possible life niche on Earth and displace all other types of life? That makes very little sense.

It actually makes a lot of sense to me, for the simple fact that expanding into a new niche generally requires evolving, through natural selection, the ability to thrive in that niche. Also because, to my knowledge, every niche on Earth where we have not found life as we know it is under conditions so extreme that any other form of life would be unlikely to stand a chance there either.

I am not saying that other versions of life have definitely developed on Earth before. I am simply saying that, if they had, they would clearly have been out-competed by life as we know it both due to first-mover advantage and because, as we can see from the results from the Bennu samples, the building blocks of life as we know it clearly have their own survival advantage. Ultimately, any form of life is going to concentrate energy and those kinds of concentrations of energy are going to be food for more evolved and more capable forms of life.

So, while most of what you are saying is correct, if you are asserting that no other form of life could exist or have ever arisen on Earth, you clearly do not have the data to prove that.

Comment Better info (Score 1) 83

According to an AAIB Field Investigation report (pg. 4), two samples from the intake were tested and found to have glass transition temperatures of 54.0 °C and 52.8 °C.

So some idiot printed them in PLA. PLA is great but is very much NOT temperature resistant. It has been known to sag in a hot car.
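For what it's worth, the measured values sit right at PLA's typical glass transition temperature and well below every other common filament. A quick sanity check, using ballpark datasheet Tg figures (the typical values below are my own rough numbers, not from the AAIB report):

```python
# Compare the AAIB-measured glass transition temperatures against
# rough, typical Tg values for common 3D-printing filaments.
# TYPICAL_TG_C figures are ballpark datasheet numbers (assumption),
# not taken from the report.

MEASURED_TG_C = [54.0, 52.8]  # the two intake samples from the report

TYPICAL_TG_C = {"PLA": 58, "PETG": 80, "ABS": 105, "PC": 147}

def closest_material(tg_c):
    """Return the filament whose typical Tg is nearest the measured value."""
    return min(TYPICAL_TG_C, key=lambda name: abs(TYPICAL_TG_C[name] - tg_c))

for tg in MEASURED_TG_C:
    print(f"{tg} C -> {closest_material(tg)}")  # both samples map to PLA
```

Both samples land within a few degrees of PLA and tens of degrees below PETG, ABS, or PC, which is exactly why a part that only gets hot-car warm can soften and sag.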

Comment This is why I'm waiting (Score 1) 148

This is why I'm waiting for someone to make a dual-DIN Android head unit with the source available so we can rebuild it free of nonsense like unwelcome ads. They don't have to warranty it. Just make some decent hardware and gimme the source, and I'll handle the rest (and sing their praises endlessly).

I could build one but I literally don't want to.

Comment 3D printing wasn't the problem (Score 1) 83

The problem was using a cheap substitute part. I'm guessing an injection molded ABS part would also have failed in that scenario.

CF-ABS is NOT like fiberglass at all. The CF is chopped into fine bits. They lend some stiffness at room temperature but not strength to the part. Certainly the carbon fiber bits don't lend any heat resistance.

Comment Re:"Risks of clinical errors" (Score 1) 70

The food pyramid has also been debunked as made up pseudoscience.

Well, yeah. I thought it was pretty well known that, like the "four food groups" before it, the food pyramid came from the USDA. The USDA does not serve the same function as the Department of Health and Human Services. The food pyramid was developed to promote the interests of Midwestern farmers, not health, which aligns with the mission of the USDA. My understanding is that most doctors, and especially nutritionists, have never paid much attention to the food pyramid.
