
Comment: Re:This may matter when we create sentience. (Score 1) 129

by dpidcoe (#49347659) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

"I never really saw anything not work in a predictable controlled fashion"

Accidents happen whenever something doesn't work in a predictable and controlled fashion, and believe me, accidents do happen. Oh, and butter does melt in your mouth.

But when you examine the accident after the fact, it usually turns out that it did happen in an incredibly predictable and controlled fashion. It's just that the events leading up to it weren't immediately obvious before the accident happened.

Comment: Re:Given that humans still struggle... (Score 1) 129

by dpidcoe (#49346595) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?
Yes, but you missed the point. The solution of pushing a fat guy in front of the train isn't believable. People hear it and get a gut feeling of "then it'll kill the fat guy plus the people on the tracks". That's where the hesitation comes from, not from the fact that they need to push the guy.

Comment: Re:This may matter when we create sentience. (Score 1) 129

by dpidcoe (#49337819) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?
As someone who worked in IT and was usually the guy who got assigned all of the tricky problems, I never really saw anything not work in a predictable controlled fashion (at least once all of the facts were known, which wasn't always straightforward).

Comment: Re:Engineers? Bah...ignore them. (Score 1) 129

by dpidcoe (#49337783) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

Er...no. How about just letting engineers figure these things out like we always have?

I took an ethics class as a required part of my CS degree, and this was pretty much the conclusion everyone came to after reading the sections about robot morality. Computer scientists have enough trouble understanding how an AI would work in reality; some random philosopher whose only experience with robots and AI is what they've seen on TV has no chance.

Comment: Re:Given that humans still struggle... (Score 1) 129

by dpidcoe (#49337753) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

Okay, so you're not next to the switch, you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, it'd stop the trolley. Do you do it?

Most people say "no", and even most of those who say yes seem to struggle with it.

The reason people struggle with it is that the scenario doesn't make a ton of sense. Everyone has seen videos of trains smashing cars like the car isn't even there, so it's hard to believe a fat guy would be heavy enough to stop the train. What if I push him, the train hits him, and then it continues on to hit the people? And if the fat guy is heavy enough to stop the train, doesn't that mean he's going to be too fat for me to push? I'm a skinny guy; physics wouldn't be on my side here. What if I try to push him, fail, and then he turns around and wonders why the hell I was trying to push him into a train? And even if it works, people might think I pushed the guy because he was fat, rather than to save the people.
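The physics intuition holds up to a back-of-envelope check. Here's a quick sketch in Python (every figure is invented for illustration, not taken from any real trolley spec):

    # Most favorable case for the fat man: a perfectly inelastic
    # collision, where the trolley picks him up and keeps going.
    # All numbers are assumed for illustration.
    trolley_mass = 15000.0   # kg, roughly a light streetcar
    trolley_speed = 10.0     # m/s, about 22 mph
    man_mass = 150.0         # kg, a very heavy man

    # Conservation of momentum: v' = m1*v / (m1 + m2)
    new_speed = (trolley_mass * trolley_speed) / (trolley_mass + man_mass)
    print(round(new_speed, 2))  # ~9.9 m/s

Even granting him the most favorable collision possible, the trolley loses about 1% of its speed. The gut feeling that he won't stop it is just correct physics.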

All of that, compared to me flipping a switch to save four people at the cost of one. If I flip the switch, the outcome is pretty well set.

It might be better to pose the fat-guy variant of the question as: "there's some dynamite near the track and someone is sitting there eating their lunch on it; do you blow it up to stop the train, killing the fat guy in the process?"

Comment: Re:It has an acronym , so it will fail. (Score 1) 149

by dpidcoe (#49336607) Attached to: Obama To Announce $240M In New Pledges For STEM Education
Throwing money at the problem would work in the case of your school because it's not top-heavy with administration. It wouldn't work in other schools which are top-heavy with administration. As such, if you're going to throw money at the problem, it would be best to do so on a case-by-case basis or not at all. Otherwise you're just encouraging top-heavy administrations.

Comment: Re:Truth = modded down (Score 2) 149

by dpidcoe (#49322737) Attached to: Obama To Announce $240M In New Pledges For STEM Education

Do you consider this amount of money to be so completely unreasonable? To start the discussion, surely we can agree that this amount is not infinite.

(Also, if you agree with HBI, why would you mod HBI up, and not reverse the mods of the AC?)

OK, I'll bite. I won't call any particular sum reasonable or unreasonable (mostly because I'm not an analyst, and every location is going to have different costs associated with it). That said, there are a lot of situations where school systems pay a very small amount per student (from memory, isn't Utah around 6k per student?) but get significantly better results than places like California and New York that are in the 20k/student range.

Anecdotally, in my experience increased funding to any particular program just means increased waste. I was very involved with the computer science program at my community college before transferring (we were trying to make it its own thing instead of tagging along behind the math department). We got a huge influx of funding from some program, but it basically just sat there while we tried to think of things to use it on. We had meetings about how to spend it (which got nowhere because there were all sorts of limitations on what it could be spent on), we upgraded all the computers in the lab (which were promptly slowed down again after campus IT loaded them up with the required crapware and monitoring), we spent 10k on building a tiny supercomputing cluster (which promptly went unused because we didn't really have anything computationally intensive to run on it), and then we bought the computer club one of the new (at the time) Nvidia Tesla cards to do CUDA programming on (which never even got set up, because campus policy wouldn't allow us in the same room as it without the professor present).

Meanwhile, the CS professors continued to draw abnormally low salaries while the campus president voted herself raises (she was well into the high 200k range by the time enough people revolted and threw her out) and the rest of campus services (i.e. internet connectivity, which we relied on to allow students to ssh into the cluster) suffered horribly.

Comment: Re:You can take a horse to the water ... (Score 1) 120

by dpidcoe (#49296807) Attached to: Persistent BIOS Rootkit Implant To Debut At CanSecWest

.. well .. security usually adds complexity to point and click. That's just the way it is.

Yes... to a degree. The issue is that a lot of the time the "experts" take it way too far, to the point that the system slows to an unusable crawl or needlessly hampers the user. To continue with your car analogy, it would be the equivalent of telling *everyone* that they need a car with a standard key lock, an electronic lock, and a password that must be entered before starting the engine (one that requires an internet connection to authenticate and will disable the car after 2 incorrect entries). Eventually the user either gets rid of the car and goes back to no locks, or, if you force them to keep it, rents or borrows a different one to do their driving. In fact, if I were to be cynical about it, why not just disable their car entirely so that it can't be stolen? They'll have to circumvent policy to actually drive anywhere, but if data gets stolen it's their fault for circumventing policy, right? It's absolutely not the fault of the "expert" who gave them an unusable car.

Assuming that we both agree some security is better than no security, and that a usable car is preferred, it would be better to recommend that they stop 99% of theft attempts by locking their doors and exercising some common sense about where they leave the car, rather than proposing extreme measures for thwarting 100% of all attempts to steal it.
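Translated back from cars to logins, the difference between the two policies might look something like this toy Python sketch (both thresholds and the lockout behavior are invented, not taken from any real product):

    # Two hypothetical lockout policies, car analogy made concrete.

    def extreme_policy(failed_attempts: int) -> str:
        # The "disable the car" approach: two mistakes and the account
        # is bricked until someone phones the helpdesk. Users respond
        # by writing passwords down or routing around the system.
        if failed_attempts >= 2:
            return "account disabled -- contact support"
        return "try again"

    def lock_the_doors(failed_attempts: int) -> str:
        # The "lock your doors" approach: a short cooldown defeats
        # online brute-force guessing without punishing honest typos.
        if failed_attempts >= 10:
            return "locked for 15 minutes"
        return "try again"

The second version stops essentially all remote guessing, and nobody abandons the car over it.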

"Life sucks, but it's better than the alternative." -- Peter da Silva

Working...