Comment: Re:The Problem with Robots (Score 1) 101

by dpidcoe (#49379707) Attached to: Robots4Us: DARPA's Response To Mounting Robophobia

The other problem is that new opportunities do not make up for the lost opportunities. It's not a one-to-one migration of workers. The assembly line that needed hundreds of workers now only needs a dozen or so to maintain the robots. There is a net reduction in jobs.

You missed the point I was making. Yes, there's a loss in one field (e.g., automotive assembly lines), but as a result of automated assembly lines there are gains in other fields (e.g., anything having to do with supporting the infrastructure that makes cars and car manufacturing possible).

Comment: Re:Makes sense (Score 1) 190

by dpidcoe (#49379639) Attached to: If You Want To Buy an Apple Watch In-Store, You'll Need a Reservation

Who the fuck was talking about smartwatches? AC wasn't in the original sneering post, I haven't been throughout, and you weren't when you started wittering on about $12 Casio watches.

So no, I'm not getting a $350 smartwatch, so you're welcome not to tell me anything. Probably for the best; I'm unlikely to give you any credence anyway.

I think you need to work on your comprehension skills and/or re-read the thread. You're awfully hostile over what probably amounts to a misunderstanding. If you're not advocating buying a needlessly expensive watch just to tell the time, then there's no disagreement here.

Comment: Re:What the "doomsday" critics all have in common: (Score 2) 101

by dpidcoe (#49374289) Attached to: Robots4Us: DARPA's Response To Mounting Robophobia

When I was getting my degree, I had to take an "ethics" class geared towards CS students. Towards the end of the semester we started discussing AI and how morality may or may not apply to it. The half of the class who had actually done some machine learning and had backgrounds in AI got really annoyed, because 100% of the hand-wringing in the assigned reading came from philosophers and "futurists" with horrible track records.

The worst part is that, to someone who's actually worked with this kind of stuff, the doomsday people look about as silly as that congressman who was afraid an island might tip over if too many Marines landed on one side of it. It's so stupid that it puts one at a loss for where to even begin refuting it.

Comment: Re:The Problem with Robots (Score 1) 101

by dpidcoe (#49374233) Attached to: Robots4Us: DARPA's Response To Mounting Robophobia

The problem with robots is that they are replacing humans in a world where humans often define their own value by the things that they do.

I don't really see this as a problem. Some new kind of automation might temporarily displace some people (and change can be scary), but generally the same advancing technology that caused the displacement opens up opportunities elsewhere.

The easiest jobs to automate are usually the most menial, and automating them generally pushes the market to open up new job opportunities elsewhere. E.g., automating an automotive assembly line initially displaces those workers, but it also makes cars a lot cheaper, which means more cars and more demand for the infrastructure to support them (roads and road maintenance, fuel, mechanics).

Comment: Re:So she can do to the US... (Score 1) 352

by dpidcoe (#49370795) Attached to: Former HP CEO Carly Fiorina Near Launching Presidential Bid

It's profit-based in a system where there's no incentive to lower prices and lots of incentive for people to rack up huge bills with no immediate consequences. Someone can run crying to the doctor because they have a cold, demand every test and scan in the book be run, and pay less than $100 in co-pays with insurance. It's easy for them to rationalize it because "this is what I pay all that money every month for, and I want to get the most out of it". The insurance bureaucracy doesn't really care as long as all the i's are dotted and t's are crossed, but the accountants notice the rising costs and make it harder for people with legitimate issues to get tests run the next year.

Comment: Re:This may matter when we create sentience. (Score 1) 129

by dpidcoe (#49347659) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

"I never really saw anything not work in a predictable controlled fashion"

Accidents happen whenever something doesn't work in a predictable and controlled fashion and, believe me, accidents do happen. Oh, and butter does melt in your mouth.

But when you examine the accident after the fact, it usually turns out that it did happen in an incredibly predictable and controlled fashion. It's just that the events leading up to it weren't immediately obvious before the accident happened.

Comment: Re:Given that humans still struggle... (Score 1) 129

by dpidcoe (#49346595) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

Yes, but you missed the point. The solution of pushing a fat guy in front of the train isn't believable. People hear it and get a gut feeling of "then it'll kill the fat guy plus the people on the tracks". That's where the hesitation comes from, not from the fact that they need to push the guy.

Comment: Re:This may matter when we create sentience. (Score 1) 129

by dpidcoe (#49337819) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

As someone who worked in IT and was usually the guy who got assigned all of the tricky problems, I never really saw anything not work in a predictable, controlled fashion (at least once all of the facts were known, which wasn't always straightforward).

Comment: Re:Engineers? Bah...ignore them. (Score 1) 129

by dpidcoe (#49337783) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

Er...no. How about just letting engineers figure these things out like we always have?

I took an ethics class as a required part of my CS degree, and this was pretty much the conclusion everyone came to after reading the sections about robot morality. Computer scientists have enough trouble understanding how an AI would work in reality; some random philosopher whose only experience with robots and AI is what they've seen on TV has even less of a chance.

Comment: Re:Given that humans still struggle... (Score 1) 129

by dpidcoe (#49337753) Attached to: Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

Okay, so you're not next to the switch, you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, it'd stop the trolley. Do you do it?

Most people say "no", and even most of those who say yes seem to struggle with it.

The reason people struggle with it is that the scenario doesn't make a ton of sense. Everyone has seen videos of trains smashing through cars like the car isn't even there, so it's hard to believe that a fat guy would be heavy enough to stop the train. What if I push him, the train hits him, and then it continues on to hit the people anyway? And if the fat guy really is heavy enough to stop the train, doesn't that mean he's too fat for me to push? I'm a skinny guy; physics wouldn't be on my side here. What if I try to push him, fail, and then he turns around and wonders why the hell I was trying to push him in front of a train? And even if it works, people might think I pushed the guy because he was fat rather than to save the people on the tracks.

All of that, compared to me flipping a switch to save 4 people at the cost of 1. If I flip the switch, the outcome is pretty much set.
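
To put some made-up numbers on that gut feeling (a sketch only; the 30% success chance and the five people on the track are my own invented assumptions, not part of the scenario as posed):

    # A back-of-envelope expected-death comparison of the two variants
    # (Python). Every number here is invented purely for illustration.

    def expected_deaths_push(p_stops_train, people_on_track=5):
        # The fat guy dies either way; the people on the track also die
        # whenever the push fails to stop the train.
        return 1 + (1 - p_stops_train) * people_on_track

    switch = 1.0                          # flip the switch: 1 death, guaranteed
    push = expected_deaths_push(0.3)      # suppose the push works 30% of the time

    print(f"switch: {switch} expected deaths")  # 1.0
    print(f"push:   {push} expected deaths")    # 1 + 0.7 * 5 = 4.5

Even granting the push a generous chance of working, it comes out far worse than the switch, which is exactly where the hesitation comes from.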

It might be better to pose the fat-guy variant of the question as: "there's some dynamite near the track, and the fat guy is sitting on it eating his lunch. Do you blow it up to stop the train, killing him in the process?"

"I think Michael is like litmus paper - he's always trying to learn." -- Elizabeth Taylor, absurd non-sequitir about Michael Jackson

Working...