
Comment Re:What a stupid question (Score 1) 167

Asking nerds what apps are good is like strolling into a literature forum and asking "I haven't read a book in 15 years - anything new out that you think is good?"

Well this "Twilight" series is a best seller. As is this "50 Shades of Grey".

I really wish the old Twilight Zone was still running. I think that premise would make a great episode.

Comment Mod parent up. (Score 1) 289

I'm going to map my drive to work, by driving it a few dozen times.

And that is if you are the ONLY person with a robot car on that road. Which may be correct for the initial roll-out. But this is a great example of the "network effect". If 100 people in your state own robot cars then a LOT of your state will be continuously mapped / re-mapped / re-re-mapped / etc.

Are we really whining because a brand new technology can't do EVERYTHING for us? Because it only takes care of MOST of the drudgery?

There is space to be filled and page hits to be collected. Demanding instant perfection for every edge-case is a good way of doing both.

Google has logged over 700,000 miles in those vehicles. Without a single robot-controlled accident.

There might be problems in certain weather conditions. Or in certain other conditions. Or whatever. In which case the driver should take over.

And since it is software, eventually those problems should be solved.

Comment It probably can. (Score 4, Insightful) 289

Judging by how badly TFA was written.

If a new stop light appeared overnight, for example, the car wouldn't know to obey it.

Got it. So the cars cannot handle changes in traffic markers.

Google's cars can detect and respond to stop signs that aren't on its map, a feature that was introduced to deal with temporary signs used at construction sites.

So they cannot deal with new stop LIGHTS but they can deal with new stop SIGNS. WTF?

But in a complex situation like at an unmapped four-way stop the car might fall back to slow, extra cautious driving to avoid making a mistake.

And it would be "unmapped" for the first attempt. Right? Because the cars should be sending back data on road conditions and such to HQ. Right?

Maps have so far been prepared for only a few thousand miles of roadway, but achieving Google's vision will require maintaining a constantly updating map of the nation's millions of miles of roads and driveways.

And the car needs the map to drive, right?

Google's cars have safely driven more than 700,000 miles.

So they just drove over the same "few thousand miles of roadway" again and again and again and again? Until they got to 700,000 miles?

The car's sensors can't tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either.

As it should. Because you don't know if that piece of paper is covering a rock or a pothole or whatever.

For example, John Leonard, an MIT expert on autonomous driving, says he wonders about scenarios that may be beyond the capabilities of current sensors, such as making a left turn into a high-speed stream of oncoming traffic.

Isn't that one of the easier problems? The car waits until it detects a gap of X size where X is dependent upon the speed of oncoming vehicles and the distance it needs to cross PLUS a pre-set "safety margin".
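
Back-of-the-envelope, that rule is maybe a dozen lines of Python (every number and name below is made up for illustration; it has nothing to do with Google's actual stack):

    import math

    def min_gap_seconds(crossing_distance_m, accel_mps2, safety_margin_s):
        # Time to clear the intersection from a standstill (d = 0.5*a*t^2),
        # plus a fixed safety margin.
        return math.sqrt(2.0 * crossing_distance_m / accel_mps2) + safety_margin_s

    def gap_is_safe(gap_m, oncoming_speed_mps,
                    crossing_distance_m=15.0, accel_mps2=2.0, safety_margin_s=2.0):
        # Safe if the oncoming car needs longer to arrive than we need to cross.
        time_until_arrival = gap_m / oncoming_speed_mps
        return time_until_arrival >= min_gap_seconds(
            crossing_distance_m, accel_mps2, safety_margin_s)

    # A car 200 m away at 25 m/s (~55 mph) arrives in 8 s; crossing 15 m at
    # 2 m/s^2 takes ~3.9 s, plus a 2 s margin, so the turn is safe.
    print(gap_is_safe(gap_m=200.0, oncoming_speed_mps=25.0))  # True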

Comment Mod parent up. (Score 4, Insightful) 108

This is the primary problem with "sweep" methods of collecting data.

There MIGHT be something in the "sweep" that MAY impact a current investigation. Therefore, ALL of the "sweep" must be hidden from the public.

Bullshit. There shouldn't be any difficulty in removing the items relevant to a current investigation. They should already be tagged as such. Then release the rest.

This is a case of "collect EVERYTHING and keep it FOREVER" so that anyone can be backtracked if the cops or politicians decide to do so. Where do you go? When? Why? What do you do there?

Now imagine a cop tracking your daughter to find out where she lives and where she works and which college she goes to and when she leaves for classes.

Comment Already there. (Score 5, Insightful) 108

Roombas (and variants) are common household robots. YouTube has a lot of videos about Roombas cleaning a room while being ridden by a cat. Sometimes the cat is wearing a shark-suit.

Therefore, as this project progresses, Roombas will start to hunt cats in the neighborhood in order to get them to sit on top of them while they clean a room.

Or TFA is massively overstating the research and the concept and even robotics.

Comment Re:Do they? (Score 2) 329

I've never heard someone saying a sentence like this in high school (girls or boys). Anyone?

Not me, either. If anything that would happen in college, wouldn't it?

Anyway, from TFA (by the way, is it really displaying as grey text on a white background?):

NCWIT senior research scientist Catherine Ashcraft cites the 2008 Harvard Business Review study "The Athena Factor," which found that "56% of technical women leave their private sector jobs by mid-career," she said. "But 75% continue to work full-time, and approximately half of these continue to work in technical occupations."

Check my math, okay?
100 tech women
56% leave the private sector (56 in this example)
75% of the 56 continue to work full time (42 in this example)
~50% of 42 continue in tech (21 in this example)

So that 21 plus the 44 that did not change is 65. So only 35% of women in tech leave tech in mid-career. 65% are in tech and stay in tech full time.
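
Spelled out in a few lines (assuming, as I read the quote, that the 75% and the ~50% both apply to the 56 who left their private-sector jobs):

    women = 100
    leave_private_sector = women * 0.56             # 56 leave their jobs
    still_full_time = leave_private_sector * 0.75   # 42 keep working full-time
    still_in_tech = still_full_time * 0.50          # 21 of those stay in tech
    never_left = women - leave_private_sector       # 44 never changed jobs

    print(never_left + still_in_tech)  # 65.0 -> 65% still in tech, so ~35% actually left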

What's the percentage of men who leave tech in mid-career? How does that compare to the 35% for women?

In her position as a professor of computer science at Union College, Barr found contextualizing computer science classes led to an increase in female enrollment.

I don't mean to sound mercenary here, but isn't "money" a major motivating factor? Paying the mortgage and such?

Comment Mod parent up! (Score 4, Insightful) 421

From TFA:

Police told My Fox Chicago that Stone was difficult during questioning and they arrested him and charged him with disturbing the school.

How did "the school" know about this? At most his teacher and the school principal and the regional/district/whatever superintendent should have been aware of the issue.

If anyone was "disturbing" "the school" it would have been one of those three (or the cops) and they should be arrested.

For a student, being "difficult during questioning" should (at most) result in expulsion AND NOT ARREST.

Comment Statistics. (Score 1) 441

I agree. Even if what TFA says is true (it is not), the US companies would be competing with companies around the world for those people. And with their own governments.

Not to mention the ones who start their own companies and work for themselves.

Which would mean that those awesome programmers would have all the bargaining power. They wouldn't be accepting H-1B wages.

Statistically, there cannot be enough of "the best" to feed the stated demand for "the best".

But it makes sense if you substitute "cheaper" for "the best".

And that is reflected in the quality of the code being produced.

Comment Counter argument. (Score 0) 44

Because it wasn't 1,000 words long.

But the counter argument is that he clicks on links sent to him via email before verifying their origin (who sent them) or destination (where they link to).

Next episode - If only there was some way to inform people that they should not click on links in email. Even if they think they're from someone they know. How will the bitter rivalry between MySpace and Friendster play out?

Comment Re:MUCH easier. (Score 3, Insightful) 239

Given a choice, I think autonomous cars at some point WILL be programmed with such a choice. For example, hitting an elderly person in order to avoid hitting a small child.

Congratulations. Your product just injured Senator Somebody in order to avoid hitting a Betsy-wetsy doll.

Senator Somebody has filed "lawsuit" against your company. It is super-effective. All your assets are belong to him.

Comment Re:MUCH easier. (Score 2) 239

It doesn't have to identify all the objects in the area, it simply has to not hit them.

Which is an order of magnitude EASIER TO PROGRAM.
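
To make that concrete, here is a toy version of the "just don't hit it" loop (the sensor inputs and numbers are invented; no real car is coded like this):

    def control_step(distance_to_obstacle_m, speed_mps,
                     max_brake_mps2=6.0, reaction_time_s=0.1):
        # Brake if the current speed cannot be shed before reaching the obstacle.
        # Nothing here cares WHAT the obstacle is, only WHERE it is.
        stopping_distance = (speed_mps * reaction_time_s
                             + speed_mps ** 2 / (2.0 * max_brake_mps2))
        if distance_to_obstacle_m <= stopping_distance * 1.5:  # 50% buffer
            return "BRAKE"
        return "CRUISE"

    print(control_step(distance_to_obstacle_m=20.0, speed_mps=15.0))  # BRAKE
    print(control_step(distance_to_obstacle_m=80.0, speed_mps=15.0))  # CRUISE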

And computers can recognize an obstacle and brake faster than a person can.

And that is why autonomous cars will NEVER be programmed with a "choice" to hit person X in order to avoid hitting person A.

So the premise of TFA is flawed.

Comment Will not matter. (Score 4, Insightful) 239

I wonder whether your insurance company would demand to know how you have set your car, and adjust your rates accordingly?

That does not matter because it won't be an option.

That is because "A.I." cars will never exist.

They will not exist because they will have to start out as less than the 100% perfect that TFA requires. And that imperfection will lead to mistakes.

Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.

So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.

Comment MUCH easier. (Score 3, Interesting) 239

From TFA:

Do you remember that day when you lost your mind? You aimed your car at five random people down the road.

WTF?!? That makes no sense.

Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right.

Again, WTF?!? Who would design a machine that would take control away from a person TO HIT AN OBSTACLE? That's a mess of legal responsibility.

This scene, of course, is based on the infamous "trolley problem" that many folks are now talking about in AI ethics.

No. No they are not. The only "many folks" who are talking about it are people who have no concept of what it takes to program a car.

Or legal liability.

It's a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.

No, it is not "plausible". Not at all. You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

Wired is being stupid in TFA.

Comment sukmahp3n1s at twitter dot com (Score 4, Insightful) 235

try them as a business communication tool, email beats them hands down

Exactly. While "kids" may "flock" to whatever is "cool" today, eventually you do have to deal with other adults in structured environments.

With email, usernames can be assigned in a structured fashion. And potentially offensive combinations can be weeded out.

With closed systems, it is usually first-come-first-served from around the world (and that's not counting multiple accounts per person). So you might not be able to get johnsmith. And "sukmahp3n1s" does not work so well when dealing with other companies.
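
For contrast, "assigned in a structured fashion" looks roughly like this (the domain, blocklist, and collision rule are made-up examples):

    OFFENSIVE_SUBSTRINGS = {"sukmah", "p3n1s"}  # ...and the rest of the usual list

    def assign_email(first, last, taken, domain="example.com"):
        base = f"{first}.{last}".lower()
        if any(bad in base.replace(".", "") for bad in OFFENSIVE_SUBSTRINGS):
            base = f"{first[0]}.{last}".lower()  # fall back to initial.last
        candidate, n = base, 1
        while f"{candidate}@{domain}" in taken:  # resolve collisions deterministically
            n += 1
            candidate = f"{base}{n}"
        return f"{candidate}@{domain}"

    taken = {"john.smith@example.com"}
    print(assign_email("John", "Smith", taken))  # john.smith2@example.com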
