
Comment Re:Uh oh...Batman becomes real? (Score 2) 40

How can they tell what direction a response comes from, with only one mic?

It came from the person sleeping.

The other problems, though, could be harder.

Which person? How can they tell the difference between the person sleeping and...

The other person sleeping next to them?
The pet in the room?
Curtains, gently blowing in the breeze?
The person shifting in their bed?
Sounds from heating/cooling coming online and the air shifting around in the room as a result?

How can it tell the difference between a response...a change in the state of something in the room...and a change in the object composition of the room itself? Without directionality, I don't see how it's possible. And indeed, as someone else pointed out, they did say that it requires phones with two microphones...which I missed when I read the article. So the point seems valid...and most phones won't be able to do this. Come to think of it, I am trying to think of what phones I know for a fact have dual microphones, and I'm coming up short.
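For what it's worth, the two-microphone approach the article apparently relies on is straightforward in principle: cross-correlate the two channels to find the time difference of arrival (TDOA), then convert that delay into an angle. A rough sketch in Python (the mic spacing, sample rate, and function name here are made-up illustration values, not anything from the paper):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature
MIC_SPACING = 0.15      # assumed distance between the two mics, metres
SAMPLE_RATE = 48_000    # Hz

def estimate_bearing(sig_a, sig_b):
    """Estimate direction of arrival from two microphone channels.

    Cross-correlate the channels to find the time difference of
    arrival (TDOA), then convert it to an angle off the mic axis.
    A positive lag means sig_a received the sound later than sig_b.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # delay in samples
    tdoa = lag / SAMPLE_RATE                   # delay in seconds
    # Extra path to the far mic is d * cos(theta), so:
    # tdoa = d * cos(theta) / c  =>  theta = arccos(c * tdoa / d)
    cos_theta = np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

With a single channel there is no inter-microphone delay to measure at all, which is exactly the point above: one mic can give you range (from echo delay) but no bearing.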

Comment Uh oh...Batman becomes real? (Score 4, Interesting) 40

Turning smartphones into sonar devices to monitor movements. I'm torn between "this is really cool!" and "these people are so full of shit and just trying to publish something to get tenure!"

I wonder how they solve the problems of directional discrimination without multiple microphones? How can they tell what direction a response comes from, with only one mic? And how do they intend to make this work on multiple phones, for that matter...with their vast differences in both microphone and speaker setups? I'm really skeptical of this.

They also talk about using ultrasonic frequencies...which I also doubt most phones can actually produce.

Comment Not "America," just "The South" (Score -1, Flamebait) 479

I've seen a lot of posts to this that seem to believe that all of America is like this. Let's be clear: this kind of crap is almost exclusively found in the Southeastern US. You don't see this in the Northeast (they believe in science there), you don't see it in California, or in the Pacific Northwest. Occasional pockets in the Midwest also get this batshit crazy, but there's a reason we hear about this for schools in Mississippi, Louisiana, Arkansas, Missouri, etc.

Or, to put it in something that could be the end of a very (too) honest public service announcement:

"Georgia Public Schools: someone has to build the cars!" (Credit to the show "Squidbillies")

Comment Re:The people (Score 1) 479

I'm not an Atheist (I'm Jewish), but even I don't want religion taught in schools. When people say "teach religion in schools" (outside of some comparative religion/philosophy class), what they really mean is "teach Christianity in schools." Try teaching Islam in a public school and you'll see all of those "we need to put religion back into public school" advocates go crazy.

I might be religious, but I try not to force my religion on others. I'm willing to discuss it with others if they ask questions, but I don't discuss it in a "my religion is so great, you need to convert now or else" manner. To me, religion is a personal matter and definitely not something for public schools to cover in a science class. You want to believe that the Earth was created 10,000 years ago when God sneezed it into his cosmic hanky? Go right ahead. You can even tell your kids that at home. Just don't try teaching MY kids that in public school because you can't deal with your kids learning about evolution.

If I had any mod points at the moment, I'd mod this up until we had to crane our necks looking upwards to be able to read it from underneath. Bravo, sir, bravo.

Comment Missing the 'why' of it. (Score 5, Insightful) 156

Companies where the open office approach succeeded had something in common: the population of the office chose it for themselves, early on. They had an open office environment because that's how they wanted to work, and because the dynamic that existed between the employees was compatible with it. Then later, a lot of other companies had executives look at both the success of those companies and the lower real estate costs that the model uses, and decided they would "choose" it for their own staff. And that's not quite how it works. It's rather like deciding that your goldfish would be better off in a salt water tank because of how big the fish were in some other tank you saw, and then finding yourself confused as to why the fish all died. Not all cultures are the same, and you can't change the culture by imposing something upon it that is toxic.

Comment Re:And...and... (Score 4, Funny) 156

...everybody should get naked. There...I said it.

It's the logical end state of this whole open office thing. Complete transparency and no place to hide.

With tech workers?? Do you actually WANT to see what some of these pale, flabby people look like without clothes on???

Though, then again...if that was walking around me all the time, I'd keep my eyes focused squarely on my monitor and my work. My productivity would soar...hmmmmm....

Comment Re:Funny, that spin... (Score 1) 421

Question: What role do people who think that AI research is dangerous hold in the field of AI research?

Answer: None...because regardless of their qualifications, they wouldn't further the progress of something they think is a very, very bad idea.

Asking AI experts whether or not they think AI research is a bad idea subjects your responses to a massive selection bias.

Yes. Nobody who worked in the Manhattan Project had any reservations whatsoever about building the atomic bomb, right?

Experts work in fields they're not 100% comfortable with all the time. The actual physicists who worked on the bomb understood exactly what the dangers were. The people looking at it from the outside are the ones coming up with the bogus dangers. You hear things like, "the scientists in the Manhattan Project were so irresponsible they thought the first bomb test could ignite the atmosphere, but went ahead with it anyway." No, the scientists working on it thought of that possibility, performed calculations that definitively proved it wasn't anywhere near a possibility, and then moved on with it. People outside the field are the ones who go, "The LHC could create a black hole that will destroy us all!" The scientists working on it know the Earth is regularly struck by cosmic rays more powerful than anything the LHC can produce, so there's no danger.

It's just that they don't work in the field of AI, so therefore they must not have any inkling whatsoever as to what they're talking about.

Which is a 100% true statement. They're very smart people, but they don't know what they're talking about in regards to AI research, and are coming up with bogus threats that most AI experts agree aren't actually a possibility.

The topic of the Manhattan Project is a red herring. Those people were choosing between two evils, because the Project was about building a weapon to stop a genocidal maniac from taking over the planet. By the time they were done, D-Day and V-E Day had happened, true, but those victories were far from foregone conclusions when the scientists started.

Nobody's building AI to try and prevent something on the same level as world domination by Hitler, sorry.

Comment Re:Alan and Alvin (Score 1) 106

based on a presentation from Alvin Cox, a Seagate engineer [...] Alan Cox said, "I wouldn't worry"

Can we get these two gentlemen to agree on a statement of risk? Or maybe just a little, you know, editing from the Slashdot editors?

I'm wondering if the "editing" from the Slashdot editors wasn't the problem in the first place. How many Slashdot summaries wildly overstate/oversimplify/remove from proper context the real meat of a story? How many Slashdot comments essentially say, "RTFA...you'll see that [it only applies to this situation|they mean this instead of that|this was done on purpose under wildly crazy conditions to see if it could ever be true at all|this person has no credibility|this is really advertising for someone's product]"?

Comment Re:Funny, that spin... (Score 5, Insightful) 421

In light of the fact that Stephen Hawking, Bill Gates and Elon Musk are not even remotely experts in A.I. your opinion is fairly odd.

Question: What role do people who think that AI research is dangerous hold in the field of AI research?

Answer: None...because regardless of their qualifications, they wouldn't further the progress of something they think is a very, very bad idea.

Asking AI experts whether or not they think AI research is a bad idea subjects your responses to a massive selection bias. And discounting the views of others because they don't specialize in creating the thing they think should not be created does the same. You do realize that at your core, that's your only point...not that Hawking is an idiot, or that Gates doesn't know anything about technology. It's just that they don't work in the field of AI, so therefore they must not have any inkling whatsoever as to what they're talking about.

Comment Re:"Kaspersky's relationship with the Kremlin" (Score 0) 288

Kaspersky probably is in bed in some way with the Kremlin, but it has nothing to do with the quotes you listed.

Pretty much everyone figured it was a US/Israeli combo for Stux and Flame, not just Kaspersky.

What the OP fails to mention is that Kaspersky also focuses on Equation Group, Duqu, and every other campaign that's been attributed with any degree of credibility to the US...and that they don't go near any of the things like Sofacy/APT28 that emanate from Russia.

Comment Re:Antivirus business (Score 2) 288

And do they have a successful antivirus business?

They must, because they're a fairly prominent sponsor of the Ferrari Formula 1 team.

Now, the only question I have about that is whether they know they're sponsoring Ferrari, or if they just know they're sponsoring "the only car that's completely red."

Comment Re:Markets, not people (Score 2) 615

let the markets sort themselves out.

No worries, millions can move into the "big rig hijacking" business! A semi-trailer full of something easy to sell on the street, or a tanker full of a chemical useful in making meth, or of gasoline (gasoline smuggling was the mafia's most profitable business for years) - all very valuable targets. Today that theft is kept somewhat in check by the real risk of getting shot in the process, or of wrecking the rig if you try a scene out of a Fast and Furious movie. But an AI truck with safety reflexes on a lonely stretch of road? Well, the markets will sort themselves out.

As for the legal trade, driving is a crappy job unless you own your truck, and I rather suspect the owner/operators of today will become the owners of tomorrow. Truckstops may go the way of the buggy whip, but I can't see that happening fast - like all infrastructure changes, the capital outlay is so high this will be a 20-50 year transition.

How are these not targets already? To me, it seems like it'd be a lot simpler to hijack a truck driven by a human who can accept alternate programmed instructions (also known as "threats," in this context) given in natural human language, than a computer-driven truck. You can't just mess with GPS to hijack a truck; telling a truck that it's not where it thinks it is won't work as well as some people might think, and there's the dual threat of counter-spoofing technologies (easy to build in if you want to...and if GPS spoofing gets used for hijacking, they'll want to) and GPS-interference monitoring (which is happening today as we speak). And even then, the "getting shot in the process" risk runs both ways...at least with theft of an automated truck you don't have the safety of the driver to worry about.

Comment Re:Bullshit. Pure. Simple. Bullshit. (Score 1) 152

If that's what he's saying then it doesn't need to be said, so why is he saying it?

Coming up after the break, how inaccurate rulers mean that shit won't fit.

I asked myself that same question, but I think there's an answer. If you write a version of Angry Birds that sucks, then meh...some people waste a buck each on a crappy game, give it a bad review, and life goes on. If (as actually happened) you radically change the UI on a ubiquitous application *cough*Microsoft Word*cough* then it frustrates a lot of people and wastes a lot of time, but still not necessarily the end of the world. But BI apps drive decision making at a scale that boggles the mind. Things like epidemiology (containing Ebola in West Africa, or trying to reduce HIV infection rates), cancer research (listed up above, and from personal recent experience I can tell you, they're doing some incredible fucking stuff with this), and even decisions that impact negotiations between nation-states all rely upon BI. Because of the cost of the solutions and the effort needed to implement them, no decision they support is really small; nearly all of them have massive impact and thus huge ramifications if the BI solutions drive people in the wrong direction. So while he didn't quite say it this way, I think the point is that BI apps bear a greater moral burden to be effective than most apps because of the impact (good or bad) that they have.

What I wonder about is why he didn't touch upon the other moral issue of BI: usage. One of the first big BI implementations was in Germany, for example. It was used to do number-crunching to manage and provide efficiency of scale for their overall program of concentration camps. (And no, this isn't Godwin's Law in effect...I'm not comparing anyone to Hitler, just raising an interesting historical fact.) IBM designed, built, and supported the solution...this was far, far beyond just making an app that someone else bought and did something bad with, without direct involvement by the app's creator. BI solutions aren't "buy it, install it, use it" products; they need a metric assload of support and consulting services to get them off the ground, and they are purpose-built to the customer's needs. So what are the ethics around what the customer intends to do, and where do you draw the line and say "No, I'm not going to sell you my product or services to help you do that"?

Comment Re:Bullshit. Pure. Simple. Bullshit. (Score 1) 152

But if it doesn't work, everyone can go down a bunch of rabbit holes and it takes years to figure out that they've been chasing the wrong approaches all along.

  1. Come up with a thousand approaches to a problem
  2. Crowdsource incorrect approaches
  3. Simultaneously discover 999 things that don't work
  4. Success!

  1. Come up with a thousand approaches to a problem.
  2. Try all of them at once.
  3. Discover that you just broke your statistically-valid group that has the problem into a thousand groups so small that you can no longer detect the difference between success and failure for any one group.
  4. Also realize that you just doomed 99.9% of the total test population to failure...and in my example, these are cancer patients, so you also just got yourself barred from practicing medicine. What are you, Dr. Mengele?
  5. Fail!
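The group-size problem in step 3 can be put in numbers with the standard two-sample power approximation (~80% power at the 5% significance level); the 10,000-patient total and unit variance here are assumptions for illustration, not anything from the article:

```python
import math

def min_detectable_difference(n_per_arm, sigma=1.0):
    """Smallest difference in means detectable between two arms of size
    n_per_arm (two-sample test, ~80% power at the 5% level).
    z = 1.96 (significance) + 0.84 (power)."""
    z = 1.96 + 0.84
    return z * sigma * math.sqrt(2.0 / n_per_arm)

total_patients = 10_000  # assumed trial population
for arms in (2, 10, 1000):
    n = total_patients // arms
    print(f"{arms:>4} arms -> {n:>5} patients/arm, "
          f"min detectable effect ~ {min_detectable_difference(n):.3f} sigma")
```

Splitting the same population into 1,000 arms inflates the minimum detectable effect from roughly 0.06 standard deviations to roughly 1.25, meaning only effects bigger than a full standard deviation are even visible.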
