Comment: Re:Funny, that spin... (Score 1) 406

by Shoten (#49768793) Attached to: What AI Experts Think About the Existential Risk of AI

Question: What role do people who think that AI research is dangerous hold in the field of AI research?

Answer: None...because regardless of their qualifications, they wouldn't further the progress of something they think is a very, very bad idea.

Asking AI experts whether or not they think AI research is a bad idea subjects your responses to a massive selection bias.

Yes. Nobody who worked in the Manhattan Project had any reservations whatsoever about building the atomic bomb, right?

Experts work in fields they're not 100% comfortable with all the time. The actual physicists who worked on the bomb understood exactly what the dangers were. The people looking at it from the outside are the ones coming up with the bogus dangers. You hear things like, "the scientists in the Manhattan Project were so irresponsible they thought the first bomb test could ignite the atmosphere, but went ahead with it anyway." No, the scientists working on it thought of that possibility, performed calculations that definitively proved it wasn't anywhere near a possibility, and then moved on with it. People outside the field are the ones who go, "The LHC could create a black hole that will destroy us all!" The scientists working on it know that the Earth is regularly struck by cosmic rays more powerful than anything the LHC can produce, so there's no danger.
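For anyone curious about the numbers behind that cosmic-ray comparison, here's a rough back-of-the-envelope sketch (my own illustration, with approximate figures; the apples-to-apples comparison is the center-of-mass energy of a cosmic-ray proton hitting an atmospheric nucleon versus the LHC's head-on collision energy):

```python
import math

# Illustrative, approximate figures.
E_cosmic_eV = 3.0e20        # "Oh-My-God" cosmic ray (1991), lab-frame energy
m_proton_eV = 0.938e9       # proton rest energy, ~938 MeV

# Fixed-target collision (cosmic-ray proton on an atmospheric nucleon):
# the usable center-of-mass energy is sqrt(2 * E * m c^2).
sqrt_s_cosmic = math.sqrt(2 * E_cosmic_eV * m_proton_eV)   # ~7.5e14 eV

sqrt_s_lhc = 13e12          # LHC design collision energy, 13 TeV

print(f"cosmic ray: ~{sqrt_s_cosmic / 1e12:.0f} TeV center-of-mass")
print(f"LHC:         {sqrt_s_lhc / 1e12:.0f} TeV center-of-mass")
print(f"ratio:      ~{sqrt_s_cosmic / sqrt_s_lhc:.0f}x")
```

Even with the fixed-target penalty, nature's record collision comes out dozens of times more energetic than anything the LHC can do, which is the point being made above.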

It's just that they don't work in the field of AI, so therefore they must not have any inkling whatsoever as to what they're talking about.

Which is a 100% true statement. They're very smart people, but they don't know what they're talking about with regard to AI research, and are coming up with bogus threats that most AI experts agree aren't actually a possibility.

The topic of the Manhattan Project is a red herring. Those people were choosing between two evils, because the Project was about building a weapon to stop a genocidal maniac from taking over the planet. By the time they were done, D-Day and V-E Day had happened, true, but those victories were far from foregone conclusions when the scientists started.

Nobody's building AI to try and prevent something on the same level as world domination by Hitler, sorry.

Comment: Re:Alan and Alvin (Score 1) 106

by Shoten (#49768665) Attached to: No, Your SSD Won't Quickly Lose Data While Powered Down

based on a presentation from Alvin Cox, a Seagate engineer[...]Alan Cox said, "I wouldn't worry"

Can we get these two gentlemen to agree on a statement of risk? Or maybe just a little, you know, editing from the Slashdot editors?

I'm wondering if the "editing" from the Slashdot editors wasn't the problem in the first place. How many Slashdot summaries wildly overstate/oversimplify/remove from proper context the real meat of a story? How many Slashdot comments essentially say, "RTFA...you'll see that [it only applies to this situation|they mean this instead of that|this was done on purpose under wildly crazy conditions to see if it could ever be true at all|this person has no credibility|this is really advertising for someone's product]"?

Comment: Re:Funny, that spin... (Score 5, Insightful) 406

by Shoten (#49766301) Attached to: What AI Experts Think About the Existential Risk of AI

In light of the fact that Stephen Hawking, Bill Gates and Elon Musk are not even remotely experts in A.I. your opinion is fairly odd.

Question: What role do people who think that AI research is dangerous hold in the field of AI research?

Answer: None...because regardless of their qualifications, they wouldn't further the progress of something they think is a very, very bad idea.

Asking AI experts whether or not they think AI research is a bad idea subjects your responses to a massive selection bias. And discounting the views of others because they don't specialize in creating the thing they think should not be created does the same. You do realize that at its core, that's your only point...not that Hawking is an idiot, or that Gates doesn't know anything about technology. It's just that they don't work in the field of AI, so therefore they must not have any inkling whatsoever as to what they're talking about.

Comment: Re:"Kaspersky's relationship with the Kremlin" (Score 0) 288

Kaspersky probably is in bed in some way with the Kremlin, but it has nothing to do with the quotes you listed.

Pretty much everyone figured it was a US/Israeli combo for Stux and Flame, not just Kaspersky.

What the OP fails to mention is that Kaspersky also focuses on Equation Group, Duqu, and every other campaign that's been attributed with any degree of credibility to the US. And that they don't go near any of the things like Sofacy/APT28 that emanate from Russia.

Comment: Re:Antivirus business (Score 2) 288

And do they have a successful antivirus business?

They must, because they're a fairly prominent sponsor of the Ferrari Formula 1 team.

Now, the only question I have about that is whether they know they're sponsoring Ferrari, or if they just know they're sponsoring "the only car that's completely red."

Comment: Re:Markets, not people (Score 2) 615

by Shoten (#49706889) Attached to: The Economic Consequences of Self-Driving Trucks

let the markets sort themselves out.

No worries, millions can move into the "big rig hijacking" business! A semi-trailer full of something easy to sell on the street, or a tanker full of a chemical useful in making meth, or of gasoline (gasoline smuggling was the mafia's most profitable business for years) - all very valuable targets. Today that theft is kept somewhat in check by the real risk of getting shot in the process, or of wrecking the rig if you try a scene out of a Fast and Furious movie. But an AI truck with safety reflexes on a lonely stretch of road? Well, the markets will sort themselves out.

As for the legal trade, driving is a crappy job unless you own your truck, and I rather suspect the owner/operators of today will become the owners of tomorrow. Truckstops may go the way of the buggy whip, but I can't see that happening fast - like all infrastructure changes, the capital outlay is so high this will be a 20-50 year transition.

How are these not targets already? To me, it seems like it'd be a lot simpler to hijack a truck driven by a human who can accept alternate programmed instructions (also known as "threats," in this context) given in natural human language, than a computer-driven truck. You can't just mess with GPS to hijack a truck; telling a truck that it's not where it thinks it is won't work as well as some people might think, and there's the dual threat of counter-spoofing technologies (easy to build in if you want to...and if GPS spoofing gets used for hijacking, they'll want to) and GPS-interference monitoring (which is happening today as we speak). And even then, the "getting shot in the process" risk runs both ways...at least with theft of an automated truck you don't have the safety of the driver to worry about.
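As a toy illustration of what a counter-spoofing check might look like (this is my own sketch, not any vendor's actual implementation; a real system would also cross-check odometry, inertial sensors, and GPS signal-quality metrics), you can reject position fixes that imply a physically impossible speed:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_spoofed(prev_fix, new_fix, max_speed_kmh=120.0):
    """Flag a GPS fix whose implied speed exceeds what the truck can do.

    prev_fix / new_fix: (latitude, longitude, unix_time_seconds).
    """
    lat1, lon1, t1 = prev_fix
    lat2, lon2, t2 = new_fix
    hours = max(t2 - t1, 1) / 3600.0
    implied_speed = haversine_km(lat1, lon1, lat2, lon2) / hours
    return implied_speed > max_speed_kmh

# A fix that "teleports" the truck hundreds of kilometres in five minutes:
print(looks_spoofed((39.0, -77.0, 0), (42.0, -80.0, 300)))   # True
```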

Comment: Re:Bullshit. Pure. Simple. Bullshit. (Score 1) 152

by Shoten (#49689281) Attached to: How Responsible Are App Developers For Decisions Their Users Make?

If that's what he's saying then it doesn't need to be said, so why is he saying it?

Coming up after the break, how inaccurate rulers mean that shit won't fit.

I asked myself that same question, but I think there's an answer. If you write a version of Angry Birds that sucks, then meh...some people waste a buck each on a crappy game, give it a bad review, and life goes on. If (as actually happened) you radically change the UI on a ubiquitous application *cough*Microsoft Word*cough* then it frustrates a lot of people and wastes a lot of time, but still not necessarily the end of the world. But BI apps drive decision making at a scale that boggles the mind. Things like epidemiology (containing Ebola in West Africa, or trying to reduce HIV infection rates), cancer research (listed up above, and from personal recent experience I can tell you, they're doing some incredible fucking stuff with this), and even decisions that impact negotiations between nation-states all rely upon BI. Because of the cost of the solutions and the effort needed to implement them, no decision they support is really small; nearly all of them have massive impact and thus huge ramifications if the BI solutions drive people in the wrong direction. So while he didn't quite say it this way, I think the point is that BI apps bear a greater moral burden to be effective than most apps because of the impact (good or bad) that they have.

What I wonder about is why he didn't touch upon the other moral issue of BI: usage. One of the first big BI implementations was in Germany, for example. It was used to do number-crunching to manage and provide efficiency of scale for their overall program of concentration camps. (And no, this isn't Godwin's Law in effect...I'm not comparing anyone to Hitler, just raising an interesting historical fact.) IBM designed, built, and supported the solution...this was far, far beyond just making an app that someone else bought and did something bad with, without direct involvement by the app's creator. BI solutions aren't "buy it, install it, use it" products; they need a metric assload of support and consulting services to get them off the ground, and they are purpose-built to the customer's needs. So what are the ethics around what the customer intends to do, and where do you draw the line and say "No, I'm not going to sell you my product or services to help you do that"?

Comment: Re:Bullshit. Pure. Simple. Bullshit. (Score 1) 152

by Shoten (#49689169) Attached to: How Responsible Are App Developers For Decisions Their Users Make?

But if it doesn't work, everyone can go down a bunch of rabbit holes and it takes years to figure out that they've been chasing the wrong approaches all along.

  1. Come up with a thousand approaches to a problem
  2. Crowdsource incorrect approaches
  3. Simultaneously discover 999 things that don't work
  4. Success!

  1. Come up with a thousand approaches to a problem.
  2. Try all of them at once.
  3. Discover that you just broke your statistically-valid group that has the problem into a thousand groups so small that you can no longer detect the difference between success and failure for any one group (see the power-calculation sketch below).
  4. Also realize that you just doomed 99.9% of the total test population to failure...and in my example, these are cancer patients, so you also just got yourself barred from practicing medicine. What are you, Dr. Mengele?
  5. Fail!
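To make the "groups too small to detect anything" point concrete, here's a rough power calculation (my own sketch with made-up numbers, not anything from the article): a two-proportion z-test comparing a treatment arm against control, first with a decently sized arm, then with the same population split a thousand ways:

```python
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p_control, p_treatment, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    p_bar = (p_control + p_treatment) / 2
    se = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    z_crit = norm.ppf(1 - alpha / 2)
    z_effect = abs(p_treatment - p_control) / se
    return norm.cdf(z_effect - z_crit)

# Hypothetical trial: control responds at 30%, a genuinely better treatment at 40%.
print(power_two_proportions(0.30, 0.40, n_per_arm=5000))   # ~1.00: trivially detectable
print(power_two_proportions(0.30, 0.40, n_per_arm=10))     # ~0.07: statistically invisible
```

With ten patients per arm you'd miss a real 10-point improvement more than 90% of the time, which is exactly the "can no longer detect the difference between success and failure" problem.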

Comment: Re:Bullshit. Pure. Simple. Bullshit. (Score 5, Informative) 152

by Shoten (#49682381) Attached to: How Responsible Are App Developers For Decisions Their Users Make?

"In a blog post, Rado Kotorov, Chief Innovation Officer at Information Builders asserts that the creators of enterprise apps implicitly assume some of the responsibility for other people's decision making. He says it's not just developers, but anyone who is involved, from defining the concept, to requirements gathering, to final implementation. Thus, the creators of the app have an ethical obligation to ensure that people can reach the right conclusions from the facts and the way they are presented in the app."

I call bullshit. This is simply another step down a slippery slope that removes more personal responsibility.

This is the very definition of the nanny State.

RTFA.

If you look at the article, you'll see just how blatantly Slashdot has misled us with their summary of it. The article isn't about "apps" or even just "enterprise apps." It's specifically and only about business intelligence (BI) applications, which are intended to lead their users to make decisions and conclusions. What he's saying, fundamentally, is that "as the makers of business intelligence applications, we have a responsibility to actually not make apps that suck, since the conclusions our users will come to have major ramifications." I agree with him, in that context.

Take it and apply it to a specific situation like cancer research, and the difference between meeting his ethical standard and failing it is the difference between saving lives or losing them. And this is actually a real example; recent cancer research has largely focused upon big-data mining and BI around specific characteristics of various forms of cancer, and matching up with an incredible degree of precision which combinations of treatments work best on certain kinds of cancer. They go so far as to actually examine the genome of tumors...it's fucking cool. This is the kind of use that a BI system can fulfill, if it works. But if it doesn't work, everyone can go down a bunch of rabbit holes and it takes years to figure out that they've been chasing the wrong approaches all along.

Comment: OSGP? Never heard of it (Score 1) 111

by Shoten (#49653683) Attached to: Poor, Homegrown Encryption Threatens Open Smart Grid Protocol

I work for a smart grid consulting company...before that, a major (nearly a century old, and widely-recognized) civil engineering firm, again in the power industry. Before that I was the official smart grid security spokesman for a large IT company, and briefed Gartner, Ponemon, Forrester, etc. I've been deep into the guts of generation, T&D, energy marketing, and smart metering infrastructure at dozens of power companies over the past decade.

I've never seen OSGP in the field, not once. The OP talks about "millions of smart meters" using it, but damned if I can figure out which meters those are. Landis+Gyr? Nope. GE? Uh uh. Itron? Hell no; they have their own end-to-end architecture (and it works really, really well, which is why Itron is now the 800-pound gorilla of the smart metering world). Sensus? Nope, they bought FlexNet from Motorola and use that, and it has its own (decent) encryption. Elster? Definitely not...I've seen Elster's architecture up super-close, and this protocol is nowhere to be found.

In fact, if you look up OSGP, you'll see all kinds of announcements from the alliance behind it, but not a lot of actual success. Sounds to me like someone found vulnerabilities in an also-ran protocol, but the security issues aren't the only thing wrong with it...which is why nobody seems to actually USE it.

Comment: Re:Why bother to use the word "traditional"? (Score 2) 65

by Shoten (#49645581) Attached to: Top Cyber Attack Vectors For Critical SAP Systems

SAP systems are not protected from cyber threats by traditional security approaches

That implies that there is some sort of protection, while leaving out the word "traditional" would imply the more correct situation: that they are not protected at all.
That's not necessarily a bad thing, so long as the practice is to secure their stuff with third-party approaches afterwards (e.g. needing to get on a secured VPN before you can communicate with the software).

Onapsis' bread and butter is a non-traditional security product meant specifically to secure...wait for it...SAP. So, that gives you an idea of what the anonymous OP is up to.

Comment: Re:Poster sounds sympathetic, but sounds like thre (Score 1) 254

by Shoten (#49610903) Attached to: VA Tech Student Arrested For Posting Perceived Threat Via Yik Yak

It may seem that way to you. I asked real students on campus, who had no idea what 4/16 was. Yet a student has lost their educational opportunities here, and likely had their life ruined.

I'm betting a lot of people don't know the date of the Boston Marathon bombing. But threats are meant for people who DO recognize the significance, and the people who watch for threats do know what these dates are. The key point here is: did what he posted actually look like it might be a threat? I say yes, and the fact that the people you asked didn't know the date doesn't have any effect on the situation.

"Be *excellent* to each other." -- Bill, or Ted, in Bill and Ted's Excellent Adventure

Working...