Crazy, eh? It's almost like the information security director wasn't doing a good job. I'm guessing you could find a number of non-optimal things in the setup, given that the person in charge of security was probably not terribly interested in catching himself.
None of them are phrased or set the way a real story would be, and none of them have a clever or entertaining premise; a reference is not a joke. These are sad every year, but this crop seems especially pathetic.
I mean, The Onion is almost never funny, but looking at this crap makes you appreciate the tiny amount of work and thought they put in; there's usually at least an attempted joke there.
Well... I think what they'll probably do is continue testing - and they probably won't be widely deployed until they're as safe as human drivers (on average; they'll probably be safer in some ways and less safe in others). Soon after that, they'll be safer than humans (because they can share knowledge, are easy to upgrade, and once there's lots of them they'll be able to communicate in ways humans can't)... well, that is, if we keep going.
I say "if", because the more likely problem is Luddites who will want them banned after the first death, even if their overall safety record is better than humans. An enormous number of extra people will die because of how slowly we'll adopt self-driving cars. This is because people are dumb and ruled by emotional reactions: when people cause collisions (which they do thousands of times a day) it's just an accident, but the first time a self-driving car runs over a kid it's going to be pandemonium - and a good percentage of people will want to go back to the old higher death rates.
As to your argument, it's difficult to compare a computer to an ant or a person on some single scale of intelligence. An ant is very good at some things, but completely incapable at most everything else. Computers exceed humans at many tasks, while lagging behind in others. No computer today could learn how to drive well by itself, or have much conception of what driving is - but we've demonstrated that computers, designed and refined over time by people, can get very good at complex tasks. I think we're still a ways off from having safe computer drivers, but it's not in any way impossible or far distant; computers are already much closer to "humans" than "ants" on the "ability to drive" standard, and there's no reason they couldn't be better than humans at driving within the next 10-20 years.
There's probably lots of things certain people won't do with a camera pointed at them, even if it's supposedly disabled. This will probably end up saving them some money on hijinks-related car damage.
The OP suggests a weirdly specific shibboleth, and half the comments are people saying he picked the wrong one - like "public key encryption isn't the right thing to test - you should be testing knowledge of computer architecture, or regexes, or how to set up a web page with a specific stack" or whatever.
For developers, we usually test whether they're good at programming. We let them choose whatever language they want (because in the end they're all mostly the same, and a good programmer will be able to use any of them) and have them work through some simple but realistic programming exercises (e.g. from this data structure, figure out whether person X manages person Y). Most fail in a way that demonstrates they won't be able to do the job, or will take too long to get going at it. It also usually identifies people who have weird religious attachments to certain tools, languages or methodologies (many times I've heard crap like "Oh, I can't type this simple answer into a regular text editor, I need XYYXYXZZYX with autocomplete on" or whatever).
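To give a sense of scale, a toy version of the "does X manage Y" exercise might look like this - the data structure and names here are hypothetical, not the actual interview material:

```python
def manages(manager_of, x, y):
    """Return True if x manages y, directly or through a chain of managers.

    manager_of maps each employee to their direct manager;
    the org chart is assumed to be a tree (no cycles).
    """
    boss = manager_of.get(y)
    while boss is not None:
        if boss == x:
            return True
        boss = manager_of.get(boss)  # walk up the chain
    return False

# Hypothetical org chart: employee -> direct manager
org = {
    "carol": "alice",
    "dave": "carol",
    "erin": "bob",
}

print(manages(org, "alice", "dave"))  # True: alice -> carol -> dave
print(manages(org, "bob", "dave"))    # False: dave's chain never reaches bob
```

The point of an exercise like this isn't trick logic - it's whether the candidate can read a simple structure and walk it without getting lost.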
Anyway, back to the OP, yes I would expect that most developers should have some idea how they would encrypt a file, even if they haven't used the tools themselves personally (this isn't a core job in most development jobs I know of). But I wouldn't think they're dumb or unqualified if they don't. Why use a weak correlation like "a good developer probably knows how to encrypt stuff" when you could just test whether they can do development stuff directly?
And we do the same stuff for other jobs. When we were interviewing a graphic designer to work integrated with the programmers, we had them do some graphic design in the interview, fixing up pages we had purposefully borked in a real project. Again, most disqualified themselves pretty quickly when faced with realistic job tasks.
My guess is what they've really determined is that:
1. Better photographers take better pictures, and are also more competent technically (i.e. they take sharp, well-lit pictures)
2. People put more effort into getting technicals right when they're shooting something beautiful
Taking sharper photos of dull objects will only get you so far; the correlation is due to stuff that's deeper and harder to control: the subject and the photographer's skill/effort.
Surely this wasn't intended behavior? The more we poke at reality, the more it seems like a simulation that works really well, but where you can see some artifacts once you get in close.
If you look around the web, you'll find packed forums full of people complaining about Lollipop being horrific on a Nexus 7. I have two Nexus 7s (bought for the kids on a long car ride), and I upgraded one of them... nobody uses that one anymore. Everything about it is slow, and even very simple apps are often unresponsive for a couple of minutes after you wake the device. I'm sure someone could explain why this is my own fault somehow for having applications installed or something (that's the response people are getting on lots of the forums), but for me the solution will probably be going through the pain of downgrading.
I used to recommend Android tablets... not so sure any more. I hate Apple and iTunes and the iOS interface, but my iPad has never screwed me nearly this hard.
I have two Nexus 7 tablets; I upgraded one and am seriously considering downgrading it back to 4.x, even though that's a bunch of fiddling. The new OS is slower, ugly (this is subjective, but their new style doesn't do anything for me), less responsive (especially just as you bring it back up from sleep), and I think lots of the UI is less useful (e.g. the drag-down system menu doesn't immediately have the stuff I want like it used to).
If you search for Android downgrade instructions, you'll find forums full of people with similar complaints who want to go back.
If you could see yourself from the outside, you'd realize how perfectly - amazingly, beautifully - you validated his comment.
I agree with the idea, but it's not a "simple fact"; coming to that conclusion requires a long argument involving a lot of scientific reasoning, experience, the particulars of our status quo of technology, population, and environmental inputs, and a certain (if reasonable) valuation of the potential trade-offs.
Proper environmentalism isn't "simple facts" - because it's not a religion of Earth purity. It's about legitimately complicated choices and consequences, and evaluating those choices over a longer term.
Well, there are a few reasons - but I think the biggest help we'll have is that I expect the change (based on current progress) to be gradual. That is to say, we'll have time to adapt and build safeguards as the technology improves, rather than have to deal with it all at once. In a lot of ways, the change is well in progress already. Someone who wants to do something bad has, through the Internet, far more resources for knowledge and contacts than they would have had in 1985 (or 1885). The information age has already created security problems, but we've adapted, and we leverage the same technologies to keep us safe.
Another reason, a bit more "out there", is that I believe this sort of technology will likely be able to solve a lot of humanity's problems before it gets to the "supervillain's assistant" or "self-interested omniscient being" sort of level. And people who generally have their needs met (or perhaps "overmet"... and may also be watched 24/7... yeah...) are less likely to cause problems with their supercomputer access. People may find that they want to play just one more level before they blow something up. And maybe finish their hyper-Doritos.
While they sound kind of hokey, I think there are some credible threats in this "self aware computer gets squirrely" vein - depending on how the AI is developed. If we build an AI based on, say, scanning a brain and recreating it - however that might work - then we end up with a very predictably unpredictable agent. This emergence could be a very "singularity" type event where we go from fairly dumb AI to very smart, world changing AI in a short time. From there on out, it would get hard to predict very fast; a weird mix of scary/great/over that humans may not keep up with.
But if strong AI grows out of, say, a Watson type "oracle" program that just gets smarter over years and years (and this style of development seems more likely), then the kinds of problems I'd expect would be much more comprehensible. Still potentially scary, but likely more manageable.
...and some of those people would want to do bad things. A bad person would be more capable of doing harm when aided by an AI doing planning, co-ordination, or execution. There's no guarantee that AIs on the "other side" would be able to mitigate the new threats (the two things aren't the same difficulty).
I think there are lots of risks associated with the rise of AI (though that tech doesn't seem to be coming all that fast at the moment). That said, there are risks involved with all sorts of new tech. That doesn't mean this is alarmist nonsense; it's worth discussing potential ways to mitigate those risks - but there's also good reason to believe we'll be able to manage them as we've managed changes in the past.
Yes, sorry - it depends on what you count as consumption. I think that taxing consumption is the best start for building a practically effective progressive tax, but I didn't really get that out in the comment above, my bad.