You realize venerable is a positive word, right? It implies age, sure, but the point of it is "respected" not "decrepit" or "ready to retire".
So far it's done pretty well.
My kids are all young, and they play Mario 3D World, Mario Kart, and some of the Nintendoland games. Overall, it's a better fit than the 360 for now; the only thing they were playing on 360 much was Skylanders and Happy Action Theater (which I do miss; not a lot of games can handle a room full of 5 year olds). Some of my oldest kid's friends are starting on Minecraft, but we can do that on PC.
At some point I'm sure I'll end up getting another console - but hopefully I can skip this generation, or at least get a good discount.
A few months back I was picking a new console to replace my 360. Xbox One would have been a slam dunk if it had kept playing all the kids' games. Instead, we traded them all in and bought a Wii U.
Backwards compatibility is a huge feature for building up a user base across generations... but introducing it years after the console launch, after pretty much saying they wouldn't, after a good percentage of your users have already switched to something else, seems really uh... non-optimal.
I don't know why I'm continuing this, but if you're going to just reflexively gainsay, you might at least say why the experiments I linked to don't prove what scientists say they do. Bell's work was a long time ago, and while it's still not 1000% nailed down it's very solid. The experiments are all on that side - the only thing on the "alternative" side is vague "I don't think the universe would work that way" crap that has to be very convoluted to match up with experimental reality.
No, they weren't in that state the entire time - the results of real experiments don't correspond with that, or with "hidden variables".
It's complicated, but the Wikipedia article on http://en.wikipedia.org/wiki/B... seems like a good place to start.
Yeah - this is certainly my impression from looking at Google. I've seen a lot of quality programmers who started out there and then left as they got older (and were greatly helped in cashing in by having cool sounding Google experience on their resume).
For me, I went to their offices for a bit (they gave us a tour during Google Code Jam), and while the general idea sounds fun I quickly soured on the prospect of actually working there. I don't care about free cereal or the game console in the break room or whatever. At this point in my career I want an office (rather than a cubicle space 3 feet down from the next guy and backing onto a high-traffic hallway), and I want to go home sometimes.
Crazy, eh? It's almost like the information security director wasn't doing a good job. I'm guessing you could find a number of non-optimal things in the setup, given that the person in charge of security was probably not terribly interested in catching himself.
None of them are phrased or set the way a real story would be, and none of them have a clever or entertaining premise; a reference is not a joke. These are sad every year, but this crop seems especially pathetic.
I mean, The Onion is almost never funny, but looking at this crap makes you appreciate the tiny amount of work and thought they put in; there's usually at least an attempted joke there.
Well... I think what they'll probably do is continue testing - and they probably won't be widely deployed until they're as safe as human drivers (on average; they'll probably be safer in some ways and less safe in others). Soon after that, they'll be safer than humans (because they can share knowledge, are easy to upgrade, and once there's lots of them they'll be able to communicate in ways humans can't)... well, that is, if we keep going.
I say "if", because the more likely problem is Luddites who will want them banned after the first death, even if their overall safety record is better than humans. An enormous number of extra people will die because of how slowly we'll adopt self-driving cars. This is because people are dumb and ruled by emotional reactions: when people cause collisions (which they do thousands of times a day) it's just an accident, but the first time a self-driving car runs over a kid it's going to be pandemonium - and a good percentage of people will want to go back to the old higher death rates.
As to your argument, it's difficult to compare a computer to an ant or a person on some single scale of intelligence. An ant is very good at some things, but completely incapable at most everything else. Computers exceed humans at many tasks, while lagging behind in others. No computer today could learn how to drive well by itself, or have much conception of what driving is - but we've demonstrated that computers, designed and refined over time by people, can get very good at complex tasks. I think we're still a ways off from having safe computer drivers, but it's not in any way impossible or far distant; computers are already much closer to "humans" than "ants" on the "ability to drive" standard, and there's no reason they couldn't be better than humans at driving within the next 10-20 years.
There are probably lots of things certain people won't do with a camera pointed at them, even if it's supposedly disabled. This will probably end up saving them some money on hijinks-related car damage.
The OP suggests a weirdly specific shibboleth, and half the comments are people saying he picked the wrong one - like "public key encryption isn't the right thing to test - you should be testing knowledge of computer architecture or regexes, or how to set up a web page with a specific stack" or whatever.
For developers, we usually test whether they're good at programming. We let them choose whatever language they want (because in the end they're all mostly the same, and a good programmer will be able to use any of them) and have them work through some simple but realistic programming exercises (eg. from this data structure, figure out whether person X manages person Y). Most fail in a way that demonstrates they won't be able to do the job, or will take too long to get going at it. It also usually identifies people who have weird religious attachments to certain tools, languages or methodologies (Many times I've heard crap like "Oh, I can't type this simple answer into a regular text editor, I need XYYXYXZZYX with autocomplete on" or whatever).
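To give a sense of what that kind of exercise looks like, here's a minimal sketch of the "does person X manage person Y" question - the data structure, names, and function are all made up for illustration, not the actual interview material:

```python
# Hypothetical org chart: maps each employee to their direct manager.
# The exercise: decide whether x manages y, directly or through a chain.
def manages(reports_to, x, y):
    """Return True if x appears anywhere in y's management chain."""
    seen = set()  # guard against cycles in bad data
    current = reports_to.get(y)
    while current is not None and current not in seen:
        if current == x:
            return True
        seen.add(current)
        current = reports_to.get(current)
    return False

org = {"carol": "bob", "bob": "alice", "dave": "alice"}
print(manages(org, "alice", "carol"))  # True: alice -> bob -> carol
print(manages(org, "bob", "dave"))     # False: dave reports to alice
```

The point isn't the specific answer - it's whether the candidate can pick a sensible traversal, handle the "not found" case, and get it working in whatever language they chose.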
Anyway, back to the OP, yes I would expect that most developers should have some idea how they would encrypt a file, even if they haven't used the tools themselves personally (this isn't a core job in most development jobs I know of). But I wouldn't think they're dumb or unqualified if they don't. Why use a weak correlation like "a good developer probably knows how to encrypt stuff" when you could just test whether they can do development stuff directly?
And we do the same stuff for other jobs. When we were interviewing a graphic designer to work integrated with the programmers, we had them do some graphic design in the interview, fixing up pages we had purposefully borked in a real project. Again, most disqualified themselves pretty quickly when faced with realistic job tasks.
My guess is what they've really determined is that:
1. Better photographers take better pictures, and also are more competent technically (ie. they take sharp, well-lit pictures)
2. People put more effort into getting technicals right when they're shooting something beautiful
Taking sharper photos of dull objects will only get you so far; the correlation is due to stuff that's deeper and harder to control: the subject and the photographer's skill/effort.
Surely this wasn't intended behavior? The more we poke at reality, the more it seems like a simulation that works really well, but where you can see some artifacts once you get in close.
If you look around the web, you'll find packed forums full of people complaining about Lollipop being horrific on a Nexus 7. I have 2 Nexus 7's (bought for the kids on a long car ride), and I upgraded one of them... nobody uses that one anymore. Everything about it is slow, and even very simple apps are often unresponsive for a couple minutes after you wake the device. I'm sure someone could explain why this is my own fault somehow for having applications installed or something (that's the responses people are getting on lots of the forums), but for me the solution will probably be going through the pain of downgrading.
I used to recommend Android tablets... not so sure any more. I hate Apple and iTunes and the iOS interface, but my iPad has never screwed me nearly this hard.
I have two Nexus 7 tablets; I upgraded one and am seriously considering downgrading it back to 4.x, even though that's a bunch of fiddling. The new OS is slower, ugly (this is subjective, but their new style doesn't do anything for me), less responsive (especially just as you bring it back up from sleep), and I think lots of the UI is less useful (eg. the drag down system-y menu doesn't immediately have the stuff I want like it used to).
If you search for Android downgrade instructions, you'll find forums full of people with similar complaints who want to go back.