I'm reminded, though, of a saying: "The superior pilot uses his superior piloting judgement to avoid needing to demonstrate his superior piloting skill." The study tends to bear that out too, as the authors comment that the decline disappears when you look only at the end results (the score). And in the end, if you're better at juggling dozens of things at once and react faster than your opponent, yet you consistently lose to him, you're consistently losing to him.
That, though, is talking about the development environment itself. Yes, there the developers should be in control of the machines. DevOps, though, is about having developers actually doing operations tasks in the production environment. That's a bad thing, because what developers are good at is very, very bad in production. You don't want developers alpha-testing code and fixes where a failed test brings all of your customers to a screeching halt while the developer does a few more iterations to get his fix working right. Plus, frankly, most of Operations is boring and tedious (or at least it should be), and as a developer I have little or no interest in making it my day job. I want to solve the problem of making the deployment scripts bulletproof and go on to the next problem, not spend my day watching instances spin down and up (because if I did my job right, that's all the Ops people are going to need to do).
The difference is between developers knowing the operations side and being the operations side. Developers need to know the operations side to know how to write software that Ops can install and manage. And they should be involved in the development environment and installation in the testing environment so any gotchas can be addressed quickly and the developers know exactly what happened and can go back and make sure it doesn't happen again (especially in production). And of course when things really go pear-shaped during production deployments it never hurts to have the developers on tap to tell Ops whether there's a simple, quick workaround that'll salvage the deployment or whether it's time to roll back until they can fix the problem. But those are a far cry from developers doing Operations support and administration work on a daily basis. Frankly they're two radically different skill-sets. They're related, sure, but having a developer doing Ops as a regular job is like having Kelly Johnson keeping a fleet of Piper Cubs in shape. Sure he can do it, and technically he can probably do it better than a regular mechanic, but in a month or so he'd be bored to tears and walking out to go work somewhere where they'd actually let him do his job designing and building planes like the SR-71.
This doesn't really change it, because think how a proprietary SSL library would've handled this. The vulnerability was found specifically because the source code was available and someone other than the owners went looking for problems. When was the last time you saw the source code for a piece of proprietary software available for anyone to look at? If it's available at all, it's under strict license terms that would've prevented anyone finding this vulnerability from saying anything to anyone about it. And the vendor, not wanting the PR problem that admitting to a problem would cause, would do exactly what they've done with so many other vulnerabilities in the past: sit on it and do nothing about it, to avoid giving anyone a hint that there's a problem. We'd still have been vulnerable, but we wouldn't know about it and wouldn't know we needed to do something to protect ourselves. Is that really more secure?
And if proprietary software is written so well that such vulnerabilities aren't as common, then why is it that the largest number of vulnerabilities are reported in proprietary software? And that's despite more people being able to look for vulnerabilities in open-source software. In fact, being a professional software developer and knowing people working in the field, I'm fairly sure the average piece of proprietary software is of worse quality than the average open-source project. It's the inevitable effect of hiring the lowest-cost developers you can find, combined with treating the fixing of bugs as a cost and prioritizing adding new features over fixing problems that nobody's complained about yet. And with nobody outside the company ever seeing the code, you're not going to be embarrassed or mocked for just how absolutely horrid that code is. The Daily WTF is based on reality, remember, and from personal experience I can tell you they aren't exaggerating. If anything, like Dilbert they're toning it down until it's semi-believable.
Just because the time limit has been raised, that doesn't incur a liability for the debt on the part of anyone who isn't already liable for it. And generally children aren't liable for their parents' debts unless their signature's on the contract. The parents' estate might be liable, but good luck collecting from that once the estate's finalized and closed out. I suspect this is what any competent attorney will raise as an issue if the victims get one: "Regardless of anything else, this is not my client's debt, and the debt being collectible doesn't on its own make my client liable for it."
So, Zynga's raking in the bucks, then?
True, but I've noticed that the F2P games that use that model are now trying to entice players back into monthly subscriptions. I think it's inevitable: if all you can buy is cosmetic, there's no real incentive to spend much money at all, and the company starts wondering where all the cash they were supposed to be getting is. I'm of the opinion that the whole "free to play, and we'll make our money off the cash shop" is right up there with "free site, and we'll make our money off the advertising" as a business model.
The attitude stems from something more basic. In conventional games, even bad ones, once you have the game you have everything and how well you do is then up to your own skill and ability. In many free-to-play games, though, the game itself is just the hook. Once you're in, you find that you can't, for all practical purposes, go beyond a certain point without spending money and how much further beyond that you can go depends on how much you can afford to spend. It's why the derisive term is "pay-to-win". In large part how well you do in that type of game doesn't depend on your skill or ability, it depends on how deep your wallet is. And a lot of gamers are offended by the idea that a skilled, knowledgeable player who happens to not be that well-off will by design be less successful in the game than an unskilled, not-very-good player who happens to have well-off parents who'll toss him a couple of hundred dollars a week to fund his entertainment.
How do you figure?
Well, the Federal courts ruled that Proposition 8 violated the Due Process and Equal Protection Clauses of the US Constitution. That sounds like "unconstitutional" to me. The 9th Circuit panel affirmed that ruling, and the en banc appeal was denied. The US Supreme Court heard the appeal and dismissed it on standing, and ordered the 9th Circuit to dismiss the appeal to them as well (which Prop 8 supporters should consider a good thing because had the SC left the 9th Circuit's affirmation in place it would've created binding precedent for the entire 9th Circuit, but the dismissal order reverts it back to a district court decision).
So, question: what does a company do with a senior executive who's harming the company because large numbers of valuable employees and executives don't want to work with him, or at a company where he's in charge, because of his political views? Nothing in California law requires individuals to ignore political views when deciding whether to associate with someone. And it seems to me that deciding to let someone go because he's causing too many other employees to leave is perfectly allowable. So what's a company to do in such a case?
As noted, it wasn't Linus that started the blow-up. It got to this point because Sievers was ignoring more professional, less blunt instructions about it. And yes, I'd rather deal with Linus. Because if I'd pulled the kind of crap Sievers did, I'd've expected to have my manager drop my final paycheck on my desk and tell me I had 5 minutes to pack my things and the nice gentlemen from Security would be escorting me out of the building, and no, I wouldn't be receiving a separation package because I was being terminated for gross incompetence. I'd rather deal with a manager who'll chew an incompetent developer out for being incompetent, as opposed to one who'll just send off iteration after iteration of "professional" memos about the developer having a problem and never actually do anything about the problem. At least with Linus I could be pretty sure I knew exactly where I stood with him.
Then again, I've written code that did exactly the same thing Sievers' code did. But I did what Sievers should have done in the first place: hung it off its own specific enable flag so it couldn't be turned on inadvertently, because I knew it was going to bring the system to its knees and that was something that should never be able to happen as a side-effect of something else.
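A minimal sketch of that gating, in Python rather than kernel C, assuming kernel-style `key[=value]` boot parameters; the `mytool.dump_everything` flag name is made up for illustration:

```python
def parse_opts(argv):
    """Collect simple key[=value] options, mimicking kernel-style boot parameters."""
    opts = {}
    for arg in argv:
        key, _, value = arg.partition("=")
        opts[key] = value or "1"
    return opts

def boot(argv):
    opts = parse_opts(argv)
    messages = []
    if "debug" in opts:
        messages.append("normal debug logging enabled")
    # The expensive, system-hostile dump hangs off its *own* flag, so the
    # long-established generic "debug" option can never trigger it by accident.
    if "mytool.dump_everything" in opts:
        messages.append("full state dump enabled (may overwhelm the log)")
    return messages
```

The point is simply that the generic flag and the dangerous behavior never share a switch; turning on ordinary debugging can't drown the system as a side-effect.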
Did you read the thread? This wasn't just Linus complaining; it was 2 other kernel developers that originated the complaint. And this wasn't a minor thing: Sievers introduced a bug that caused the system to fail to boot, by taking a long-established kernel boot parameter ("debug") and having it trigger a data dump large enough to cause the boot process to fail, and then refused to fix it on the grounds that the kernel didn't own the "debug" parameter (http://lkml.iu.edu//hypermail/linux/kernel/1404.0/01327.html).
If I worked for you and you canned Linus over this, the very next day I'd be shopping my resume around and spending my off time perusing every job-lead source I could think of, because you're the kind of manager who causes projects to go down in flames and I'd much rather get out while I can do it on my own terms.
Whenever I see one of those overblown handles that seem designed to intimidate and impress people, my first thought is that the player isn't good enough to do it on his own merits. I prefer names along the lines of how Iain M. Banks' Culture ships named themselves. To borrow a comment: "Let's see you explain to your admiralty that your fleet was wiped out by the Bureaucracy and the Red Tape, and when you tried to disengage you found yourself trapped by the Complete Lack of Morale and the High Command's Total Incompetency."
There isn't a purely technical solution to this problem. The only solution is legal: first define a standard do-not-track header for HTTP (done), then impose a legal penalty on anyone who fails to honor it. And by all that's holy, learn from the errors of the Do Not Call list. The ability for individuals to go directly to small-claims court to recover was a good thing, but there are a couple of corrections that need to be made. First, have the law make the penalties mandatory. Don't give the judge the option of not imposing them just because he feels it isn't reasonable to demand that much from the advertiser. He should have the discretion to decide whether the DNT header was sent and whether the defendant tracked the user, but if the header was sent and the user was tracked, then it's an abuse of discretion to not impose the stated penalties. Second, dump the exceptions for political and charitable stuff and surveys and the like. Any exceptions that are made should be limited to the site being visited only; even something as benign as "technically necessary" shouldn't apply to third-party sites.
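Honoring the header itself is the easy part, which is why the hard part is legal. Per the W3C Tracking Preference Expression draft, `DNT: 1` signals an opt-out and `DNT: 0` signals consent. A minimal server-side sketch (the `tracking_allowed` helper is hypothetical, taking a plain dict of request headers):

```python
def tracking_allowed(headers):
    """Return False when the request carries DNT: 1 (the user opted out).

    HTTP header names are case-insensitive, so normalize before looking up.
    Absence of the header, or DNT: 0, means no opt-out was expressed.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("dnt") != "1"
```

A check this small is exactly why "we couldn't comply" would never fly as a defense; the only open question is whether there's a penalty for not making it.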
Non-EBS-backed instances aren't good for test systems. To run them you need to have an AMI built with everything you need, and you need to keep that AMI updated with current test cases and so on. That's more work than just maintaining an EBS-backed instance would be. Especially considering that you're going to need the test instance to persist for anywhere from several days to several weeks while testing is in progress. We aren't talking unit tests, remember; we're talking about a complete release test of the entire system end-to-end. Even for unit tests, you've got too many test cases that need to be maintained so they can be used every run, plus all the special test cases developers need while diagnosing and debugging issues. Having all of that evaporate when the instance is shut down defeats the whole purpose of testing: you're losing everything you'll need for the next test iteration. Unless of course you go to the trouble of taking everything in the instance and transferring it back to the AMI so that it'll be there the next time the instance is spun up, in which case why not just leave the instance on EBS and be done with it?