So it seems even tech giant Google gets it wrong with its own certs. Let's hope that Let's Encrypt will one day make these the problems of yesterday.
Well, the web mailer wasn't affected because the site uses different certificates, and neither were Google's other gmail clients, e.g. the Gmail app on Android, because those all use the Gmail API (again, with different certificates) rather than SMTP. So if you're paranoid enough, you may suspect malice rather than sloppiness.
The certificate was used to issue Gmail's certificate for SMTP, and its expiration at 11:55am EDT caused many e-mail clients to stop receiving Gmail messages.
If the certificate was "for SMTP", the problem would have affected not just end users, but also peers, i.e. other e-mail providers who wanted to deliver mail to @gmail.com addresses. Or at least they may have automatically fallen back to unencrypted SMTP delivery (which was pretty much the default before Snowden, but anyway).
Oh come on. The hex revision numbers are there because the programmer was too stupid or too lazy to figure out something people could actually use. Typical programmer attitude---code for other nerds, not normal people.
No. git is a distributed version control system, which means, among other things, that operations like "commit" and "merge" that create new commits must operate purely locally, without synchronizing with any remote copy of the repository. Much later, when the user decides to push those commits to a remote copy, and other users push their new commits to the same remote copy, the remote copy must be able to tell which of the incoming commits it already has, which ones are actually new to it, which incoming commits from different source repositories represent the same commit (for example because those two source repositories previously pushed to each other) and thus must be collapsed into one new local commit, and which ones are different commits and thus must be imported as separate local commits. The fundamental problem the DVCS has to solve here is merging/synchronizing multiple directed acyclic graphs coming from different remote sources, all of which can independently add new nodes to their local version of the graph at any time, without communicating with any of the other copies or any sort of "central repository".
This means that you have to have some sort of globally unique identifier for the nodes of the graph, and those identifiers must be creatable locally, using only information that's available in the local copy of the graph, and then still be unique across all copies of the graph that might exist elsewhere. That's what the SHA1 checksums achieve. They also have the nice feature that they're not random numbers, but actual checksums over the entire contents of the graph up to and including that node. But the fundamental issue is that you can't have human-readable commit identifiers like "1.2" or "1.4.1" because there is no central authority that could generate those names and guarantee that they're unique across all copies. Mercurial uses the same solution (they have a linearly increasing "commit number" on top of that, but those numbers are only valid locally, i.e. they might be different in each copy of the graph).
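The "checksum over the entire contents of the graph" property follows directly from how the IDs are computed. A minimal Python sketch (the commit body and author below are made up for illustration, but the `"type length\0content"` hashing scheme is the one git actually uses for its objects):

```python
import hashlib

def object_id(obj_type: str, body: bytes) -> str:
    """Hash an object the way git does: a small header plus the content."""
    header = f"{obj_type} {len(body)}\0".encode()
    return hashlib.sha1(header + body).hexdigest()

# A commit's content includes its parent ID(s), so the resulting ID is
# effectively a checksum over the whole history reachable from it --
# and it can be computed locally, with no central authority.
commit = (b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"
          b"parent 0123456789abcdef0123456789abcdef01234567\n"
          b"author A. Hacker <a@example.com> 1431000000 +0200\n"
          b"committer A. Hacker <a@example.com> 1431000000 +0200\n"
          b"\n"
          b"Fix the frobnicator\n")
print(object_id("commit", commit))  # a 40-character hex id
```

Change any byte anywhere in the reachable history and the parent IDs change, which changes this commit's ID too, so two copies of the graph agree on an ID exactly when they agree on the history behind it.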
I can believe this. But what if, instead of falling against the switch, the copilot, recognizing that he was about to pass out (e.g. recognizing symptoms of an impending stroke), intentionally attempted to move the switch to the "unlocked" position (to make it easier for the captain to get into the cockpit quickly)? Due to a combination of confusion, physical incapacitation, and unfamiliarity with a probably rarely-used control, he could conceivably have turned the switch to the wrong position even while he was attempting to do what he thought would be the best possible action.
The switch is designed such that the middle ("norm") position is the only one that's stable and will be retained without the user pushing the switch. That is, the switch always moves back to "norm" when not actively pushed to either "lock" or "unlock". And with the switch in the stable position, the door can always be unlocked from the outside -- with a short delay that gives the person inside the cockpit time to actively suppress the unlock using the switch. If the person in the cockpit does nothing, the door unlocks. So without deliberate and repeated activity from the person inside the cockpit, there is no scenario that would indefinitely prevent people outside the cockpit from entering.
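As an illustration only (the positions and behavior are paraphrased from the description above, not taken from any Airbus documentation), the logic reduces to roughly this:

```python
def door_unlocks(switch: str, inside_suppresses: bool) -> bool:
    """Toy model: can a person outside (who knows the access code) get in?

    `switch` is the currently *held* position: "lock", "unlock", or
    "norm" (the spring-loaded default when nobody pushes the switch).
    """
    if switch == "unlock":
        return True           # actively opened from inside
    if switch == "lock":
        return False          # actively denied from inside
    # "norm": after a short delay the door unlocks, unless the person
    # inside suppresses the unlock during that delay.
    return not inside_suppresses

# An incapacitated pilot can't keep suppressing, so the door opens:
assert door_unlocks("norm", inside_suppresses=False)
# Keeping the door shut requires deliberate, repeated action:
assert not door_unlocks("norm", inside_suppresses=True)
```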
Chrome already has its own method for doing remote desktop.
It won't be supported because it will be competing directly with something Google already does.
The study sounds like nonsense (at least as presented in this post).
Refactoring doesn't make code easier to analyze or change... But it may make code more maintainable.
What is code maintenance, if not analyzing and changing the code??!?!
Refactoring means code gets written anew, so it becomes more maintainable because it was written by the same people who have to do the maintaining. Before the refactoring, you have to maintain crappy code written by some dude that quit last year. After the refactoring, you have to maintain crappy code written by yourself. Definitely easier.
How do you shoot up twelve people in the middle of Paris then get away? Wtf?
What's more, it all happened at place that was supposedly under "police protection"...
Docker containers are like VMs but smaller. I think what it means is that a Windows server / VM will be able to run dozens to hundreds of Windows micro-services inside a Docker for Windows infrastructure. Basically, once it's finished, you as a developer will be able to write Windows apps that don't need to be installed and will run on any Windows -- no more version dependencies! Just like Docker does for Linux today.
Yeah, but wouldn't it have to be rewritten from scratch on Windows? AFAIK there is no chroot, cgroups or anything like that in Windows (I guess there might be equivalents). And I have no idea what you would do about the registry blob in this scenario.
it emits the 1.5MWh over a period of 32 days, not one hour
FYI 1.5MWh running for 32 days is 1152MW.
No, it's a little shy of 2 kW.
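For anyone checking the arithmetic (1.5 MWh of energy spread over 32 days of continuous operation):

```python
energy_wh = 1.5e6        # 1.5 MWh expressed in watt-hours
hours = 32 * 24          # 32 days of continuous operation = 768 hours
avg_power_w = energy_wh / hours
print(avg_power_w)       # 1953.125 W -- just under 2 kW
```

The 1152 figure above presumably comes from multiplying by the hours instead of dividing by them (1.5 × 768 = 1152), which also mixes up the units of energy and power.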
If all it is is a battery, then by itself it would be worth almost as much as cold fusion, since it can store and produce 2000+ horsepower for an hour (1.5 MWh).
It doesn't store it, it has a power supply (even officially). And it emits the 1.5MWh over a period of 32 days, not one hour. And oh yeah, it never seems to work unless Rossi is present to "supervise" the thing.
Isn't "Cutting the Wind" cheating?
Whether anything is cheating or not is relative to a constant set of rules that are applied consistently. The current set of rules happens to allow wind-cutting and refreshment points along the track, but not 1000 m downhill slopes or using a motorcycle.