In a previous life, we passed around virtual machines rather than doing paperwork. Paperwork exists to make sure you have a plan for the explosion-and-revert problem; managing machines instead of paper let us build an immediate revert-on-explosion step right into the process (;-))
The VMs we passed around were Solaris zones, so they were very lightweight. If I wanted to apply an emergency patch to production, I first applied it to an image, put an instance on pre-prod, a physical machine, and varied it into test. After the smoke-test, I varied it into the pool on the load-balancer, and watched it closely. If it fixed the problem and didn't explode, I put lots of instances on the production physical servers and put them into the load-balancer, quiescing the un-patched instances but not erasing them. If the patch blew up after all, I could revert to the previous buggy release as fast as the load-balancer could disconnect people. Not quite as fast as doing an atomic change on a single server, but fast.
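The flow above (add the patched instance, smoke-test, quiesce but don't erase the old one, revert by reconnecting) can be sketched as a small simulation. The `LoadBalancer` class and `smoke_test` callback are hypothetical stand-ins for whatever your real load balancer's API looks like; this is a sketch of the process, not an implementation.

```python
class LoadBalancer:
    """Toy pool model: active instances receive traffic, quiesced ones stay warm."""

    def __init__(self):
        self.active = set()
        self.quiesced = set()

    def add(self, inst):
        self.active.add(inst)

    def quiesce(self, inst):
        # Stop routing new sessions, but keep the instance running for revert.
        self.active.discard(inst)
        self.quiesced.add(inst)

    def revert(self, inst):
        # Put a quiesced instance back as fast as clients can reconnect.
        self.quiesced.discard(inst)
        self.active.add(inst)


def roll_out(lb, old, new, smoke_test):
    """Emergency-change flow: patched instance in, old instance warm, instant revert."""
    lb.add(new)
    if not smoke_test(new):
        lb.quiesce(new)      # failed smoke test: pull the patch immediately
        return False
    lb.quiesce(old)          # un-patched instance stays warm, not erased
    return True
```

If the patch blows up later, `lb.revert(old)` restores the previous release in one call, which mirrors the "as fast as the load-balancer could disconnect people" property.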
This is a minor variant on some old Unix norms: 1) you aren't prohibited from doing even silly things, as prohibitions would also keep you from doing something brilliant; 2) you can do anything, but you can't hide what you did; 3) you can change things atomically while running; and 4) if you do something dumb, you can revert it immediately.
The process is a variant/predecessor of ITIL, with pre-set apply and revert steps for emergency changes, which are the high-value part of the whole ITIL change process. Non-emergency changes were a little more heavyweight: we tested the patch in an instance in QA, then ran a simulated UAT overnight (it was automated, but exceedingly slow), reviewed the results, and then the de facto board decided whether we could release the image to production, QA and dev. Your paper-oriented CAB does approve all patches to QA and dev, right? I'll bet they missed that part (:-))
--dave
I did once have a customer where I had to do paper-based CAB approvals, but that was because we weren't funded to have a proper dev, and had no QA at all. As you might guess, we still had at least one fiasco. I shortened the contract as much as I could without doing a no-bid in the middle.
Please elaborate. On the face of it your response is unconvincing. In a domestic conflict there are going to be a substantial number of the standing military's ranks that will be sympathetic to the Constitution -- the lack of honor by many in the military notwithstanding. How many of them would it take to so debilitate the treasonous government's military that it would be no more effective on US soil than it was on middle eastern soil?
The report doesn't really go into an important measure.
What is the defect density of the new code that is being added to these projects?
Large projects, and old projects in particular, will show good scores from polishing, i.e. cleaning out the old defects that are already present. The new code being injected into the project is really where we should be looking. Coverity has the capability to do this, but it doesn't seem to be reported.
Next year it would be very interesting to see "new code defect density" as a separate metric. Currently the report gives "all code defect density", which may not reflect whether Open Source is *producing* better code; it only shows that the collection of *existing* code is getting better each year.
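To make the distinction concrete, here is a minimal sketch of the two metrics, assuming per-release defect counts and line counts are available. The function names and the delta-based attribution of defects to new code are illustrative assumptions, not anything Coverity actually reports.

```python
def defect_density(defects, loc):
    """Defects per thousand lines of code (KLOC), the report's usual metric."""
    return defects / (loc / 1000.0)


def new_code_density(curr_defects, curr_loc, prev_defects, prev_loc):
    """Hypothetical 'new code defect density': defects attributed to the
    lines added since the previous release (a crude delta, assuming defect
    and line counts both grew between releases)."""
    return defect_density(curr_defects - prev_defects, curr_loc - prev_loc)


# Illustrative numbers, not from the report: a project grows from
# 1,000,000 to 1,100,000 LOC while defects grow from 650 to 680.
all_code = defect_density(680, 1_100_000)                       # ~0.62 per KLOC
new_code = new_code_density(680, 1_100_000, 650, 1_000_000)     # 0.30 per KLOC
```

In this made-up example the all-code density looks mediocre while the new code is actually twice as clean, which is exactly the signal the current report can't show.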
According to Wikipedia, the number of pictures needed before two are seen as the same with probability p is n ≈ sqrt(2d * ln(1/(1-p))). If d is 52,000,000 and we use a 99% probability, then roughly every 21,884.6 pictures we get a false positive with a perfectly accurate matcher. And there are no perfect matchers.
This is a variant of the birthday paradox, where it takes only 70 people to get a 99.9% chance of two of them sharing a birthday, and a mere 23 people to get a 50% chance [Wikipedia].
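For anyone who wants to check the arithmetic, the standard birthday-problem approximation above is easy to evaluate directly:

```python
import math


def collision_count(d, p):
    """Approximate sample size giving probability p of at least one
    collision among d equally likely values (birthday approximation)."""
    return math.sqrt(2 * d * math.log(1 / (1 - p)))


# 52 million faces, 99% chance of a collision:
n_faces = collision_count(52_000_000, 0.99)   # ~21,884.6 pictures

# Sanity check against the classic birthday numbers:
n_half = collision_count(365, 0.5)            # ~22.5 people (rounds up to 23)
```

The approximation slightly overstates the count at small d (the exact answer for 50% is 23 people), but at 52 million faces it is effectively exact.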
The German Federal Security Service rejected facial matching years ago, for exactly this reason, when I was working for Siemens. The Americans did not, and supposedly stopped someone's grandma for being a (younger, male) terrorist.
If they use this, expect a week or so of everyone's grandma being arrested (;-))
--dave
Mathematicians, please feel free to check me on the numbers: I suspect I'm rather low...
BronsCon
Yes, I am in violent agreement with you. I think that this is such an important point that I wanted to (re) emphasize it. You know the drill: tell them what you're going to tell them; tell them; tell them what you've told them...
This myth gets trotted out again. It is arguably easier to find exploits without source: the source distracts from the discovery of an exploit, while the binary simply is. The black-hat is looking for a way to subvert a system. Typically she is not interested in the documented functionality, whether in source or documentation; that distracts from the real issue, which is finding out what the software actually does, especially in edge circumstances.
This is what fuzzers do. Typically not aware of the utility of the program, they simply inject tons of junk until something breaks.
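A fuzzer in its simplest form really is this naive; the sketch below throws random byte strings at a toy parser until it breaks. The `toy_parse` target is a made-up stand-in for real software; the point is that the fuzzer needs no knowledge of the source to find the unhandled edge case.

```python
import random


def fuzz(target, trials=500, max_len=64, seed=1):
    """Inject random byte strings into target; collect inputs that raise."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)
        except Exception as exc:
            failures.append((data, exc))
    return failures


def toy_parse(data):
    """Toy target with an unhandled edge case: it chokes on embedded NULs."""
    if b"\x00" in data:
        raise ValueError("embedded NUL not handled")


crashes = fuzz(toy_parse)
```

Real fuzzers (AFL, libFuzzer) add coverage feedback and input mutation, but the core loop is exactly this: junk in, crash out, no source required.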
Source availability tends to benefit people auditing and repairing more than black-hats.
Yes, it took years for Heartbleed to surface. If Heartbleed (or a defect like it) is discovered by a code audit, that speaks to the superiority of open source over closed source. If the defect is found by fuzzing or binary analysis instead, it is much harder to get repaired, as users are at the mercy of whoever holds the source. Build a matrix of Open/Closed Source vs. bug found in source vs. bug found by fuzzing/binary analysis.
"Bug found in source" vs. Closed Source is not applicable, leaving three cells: bug found in source vs. Open Source, where anyone can repair it in the source; bug found by fuzzing vs. Open Source, where again anyone can repair it; and bug found by fuzzing vs. Closed Source, where only the vendor can.
The question then is (as I started the article): is it easier to find bugs by source inspection? Assume big threats will HAVE the source anyway. If a bug were easy to find by inspection, it would be easy to fix (for example, OpenBSD continuously audits its source, and security has been a priority at Microsoft for the past decade). Fuzzing and binary analysis are still the preferred (quickest) methods, which gives the repair edge to Open Source. The reason is simple: the black-hat cares about what is actually happening, not what the source says is happening.
I have been using Gnome 3.10 (Fedora 20) on an Acer Iconia W700. This has no keyboard when I use it as a tablet. It does have multi-touch, and gyro/magnetic/ambient light/etc sensor.
Tried XFCE (my usual desktop for the past decade); it doesn't do well with the 192dpi display. I then decided to try Gnome 3, because of all the complaints that it forces a tablet view on users.
- No keyboard means typing to find an application doesn't work. Adding the "Applications Menu" and "Places" Gnome Shell extensions solves this.
- The default on-screen keyboard doesn't support function keys, the Esc key, or control keys. Solution: install Florence.
- Without a keyboard, yumex is unusable: I can't enter the password to authorize anything.
- Can't activate the bottom panel reliably. Using "Frippery bottom panel" helps out (gnome shell extension). Tapping the "!" at the bottom right then does the job. The "Hi, Jack" extension almost works, but isn't reliable enough.
- Rotation doesn't work. I had to put a script on the desktop to activate rotation.
- No multi-touch support in Gnome 3 (really strange, since I have a Python program demonstrating that the hardware supports multi-touch).
- And now the icing on the cake: focus is very strange. I can launch a new application while the old application still has some focus! A nasty bug when interacting with user input.
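For the rotation workaround mentioned above, here is a sketch of such a desktop script. The output name "eDP1" and the use of `xrandr` under an X session are assumptions; check `xrandr -q` for your hardware, and note that touch input may additionally need an xinput transformation matrix that this sketch does not set.

```python
import subprocess

ROTATED = {"left", "right", "inverted"}


def current_rotation(xrandr_text, output):
    """Parse the rotation field from `xrandr -q` output for one display.

    A rotated output shows its rotation between the geometry and the
    parenthesized list of supported rotations; a normal one omits it.
    """
    for line in xrandr_text.splitlines():
        fields = line.split()
        if fields and fields[0] == output and "connected" in fields:
            for f in fields:
                if f.startswith("("):   # reached the supported-rotations list
                    break
                if f in ROTATED:
                    return f
            return "normal"
    return None


def toggle(output="eDP1"):
    """Flip between normal and right-rotated (portrait) orientation."""
    text = subprocess.run(["xrandr", "-q"],
                          capture_output=True, text=True).stdout
    new = "normal" if current_rotation(text, output) in ROTATED else "right"
    subprocess.run(["xrandr", "--output", output, "--rotate", new])
```

Wiring `toggle()` to a desktop launcher icon gives a one-tap rotation toggle until Gnome grows sensor-driven rotation.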
I would prefer to stay with Fedora. Is there any DE that supports touch better on Fedora? Or do I go with Ubuntu and Unity? Are improvements coming in Gnome 3.12 or 3.14?
Given that your Gnome 3 experience has been much more positive, what is your advice?
What this country needs is a good five dollar plasma weapon.