1) if you make exploitation less likely than an asteroid hitting the Earth, then for all practical purposes you can say that it is prevented.
2) 'repeatable crash bug behavior' doesn't matter: the crash will be repeatable when the program is run under valgrind/AddressSanitizer or a debugger, which is really all that matters to a developer. An end user couldn't care less about repeatable crashes and would prefer that the program occasionally/usually kept running.
I have no idea why you would believe that "our genetic code is a type of program"; I don't think anyone working in molecular biology has this interpretation. And even if you view the genetic code as a type of program, it is a program that primarily deals with how the individual cells that make up our body operate and _not_ with how the brain processes input.
Do you really think MSS has not been developed since the 90s? Admittedly I haven't used it since 2004, but back then it was pretty much the only way to get good, performant 3D audio running with a variety of sound cards. I'd imagine it has grown a whole lot of features and platform support since then.
An annotated game record is available here:
That's what I've done, and what I would do again if I needed to find a quiet space to get some work done.
The naginata is Kyoshiro's weapon of choice in Samurai Shodown II and is of course the only option for 2D fighting game connoisseurs.
It's generally desirable to have the AI and physics run at a fixed time step because it allows you to reproduce results exactly. That way you can record gameplay by just recording the inputs. So usually you will have a 'simulation' thread running at a fixed tick rate and a 'render' thread producing frames as fast as possible. I agree about the Vsync: there is no point whatsoever in producing frames faster than the display can show them.
And in fact that's the problem with this frame-time benchmarking: if the workload is such that it's artificially rendering more frames than can be displayed, it doesn't really matter much whether they are displayed at a consistent rate. If you want to see how much better a multi-GPU solution is, you need to test a workload that is being rendered at less than the Vsync rate (or at least around that rate).
... and practice making the perfect espresso - that's a five-minute break where you also have to be focused. When you can make a perfect espresso you can move on to latte art. As an added bonus you can get a job at Starbucks if/when the singularity happens.
The unconditional pointer update approach is by no means atomic unless you use memory barriers or atomic instructions. There is a reason C++11 added <atomic>.
|8 1 2|7 5 3|6 4 9|
|9 4 3|6 8 2|1 7 5|
|6 7 5|4 9 1|2 8 3|
|1 5 4|2 3 7|8 9 6|
|3 6 9|8 4 5|7 2 1|
|2 8 7|1 6 9|5 3 4|
|5 2 1|9 7 4|3 6 8|
|4 3 8|5 2 6|9 1 7|
|7 9 6|3 1 8|4 5 2|
You're almost right, but the main problem is when a team gets obsessed with issues other than actually writing code. You want as little time as possible to be spent on debugging and rewriting, and to achieve that you need some tools. To avoid rewriting you need some way of specifying what the software is supposed to do before actually making it. UML can be used for that, but it's not a goal in itself. To avoid endless debugging sessions you need tests; unit tests can be used, but I've found it far more productive to write code that has a lot of debugging code built into it. Since I'm mainly writing C/C++, this takes the form of asserts and #if DEBUG blocks.
In conclusion, the primary goal is to develop a product, not to write tests, not to make specifications and not to keep clean revision histories. However, when used right, specifications, testing and version control enable you to develop the product faster and with higher quality.
Now, get off my lawn.