... It's a lot more appropriate to compare the open sourcing of Swift to the LLVM/Clang projects than to Darwin. LLVM is by any measure a thriving open source project with lots of contributors, both individuals and from many large organisations (Intel/AMD/ARM/Google/Microsoft, etc. etc.). I also follow WebKit development to some degree and it's far from "the Google style of closed development followed by a public source dump", a fact that should be clear to anyone who takes a minute to look at the webkit-dev mailing list.
1) if you make exploitation less likely than an asteroid hitting the earth, then for all practical purposes you can say that it is prevented.
2) 'repeatable crash bug behavior' doesn't matter, it will be repeatable if it is run in valgrind/address sanitizer or via a debugger which is really all that matters to a developer. An end user couldn't care less about repeatable crashes and would prefer if it occasionally/usually continued running.
I have no idea why you would believe that "our genetic code is a type of program", I don't think anyone working in molecular biology has this interpretation. And even if you view the genetic code as a type of program, then it is a program that primarily deals with how the individual cells that make up our body operate and _not_ how the brain processes input.
Do you really think MSS has not been developed since the 90s? Admittedly I haven't used it since 2004, but back then it was pretty much the only way to get good, performant 3D audio running with a variety of sound cards. I'd imagine it has grown a whole lot of features and platform support since then.
An annotated game record is available here:
That's what I've done, and what I would do again if I needed to find a quiet space to get some work done.
The naginata is Kyoshiro's weapon of choice in Samurai Shodown II and is of course the only option for 2D fighting game connoisseurs.
It's generally desirable to have the AI and physics run at a fixed time step because it allows you to reproduce results exactly. That way you can record gameplay by just recording the inputs. So usually you will have a 'simulation' thread running at a fixed tick rate and a 'render' thread producing frames as fast as possible. I agree about the Vsync, there is no point whatsoever in making frames faster than the display can display them.
And in fact that's the problem with this frame-time benchmarking, if the workload is such that it's artificially rendering more frames than can be displayed it doesn't really matter much if they are displayed at a consistent rate. If you want to see how much better a multi-GPU solution is, you need to test a workload that is being rendered at less than the Vsync rate (or at least around that rate).
... and practice making the perfect espresso - that's a five-minute break where you also have to be focused. When you can make a perfect espresso you can move on to latte art. As an added bonus you can get a job at Starbucks if/when the singularity happens.
The unconditional pointer update approach is by no means atomic unless you use memory barriers or atomic instructions. There is a reason C++11 added <atomic>.
|8 1 2|7 5 3|6 4 9|
|9 4 3|6 8 2|1 7 5|
|6 7 5|4 9 1|2 8 3|
|1 5 4|2 3 7|8 9 6|
|3 6 9|8 4 5|7 2 1|
|2 8 7|1 6 9|5 3 4|
|5 2 1|9 7 4|3 6 8|
|4 3 8|5 2 6|9 1 7|
|7 9 6|3 1 8|4 5 2|
You're almost right, but the main problem is when a team gets obsessed with issues other than actually writing code. You want as little time as possible to be spent on debugging and rewriting. In order to achieve that you need some tools. To avoid rewriting you need some way of specifying what the software is supposed to do before actually making it. UML can be used for that, but it's not a goal in itself. To avoid endless debugging sessions you need tests; unit tests can be used, but I've found it far more productive to write code that has a lot of debugging code in it. Since I'm mainly writing C/C++, this will be in the form of asserts and #if DEBUG blocks.
In conclusion, the primary goal is to develop a product, not to write tests, not to make specifications and not to make clean revision histories. However, when used right, specifications, testing and version control enable you to develop the product faster and with higher quality.
Yeah, C++ and Java are probably equivalent in the time it takes to solve the actual problem -- but then you spend 4 times as long debugging your C++ code because you have some weird write-past-the-end of an array or use-after-free bug that somehow works most of the time, and when it goes wrong it only affects some completely unrelated piece of code, far from where the bug is, whose data structures got ruined. These classes of bugs can bring down novice programmers who can spend weeks trying to figure out where it's going wrong, and they just don't happen with Java or C# or Python.
That's just bullshit - compilers often get it wrong because of things like alias analysis and register spilling/rematerialization. Not to mention the really crappy autovectorization they do. If you really care about performance you use vector intrinsics and check the assembly output of the compiler for any obvious cockups.
There are also many programmers writing code for embedded processors and DSPs that don't have compilers as mature as those for x86 and can't afford to waste any cycles.