Well, for this "frame rate" theory to be relevant, the question is not only whether anything happens at or close to the frame rate, but what the frame stepping function is. And, throwing relativity into the mix, in what reference frame?
A discretized spacetime would mean that the continuous solutions to the Schrödinger/Dirac equations are actually approximations, better expressed by some discrete time stepping scheme. That could have macroscopic consequences, especially if for some weird reason Nature runs a rather simple first-order scheme at its frame rate core. But it would also mean that we get slightly different results from different objects in free fall, depending on their overall speed relative to the reference frame, since that speed would control the factor between the "local passage of time" and the actual number of "Planck time frames" used by the process. In addition, a discretization of time almost necessitates a discretization of space. That not only means that space has some small grid (not likely either, based on current theory); it also means that there are some absolute directions in space, and that some physical processes would behave slightly differently (even when aggregated over macroscopic distances) depending on whether they are aligned to those directions or not.
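To make the "discrete stepping scheme" idea concrete, here is a toy sketch (purely illustrative, not a physical claim): evolving a single stationary state with a naive first-order (forward Euler) scheme instead of the exact continuous solution. The exact evolution is a pure phase; the first-order scheme drifts away from it, and even fails to conserve the norm, by an amount that depends on the step size.

```python
# Toy model: d(psi)/dt = -i * E * psi for one stationary state (hbar = 1).
# Exact solution is a pure phase rotation; a first-order discrete scheme
# deviates from it in a step-size-dependent way.
import cmath

E = 1.0                           # energy of the state (arbitrary units)
T = 1.0                           # total evolved time
exact = cmath.exp(-1j * E * T)    # continuous solution, |psi| stays exactly 1

def euler_evolve(steps):
    """Forward-Euler stepping: psi <- psi + (-i * E * psi) * dt."""
    dt = T / steps
    psi = 1.0 + 0j
    for _ in range(steps):
        psi = psi + (-1j * E * psi) * dt
    return psi

for steps in (10, 1000, 100000):
    psi = euler_evolve(steps)
    # Both the error and the norm drift shrink as dt -> 0,
    # but at any finite "frame rate" they never vanish.
    print(steps, abs(psi - exact), abs(psi))
```

The point of the toy is only that a first-order scheme at any finite "frame rate" leaves a residue that, in principle, could accumulate over enough steps.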
If it is just a bug, then we should expect a quick fix and firmware release from VW. If, however, it was a conspiracy, and there is no way that VW EGR technology can ever be made to pass the NOx requirements (without additional hardware, such as AdBlue tanks), then VW is screwed.
My point is that a "too good to be true" bug could easily have quite devastating consequences once it's fixed. If they remove the 'false' in the putative "if (isInTest() || (isNOxReductionNeeded() && false)) enableEGR();" line, and this increases fuel consumption or reduces maximum torque a lot, they cannot simply release that fix.
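To spell out why removing the short-circuit is not a harmless fix, here is the putative condition as a small Python sketch. All names are invented for illustration; nothing here is based on actual VW firmware.

```python
# Hypothetical sketch of the "too good to be true" bug described above.

def egr_enabled(in_test, nox_high, bug_present=True):
    """The putative condition: '&& false' short-circuits the on-road case."""
    return in_test or (nox_high and not bug_present)

# With the bug: EGR only ever runs on the test stand.
assert egr_enabled(in_test=True, nox_high=True) is True
assert egr_enabled(in_test=False, nox_high=True) is False

# "Fixing" it suddenly enables EGR in normal driving -- and with it
# whatever fuel-consumption and torque penalties made the buggy
# behavior look so good in the first place.
assert egr_enabled(in_test=False, nox_high=True, bug_present=False) is True
```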
Embedded automotive control systems and scientific research are quite different domains, but in science I've repeatedly been close to thinking I had solved a problem, only to realize that my benchmark was off and the code was not really working at all. I have not published any of those results (AFAIK), but I've reviewed and seen publications with blatant errors. When you have reached the kind of result you hoped for and believed likely, you are not on guard anymore. Fixing the blatant error might very well mean that the whole work is pointless. The error is trivial; the consequences are not.
The sensors required to detect "test mode" and the software-driven EGR control hardware are already part of any modern car, so there was no decision to "add" them to accomplish this cheat. But there had to be a strategic decision not to add SNCR, and that is a decision that could only be made at a very high level.
Yes, but that was not a conspiracy. It was very clear in the specs and even highlighted as an advantage. The question is: who knew, and approved, at what point, that this design would not work out in practice? Even the idea that smart design of the control regime would make it possible to achieve low emissions without SNCR is not, in itself, equivalent to fraud. It's only when this design becomes all about "detecting test conditions" that things get really, really bad.
The linked article makes the point that the sensors and hardware would not be necessary. I think the writer seriously underestimates to what extent a modern car with protection systems has to juggle different constraints. Things like non-driving wheel rotation (defeated by being on a lab stand) are needed for braking systems, and possibly to some extent to moderate throttle control for stability. Wheel movement patterns are also needed and useful, even if you don't actually have electric power steering.
Regulating the exhaust gas recirculation somehow also makes sense. You might toggle it fully on and off, but you would certainly want to keep it at a sensible level: you want good acceleration and full combustion of fuel while still not emitting too much NOx. It makes total sense to me that you might want to design your control system to judge not only the current emission levels but also the overall driving pattern (steady straight-ahead, repeated stop-and-go, etc.) with some kind of state machine, to try to find the best EGR regulation regime. This requires sensors and ways to regulate the feature.
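A toy sketch of the kind of pattern-judging controller described above. Everything here (the classifier, the thresholds, the regime names) is invented for illustration; real ECU logic is far more involved.

```python
# Invented illustration: classify the recent driving pattern from a window
# of speed samples, then pick an EGR regime from the classification.

def classify(speed_samples, threshold=5.0):
    """Very crude: 'stop_and_go' if speed varies a lot, else 'steady'."""
    spread = max(speed_samples) - min(speed_samples)
    return "stop_and_go" if spread > threshold else "steady"

# Hypothetical regime table: steady cruising tolerates more recirculation,
# bursty driving gets less (favoring responsiveness).
EGR_REGIME = {"steady": "high_recirculation", "stop_and_go": "low_recirculation"}

cruise = [100.0, 101.0, 99.5, 100.5]   # km/h, motorway cruising
city = [0.0, 30.0, 5.0, 25.0]          # km/h, stop-and-go traffic

print(EGR_REGIME[classify(cruise)])    # high_recirculation
print(EGR_REGIME[classify(city)])      # low_recirculation
```

The danger, as described below, is that such a regime table can be tuned, state by state, until the "recirculation actually on" cases quietly stop occurring on the road.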
My most innocent guess about how something like this might have happened is that there was an intent to find a good regime that would give nice bursty performance while keeping NOx low overall. Progressively, the control regime was pushed until it ended up in the corner where EGR being properly activated under real-world conditions basically never happens. Some of it might even, in the end, be a bug between the intended state transitions and the actual ones. Like all bugs that give performance that seems too good to be true on the metrics you really care about (fuel consumption and enjoyable driving), no one investigated.
Do I think it happened this way? It's hard to say. Probably not. But, in one way, it's even more frightening than an evil conspiracy. It's easy to say "I wouldn't take part in a conspiracy by my employer". It's harder to say "I would never be pressured into writing code with goals that could not be fulfilled, eventually find a hack that seemed to work, and maybe avoid investigating why it worked so well"...
This seems to come out of the peculiar Microsoft feature of being an administrator user who runs without administrator privileges most of the time, except when needed, plus a lot of work to make this escalation happen in a non-intrusive fashion, or be faked, depending on context. It's a really complicated beast that no other platform tries to build.
MS up to and including XP (excluding the DOS-based family) basically had the same model as everyone else: either you were an administrator or you weren't, with facilities like 'runas' to run something as an elevated user when needed. The problem was that they had tons of software from the DOS-based days failing to use the right sections of the registry and filesystem, requiring people to go through pains running as administrator for a lot of applications. This meant that most XP users just logged in as administrator.
To mitigate this, they embarked upon crafting a fairly complex mechanism to make running as an administrator user safer most of the time. It's funny because, at the same time, they started doing more and more to allow even poorly designed DOS-era software to run without administrator rights. They created union mounts to make an application think it can write to its application directory even when it cannot (and do sillier things, like making 'system32' a different directory depending on whether a 32- or 64-bit application is looking). I do the atypical thing of running as a non-administrator user full time, with UAC prompts nagging me for passwords when needed, and nowadays it doesn't nag any more than sudo does on a modern Linux desktop. If I understand this behavior correctly, this usage model might be immune to this risk factor.
While impersonation and other techniques are used a lot more now, covering larger portions of the API, impersonation itself has been around since NT 3.1. Are you a file server process serving a request from a client? Just create an impersonation context for the user who sent the request and pass that along to the file system. You only need to make sure that you create the right context and tell other services on whose behalf you are acting. This is not identical to setuid and similar mechanisms, most importantly because a single thread can keep many impersonation contexts.
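A conceptual model of the idea, in plain Python rather than the Win32 API: a server holds a token per client and performs each request under that client's identity, so access checks happen against the requester rather than the (more privileged) server. All names and the ACL structure are invented for illustration.

```python
# Conceptual model of impersonation (not the actual Win32 API): one server
# process, many per-client identity contexts, checks done per request.

class Token:
    """Stand-in for a client's impersonation context."""
    def __init__(self, user):
        self.user = user

class FileServer:
    def __init__(self):
        # Invented ACL: path -> set of users allowed to read it.
        self.acl = {"/home/alice/x": {"alice"}, "/home/bob/y": {"bob"}}

    def read(self, path, token):
        # The access check is made against the impersonated client,
        # not against the server's own (privileged) identity.
        if token.user not in self.acl.get(path, set()):
            raise PermissionError(f"{token.user} may not read {path}")
        return f"contents of {path}"

server = FileServer()
alice, bob = Token("alice"), Token("bob")  # one server, many client contexts
print(server.read("/home/alice/x", alice))
try:
    server.read("/home/alice/x", bob)
except PermissionError as e:
    print(e)
```

The contrast with setuid is visible in the shape of the code: identity is a per-request parameter the server can juggle freely, not a process-wide state it must switch into and out of.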
That this is part of the application compatibility cache service is almost coincidental; the real problem is that impersonation services are used, but used incorrectly. Impersonation was part of the original NT design, and for relatively good reasons.
You first need to copy that data into another block, erase the original one, write all the data back, and erase your "tmp" block. The churn on blocks happens a lot faster than you'd think.
If that's the case, then why are they not copying the data to RAM contained on the drive itself? It seems like an awful waste of cycles with a relatively simple fix. Is it just a cost issue?
Any wear levelling worth its salt will not do what the grandparent wrote. You simply do not change one page within a block in place. If you write a single page, that is handled by mapping the page to another (free) block and maintaining a mapping table of which LBAs are currently stored in which blocks. However, if you are doing single-sector writes, or repeated I/O flushes of the same sector, you still see a lot of write amplification. To keep data integrity, the mapping tables also need to be updated correctly (or at least be uniquely recoverable by scanning through all blocks after a hard power-off).
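The page-remapping idea can be sketched in a few lines. This is a deliberately simplified model (invented structures and sizes, no garbage collection): a logical write never rewrites a physical page in place; it consumes a fresh page and updates the LBA-to-physical mapping, leaving the stale page for a later wholesale erase.

```python
# Simplified wear-levelling sketch: out-of-place page writes plus an
# LBA -> physical-page mapping table. Garbage collection and block
# erases are elided; stale pages are just dropped here.

PAGES_PER_BLOCK = 4

class Flash:
    def __init__(self, blocks=4):
        self.data = {}       # physical page -> payload
        self.mapping = {}    # LBA -> physical page currently holding it
        self.free = list(range(blocks * PAGES_PER_BLOCK))
        self.pages_written = 0

    def write(self, lba, payload):
        phys = self.free.pop(0)        # never rewrite a page in place
        self.data[phys] = payload
        old = self.mapping.get(lba)
        self.mapping[lba] = phys       # table must be updated consistently
        self.pages_written += 1
        if old is not None:
            self.data.pop(old)         # stale page, reclaimed by later erase

f = Flash()
for i in range(8):                     # repeatedly flushing the same sector...
    f.write(lba=0, payload=f"v{i}")
print(f.pages_written)                 # ...consumes 8 physical pages for 1 LBA
```

Even without naive copy-erase-copy-back cycles, the model shows where the amplification comes from: one hot logical sector steadily eats fresh physical pages, and every stale page eventually costs a whole-block erase to reclaim.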
But, well, the difference is that Oracle has actively asked everyone else to quote references to their book. Google has produced a product that only respects those references that Oracle has encouraged everyone to use. If Oracle starts pursuing people *writing* Java code for copyright infringement ("hey, you called all methods of ArrayList, in the order they are declared"), that's a different thing.
Has Google copied the Javadocs? Those texts are not necessary for technical interoperability, so that would be a very different thing. Public symbols should be just that: public.
10.0 times 0.1 is not reliably 1.0 in floating point; even when one rounding happens to land exactly, an equivalent computation like summing 0.1 ten times will not.
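A quick demonstration with IEEE 754 doubles (Python's float): the single multiplication happens to round to exactly 1.0, while the repeated addition accumulates rounding error and does not.

```python
# 0.1 has no exact binary representation, so results depend on how
# the roundings of each operation happen to fall.
total = sum([0.1] * 10)

print(10 * 0.1 == 1.0)   # True  -- one multiply, rounds to exactly 1.0
print(total == 1.0)      # False -- ten additions accumulate the error
print(total)             # 0.9999999999999999
```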