Apple had an advantage with their CPU migrations: the new CPU was much faster than the old one. The PowerPC was introduced at 60MHz, whereas the fastest 68040 that they sold was 40MHz, and clock-for-clock the PowerPC was faster. When they switched to Intel, their fastest laptops had a 1.67GHz G4 and were replaced by Core Duos starting at 1.83GHz, and the G4 was largely limited by memory bandwidth at high clock speeds anyway. In both cases, emulated code on the new machines ran only slightly slower than it had run natively on the fastest Mac with the older architecture (except for the G5 desktops, but those were very expensive). If you skipped the generation immediately before the switch, your apps all ran faster, even while they were all emulated. Once you switched to native code for the new architecture, things got faster still.
For Microsoft moving to ARM, they're okay for
Most of the time, even that isn't enough. C compilers tend to embed build-time information as well. For Verilog, the place-and-route tools often use a random number seed for their genetic algorithm. Most compilers have a flag to pin these kinds of parameters to a specified value, but you have to know what they were set to for the original build.
Of course, in this case you're solving a non-problem. If you don't trust the source or the binary, then don't run the code. If you trust the source but not the binary, build your own and run that.
On some large projects, the entire project is in one large repository, which git supports.
In that case, you have to clone the entire repo, which is fine if you're working on everything or need to build everything, but it is irritating if the project is composed of lots of smaller parts and you want to do some fixing on just a small part. For example, a collection of libraries and applications that use them (and possibly invoke each other).
On other projects, the code is broken up into modules with separate repositories, and the artifacts from each module are deployed to an enterprise-wide Maven repository. The modules can depend on each other, pinned to specific versions of the other modules. With loose coupling between the modules like that, you don't need atomic commits across them.
If you modify a library and a consumer of the library, then you want an atomic commit for the changes, especially if it's an internal API that you don't expose outside of the project and so don't need to maintain ABI compatibility for out-of-tree consumers. With svn, you can do this with a single svn commit, even if you've checked out the two components separately. With git, you must commit the library change and push it, then commit the consumer change, update the git revision recorded for the external, commit that, and push it. Or, if neither is a child of the other, you must first commit and push the changes to each component, then update the external references from the repository that includes both and push that. Either way, you have to move from leaf to root, bumping the externals version at each level, to ensure consistency.
If you don't jump through these hoops, then your published repositories can be in an inconsistent, unbuildable, state. It's not a problem if all of the things in separate repositories are sufficiently loosely coupled that you never need an atomic commit spanning multiple repositories, but it's a pain if they aren't. With git, you are always forced to choose between easy atomic commits and sparse checkouts. With svn, you can always do a commit that is atomic across the entire project and you can always check out as small a part as you want.
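Concretely, the leaf-to-root dance looks something like the following self-contained sketch, using hypothetical repositories `app` (a superproject) and `libfoo` (a git submodule of `app`); everything happens in a throwaway directory:

```shell
# Sketch of a coupled library/consumer change across two git repos.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
tmp=$(mktemp -d)
cd "$tmp"

# Upstream (bare) repositories, then working clones.
git init -q --bare libfoo.git
git init -q --bare app.git
git clone -q "$tmp/libfoo.git" libfoo 2>/dev/null
(cd libfoo && git commit -q --allow-empty -m "libfoo: initial" \
           && git push -q origin HEAD)
git clone -q "$tmp/app.git" app 2>/dev/null
(cd app && git -c protocol.file.allow=always submodule add -q "$tmp/libfoo.git" libfoo \
        && git commit -q -m "app: add libfoo submodule" \
        && git push -q origin HEAD)

# The coupled change. Leaf first: commit and push the library...
cd "$tmp/app/libfoo"
git commit -q --allow-empty -m "libfoo: internal API change"
git push -q origin HEAD:master   # must be pushed before the superproject is
# ...then root: record the new submodule revision and push that.
cd "$tmp/app"
git add libfoo
git commit -q -m "app: adapt to new libfoo API"
git push -q origin HEAD
```

There is no single step that lands both repositories at once: if the library push happens but the superproject push doesn't (or happens in the wrong order), anyone cloning sees an inconsistent state.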
That is 60 years, not 37 years. TFS, if not TFA, which I didn't read, is officially stupid.
Glass house, stones, etc.
The idea of Git eludes you. You don't structure Git projects in a giant directory tree.
The first problem here is that you need to decide, up front, what your structure should be. For pretty much any large project that I've worked on, the correct structure only became apparent about 5 years after starting, and was different again 5 years after that. With git, you have to maintain externals if you want to be able to do a single clone and get the whole project. Atomic commits (you know, the feature that we moved from cvs to svn for) then become very difficult, because you must commit and push in leaf-to-root order through your nest of git repositories.
There are a load of companies in the second category that are very profitable and usually respected. It's the ones in the first category that give them all a bad name.
Discussing the requests openly, either within or beyond the walls of an involved company, can violate federal law.
Any act of Congress that purports to deny our freedom of speech is not a law at all, but a usurpation. Congress has no power to trump the Constitution.
The trouble with opportunity is that it always comes disguised as hard work. -- Herbert V. Prochnow