If I had an optical disk that had that kind of write speed and sufficiently cheap media, I'd use it with a log-structured filesystem. The real data would be on some other media, and the optical disk would record every transaction. When the disk filled up, I'd pop a new one in, have it write a complete snapshot (about 40 minutes for a 2TB NAS, and I could probably buffer any changes in that period to disk / flash) and then go back to log mode. Each disk would then be a backup that would be able to restore my filesystem to any point in the period. Actually, given my average disk writes, one of these disks would store everything I write to disk for about 200 years, so it would probably want more regular snapshots or the restore time of playing back the entire journal would be too long. Effectively, the append-only storage system becomes your authoritative data store and the hard disks and flash just become caches for better random access.
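The restore path can be sketched as a toy shell model (all names hypothetical: the "filesystem" is a single file, the journal records appends, and `snapshot-1` stands in for a full dump to a fresh disk):

```shell
# Toy model of snapshot-plus-journal restore: the append-only log is
# authoritative; restore = latest snapshot + replay of later entries.
set -e
cd "$(mktemp -d)"

echo "hello" > data              # stand-in for the real filesystem
cp data snapshot-1               # full snapshot written to a fresh disk

for w in alpha beta gamma; do    # every later write also goes to the journal
    echo "$w" >> data
    echo "append $w" >> journal
done

cp snapshot-1 restored           # restore: start from the snapshot...
while read -r op arg; do         # ...and replay the journal entries
    [ "$op" = append ] && echo "$arg" >> restored
done < journal

cmp data restored && echo "restore matches"
```

Stopping the replay partway through the journal gives exactly the point-in-time restore described above; more frequent snapshots just shorten the replay.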
The problem, of course, is the 'sufficiently cheap media' part. When CDs were introduced, I had a 40MB hard drive and a 650MB CD was enough for every conceivable backup. When CD-Rs were cheap, I had a 5GB hard drive and a CD was just about big enough for my home directory, if I trimmed it a bit. When DVDs were introduced, I had a 20GB hard drive and a 4.5GB layer was just about enough for my home directory. When DVD-Rs were cheap, I had an 80GB hard drive in my laptop, and 4.5GB was nowhere near enough. Now, the 25GB on an affordable BD-R is under 10% of my laptop's flash and laughable compared to the 4TB in my NAS.
If they can get it to market when personal storage is still in the tens of TBs range, then it's interesting.
Apple had an advantage with their CPU migrations: the new CPU was much faster than the old one. The PowerPC was introduced at 60MHz, whereas the fastest 68040 that they sold was 40MHz, and clock-for-clock the PowerPC was faster. When they switched to Intel, their fastest laptops had a 1.67GHz G4 and were replaced by Core Duos starting at 1.83GHz, and the G4 was largely limited by memory bandwidth at high clock speeds anyway. In both cases, emulated code on the new machines ran only slightly slower than native code had on the fastest Macs of the older architecture (the G5 desktops excepted, but they were very expensive). If you skipped the generation immediately before the switch, your apps all ran faster, even while emulated. Once you switched to native code for the new architecture, things got even faster.
For Microsoft moving to ARM, they're okay for
Most of the time, even that isn't enough. C compilers tend to embed build-time information as well. Verilog toolchains often use a random number seed for the place-and-route genetic algorithm. Most compilers have a flag to pin these kinds of parameters to a specified value, but you have to know what they were set to for the original run.
Of course, in this case you're solving a non-problem. If you don't trust the source or the binary, then don't run the code. If you trust the source but not the binary, build your own and run that.
On some large projects, the entire project is in one large repository, which git supports.
In that case, you have to clone the entire repo, which is fine if you're working on everything or need to build everything, but it is irritating if the project is composed of lots of smaller parts and you want to do some fixing on just a small part. For example, a collection of libraries and applications that use them (and possibly invoke each other).
On other projects, the code is broken up into modules with separate repositories, and the artifacts from each module are deployed to an enterprise-wide Maven repository. The modules can depend on specific versions of one another. With loose coupling like that between the modules, you don't need atomic commits spanning modules.
If you modify a library and a consumer of that library, then you want the changes in a single atomic commit, especially if it's an internal API that you don't expose outside of the project, so there's no need to maintain ABI compatibility for out-of-tree consumers. With svn, a single svn commit does this, even if you've checked out the two components separately. With git, you must commit the library change and push it, then commit the consumer change, update the git revision recorded for the external, commit that, and push again. Or, if neither repository is a child of the other, you must first commit and push the changes to each component, then update the external references in the repository that includes both and push that. Either way, you have to work from leaf to root, bumping the externals' versions, to keep everything consistent.
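The dance looks something like this, as a self-contained sketch with git submodules and hypothetical repo names (the pushes are elided because these toy repos have no upstreams, but in real life each commit step needs one):

```shell
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
work=$(mktemp -d) && cd "$work"

git init -q libfoo                         # the library, in its own repo
git -C libfoo commit -q --allow-empty -m "initial"
git init -q super                          # the project that consumes it
git -C super -c protocol.file.allow=always \
    submodule add -q "$work/libfoo" libfoo
git -C super commit -qm "add libfoo submodule"

# Step 1: commit (and push) the leaf first...
git -C super/libfoo commit -q --allow-empty -m "libfoo: internal API change"
# Step 2: ...then, in a *separate* commit, bump the pinned revision.
git -C super add libfoo
git -C super commit -qm "bump libfoo to new API"

git -C super rev-list --count HEAD         # two commits where svn needs one
```

Between steps 1 and 2, anyone fetching the superproject still gets the old libfoo revision pinned: that window is exactly the inconsistency that a single repository-wide svn commit avoids.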
If you don't jump through these hoops, then your published repositories can be left in an inconsistent, unbuildable state. That's not a problem if everything in the separate repositories is loosely coupled enough that you never need an atomic commit spanning multiple repositories, but it's a pain if it isn't. With git, you are forced to choose between easy atomic commits and sparse checkouts. With svn, you can always make a commit that is atomic across the entire project, and you can always check out as small a part of it as you want.
That is 60 years, not 37 years. TFS, if not TFA, which I didn't read, is officially stupid.
Glass house, stones, etc.
The idea of Git eludes you. You don't structure Git projects in a giant directory tree.
The first problem here is that you need to decide, up front, what your structure should be. For pretty much every large project I've worked on, the correct structure only became apparent five years after starting, and was different again five years after that. With git, you have to maintain externals (submodules) if you want a single clone to fetch the whole project. Atomic commits (you know, the feature we moved from cvs to svn for) then become very difficult, because you must commit and push in leaf-to-root order through your nest of git repositories.
"I say we take off; nuke the site from orbit. It's the only way to be sure." - Corporal Hicks, in "Aliens"