Well, a ROM back then was pretty small, but today, given how the BIOS has been redefined w/ UEFI & all that, wouldn't it be possible again? Take a 32 GB flash memory device, put an OS on it and make that the BIOS. Everything else - the applications and so on - goes on a suitably sized SSD, and anything portable goes on a USB drive.
Lock that OS-in-BIOS, making it alterable only by the owner (the same way we currently update the BIOS), and most attacks that modify or cripple an OS should disappear. Currently, the highest-density serial NOR flash product is a 1 Gb part. How much of the Windows 7 or 8 kernels could fit into it? How about Linux, XNU or the BSD kernels?
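A quick back-of-the-envelope check of that question - a sketch only, with rough ballpark kernel sizes that are my assumptions, not measured figures:

```python
# What fits in a 1 Gb (gigabit) serial NOR flash part?
FLASH_BITS = 1 * 1024**3          # 1 Gib
flash_bytes = FLASH_BITS // 8     # = 128 MiB of storage

# Approximate image sizes (illustrative assumptions, kernel files only,
# not the full OS install):
images = {
    "Linux kernel (compressed bzImage)": 10 * 1024**2,   # ~10 MiB
    "Windows NT kernel (ntoskrnl.exe)":   8 * 1024**2,   # ~8 MiB
    "Minimal BSD kernel":                15 * 1024**2,   # ~15 MiB
}

print(f"Flash capacity: {flash_bytes // 1024**2} MiB")
for name, size in images.items():
    print(f"{name}: {size // 1024**2} MiB -> fits: {size <= flash_bytes}")
```

So a bare kernel fits comfortably in 128 MiB; it's the rest of the OS (userland, drivers, libraries) that wouldn't.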
Navigators know more about Navigation than People who don't Navigate
More at.... wait no, that's it.
This news brought to you by the Department of Redundancy Department Department.
Hairy, in this case it ain't Moore's law: it's multiprocessing. Previously, when Windows apps were single-threaded, it made no sense to add cores, so Intel & AMD just kept ramping up the MHz and struggled to compete w/ RISC. But once XP became the Windows for everybody, succeeding not just Windows 2000 but ME as well, apps started being multi-threaded & multi-processed, and throwing more cores at the problem enabled x86 to catch up w/ RISC. That's the reason most RISC CPUs are dead.
On Moore's law: granted, it's no longer needed for CPUs, but it will remain useful for memory - both RAM and SSDs. Also, we're already at a stage where entire systems that existed even in the 90s can be put on FPGAs, enabling systems to shrink even further.
One thing a larger wafer does do, though, is decrease effective test time: a wafer with more die requires the probe cards to probe those extra die, and since they're tested in parallel, the effective sort test time per die goes down. Shorter test times translate into reduced cost. So if more tests are moved from Final Test (packaged units) to sort test, larger wafers would effect shorter test times (per lot). That would be the only way larger wafers could contribute towards cost reductions.
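A toy model of that effect - every number here is an invented assumption, not real fab data. If probe-card parallelism scales with die count, the touchdown count stays roughly flat and the fixed per-wafer overhead (load, align) gets amortized over more die:

```python
import math

def effective_seconds_per_die(die_per_wafer, parallel_sites,
                              t_touchdown, wafer_overhead):
    # Die are probed in groups ("touchdowns") of parallel_sites at a time;
    # wafer_overhead covers load/unload and alignment, paid once per wafer.
    touchdowns = math.ceil(die_per_wafer / parallel_sites)
    return (wafer_overhead + touchdowns * t_touchdown) / die_per_wafer

# 300 mm wafer vs a hypothetical 450 mm wafer (~2.25x the area and die),
# with a probe card whose parallelism scales accordingly:
t300 = effective_seconds_per_die(die_per_wafer=600,  parallel_sites=32,
                                 t_touchdown=20, wafer_overhead=120)
t450 = effective_seconds_per_die(die_per_wafer=1350, parallel_sites=72,
                                 t_touchdown=20, wafer_overhead=120)

print(f"300 mm: {t300:.2f} s/die, 450 mm: {t450:.2f} s/die")
```

With these numbers both wafers take the same number of touchdowns, so the larger wafer's per-die sort time drops by more than half - purely from spreading fixed costs over more die.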
Only problem here - the demand volumes have to be commensurate w/ the extra chips produced, or else this is meaningless. If volumes are low, it's better to remain on older, larger process nodes. If volumes are huge, larger wafers are justified, but only if market prices cover the cost of actually producing them - a cost that gets inflated every time new equipment is needed, whether for a new wafer size or a new process node.
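That trade-off can be sketched as a break-even calculation - again, all figures below are invented for illustration, not industry data:

```python
def cost_per_die(wafer_cost, die_per_wafer, yield_fraction):
    # Processed-wafer cost spread over the good die it yields.
    return wafer_cost / (die_per_wafer * yield_fraction)

# Assume a larger wafer costs more to process but yields ~2.25x the die:
c300 = cost_per_die(wafer_cost=5000.0, die_per_wafer=600,  yield_fraction=0.9)
c450 = cost_per_die(wafer_cost=9000.0, die_per_wafer=1350, yield_fraction=0.9)

# New-equipment capex for the wafer-size transition (hypothetical figure),
# recovered only through the per-die savings:
tool_upgrade_cost = 2_000_000_000
savings_per_die = c300 - c450
break_even_volume = tool_upgrade_cost / savings_per_die

print(f"300 mm: ${c300:.2f}/die, 450 mm: ${c450:.2f}/die")
print(f"Break-even volume: {break_even_volume:,.0f} die")
```

The point of the sketch: the per-die saving only materializes after selling on the order of a billion die, which is exactly why low-volume products stay on older nodes and smaller wafers.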