Correction: Russian territory. This was done in 1918 to keep the Germans from getting stockpiles at the port cities. It can be considered a footnote in history for the West, but it is a sore point for Russia, and adds to the "US cannot be trusted" sentiment.
The US invaded Russian territory post-WWI (Arkhangelsk and Murmansk, for example). The territory wasn't held for long, and the US actually kept the Japanese from invading around that timeframe, but this is something still imprinted on the Russian psyche.
NWN 1, to me (and this is IMHO, so take it for what it's worth: little to none), is a must-have. However, I would also take in the hundreds of very good player-written modules as well. The OC for the game was more a primer on how to write modules than a decent game in itself. SoU and HotU had decent scripts, but I would say that the top-tier player-written content (with the CEP and CTP) was some of the best I've played. A number of persistent worlds were outstanding as well.
NWN2, to a lesser extent. The graphics are better, but one couldn't do as much with the toolset.
Of course, the precursors to those, BG1 and BG2, are a must.
Going backwards from there, the old Wizardrys and most of the old Ultimas are classics. Ultima 1-6 are timeless, but 7 onward are sort of like Metallica post-"Black" album... same genre, but really different works with little in common with the previous ones except the name.
Wizardry 1-3 are also classics. I'd probably go for an Apple II emulator and the disk images for them as opposed to the DOSBox versions, but that is just me.
Another one is a game that wasn't that popular, but was interesting for its time: Deathlord, from EA. It was like the Ultima series... but a lot harder, with quite a large world to do stuff in.
Even though Itanium is all but dead, I did like having 128 GP registers to play with. One could do all the loads in one pass, do the calculations, then toss the results back into RAM. The amd64 architecture is a step in the right direction, and I'd say that even though it was considered a stopgap measure at the time, it seems to have been well thought out.
With Moore's law flattening out, the pendulum might end up swinging back that way.
Right now, for a lot of tasks, we have CPU to burn, so the ISA doesn't really matter as much as it did during the 680x0 era.
But who knows... Rock's law may put the kibosh on Moore's law eventually, so we might end up seeing speed improvements come from either better cooling (so clock speeds can be cranked up) or from adding more and more special-purpose cores. At that point, having code optimized by a compiler for a certain ISA may become the way of developing again.
For example: high-power CPUs, low-energy CPUs, GPUs, FPUs, FPGAs, and, going further, CPUs intended for I/O (MIPS). It might be that we end up with a custom core just to run the OS's kernel, another to run security-sensitive code, and still others for applications.
Or just have the V2V system check whether the speed limit was exceeded in "x" amount of time and automatically send the ticket. Or have it log that someone stopped with the tip of the car 1-2 cm past a stop line, and send another citation, etc.
Unless it is implemented right, it will be ripe for abuse, just like red-light cameras that have no yellow phase, or that briefly flash red (just long enough to pop a picture) and then go back to green.
Of course, when the bad guys start messing around with V2V, it will be even worse, especially when someone starts transmitting "rear-end collision is imminent, slam brakes on NOW" to vehicles on the highway at random times.
I've found SELinux useful. Yes, it can be a pain, but if the device is Internet-facing or in the DMZ, it can do a lot to contain a security breach. As always, it can be shut off with a single command, but it is a layer of security generally worth having if at all possible. That way, even if the Web server has an exploit and an attacker manages to get into its context, then get root... they are still limited to the directories the Web server is allowed into. It isn't perfect, but it does help.
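As a rough illustration (paths and log locations vary by distro, so treat this as a sketch, not a hardening guide), the usual day-to-day knobs look like this:

```shell
getenforce                    # prints Enforcing, Permissive, or Disabled
setenforce 0                  # the "single command" off switch (until reboot)
ls -Z /var/www/html           # show the SELinux context on the web root
restorecon -Rv /var/www/html  # reset contexts after moving files around
audit2why < /var/log/audit/audit.log   # explain recent AVC denials
```

The point is that the Web server process only gets the contexts its policy names, regardless of its UID.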
Unfortunately, the days of a static UNIX that stays the same are long gone. Security issues, feature demands, the need to configure large numbers of hosts at once, and other items push vendors like Red Hat to do updates.
One of those demands is having machines boot faster, thus the move to systemd, upstart, or another mechanism that allows asynchronous starting/stopping of services.
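For illustration, a minimal (hypothetical) systemd unit file; systemd starts units in parallel unless ordering is explicitly declared, which is where the boot-time win comes from:

```ini
# Example unit only -- names and paths are made up.
[Unit]
Description=Example app server
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/app-server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Anything that doesn't declare an `After=` relationship to this unit can start concurrently with it.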
Windows has the ability to stash login credentials securely, but on Linux this functionality isn't present, so having the browser "pack its own parachute" with its own encryption would be nice.
I wish for a feature that Firefox has: the ability to set a master password and encrypt all password manager contents. That way, stored passwords and certificates are independently protected.
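A minimal Python sketch of the idea, assuming nothing about Firefox's actual scheme (the function and parameter names here are illustrative): derive the password store's encryption key from the master password via PBKDF2, so the store is opaque without it.

```python
import hashlib
import os

# Illustrative only -- not Firefox's actual implementation. A master
# password plus a random salt derives the key that would encrypt the
# password database; without the master password, the store is useless.
def derive_store_key(master_password: str, salt: bytes,
                     iterations: int = 200_000) -> bytes:
    """Derive a 32-byte key (e.g. for AES-256) via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256",
                               master_password.encode("utf-8"),
                               salt, iterations)

salt = os.urandom(16)            # stored beside the encrypted database
key = derive_store_key("hunter2", salt)
assert len(key) == 32
```

The salt and iteration count are stored in the clear next to the encrypted data; only the master password stays in the user's head.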
My concern about always-on storage is that if someone gets root, they can zero out the backup storage, purge all snapshots, then rsync the zeroed out changes.
I sometimes wonder about using hard disks instead of tapes in a silo. Perhaps something like Imation's RDX, except with modern, high-capacity drives, or maybe even a robotic mechanism that can handle bare-bones disks, moving them from a storage area to a reader, and so on.
Hard disks are not as reliable as tapes, but done right, they could be used as a way to have backups that can't easily be dumped with a single command, as backups stashed on an Avamar or other appliance can be. Plus, there is also the benefit of being able to take media offsite and rotate it in and out.
I looked into making a prototype of this circa 2009, and into which companies could do the robotics accurately enough to handle bare-bones drives. It is a lot easier if the drives are in an enclosure, but bare-bones means there are no enclosure "standards" to deal with.
In the early 1990s, AIX allowed you to partition drives (physical volumes) so that a logical volume could reside on the inner or outer part of a drive. That way, DB indexes and critical tables could be placed where access was relatively fast, while the stash for archive logs, program files, and rarely accessed data could be placed on the outer part. Not SSD speed, but it was a way to help with database performance, especially if one had a lot of spindles.
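If memory serves (check the AIX `mklv` man page before trusting the exact flags), the placement was requested with the `-a` intra-physical-volume allocation policy, roughly like this; volume group and LV names here are made up:

```shell
# "-a c" asks for the center of the physical volume (lowest average
# seek time); "-a e" asks for the edge. Names are hypothetical.
mklv -y db_index_lv -a c datavg 64    # hot: DB indexes near the center
mklv -y archlog_lv  -a e datavg 128   # cold: archive logs at the edge
```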
Slots apps are a good example of this. Virtually all of them will toss you a small amount of coins every four hours, and you gain levels by spending coins so you can play more elaborate simulated slots, some of which are only playable for 30 minutes. Of course, if you don't want to wait out the rest of the four hours, you can do in-app purchases.
In fact, it seems most games on smartphones and tablets are this way... you need to consume/use "X" resource to gain levels to do more stuff... and the only way to do that quickly is to spend hundreds on some resource (coins, brains, smurfberries).
IMHO, a smartphone game that went back to the pre-2011 IAP style, offering a decent game without forcing you to buy anything at all other than levels, would be a hit. A good example of this is "The Quest" on iOS, which has a lot of add-ons to play through.
Nothing is 100%, but an air gap will force a black hat to either get someone physically on site, do some social engineering, or find someone that they can control to do their work for them.
Keeping stuff off the Internet, either by air-gapping or by having a separate network with tightly controlled access points (or perhaps even something like a data diode), blocks all but the most well-heeled attackers, and big firms/governments are far better adapted to dealing with physical threats than with stuff coming in via the Internet.
I've taken two machines, each on a different network, plugged in a serial cable with one of the lines cut (so bits only moved one way), then used syslog on the secure network and redirected the port's output to a file on the insecure network. This wasn't fast, but it got data to the people who needed it while keeping the secure side off the Internet unless someone physically accessed it. A true data diode does the same thing, except faster... however, expensive. As a hack, a dedicated line-level Ethernet tap might also work, because the computer plugged into the mirrored port is unable to change or reply to the network stream coming from the secure side.
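On the receiving (insecure) side, the recipe is little more than this; device name and baud rate are examples, and this assumes a Linux box with GNU coreutils:

```shell
# Put the port in raw mode at a fixed speed, then append whatever
# arrives to a log file. With the transmit line physically cut,
# nothing this box does can reach the secure side.
stty -F /dev/ttyS0 9600 raw -echo
cat /dev/ttyS0 >> /var/log/secure-side-syslog.txt
```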
It also happens to men.
A former co-worker of mine, who had just gotten a job in another state, had someone slip roofies into his drink at a party. He wound up stumbling to the wrong house, got brained with a baseball bat, and snagged both a criminal trespass charge (because he opened an unlocked door) and a public intoxication charge. None of this does he remember. His memory is gone from when he had drinks at the party until he woke up shackled to a hospital bed due to the head injury.
I've personally handled tens of thousands of LTO tapes, and I've had only five go bad. Three had soft media errors (no data loss, just stuff the ECC codes were able to handle), and two had issues with being handled by the grippers in the robot.
I've also recently pulled data from DLT IV tapes from 1998 with no errors.
Plus, tape isn't expensive. The pricey part is the drives and libraries, as well as suitable backup software. Once past that, individual tape cartridges are quite inexpensive: $50 is about the highest I see for LTO-6, and I've seen them as low as $10 each in quantity.
At Facebook's level, RAIT is possible, so I don't get why they are bothering with relatively small-capacity media when LTO is an established, highly reliable format that can do everything FB wants without having to reinvent the wheel. Even encryption can be done on the drives themselves.