No. BIOS code only gets run at boot time.
People have been predicting the death of Unix and the command line for ages. Most people don't care about the long term because they're accustomed to a constant cycle of upgrades that makes money for large corporations - it's what they've been conditioned to do. If we don't want to run browsers that can get infected, email clients that render whatever they're told to render, and systems with poorly written third-party software (I'm talking about you, Flash and Java), then who's the smart one?
I keep wondering if I'm doing old school things just because, but every time I try something new, I find that there aren't enough compelling reasons to modernize and at the same time there are enough good reasons to use what works well.
If there's one thing we can generalize about truly intelligent people, it's that they are always curious. The geniuses can come up with questions nobody else can.
How about doing a dd of the entire drive from the current system to a virtual disk and trying to make that work? Is the Unisys hardware that special? If not, you might be able to get it working by manipulating the virtual hardware of your VM.
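If the hardware isn't doing anything exotic, the whole-disk copy can be as simple as streaming dd over ssh into a raw image and attaching that image to a VM. A rough sketch - the device name, host names, and paths are all made up for illustration:

```shell
# On the old machine, stream the entire disk to the VM host as a raw image
dd if=/dev/sda bs=64k conv=noerror,sync | ssh user@vmhost 'cat > /vmdisks/old-system.img'

# On the VM host, attach the image to a VM as a raw disk, e.g. with QEMU
qemu-system-x86_64 -m 1024 -drive file=/vmdisks/old-system.img,format=raw
```

If it doesn't boot, that's where fiddling with the VM's virtual hardware (disk controller type, and so on) comes in.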
I don't like Firefox because they try to take Windows-isms and force them on Mac users. My user experience is one thing in 99% of the programs on my computer - why should how I select text be different for Firefox? Or why can't I launch Firefox normally by holding command-option and hitting the down arrow like I do for every other program but which sends Firefox into some special "safe" mode?
Firefox shouldn't proselytize specific OS behavior.
Isn't this exactly what happens elsewhere, but in the other direction? After all, many people think that KDE, GNOME and other large programs are written for GNU/Linux and just happen to be ported elsewhere. Try to Google something about setting up Apache or bash and you'll find Linux this, Linux that even though neither are exclusive to GNU/Linux in the least.
Expecting rDNS is pretty common. Expecting PROPER rDNS, on the other hand, is another thing altogether.
If a machine doesn't have rDNS, then it can't send email to anyone at AOL, for instance. So it'd be quite disingenuous for anyone sending email through a machine without rDNS to act surprised when they can't contact you.
On the other hand, there are too many ISPs who have rDNS, but broken rDNS (doesn't resolve in the forward direction, uses names which don't belong to them, et cetera). I block email from all connecting machines which have rDNS (or HELO/EHLO strings) which say yahoo.com, hotmail.com, gmail.com, or google.com, which cuts down on a LOT of spam. The real services always have blahblah.something.yahoo.com, for instance.
I also block HELO/EHLO names which don't resolve in DNS, and on my backup MX I also block when the HELO/EHLO doesn't resolve back to the connecting IP. This, IMHO, is much more effective than only rDNS checking. People don't always control their own rDNS, but they damned well better control whether their mail server is lying or not.
The bottom line is this: are you expecting email from just anyone? If so, you can't block on these checks, but you can increase the spam score. If you generally correspond with the same people and only occasionally start corresponding with someone new, you can take the time to let them know when someone new has a broken mail server. This is what I've done for years (with HELO/EHLO) and most people thank me once I explain why it's in their best interest to fix it.
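The backup-MX test described above (does the HELO name resolve, and does it resolve back to the connecting IP?) can be sketched as a tiny shell function. This is pure illustration, not a real MTA hook - in practice you'd do it with a milter or in the MTA's own ruleset:

```shell
#!/bin/sh
# Illustrative HELO/EHLO sanity check: reject if the name doesn't resolve,
# or if it doesn't resolve back to the connecting IP.
check_helo() {
    helo="$1"
    ip="$2"
    # First address the name resolves to (empty if it doesn't resolve)
    resolved=$(getent hosts "$helo" | awk '{ print $1; exit }')
    if [ -z "$resolved" ]; then
        echo "reject: HELO '$helo' does not resolve"
    elif [ "$resolved" != "$ip" ]; then
        echo "reject: HELO '$helo' resolves to $resolved, not $ip"
    else
        echo "accept"
    fi
}

# Example: a made-up name in the reserved .invalid TLD never resolves
check_helo no-such-host.invalid 192.0.2.1
```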
How does one get Linux shell access on BSD servers?
I've been doing my own email for 15 years now, and it's really not that hard to maintain. Sure, if your flavor of GNU/Linux changed significantly every time there's a new version, it'd be a pain to keep up to date, but I've been using similar configuration files, updated a little now and then, with the same software installed across many servers for ages (sendmail, procmail, milter-greylist, imap-uw, cyrus-sasl, SquirrelMail for OCCASIONAL webmail only, et cetera).
Some people like to tinker too much to maintain a constantly running server. For them, self hosting is NOT a good idea. Some people like to run GNU/Linux distros which are too difficult to maintain, and again, self hosting isn't an answer. A simple GNU/Linux distro or some flavor of BSD can be much easier to keep up to date and therefore more secure.
There are two primary reasons why I will NEVER move to an outside email provider. The most important one is that in this day and age your email can be subpoenaed without you ever even knowing and employees of any given service can't always be trusted to not do bad things. I want full, 100% control of my email. And in spite of what other people have written in comments about the fact that email isn't secure end-to-end, the archives are always in my possession. But add TLS and at least you've made it MUCH harder for people to see stuff traveling over the Internet.
The second reason is that almost EVERY service is non-deterministic (if I'm wrong, please tell me). I am tired of people wondering where email is only to find out that some cheesy content-based filter silently dropped their email or something else happened and the likelihood that Google or Yahoo will EVER look in their logs to tell you is practically nil. My filtering is based on servers being legitimate, not based on some arbitrarily determined rules. If something is rejected, there's always a known reason and it is ALWAYS logged.
Again, please correct me if I'm wrong, but this has been my experience to date.
I have a Commodore A2232 seven port serial card in my Amiga 4000 in my datacenter which provides serial consoles to a number of other machines. While other multiport serial cards have RISC processors or large buffers, this card is simply a 3.58 MHz 65CE02 which polls each port and puts incoming characters into its 16k of memory, which the Amiga can access directly. It's a beautiful example of simplicity at work.
If you get everything into a standard (free)Unix spool file, it'll be readable a hundred years from now. After all, what other kind of archive file could you have from twenty years ago which you could easily use today?
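That spool format (mbox) is just plain text - each message starts with a "From " line - so even the most basic tools can work with it. A quick demonstration (the file path is an example):

```shell
# Build a tiny two-message mbox spool by hand
printf 'From alice Mon Jan  1 00:00:00 2001\nSubject: hi\n\nbody\n\n' > /tmp/demo.mbox
printf 'From bob Tue Jan  2 00:00:00 2001\nSubject: re: hi\n\nreply\n\n' >> /tmp/demo.mbox

# Count messages: one "From " line per message
grep -c '^From ' /tmp/demo.mbox

# List subjects
grep '^Subject:' /tmp/demo.mbox
```

Nothing here will be any harder to do in a hundred years than it is today.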
Since 802.11b can be faster than many Internet connections (at least in the United States), a dedicated wireless link can bridge two or more networks so that each can use the other in case of an outage. For instance, my work is physically close to my home. Both places are on cable modems, but since throttling happens at the modem, the speed between the two over the Internet is limited by the uplink rate of each place. By setting up a wireless bridge, I can communicate between the two at about five times the speed (roughly 500 KB/sec as opposed to about 100 KB/sec) while leaving the Internet feeds usable for other applications.
Also, if the connection goes down on one network, a simple route command on one of the NAT / routing machines makes everything go through the other network's Internet connection.
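The failover itself is one command on the NAT box. A sketch with made-up addresses, assuming the far side of the wireless bridge is reachable as 10.0.1.1:

```shell
# Classic route syntax: swap the default route to the other network's gateway
route del default
route add default gw 10.0.1.1

# Equivalent with the newer iproute2 tools
ip route replace default via 10.0.1.1
```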
In the case of high wireless network density (I can see about twenty wireless networks from my work), you can also use 802.11b hardware on channels that aren't commonly used in the US such as 12 and 13 (Europe) or 14 (Japan).
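With most older cards this is just a channel setting, assuming the driver and regulatory domain allow it (the interface name is an assumption):

```shell
# Force an 802.11b interface onto channel 13 (allowed in Europe, rare in the US)
iwconfig wlan0 channel 13
```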
Perhaps it's not ideal, but slow is better than none.
"Google Introduces Command-Line Tool For Linux"
is about as relevant as saying
"Google Introduces Command-Line Tool For Blue Computers" because blue is your favorite color. Sure, it'll run on blue computers, but it wasn't MADE FOR blue computers. Nor were these tools MADE FOR Linux. They'd have to be written as kernel modules to be made for Linux.
Anyhow, Linux isn't even an OS - it's a kernel. Just try to run Linux sometime without GNU and let me know how that works out for you.
Sure, so-called "tech journalists" think that every UNIX thing in the world is really a Linux thing, and sure, no "tech journalist" will ever properly call the OS GNU/Linux, but Slashdot? You people have to be a better example for everyone else.