Yeah, most of the cheating I heard about in my CS program 10 years ago was not from people who were lazy or "party people" or whatever the usual stereotype is. Most of the temptation to cheat came from people who were completely in over their heads with the entire subject and felt backed into a corner, wedged between a lack of preparation and social pressure to succeed. ("I did OK in math class, and I like using my computer, so why can't I do this?") For many of them, day one of CS 101 was the first time they had ever thought critically about the structure and function of a computer (imagine trying to do college algebra if you had never seen mathematical operators before), and they just fell further behind as time went on. The cattle-herd design of public university classes did not help.
The ethically smart ones got extra tutoring from classmates and teaching assistants or, in the worst case, switched degrees when they realized they were hopelessly behind. The not-so-smart ones abused the help of their friendly/naive classmates or found some other way to BS through the material. Most of the time this didn't work out even on semester timescales, but I do remember one group project where a guy couldn't write a single line of code unprompted, yet had somehow landed a job at IBM starting at the end of the semester.
I don't have any sympathy for people who cheat in classes, but I agree that characterizing the problem as simple laziness or the "moral bankruptcy of the kids these days" teaches you nothing about how to address the problem. Sadly, the solution probably involves things that are socially or economically infeasible: Smaller intro classes, actual focus on pedagogy and not teaching fads in intro classes, de-emphasis of 4-year degrees as a prerequisite for white-collar employment, more investment and advertisement in focused two-year programs for technical fields, etc.
And we don't have to use Highlander Rules when considering drive technologies: there's no reason a storage array has to be built purely out of SSDs or purely out of HDDs. Sun showed in some of their storage products that by combining a few SSDs with several slower, large-capacity HDDs under ZFS, they could satisfy many workloads for a lot less money. (Pretty much the only thing a hybrid storage pool like that can't do is sustain very high IOPS of random reads across a huge pool of data with no read locality at all.)
I hope we see more filesystems support transparent hybrid storage like this...
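For the curious, setting up a hybrid pool like that under ZFS is only a few commands. This is just a sketch; the device names are illustrative, and you'd size and mirror the devices to suit your workload:

```shell
# Capacity tier: mirrored pairs of large, slow HDDs
zpool create tank mirror da0 da1 mirror da2 da3

# SSD as L2ARC read cache, to absorb the random-read working set
zpool add tank cache ada0

# Small, fast SSD as a separate log device (SLOG) for synchronous writes
# (in production you'd want this mirrored too)
zpool add tank log ada1
```

ZFS then migrates hot data onto the cache device automatically; applications just see one pool.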
For $26, you can measure each device's power draw when it's on and when it's "off" and figure out who the actual power hogs are:
http://www.newegg.com/Product/Product.aspx?Item=N82E16882715001
Then at least you'll save wear-and-tear on your plugs for devices that are really off when turned off. (Like your washer and dryer, for example. I would be surprised if they draw power when off.)
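Once you have the standby wattage from the meter, the yearly cost is simple arithmetic. A quick sketch (the 12 cents/kWh rate and the example wattages are assumptions; plug in your own readings and utility rate):

```python
def annual_cost(standby_watts, cents_per_kwh=12.0):
    """Yearly cost in dollars of a device drawing `standby_watts` around the clock."""
    kwh_per_year = standby_watts * 24 * 365 / 1000.0
    return kwh_per_year * cents_per_kwh / 100.0

# A hypothetical cable box idling at 20 W costs about $21/year;
# a 0.5 W wall wart costs about 53 cents. Unplugging the former is
# worth the trouble; the latter, probably not.
```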
Gah, total format failure. Forgot I was in HTML mode. You get the idea, though.
I don't see this as an either/or proposition. Backing up protects you from data loss, which comes in many forms:
* Sudden hardware or software failure
* Silent hardware/software failure (or user failure) resulting in corruption you only discover later
* Theft/fire/natural disaster
At the same time you want:
* Easy backup procedure (if it is too hard, you won't do it)
* Fast restore procedure
A sensible backup plan needs to address all of these needs. Incremental tape backups rotated to an offsite vault are one option that covers most of them, but they aren't particularly easy or automated. RAID is very easy and convenient, but only covers a very narrow range of hardware failures. (If you listen closely, you can hear the distant screams of a RAID user who just lost data to software-induced filesystem corruption. Hence the mantra "RAID is not backup.")
Network (blah, blah, "cloud," blah) backup services are a great option for cheap offsite backup that is extremely convenient and continuous. But you should supplement them with some kind of local, fast backup as well. That way you can recover quickly from hard drive failure and corrupted filesystems, but still have a Plan B if your house floods. (Or if your local backup turns out to be broken when you need it!) Moreover, many network backup services will mail you a hard drive for a fee if disaster strikes and you need to restore everything.
In my case, I use CrashPlan and Time Machine to do this. CrashPlan backs up changed files every 15 minutes to several offsite locations. I also plug a Time Machine disk into my laptop periodically to make a local snapshot. Restoration is quick in the common case, but I also have coverage for extraordinary events as well as backups when I travel without my external disk.
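A Time Machine-style local snapshot mostly boils down to hard-linking files that haven't changed against the previous snapshot, so each snapshot looks like a full copy but only modified files consume new space. A rough Python sketch of the technique (not what Time Machine actually runs, just the idea):

```python
import os
import shutil

def snapshot(source, backup_root, name):
    """Copy `source` into backup_root/name, hard-linking any file that is
    unchanged (same size and mtime) since the most recent snapshot."""
    existing = sorted(os.listdir(backup_root)) if os.path.isdir(backup_root) else []
    prev = existing[-1] if existing else None
    dest = os.path.join(backup_root, name)
    for dirpath, _, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        os.makedirs(os.path.join(dest, rel), exist_ok=True)
        for fname in filenames:
            src = os.path.join(dirpath, fname)
            new = os.path.join(dest, rel, fname)
            old = os.path.join(backup_root, prev, rel, fname) if prev else None
            unchanged = (old and os.path.exists(old)
                         and os.path.getmtime(old) == os.path.getmtime(src)
                         and os.path.getsize(old) == os.path.getsize(src))
            if unchanged:
                os.link(old, new)       # share storage with the prior snapshot
            else:
                shutil.copy2(src, new)  # new/changed file: full copy, mtime preserved
    return dest
```

Restores are trivial because every snapshot is a complete directory tree you can just copy back.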
I think often people confuse "altruism" with "long term self-interest," and that may be the issue Google is considering here. In the short term, you can make it hard for tenants to move out, and maybe gain a little bit of rent that you would not have otherwise gotten. However, people talk and, in the long term, behavior like that can lose you potential customers. You will be forced to drop your rent in order to keep your units full.
(This relates to the best description of "business ethics" I've heard: ethical business requires that you balance the needs of, and try to act in the best interests of, your owners, employees, and customers. Otherwise, in the long run, you will find yourself without capital, labor, or revenue. Thus, business ethics is about long-term self-interest, not some kind of abstract altruism. Of course, sometimes the "long run" takes a really long time, which encourages people to risk unethical behavior.)
Making it easier to leave Google applications helps grow your potential customer base in the future (such as those who are wary of lock-in), at the risk of losing current customers who are unhappy with your service. That is a motivation well-rooted in self-interest, as long as you think your product is better than everyone else's.
The GeForce 9 series was a rebrand/die shrink of GeForce 8, but the GTX 200 series has some major improvements under the hood:
* Vastly smarter memory controller including better batching of reads, and the ability to map host memory into the GPU memory space
* Double the number of registers
* Hardware double precision support (not as fast as single, but way faster than emulating it)
These sorts of things probably don't matter to people playing games, but they are huge wins for people doing GPU computing. The GTX 200 series also got a minor die shrink mid-generation, so I don't know whether the next generation will be another die shrink or an actual architectural improvement. (Hopefully the latter, to keep up with Larrabee.)
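To make the host-memory mapping concrete: with a GT200-class card, CUDA lets you pin a host buffer and hand the GPU a device-side alias for it, so a kernel reads and writes host RAM directly instead of staging everything through cudaMemcpy. A minimal sketch (error handling omitted; whether zero-copy beats explicit copies depends on the access pattern):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;   // operates directly on mapped host memory
}

int main(void) {
    const int n = 1 << 20;
    float *h_buf, *d_buf;

    // Must be set before any CUDA allocation to enable host-memory mapping
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Pinned, mapped allocation: visible to both CPU and GPU
    cudaHostAlloc(&h_buf, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; i++) h_buf[i] = (float)i;

    // Device-side pointer to the same memory -- no cudaMemcpy needed
    cudaHostGetDevicePointer(&d_buf, h_buf, 0);
    scale<<<(n + 255) / 256, 256>>>(d_buf, n);
    cudaDeviceSynchronize();

    printf("h_buf[1] = %f\n", h_buf[1]);  // GPU result visible in host memory
    cudaFreeHost(h_buf);
    return 0;
}
```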
"An organization dries up if you don't challenge it with growth." -- Mark Shepherd, former President and CEO of Texas Instruments