Y'know, I think it is the whole dropping bombs on people thing that is bad from the hearts and minds perspective. Drones might _slightly_ make the U.S. seem like a faceless military machine, but the people having explosives rained on them would, all in all, rather not be blown up regardless of the make and model of the aircraft.
If you want to win hearts and minds, be better than the alternative.
Cloud computing is not appropriate for all types of research computing. Say you want to use Amazon's cloud offering, but you have a genomic and geospatial dataset of 60 TB. While not ubiquitous in research computing, that is not unheard of, especially in fields like bioinformatics. The cost of storage and the cost of transfer will each eat away at whatever grant is funding the research. This is a business decision. Does the cost of the computing resource and its operation result in [ more grants / better faculty retention ] than not having it?
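The storage-and-transfer cost point is easy to make concrete with back-of-the-envelope arithmetic. A minimal sketch, with the caveat that the per-GB prices below are illustrative assumptions (roughly S3-shaped), not current AWS list prices:

```python
# Back-of-the-envelope cloud cost for a 60 TB dataset.
# The per-GB prices are ASSUMED for illustration -- plug in real,
# current provider pricing before making any actual decision.

DATASET_TB = 60
GB_PER_TB = 1000

storage_price_gb_month = 0.023   # assumed storage price, $/GB-month
egress_price_gb        = 0.09    # assumed data-transfer-out price, $/GB

size_gb = DATASET_TB * GB_PER_TB

monthly_storage = size_gb * storage_price_gb_month
one_full_egress = size_gb * egress_price_gb

print(f"Storage: ${monthly_storage:,.0f}/month "
      f"(${monthly_storage * 12:,.0f}/year)")
print(f"One full transfer out: ${one_full_egress:,.0f}")
```

Under these assumed prices that works out to on the order of $1,400/month just to hold the data, plus several thousand dollars every time the full dataset moves out of the provider's network. Against a multi-year grant, that recurring line item is exactly the kind of cost the cost-benefit analysis has to weigh.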
The cost-benefit analysis has been done, and while cloud computing has its place, there are additional costs that make it problematic. The cloud is not a panacea.
That said, in five years IU could very well be looking for its next big computer. The average lifespan of a supercomputer is 5-8 years. So, five years is on the early side of looking for the next big thing, but not outrageously so.
Disclaimer -- I run high speed data storage for a university. I've written acceptance test measures for high performance computing resources. I've done the cost-benefit analyses.
The OP is using it as a server. I'd hope he is following best practices: developing locally (and securely) and deploying to the network. Especially if he is unfamiliar with the production environment.
Ultimately, the OP should probably install VirtualBox or another virtualization solution on that Windows 7 desktop and figure out the deployment strategy before exposing the work on the network. It costs nothing but a little time, and the payoff is understanding what you are pushing out into the real world.
Look at the first bullet point of the timeline. Productivity suite approved, upgrade to Calmail cancelled. Then a week ago, they decided on an interim upgrade because not upgrading in the first place caused problems. So, rather than a planned upgrade, the IT folks were thrown into panic mode because their (probable) proposed timeline for safely doing an upgrade, including burning in and testing of new hardware, was cut to a fraction of what it should've been.
You can argue about the budgets, or the IT folks, but this is a failure of management. If (in Spring 2011) they cancelled the upgrade, and then had to have an emergency upgrade, what you have is management that fundamentally does not understand the system. This would (probably) not be the IT folks managing the system, but rather the budget and personnel management that doesn't quite grok how upgrades should be done in a safe and controlled manner. They misjudged the initial cancellation, and then (likely) pushed through a poorly planned emergency upgrade.
If the slides are correct, there is very little having to do with a failure from a technical aspect, and everything to do with a breakdown of management.
Clustered filesystems are not designed to make your data safer, or to provide ease of recovery. In fact, they make both of those things a bit more difficult. In the case of Lustre, the point is performance -- I have N servers that I am willing to dedicate to serving the filesystem, I can therefore get N times the throughput for large distributed jobs.
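The "N servers, N times the throughput" point comes from striping: a file is cut into fixed-size chunks spread round-robin across the storage servers (OSTs, in Lustre terms), so a large read hits all of them in parallel. A toy sketch with hypothetical numbers (the function names and figures here are mine, not Lustre's API):

```python
# Toy illustration of why striping a file across N storage servers
# multiplies aggregate throughput. All numbers are hypothetical.

def stripe_map(file_size_mb, stripe_size_mb, n_osts):
    """Assign each stripe-sized chunk of the file to a server, round-robin."""
    offsets = range(0, file_size_mb, stripe_size_mb)
    return [(offset, i % n_osts) for i, offset in enumerate(offsets)]

def ideal_aggregate_throughput(per_server_mb_s, n_osts):
    """Best case: a large read streams from all servers in parallel."""
    return per_server_mb_s * n_osts

# A 16 MB file in 4 MB stripes across 4 servers: each chunk lands
# on a different server, so all four can serve reads at once.
print(stripe_map(file_size_mb=16, stripe_size_mb=4, n_osts=4))
# -> [(0, 0), (4, 1), (8, 2), (12, 3)]

print(ideal_aggregate_throughput(per_server_mb_s=500, n_osts=4))
# -> 2000
```

Note what this buys and what it doesn't: the same striping that multiplies bandwidth also means a single file now depends on every server it touches, which is exactly why recovery gets harder, not easier.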
Filesystems that provide replication help, but unless it is copy-on-write (COW), replication does not take the place of backups.
If you are paranoid about data safety, invest in a backup solution. The only reason to use a distributed file system is for increased performance.
Hailstorm, Silverlight, Passport, MSN, Bob...
MS is the same as any other large company. Outside of their proven revenue generators, they throw a bunch of stuff at the wall and see what sticks. Not that I mind competition in any space, but still...
Linux is supported here for most things, and there is pretty heavy staff usage of it on workstations, mine included.
Simple. There are jobs that require a degree of accredited training (nurses, doctors, engineers, teachers). For those professions it is easy to determine who meets a required set of qualifications. Those are the easy professions to target.
Folklorists who manage flower shops, write novels, or enter the diplomatic corps do not require folklore training to do their jobs. It may have helped, but then their high school 4-H club may have helped as well. That's not quantifiable, and you can only run a large-scale assistance program with values that are quantifiable.
The point is to steer people toward the professions that are in demand. The fact that our hypothetical folklorist could, because of personal qualitative measures, do good work in other professions doesn't mean that funding a gaggle of folklorists is going to produce consistent results for the in-demand professions. Our example here is an outlier rather than a typical result. So when your society finds folklorists scarce and in demand, you fund that. When it needs doctors, you fund that. It is simply a matter of reducing obstacles to get people to choose what you (as a society) need.
Shouldn't we be valuing each profession in terms of its value to the whole, and discounting based on necessity? For example, we need more nurses, so nursing should be considerably less expensive than a folklore major, which contributes less to the whole.
This is not to start a flame war with folklorists, just stating that our society requires more nurses than folklorists to function. The cost benefit analysis should support producing more of what we need, rather than more of what we don't.
This is similar to the TCL plugin for Netscape way back in the '90s.
One of the original ideas behind TCL/TK by Ousterhout was the concept of "active data". One of his early examples was a coded email message that was interactive while you were reading it. Unregulated, the concept is a security nightmare, but the Safe TCL work was ahead of its time in pushing the idea that active data required security (this was before the MS Macro viruses).
All that said, I think it might be too little, too late with regard to Native Client. It might be good for niche applications, but the strength of the browser is good cross-platform applications without the user having to do anything.
Some people have a great ambition: to build something that will last, at least until they've finished building it.