Why would a vm for the project be annoying? What whole disk? They could look at the OS files installed I guess but there would be nothing belonging to any other project or user on there. If they change something they shouldn't you can roll it back. If you want to write data but not let them read it then write it to an external log server or a write-only disk. Complex security schemes are a lot more annoying than just properly dividing security between services.
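The "write data but don't let them read it" idea above is easy to sketch with stock tools. This is a minimal sketch, not a full setup: it uses Python's standard `SysLogHandler` to ship log lines to an external syslog server over UDP, so the guest can append audit data it can't later read back or alter. The address `127.0.0.1:514` is a placeholder for whatever your log host actually is.

```python
# Sketch: ship audit events to an external log server instead of
# keeping them readable on the local disk. Assumes a syslog daemon
# listening on UDP 514 at the (placeholder) address below.
import logging
import logging.handlers

logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)

# 127.0.0.1:514 stands in for the external log host.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
logger.addHandler(handler)

# The guest can write events but has no way to read or rewrite them.
logger.info("config change rolled back on project VM")
```

A write-only virtual disk or an append-only remote store accomplishes the same thing; the point is that the boundary lives outside the guest, not in a pile of in-guest permission knobs.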
I already spend more effort than I like ripping out useless security features. Every project has a virtual machine, or several, and they are isolated from each other. I don't need outdated security features that just get in the way. As it is I'd be more interested in a Linux distro that came with all that crap removed. It's been years since I used groups on a production server, I never found ACLs useful, I usually disable firewalls, filesystem permissions are a hassle far more often than they are useful, etc. Heck, the only time a real person logs into most of my systems is when something goes wrong with permissions or some other protection feature and causes a problem.
Make sure the virtualization servers are up to providing proper security between instances and from the network and then scrap all that stuff in the guest OS.
I used to use fanciful names, but these days I have way too many servers for that. So now we get VMHOSTn (VMHOST3, VMHOST55, etc.), WEBn, ISCSIn, etc., where n is usually the last octet of the primary IP address. 10.1.1.1 might be ISCSI1 while 10.4.5.6 might be WEB6.
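The scheme above is mechanical enough to generate. Here's a trivial sketch of it (the function name and example IPs are just for illustration):

```python
# Derive a hostname from a role prefix plus the last octet of the
# primary IP, matching the VMHOSTn / WEBn / ISCSIn convention.
def hostname_for(role: str, ip: str) -> str:
    last_octet = ip.split(".")[-1]
    return f"{role.upper()}{last_octet}"

print(hostname_for("iscsi", "10.1.1.1"))  # ISCSI1
print(hostname_for("web", "10.4.5.6"))    # WEB6
```

Handy side effect: given a hostname you can usually guess the box's IP, and vice versa, without looking anything up.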
It's not so much how you put the code together as understanding the way the different components work together. Scratch doesn't hide the details very much - it just provides a graphical representation. Any experienced programmer knows that it doesn't really matter if you use Python, Perl, Java, or C so much as knowing how algorithms work. All that other crud is dealing with your language's syntax and limitations and how the code will be executed.
I've previously made a tool similar to Scratch for writing shell scripts and it was a pretty interesting experiment although I eventually decided the mouse was a slow way to program. I've also done some domain specific languages for games and tools that used a lot of visual components and it can work very well for those.
Recently I've been experimenting with making a tool for programming in a multitouch environment, which I think works much better. Right now I'm working on producing JavaScript, but that's only because it's easy to use on both iOS and Android. All the normal language features such as defining functions and variables, control statements, etc. are simple gestures, and instead of naming things with a string the programmer can make a doodle (or type in a string). Existing code is visually expressed and can be edited by touching the area that needs editing. I think the concept is strong, although obviously certain details will need tweaking.
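To make the "gestures map to language features" part concrete, here's a purely illustrative sketch. Nothing here is from the actual tool: the gesture names and templates are invented, and a real implementation would build an AST rather than paste strings. It just shows the shape of the mapping from a recognized gesture to emitted JavaScript.

```python
# Invented gesture names mapped to JavaScript templates the tool
# might emit; a real system would work on an AST, not raw strings.
GESTURE_TO_JS = {
    "swipe_right": "function {name}() {{\n{body}\n}}",
    "tap_hold":    "var {name} = {value};",
    "circle":      "while ({cond}) {{\n{body}\n}}",
}

def emit(gesture: str, **slots: str) -> str:
    """Fill the template for a recognized gesture with its slot values."""
    return GESTURE_TO_JS[gesture].format(**slots)

print(emit("tap_hold", name="count", value="0"))  # var count = 0;
```

With doodles in place of names, the `name` slot would be an opaque identifier the tool generates, with the doodle shown wherever that identifier appears.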
Whereas the name iCloud was meant to sound like an Apple product? Anything named in that way is being named to make people connect it with Apple.
I think Apple should be more careful but this is obviously a case where both sides contributed to the problem.
I'd been using Unix since before many Slashdotters were born, and I still use AIX, Linux, FreeBSD, ESXi, Windows, and Mac OS on a daily basis, but I do most of my casual computing and about half my hardcore geek computing on my iPad. When you have server clusters, why would you carry around a bulky PC with a crappy OS? When they put a Retina screen into an iPad I'll probably dump the PC completely.
Pretty much anything you can do with a Mac, Windows, or Linux desktop environment I can do better, easier, and faster with an iPad and a server cluster.
I've even been experimenting with making a gesture/touch based programming system that just maps gestures to language features and replaces names with doodles for things such as variables, classes, and methods. Not really as weird as it sounds; probably considerably better for those not native to ASCII.
Sounds like a case of a company having absolutely no idea what makes a good tablet so they just try to do it all. Maybe throw in a lot of buzzwords to make it sound cool. The end product costs about the same as an iPad and the iPad is awesome. Why would I want to try a crappy knockoff that isn't even cheap?
Almost all of these companies would do better if they'd just create cool accessories for Apple products. I can think of many cool accessories nobody has done yet but instead I see consumer electronics companies cranking out crap tablets. You can only sell me a tablet once every couple years (and probably just once if it sucks) but I'll buy several cool accessories a year.
Of course in practice that's often not much of an improvement, because the actually useful parts of your program change so rapidly and drastically that all your careful design breaks: it becomes impossible to reuse what you already built without a major rewrite, and all your fancy structure ends up making adjustments even slower. I do think objects and good design make a lot of sense in libraries and other long-term structures, but not really in the majority of code. Real world projects just aren't that stable.
I have a 24" screen and a 17" screen right now, with my iPad hooked up as a third screen. I also print, as a PDF, to my iPad, which IMO is a lot more useful than printing to paper. And I can grab documents off Windows or Unix file shares faster than others can bring them up in Windows or print a copy. In my last meeting the computer in the conference room couldn't load a document because it was from a slightly newer version of Office (2007 wasn't new enough), but it came up fine on my iPad.
Obviously not perfect yet but it has a lot of potential. I'm hoping AirPrint, AirPlay, etc become widely supported standards. I'm imagining AirPlay extended to be two way so you can use it similar to VNC for the systems nearby. Would be awesome if it was actually built into the video cards so you could do something similar to a KVM for all your servers without miles of wires. Of course that's just dreaming.
If I were Google I'd just start buying up these media companies every time they go bankrupt. Soon enough the problem is fixed.
E = MC ** 2 +- 3db