Microsoft Working On "Post-Windows" Cloud Computing OS 208
Barence writes "Microsoft is working on a web-based operating system called Midori, as it looks to life beyond Windows. Midori is expected to be a cloud-computing service, and so not as dependent on hardware as current generations of Windows. It's also expected to run with a virtualization layer between the hardware and the OS, and to be a commercial offshoot of the Singularity research project that Microsoft has been working on since 2003." If this story sounds familiar, that's probably because it is.
So now (Score:4, Interesting)
The operating system's behavior will become even more opaque. An application will reside somewhere in the cloud, and it will be harder to tell whether it is a legitimate application or some malicious program.
Of course, there will be advantages to an OS like that too, especially for distributed computing problems.
Or, as in the classic SF story about the question "Does God exist?" - "Yes, NOW there is a God," once all the computers in the net were connected. And the man who tried to cut the connection was vaporized by a bolt of lightning.
another bad idea for consumers (Score:4, Interesting)
Re:Not as dependent on hardware... (Score:5, Interesting)
Re:who's buying? (Score:3, Interesting)
You're missing the point. Local machines are relatively inefficient, so you could have a local machine that's effectively a thin client, with all its processing offloaded into the cloud.
A step like this is an attempt to do away with the local machine: software as a service, but also the computer as a service.
Re:who's buying? (Score:5, Interesting)
Who cares if they are relatively inefficient? Could a thin client browse the web, check email, play YouTube videos? Sure! But why should I get my aging mother to buy a new one when her three-year-old PC is still doing just fine? What's the motivating factor? Not only will I have to motivate her to buy a new PC, I'll also have to convince her to pay for a monthly service so that she can do all the same stuff she currently does for free. And all the documents she has, many of which are sensitive in nature, are now going to be hosted on the internet. I'm failing to see why any post about cloud computing for consumers is tagged as anything other than "badidea;goodluckwiththat".
And as "relatively inefficient" as desktop PCs are, the network connection you rely on is significantly more inefficient. Sure, passing text blocks isn't a problem, and even passing low-resolution video only requires a few minutes of queuing. But have you ever tried playing a video game over remote desktop, where instead of sending the data across the network you are sending full-screen images? I'll give you a hint: even if the cloud is rendering 9000 frames per second, you'll get at most 1 frame per second on a 19" monitor at a decent resolution.
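To put rough numbers on that claim, here is a back-of-the-envelope sketch. The resolution and link speed are illustrative assumptions (a 1280x1024 display and a 10 Mbit/s broadband link), not figures from the post:

```python
# Rough frame rate when streaming raw, uncompressed frames over a
# network link (illustrative numbers, assumed for this sketch).

width, height = 1280, 1024        # a "decent resolution" on a 19" monitor
bytes_per_pixel = 3               # 24-bit color
frame_bytes = width * height * bytes_per_pixel   # ~3.9 MB per frame

link_mbit = 10                    # assume a 10 Mbit/s broadband link
link_bytes_per_sec = link_mbit * 1_000_000 / 8   # 1.25 MB/s

fps = link_bytes_per_sec / frame_bytes
print(f"{frame_bytes / 1e6:.1f} MB/frame -> {fps:.2f} fps")
```

So without aggressive compression, the link, not the renderer, caps you well below one frame per second, which is in line with the parent's point.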
And therein lies the rub: if you have a system powerful enough to play any modern graphics-intensive video game, you have a machine more than capable of doing everything else the average consumer would do. Buying a new machine and OS, and dealing with all the pain and inconsistencies of depending on SaaS, is not a worthwhile investment.
Corporate use? Maybe. But consumer use? No way. This is not going to be the "next Windows".
-Rick
Microsoft's weird mania for virtual machines (Score:5, Interesting)
First .NET, now this. Why Microsoft's mania for virtual machines, considering they only support x86 targets? Microsoft at one point shipped NT for PowerPC, MIPS, Alpha, and x86, and that was with natively compiled code. So it's not about portability. It seems more like Microsoft's answer to Java: if Sun was succeeding in that market, Microsoft had to go there too.
Rather than trying to use software-separated processes, it would be more useful to improve message passing so that hardware-separated processes could talk to each other better. This, by the way, is one of the big weaknesses of the UNIX/Linux world: interprocess communication sucks. What you usually want is an interprocess subroutine call, or "synchronous message-oriented interprocess communication". What UNIX and Linux give you instead are pipes (one-way, stream-oriented, asynchronous), sockets (two-way, stream-oriented, asynchronous, with excessive overhead), System V IPC (two-way, message-oriented, asynchronous, used by nobody), and shared memory (unsafe; one process can crash another). There is no safe, synchronous message-passing system. You can build one atop the existing mechanisms, but at a big performance penalty. The result is huge, monolithic applications, or systems that use "plug-ins" that can crash the entire application (e.g., Apache). Fast message passing has a bad history in the UNIX world, due to the Mach debacle, but it works fine in QNX, IBM's VM, and hypervisors like Xen. (Windows has fast message passing, although for historical reasons dating back to the 16-bit era it's somewhat clunky and too tied to the windowing system.)
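For illustration, here's what a hand-rolled synchronous "interprocess subroutine call" looks like when built on one of those stream mechanisms (a minimal Python sketch over a UNIX socket pair; the length-prefix framing is invented for this example, and having to invent it is exactly the per-application boilerplate being complained about):

```python
import os
import socket
import struct

# A synchronous request/reply call built atop a stream socket: the
# caller blocks until the reply arrives, emulating a subroutine call.
# Length-prefixed framing is hand-rolled here because the OS only
# provides a byte stream, not message boundaries.

def send_msg(sock, data: bytes) -> None:
    # 4-byte big-endian length header, then the payload
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_msg(sock) -> bytes:
    hdr = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack("!I", hdr)
    return sock.recv(length, socket.MSG_WAITALL)

cli, srv = socket.socketpair()

if os.fork() == 0:                   # child acts as the "server"
    cli.close()
    request = recv_msg(srv)
    send_msg(srv, request.upper())   # the "subroutine" body
    srv.close()
    os._exit(0)
else:                                # parent makes a blocking call
    srv.close()
    send_msg(cli, b"hello")
    reply = recv_msg(cli)            # blocks until the reply: synchronous
    print(reply.decode())
    os.wait()
```

Every round trip pays for two context switches plus kernel buffer copies in each direction, which is the performance penalty of layering synchronous calls on asynchronous streams.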
Windows at least has a standardized approach to message passing. The UNIX/Linux world does not. This leads to a proliferation of mechanisms for doing the same thing. Both KDE and OpenOffice use CORBA for message passing, but they don't use compatible versions of it.
small team of hackers (Score:3, Interesting)
jnode.org :-)