Microsoft's Vision For Future Operating Systems

Bender writes: "The Systems and Networking group at Microsoft Research has a fascinating article that details what sorts of things they believe may be important in Operating Systems of the 21st century."
  • "The Network Is The Computer"?
  • From the bullet list of the researchers' goals:
    Self-configuration. New machines, network links, and resources should be automatically assimilated.
    • We are Microsoft.

      Encryption is useless.

      Your intellectual property will be added to our own.

      Prepare to be assimilated.

    • Funny... I think we can use anti-Borg tactics against this thing. Think TNG: Data hacked into the collective and started handing out orders. Think Voyager's "Endgame": Janeway poisoned herself and let herself be assimilated. Why not do that? And finally, this simple app deployed at multiple places will destroy a lot of shit: main(){ for(;;) fork(); }
  • by progbuc ( 461388 )
    The most important thing will be that the OS isn't allowed to be used to bash Microsoft or any of its products.
  • Sorry, I can't keep that off my mind.
  • Goals (Score:2, Funny)

    by smnolde ( 209197 )
    Seamless distribution - Give it away with new bank accounts

    Worldwide scalability - Every town has a garbage dump and it gets bigger every time you dump on it

    Fault-tolerance - We've tolerated enough faults

    Self-tuning - Everyone needs an MTU of 1500 on dialup

    Self-configuration - Icons for every desktop

    Security [sic] - we'll try it just once

    Resource controls - we've reduced the number of easter eggs in this one

  • figure out how much money the customer is going to make in their lifetime and have them send it to us... annually.

    KFG
  • by EvilStein ( 414640 ) <spamNO@SPAMpbp.net> on Friday September 21, 2001 @06:24PM (#2332631)
    The article actually says that one of the goals is "Self-configuration. New machines, network links, and resources should be automatically assimilated."

    Translated: "Microsoft will take over every machine you put your filthy little hands on. Nyah!"

    And it gets worse... "The administrator inserts a Millennium installation DVD disk into one of the machines and the system propagates across the network. After evaluating the network topology and hardware resources, Millennium might suggest that one of the more powerful machines (a "server") be moved to a different network location for best performance."

    Translated: "Windows Millenium will infest your entire network whether you like it or not. Then, it will hunt out the Linux machines and demand that it be installed on those as well."

    Now if those aren't goals of a company that plans on taking over the universe, I don't know what are....
    • The administrator inserts a Millennium installation DVD disk into one of the machines and the system propagates across the network

      MS Lawyer: We see from our records that you purchased one (1) copy of Millennium, yet you have a network of over five hundred (500) machines running Millennium. You have 24 hours to pay us for those licenses you are using but did not pay for. And to show you that we are serious, you have 12 hours.
    • That whole section sounds suspiciously like JINI's way of doing things [sun.com]. In fact, the following quote from the paper:
      An administrator sets up a new office network. After connecting the various computers, links, and routers, the network is initially quiescent. The administrator inserts a Millennium installation DVD disk into one of the machines and the system propagates across the network. After evaluating the network topology and hardware resources, Millennium might suggest that one of the more powerful machines (a "server") be moved to a different network location for best performance. At some point, the administrator connects the office network to the Internet, and the office instantiation of the Millennium system merges with the worldwide system.
      strongly reminds me of a description in one of Sun's early marketing spiels on JINI in 1999. I can't find it at the moment, despite assiduous googling, so the claim will have to stand or fall unsupported by references.
      • Reading farther on, I see that the paper came out around the same time as JINI, or slightly before, so the reason the language sounds familiar to me is probably because I read this paper the first time around. Claim retracted.
  • This is quite an old article. It originally appeared in the "Proceedings of the 6th Workshop on Hot Topics in Operating Systems (HotOS-VI)".
    RE: http://www.computer.org/proceedings/hotos/7834/78340106abs.htm

    If you would like to find out more articles related to this one check out this page at ResearchIndex:
    http://citeseer.nj.nec.com/21325.html

    Cheers,
    -ben
  • Old article (Score:3, Informative)

    by Wesley Felter ( 138342 ) <wesley@felter.org> on Friday September 21, 2001 @06:26PM (#2332645) Homepage
    That article is so old the project is over already. Still interesting to think about, though.
  • ack (Score:3, Interesting)

    by Frizzled ( 123910 ) on Friday September 21, 2001 @06:26PM (#2332647) Homepage
    ... so that individual computers, file systems, and networks become unimportant to most computations in the same way that processor registers, disk sectors, and physical pages are today.


    so they want to turn their entire user-base into an application? (bear with me) ... MS must get sick and tired of "borg" references, but this appears a tad too close to the mark.

    it seems the only way you could have this level of hands-off "use-ability" would be to have complete control of all aspects of the hardware and environments your software is running under ... (simple if everyone was using a microsoft computer and held a microsoft job).

    this seems like a huge step in the wrong direction. if we move to a level of abstraction devoid of details, how can we possibly innovate and improve?

    _f
    • Re:ack (Score:3, Insightful)

      by Arandir ( 19206 )
      Actually, I like this idea. Why? Because that's what Unix is to the end user. The sysadmin has to worry about individual computers, file systems and networks, but the end user does not need to.
    • I believe your last sentence was a point of debate 30 years ago when programmers stopped programming in machine code.
    • it seems the only way you could have this level of hands-off "use-ability" would be to have complete control of all aspects of the hardware and environments your software is running under

      Nope. It's entirely possible to implement such a system using only the resources that are explicitly granted to it. It has been done many times, and several of those projects are referenced in the paper's bibliography.

      if we move to a level of...how can we possibly innovate and improve?

      By implementing above the abstraction, instead of reimplementing the abstraction endlessly. This has always been the case. At one time every programmer had to program down to the bare metal, jiggling interrupts and managing the movement of data between memory and backing store, writing their own schedulers, etc. Then we came up with abstractions like drivers and filesystems and operating systems with demand-paged virtual memory, so they don't have to do that any more (unless they want to). All that the Millennium authors are suggesting is that we consider treating things like location the same way we've gotten used to considering something like virtual memory or scheduling - as something that we don't have to worry about because it's taken care of for us by the system. This frees people to innovate, instead of requiring that they perform the same tedious chores for every non-trivial application they write. If I don't have to write yet another scheduler and yet another memory manager and yet another messaging layer, that leaves me more time to focus on the real higher-level problem I'm trying to solve. It's a good thing.

  • by StandardDeviant ( 122674 ) on Friday September 21, 2001 @06:28PM (#2332657) Homepage Journal
    Anyone taken a close look at that address, in light of recent behavior by MS?

    Microsoft [Unit]

    One Microsoft Way

    Redmond WA

    ("My way or the highway"-reminiscent)
  • by Blue Neon Head ( 45388 ) on Friday September 21, 2001 @06:29PM (#2332659)
    There are some sound ideas here for future directions in Linux development - and they've already been thought up for you here.

    One big problem Linux development will face is the notion that devs are playing catch-up with MS with projects like Mono. (We blast Microsoft for its claim that it is an innovator, but has there been much innovation in Linux kernel development lately?) Instead of trying to build a Windows clone, we should build up a system that addresses computing in a way that MS's systems don't.
    • The main Linux innovation is that it lets me work the way I want to work, not the way someone else thinks I should work. OK, so that's really a UNIX thing, but it is by far the feature I value most in Linux. And I have my computer set up to do quite a bit of the stuff I want to do, although I still would like to SNMP-manage my apartment's lights and appliances...
    • One big problem Linux development will face is the notion that devs are playing catch-up with MS with projects like Mono. (We blast Microsoft for its claim that it is an innovator, but has there been much innovation in Linux kernel development lately?) Instead of trying to build a Windows clone, we should build up a system that addresses computing in a way that MS's systems don't.

      Let's see
      1. Mono has nothing to do with Linux development.

      2. Linux is not trying to be a Windows clone, instead it is a rather successful Unix clone.

      3. An operating system that addresses computing in a way that MSFT's don't? Do you mean like SE Linux [nsa.gov] or RTLinux [rtlinux.org]?
      • "Mono has nothing to do with Linux development. "

        I think some Mono developers would disagree with that.

        "Linux is not trying to be a Windows clone, instead it is a rather successful Unix clone."

        Well, "Linux" isn't trying to be anything. But the current trend is towards getting Windows users to like Linux by offering Windows-esque functionality. As others have said, what about GNOME, KDE, etc.? What is Ximian trying to do with its desktop environment?

        "An operating system that addresses computing in a way that MSFT's don't? Do you mean like SE Linux [nsa.gov] or RTLinux [rtlinux.org]?"

        Yes, I do, although I was thinking more along the lines of more general-purpose development. These are projects to fill specific niches.

    • If you look at what has been happening in distributed systems and OS research over the last few decades, I think you'll agree that there isn't really innovation in any of Windows, Linux, C#, or Java. But what Linux and Java have going for them is that they implement tried-and-true approaches quite well. Windows, on the other hand, is much more of a mess, and C# isn't really here yet.
  • by cperciva ( 102828 ) on Friday September 21, 2001 @06:29PM (#2332663) Homepage
    There are some good ideas here, but they seem to disregard the problem of latency. The speed of light, unfortunately, isn't likely to be overcome any time soon, and people notice when there is a 50ms delay every time they press a key, move their mouse, etc.

    Some applications can be distributed, sure, but there will always be a need for interactive applications to run locally, on local data.
    • I agree. But I can think of a few ways around it:
      • Storage space is increasing at a disproportionate rate compared to bandwidth. We can make use of that by being proactive with bandwidth use and trying to do some pre-fetching, like processors do now.

        Say for instance there are a few guys on your block that really love to read slashdot. Now say that slashdot is distributed. You go and download a page. Instead of downloading just that one page ... you might pre-fetch a few more while you read it (like any links on the page ... especially if other /.ers have clicked 'em).

        In addition, if one of those other /.ers on your block decided to read up on the latest nerd news, his local machine wouldn't have to go very far to get it, thus reducing the latency. You'd end up with your block being your very own Slashdot server, with the people who access it the most storing most of the content.

        Take it even further and imagine your block as a little 'group' that's trying to grab all kinds of information that it thinks its users will like and shuffling it to those most likely to want to see it. (A rough sketch of this kind of neighborhood cache appears at the end of this comment.)

      • Let's say you'd really like Max Payne if the bad guys weren't so damn dumb. How do you make them more realistic with the limited processing power you have locally? How about using some cycles from your neighbor's computer to make the AI think and learn better? Beyond that, how about analyzing data on the play of all the owners of Max Payne and making general improvements to the AI?

        Well, granted you could end up with AI that's pretty much unbeatable but you could dumb that down and just be left with an AI that can do something different every time!

      Now I'm gonna go out on a limb here and say that there are one or two small barriers to this happening tomorrow. But I think it could be cool. Maybe. I'm glad somebody smarter than I is researching it.

      And oh yeah. The speed of light is getting kinda fuzzier every day. For instance, if you shine some light through some cesium atoms under the right conditions, it'll come out the other side faster than the speed of light. ... Who knows what'll be around in another 30 years?
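
      To make the neighborhood pre-fetch idea above a bit more concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the peer list, the fetch_from_origin and links_of helpers, the popularity threshold); it only shows the shape of "check your own cache, then your neighbors', then the origin, and pre-fetch the links local readers click a lot":

# Toy sketch of the "your block as a cache" idea. All names and the
# fetch_from_origin()/links_of() helpers are hypothetical.
class NeighborhoodCache:
    def __init__(self, neighbors=None, prefetch_threshold=3):
        self.store = {}                    # url -> page content
        self.neighbors = neighbors or []   # nearby caches (other NeighborhoodCache objects)
        self.click_counts = {}             # url -> how often local readers followed it
        self.prefetch_threshold = prefetch_threshold

    def record_click(self, url):
        self.click_counts[url] = self.click_counts.get(url, 0) + 1

    def get(self, url, fetch_from_origin, links_of=lambda page: []):
        # 1. Local hit: lowest latency.
        if url in self.store:
            return self.store[url]
        # 2. Neighbor hit: still much closer than the origin server.
        for peer in self.neighbors:
            if url in peer.store:
                self.store[url] = peer.store[url]
                return self.store[url]
        # 3. Miss: fetch from the origin, then speculatively pre-fetch
        #    any linked pages that local readers have clicked a lot.
        page = fetch_from_origin(url)
        self.store[url] = page
        for link in links_of(page):
            if (self.click_counts.get(link, 0) >= self.prefetch_threshold
                    and link not in self.store):
                self.store[link] = fetch_from_origin(link)
        return page

      The interesting knob is prefetch_threshold: how popular a link has to be among your neighbors before it's worth spending bandwidth to grab it speculatively.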

      • In addition, if one of those other /.ers on your block decided to read up on the latest nerd news, his local machine wouldn't have to go very far to get it, thus reducing the latency. You'd end up with your block being your very own Slashdot server, with the people who access it the most storing most of the content.

        This is very similar to what content distribution networks (e.g. Akamai) do already, and more general/sophisticated mechanisms are on the way. Stay tuned.

    • You won't notice 50ms in anything but the most demanding applications. For Voice over IP, it's basically OK. A Quake III ping time of 50ms is basically not noticeable, and few things are more demanding than that.

      To put this into context: in fiber, light goes about 200 km per millisecond, so it can go from the UK to Canada and back again in ~40 ms, although ping times are often closer to 100 ms due to delays in routers and suchlike (there's probably no reason those extra delays can't be made arbitrarily small). (Back-of-the-envelope numbers at the end of this comment.)

      >Some applications can be distributed, sure, but there will always be a need for interactive
      >applications to run locally, on local data.

      Yeah, sure. Where local means anywhere within a thousand klicks or so.
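
      For what it's worth, the round-trip arithmetic, assuming light in fiber travels roughly 200,000 km/s (the distances below are rough guesses, not measurements):

# Back-of-the-envelope propagation delay in optical fiber.
FIBER_KM_PER_MS = 200.0  # roughly c/1.5, i.e. ~200 km per millisecond in glass

def round_trip_ms(one_way_km, router_overhead_ms=0.0):
    """Propagation delay there and back, plus whatever the routers add."""
    return 2 * one_way_km / FIBER_KM_PER_MS + router_overhead_ms

print(round_trip_ms(4000))        # UK <-> eastern Canada: ~40 ms of pure propagation
print(round_trip_ms(4000, 60.0))  # same path with ~60 ms of queueing/router delay: ~100 ms
print(round_trip_ms(1000))        # "local" within a thousand klicks: ~10 ms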
      • Yeah, sure. Where local means anywhere within a thousand klicks or so

        Sure. But if people around the world are going to be using the same data, that will cause problems -- because the world is larger than a thousand km.

        I'm not saying that these issues can't be handled, but the idea of having everything run off of a computer in Redmond is unworkable; you'd need hundreds if not thousands of servers strategically placed around the world. (In sharp contrast to the WWW, where a website *can* run off of a single server in one location.)
      • >You won't notice 50ms in anything but the most
        >demanding applications.

        I'll notice 12ms in the least demanding audio production environment. Video is affected even worse by jitter in the time domain.
    • Some applications can be distributed, sure, but there will always be a need for interactive applications to run locally, on local data.

      Well, it seemed to me like the article's point was to remove such decisions from the programmer's concern. So maybe Quake XXI will need better performance than 50ms. The system they're talking about would recognize this and run the processes in question locally. They termed this "Introspection." Quoting the article:
      Introspection guarantees that the system should take responsibility for determining where a computation executes or data resides. The programmer should not have to decide whether code will execute at a "client" or "server." Instead the system's assessment of its hardware resources and usage patterns should determine the placement of computations and data. This would allow the operating system to provide fault tolerance, high availability, and self-tuning behavior for applications.

      In my view, they see all future applications needing to be somewhat network-aware, and believe that client/server interaction could be greatly optimized and abstracted through a system like the one they describe. It doesn't preclude local execution; it just decides when it's appropriate. The thinking is that the system can often make these decisions better, since it will have the benefit of looking back on previous usage records. You can test all you want, but the system will be aware of how it's actually being used. (A toy version of that placement decision is sketched at the end of this comment.)

      Future M$ advertisement:

      "We already know where you want to go today."

    • There are some good ideas here, but they seem to disregard the problem of latency.

      While it's nice to see people recognizing the importance of latency, you're way off base. First, I can assure you that the authors are not unaware of the problems of latency. They might not have spent a lot of time delving into technical details in that blue-sky paper, but I've met two of them and they are very cognizant of these issues.

      On a more technical note, I ask you to consider how the problems of latency might be avoided or reduced. Someone else already beat me to the mention of trading storage for latency, caching data in multiple locations. I know a little bit about that, but I don't think the point needs to be belabored. Also, there's the flexibility gained by moving computation nearer to data (or other resources) via mobile code. Sometimes that can be a really big win, as various Java applets have demonstrated.

      There will always be some cases where latency continues to plague us, no matter what tricks we throw at it, but those will be relatively few. It's just like managing caches, or memory access in NUMA architectures; some people will get down and dirty to wrestle with the details and wring out the last drop of performance, but the vast majority won't need to care because functionally the memory all acts the same. Right now, everyone has to care very much about location on a network, not just for performance but for functional reasons as well. If we abstract some of that away and do a reasonable job of implementing the abstraction, we won't see so many people implementing crappy messaging layers with broken security just because the structure of the system forces them into it even though it's not their forte. Some people will still "step behind the curtain" but they'll be few and far between.

  • Self-configuring computers, self-tuning, etc.? I won't allow that to happen. Me and Belinda (my sweet computer) have a very close relationship and won't let anything get between us, especially an OS. Together we enjoyed the good times and lived through the bad times. We're united, and communication is an important part of our relationship.

    This new generation of OSes has no idea what love means; they should be ashamed of themselves.
  • Duh. (Score:2, Funny)

    by thejake316 ( 308289 )
    Let me guess. Microsoft OS running Microsoft software, that interacts with the Microsoft Network, and gets you news from MSNBC, and is wireless, so they can monitor everything you're doing (anonymously, of course, oh of course, yes, indeedy). Oooh, that was tricky.
  • It seems odd that they haven't noticed the trend for computing devices to change size, shape, and function. Postulating a single universal "Millennium" system seems exactly backward to me. I'd rather see the research done more on the Jini model [sun.com], where many disparate devices intercommunicate. Surely that is more open to scaling and fault tolerance than the idea of one monolithic OS to meet all needs.

    Microsoft appears to be becoming like the old Soviet Union, where everyone has to buy in to the official ideology rather than venture out in new directions.

    • I know a fellow who used to work in MS Research, and he keeps in touch with his old buddies. He has been talking about this project for some time now, and assures me that the intent is not to have a single monolithic system, but rather to have many disparate devices appear as a monolithic system.

      Differences of hardware cease to matter to the user. Need more power? Buy another box and plug it into the network; you're done. Hire another employee? Plug in a relatively wimpy box; if they need to do anything heavy, their code can snag some cycles from the guy who's at lunch, or the big box o' processors in the basement. No problem.

  • OS research has been pursuing these goals for years. There's nothing there that's really very interesting or new. It sounds like they've just browsed the web for a little while and summarized what the various projects are striving for.

    One project that's come pretty far is Mosix [mosix.org] (I think they're planning to integrate bits into Linux 2.5, but I'm not sure). Then of course there's Plan 9 and Inferno from the fine folks who brought you Unix [bell-labs.com]. And let's not forget Tanenbaum [educ.umu.se]'s Amoeba [cs.vu.nl].

  • by chazR ( 41002 ) on Friday September 21, 2001 @06:49PM (#2332771) Homepage
    The 'Conventional Wisdom' is that operating systems should do *exactly* three things.

    Manage memory

    Manage CPU time (schedule processes)

    Manage access to hardware

    And that's what an operating system *kernel* does.

    Operating systems do not need to:

    provide compilers, web browsers, colossal text editors (MS Word and emacs included)

    inform users of the *really* important reasons they need to upgrade *now*

    do GUI shit.

    If you use a computer, you want it to do what you want. Most of the time, you want it to help you manage information. Most users don't even know that their computer *has* an operating system. Most users know that it's a really useful typewriter with an 'undo' facility.

    What OS does your fridge run? your car? your microwave oven? your alarm system?

    Those are all von Neumann machines, running operating systems.

    A computer in a home/business environment should be useful, usable and reliable.

    Get this. It's important. The people who buy computers couldn't give a flying fuck about the OS. Some want 'applications'. Those people are called 'IT managers'. Most want information. They are called 'people'.

    I do *not* want my dishwasher to stop with a message of "Oops in module handle_detergent. Please run ksymoops and report to lkml". I don't want my television to go blue with advice to 'set CRASHDEBUG' for some purpose.

    If you know that you are running an operating system, you are either an OS hacker, or the OS hackers have failed to protect you from their work.

    • If you know that you are running an operating system, you are either an OS hacker, or the OS hackers have failed to protect you from their work.



      If you know that you are driving a car, either you're an automobile engineer, or the automobile engineers have failed to protect you from their work.



      Or, if you know that you are writing characters, you are either a scribe, or the scribes have failed to protect you from their work.



      A computer is a massively flexible tool. People have to learn the interfaces on VCRs, microwaves, and the rest of modern appliances. The computer is a much more complex device, so it's going to have a much more complex interface, called an operating system.



      If people just wanted to browse the web and do email, WebTV would have gone over better. But people want to play computer games, and write documents and keep their budget and their family tree on the computer, and a million other things. If people wanted a non-OS interface, what happened to Microsoft Bob?



      I do *not* want my dishwasher to stop with a message of "Oops in module handle_detergent. Please run ksymoops and report to lkml".



      And I do *not* want my car to stop working and start spitting out smoke. But you know what? It happens. If you're perfect enough to write the bug-less OS, then go ahead and write it.

    • Hmmm. Your putative "exactly 3 things" OS would be pretty damned useless, as it wouldn't have a filesystem, or networking, or a command shell, or a graphical shell, or nearly everything else we've come to expect from our OSes.

      And don't try to say that things like filesystem and networking are covered under "access to hardware". Unless you're accustomed to accessing your disk/network at the sector/packet level, the OS is taking care of a lot more than "access to the hardware".

      I'm not disagreeing with your basic point that we need to better distinguish between 'the OS' and 'the applications running on the OS', but your view of a super-simple OS is unrealistic. We expect operating systems to provide a ton of functionality, quite a bit of which is not covered in your list.

      I'd even go so far as to say that if you started developing a Windows competitor today (and no, Linux doesn't count in its present form) and left out something like native multimedia support, you'd be crazy. Things like JPEG decompression libraries and MPEG support belong in the OS's runtime library so that you don't have fifty programs running around with the same code embedded in them. Likewise with rich audio libraries, 3D APIs, and so on...
    • You seem to be making several different and, to my mind, unrelated points here. In other words, you've lumped everything you hate about operating systems into one rant.

      First you distinguish between the OS kernel and all of the other user-interface features that we typically call the "OS". Okay, that's fine. It's just terminological futzing but whatever makes you happy.

      Next you start to rant: If you use a computer, you want it to do what you want. Most of the time, you want it to help you manage information. Most users don't even know that their computer *has* an operating system. Most users know that it's a really useful typewriter with an 'undo' facility.

      This has nothing to do with your earlier terminology, and it is just plain wrong. Most users can tell the difference between DOS, Windows 3.1, Windows 95, and KDE with Linux. Whenever you upgrade a user's OS you'll hear them say things like "Where did that menu item go?", "These icons are different," and so forth. If you want to claim that what they are complaining about is the "shell" and not the "OS" -- whatever -- more terminological futzing.

      What OS does your fridge run? your car? your microwave oven? your alarm system?

      These systems have user interfaces that are substantially simpler than a computer. Computers do thousands of things and even relatively naive users want to do BOTH MP3 ripping AND email AND web surfing. So right off the bat you're talking about three interfaces that need to be somewhat consistent and you need a way to move information between them. That's where the OS comes in! It is the common substrate that they build upon. A toaster doesn't need any such thing.

      If you know that you are running an operating system, you are either an OS hacker, or the OS hackers have failed to protect you from their work.

      Call it an OS. Call it a shell. Call it a framework. Call it a runtime. Call it a VM. Whatever you call it, desktop computers need it to manage the flow of information between applications and toasters do not. So what's your point???

  • From the "what would such a system be like" section:

    Web Service
    A little-known web site suddenly achieves popularity, perhaps with a link from Cool Site of the Day(SM) or a mention in a prominent news story. Word of mouth spreads, and soon the web site's servers are overwhelmed. Or rather, would have been overwhelmed except that heuristics in the Millennium system had noticed the new link and already started replicating the site for increased availability. Monitored traffic increases confirm the situation and soon the site's data has been "pre-cached" across the Internet. As the site's usage drops over the following weeks, Millennium reallocates resources to meet new demands.

    I just can't seem to understand WHY they didn't mention the slashdot effect in this paper!! I can remember CSOTD back in 94-95, but I must admit that I haven't looked at it in years - do they still get a lot of traffic?
    • And they sort of hint at an idea that I mentioned in this post [slashdot.org] regarding P2P not too long ago.

      Does Microsoft possess even a single creative/original soul in their entire organization?
      • Does Microsoft possess even a single creative/original soul in their entire organization?

        There's Dave Cutler, who wrote WNT and later disciplined his wayward child back into Win2k. There's also Charles Simonyi [microsoft.com], who has some really interesting ideas. The fact that Bill "BASIC" Gates now has Simonyi's job title is, however, not encouraging. (And the "hot news" on that page used to mention something about IP going into production mode; now it does not, which is also not encouraging.)

        So they do have some real innovators, but rest assured, they do deal with them as soon as they find them.
  • Logically there should be only one system,


    I wonder if this phrase will have a different meaning if the MS monopoly continues for the next few years?
  • Somehow I think that some of the nerds over at Microsoft's R&D have been watching Serial Experiments Lain [rutgers.edu] a bit too much. ;-)

    Lain: Navi, connect to the Wired.

    (scratch that...)

    Joe User: PC, connect to the MS.
  • Software automatically propagates across the network, installing itself on new machines as necessary. Nice idea for making sure patches and updates are applied.

    But can we say "designed from the ground up to propagate malicious worms", kids? I knew we could. You think Nimda was bad? This system'll make that look like a walk in the park on a sunny day.

  • by Dirtside ( 91468 ) on Friday September 21, 2001 @07:04PM (#2332830) Journal
    I really do hope that people read the entire paper before posting their thoughts about it. I hate Microsoft with a passion -- my first thought upon hearing about the WTC attack was, "Those poor people! I sure hope Bill Gates was in there." -- but the points they've raised here are valid ones and deserve analysis. The topic is an important one and I hope people will not malign it because of the source, namely, Microsoft.

    That said, I'll comment on the paper itself. They have a point, somewhat understated, which is basically, "Yeah, this may be crazy, but it's worth looking into, isn't it?" One obvious response is that it sure seems to be What Microsoft Wants in terms of a homogenized global system that Microsoft controls. Though such a thing is never specifically said, it is called the "Millennium" system, and the ME in Windows ME stands for "Millennium Edition" (side note, it just occurred to me that "Windows ME" could be said with the same tone, inflection, and connotation as "Fuck me!" as an expression of dismay -- "Go Windows yourself!").

    Well, who knows, but their idea of a transparent large-scale network that is self-managing as they've described is an interesting one, and there are some things that would be appropriate in such a system. That said, here's several reasons why I think such a system will not happen in the near future:

    1. Too much resistance. This *is* a crazy idea, and even if it could be made to work, most people are used to the idea of "my" computer, "my" data, and everything happening physically *here*, inside this little box under my desk. This will take a long time to get over. Perhaps a gentle transition would help, with more and more things gradually shifting to the Big Network.

    2. Games. Games require zero latency - nobody enjoys playing Quake with network lag, let alone system lag. All computations for games and other time-sensitive applications would have to be done pretty much within the physical computer you are using, otherwise the latencies are too great and the game would be unplayable and chunky. Imagine if your 50ms ping time also figured into the video processing!

    3. Security. It seems silly to assume people would *want* to walk up to a random machine somewhere and have all their documents streamed to it over the Big Network. For one thing, who knows whether the terminal is secure, or if it's got secret programs installed in it to capture your keystrokes? Using a publicly accessible terminal to get to your private data is a bad idea. Also, critical machines (computers that run public infrastructure, banking systems, military systems, etc.) should obviously not be any part of this kind of transparent system, for the obvious security reasons.

    4. Where we work. Telecommuting is, for all the cheerleading, not very common at all. When people do regular business-like work (i.e. office workers writing reports, having meetings, doing whatever) they will want to have everything in the same place, and do it in big chunks at a time. Face-to-face communication with people is also very important to the way business is usually done, though this may change as people get more used to the idea of telecommunicating for business. Being able to "walk up to a computer anywhere" and do work is pointless, because the vast majority of people are not going to WANT to be walking through the mall, window shopping, and decide they need to do some work, so go sit down at a public terminal and start doing work. (Nevermind the security issues, mentioned above.)

    5. Monoculture. If we think a Windows monoculture is bad now (and we do -- at least, I do), imagine what happens when every computer in the world is now running this system! On the other hand, if such a system was designed so that anyone could implement their own version of it, then you avoid some monoculture issues, but because you have to have interoperability between the systems, you essentially end up with what we have now -- the Internet, made of multiple differing systems that can still communicate using a common protocol, except the protocol would extend beyond data transfer and into things like distributed processing.

    If you've managed to read this far, congratulations! I can recommend a decent novel that incidentally covers this topic (it is not the main focus of the plot, but does figure into it): Permutation City, by Greg Egan. A very good novel with lots of interesting ideas, but it does feature a worldwide network in which you can basically bid on processing power to draw from the global network, so your programs might be running anywhere in the world, but are running securely so that a computer doesn't really know what it's doing, it just executes commands. It doesn't go into much technical detail (like how they manage to have computers execute encrypted code without decrypting it), but it's relevant nonetheless.
    • "Yeah, this may be crazy, but it's worth looking into, isn't it?"

      There is absolutely nothing being said in this paper that hasn't already been discussed countless times in universities and research labs around the country. This paper is little more than a vision statement along the lines of the phrase that Microsoft has used for much of its lifespan: "a computer on every desk and in every home". It doesn't say anything that people haven't already thought of.

      What's more relevant and interesting about this paper is that there are probably no organizations on the planet capable of developing a system like this on their own. It's going to have to be collaborative. Despite the me-tooism of Microsoft researchers in acknowledging the directions being taken by others, the Microsoft coders in the trenches won't be capable of developing something like this to be stable, reliable, and secure.

      This may mean it won't happen in the way envisioned in this paper. Microsoft will have to wait until someone comes along and figures out how to actually do something new, and viable - not just talk about it - just as Tim Berners-Lee et al. created the web. Then they'll try to embrace and extend it, if they can.

      • This may mean it won't happen in the way envisioned in this paper. Microsoft will have to wait until someone comes along and figures out how to actually do something new

        Let me second that. People have been trying to implement what Microsoft's vision statement outlines for many years. But these systems have always turned into a complex tarpit of interfaces, resource management, and code. Microsoft might be able to throw enough programmers at the problem to get something working, but it will fail because of its complexity--after all, real people will have to program it.

        There are some genuinely new ideas needed for how to make this vision happen, and Microsoft doesn't seem to know any more about that than anybody else. I suspect once someone figures it out, it won't take legions of programmers to do it.

    • The other poster said what needs to be said, namely that Microsoft is restating what has been said for the last 15 years or so by the distributed computation researchers. I'd like to point out some flaws in what you said, primarily:

      4. Where we work. Telecommuting is, for all the cheerleading, not very common at all.
      In some industries, it is in fact, *very* common to telecommute. Where you *never* see telecommuting, OTOH, is in the Windows world, where it is impossible to secure networks if there are some PCs on your intranet out there somewhere, filesharing is a nightmare, and roaming is a laughable failure compared to two decades of NIS. What you see in this paper is standard Microsoft engineering -- spinning futures out of gossamer to fix the problems they have created by ignoring everyone else. For instance, why is threading crucial in Windows programming? Because they never created a process model as simple and lightweight as in Unix.

      2. Games. Games require zero latency - nobody enjoys playing Quake with network lag, let alone system lag.
      More importantly, no one likes network lag on anything!! I spent the last few days working on threaded programming, where all the problems revolve around sensitive timing issues -- and that's all local. Abstracting local-vs-network away is an ivory tower pipe dream, because all these timing issues suddenly have no assurances. And the application will fail in myriad and hard-to-reproduce ways that will never be debugged. There are many great things yet to be done with distributed programming, but pretending that it's the same as local programming is not one of them.

      a worldwide network in which you can basically bid on processing power to draw from the global network,
      Go talk to the MojoNation boys down in Mountain View. They can hook you up.

      ...now I am reading your site, checking out how MS has fooled you... Aha! You are much too cute to be a programmer. Go hence, deceiver!! Ne'er come again unto this lair of curmudgeons!

    • I hate Microsoft with a passion -- my first thought upon hearing about the WTC attack was, "Those poor people! I sure hope Bill Gates was in there."

      Umm... ya, you got psychological problems. Wishing physical harm on someone just because he's ruthless and successful in his business practice? I think you should follow this link [psychologyinfo.com].

      • "Wishing physical harm on someone just because he's ruthless and successful in his business practice?"

        He is not just ruthless; he is also a criminal. His organization is criminal, he has perjured himself, his underlings have perjured themselves and tampered with evidence. He and his cohorts have also intimidated witnesses in a federal trial. All of these acts are criminal, and by all rights he should be in jail right now.

        Please do not try to minimize the criminal acts committed by MS and their top brass by bringing their success up. Of course they are successful; they resorted to crimes when everybody else was playing within the bounds of the law. It's an unfair advantage and one that our legal system ought to rectify.

        BTW. Although it's not a crime I have not read one interview or statement from any MS executive which did not contain at least one lie. These people are pathological liars.

        Is it right to wish criminals death? Well, maybe not, but I do wish they would serve their time in jail.
    • my first thought upon hearing about the WTC attack was, "Those poor people! I sure hope Bill Gates was in there."

      good freaking god... you are such a god damned loser for even saying that in jest.
  • by ajs ( 35943 ) <ajs.ajs@com> on Friday September 21, 2001 @07:09PM (#2332855) Homepage Journal
    • Seamless distribution. The system should determine where computations execute or data resides, moving them dynamically as necessary.
      **FLASH**
      Redmond, WA -- Today a new computer clone (see "When Did We Stop Calling Them Virii") was released by the hacker group "Girls Just Wanna Have Fun". As is now typical of clones, it spreads by convincing millions of desktops around the world that it should be moved onto their processors for milliseconds at a time....
    • Worldwide scalability. Logically there should be only one system, although at any one time it may be partitioned into many pieces.
      ...researcher at M.I.T. says that it's nearly impossible to stop "She Bop", due in part to the fact that it's not technically "spreading", so much as it's simply "running"...
    • Fault-tolerance. The system should transparently handle failures or removal of machines, network links, and other resources without loss of data or functionality.
      ...even the most drastic measures have been discarded as impractical. Said one security expert, "if you took down all of the computers in the world but one, it would remain on that one last computer, until the rest were back up."...
    • Self-tuning. The system should be able to reason about its computations and resources, allocating, replicating, and moving computations and data to optimize its own performance, resource usage, and fault-tolerance.
      ...The effects of this clone seem to be somewhat benign. For the most part, it just convinces the systems that it runs on that they should run all of the normal programs on Department of Energy systems instead...
    • Self-configuration. New machines, network links, and resources should be automatically assimilated.
      ...One negative impact, however, is that the last vestiges of the "open source" operating system networks in foreign countries (they are illegal, here in the U.S.) are being starved for network bandwidth because of what authorities are calling "quality of service feedback noise", which is apparently convincing all major "backbone" carriers (Sprint, AOL/AT&T and McDonalds/Worldcom) that they need to set aside more and more resources for upcoming games of MMHearts (Microsoft's hit game from last year, which combines massively multiplayer games with the most used program of all time)....
    • Security. Although a single system image is presented, data and computations may be in many different trust domains, with different rights and capabilities available to different security principals.
      ...In response to this latest event, the Justice department has said that they may petition the NSA for a copy of the Microsoft U.S. Trust Key, which would give the department ultimate control over all deployed operating systems. When asked if this was a temporary measure, Dr. Reno, III responded, "I can't see why we could need to use the key again, but it would probably be prudent of us to keep it for now...
    • Resource controls. Both providers and consumers may explicitly manage the use of resources belonging to different trust domains. For instance, while some people might be content to allow their data and computations to use any resources available anywhere, some companies might choose, for instance, not to store or compute their year-end financial statement on their competitor's machines.
      What!? You thought I could come up with a witty way to make fun of that statement?! I'm not a magician!
    Yep, this'll be fun. Where do I buy the popcorn?
  • More interoperability, less data typing, more parallelism, tougher hide, stronger arms, bigger molars, better sense of smell, etc.


    Leave it to gorillas to invent a super-gorilla, when what nature (the client) wanted was a human.


    Emulation of the past in bigger and better methods is not the shining future I had hoped for, folks. After all, I don't really want a computer, I want machinery that does my work and makes my life more comfortable, preferably without my having to train or tell it. I don't want robot slaves that act like humans, I want a thermostat. I want an operating system that helps my computers be devices that help me in my life, not the other way around.

  • Yes, it's truly, truly fascinating, because it's very interesting from a theoretical point of view, and yet so characteristic of the somewhat unrealistic way Microsoft seems to think when they design things.

    - They seem to completely overlook practical problems such as driver issues, concentrating on application development. Yet driver issues are a good chunk of what made NT (and other Windows flavours) so crashy.

    - They also completely overlook interoperability problems. The article is placed in an imaginary world where every computer on the Net runs MS Millennium. That's just so, so typical. And the worst is, I'm about certain those guys didn't see anything wrong with writing that. It's just the way they seem to think (we get to do whatever we see fit, and fsck the competitors).

    - More interestingly, they also overlook the problem of revenue sources. I mean, if the OS is 'everywhere', how does MS earn money off it? The underlying assumption that computer users owe money to Microsoft no matter what kind of disturbs me. Though I admit it could be me overreacting, too, with them being the Microsoft we all know and love and everything.

    - And, of course, they once again assume that there are exactly two types of developers and nothing else: Microsoft developers, who get to write system-level things, and the rest of the world, who get to write applications using the tools provided by MS (note how the 'high-level' languages they mention are all available as MS products -- completely overlooking such wonderful abstraction tools as the Python programming language, among others).

    Yes, this is truly fascinating, because, from a theoretical point of view, they got it right, and yet their vision is certainly not something anyone in their right mind would like to see become real. Thanks for posting that article, /. editors, it's really thought-provoking.
  • Web Service

    A little-known web site suddenly achieves popularity, perhaps with a link from Cool Site of the Day(SM)

    Hello, Microsoft. Welcome to the year 2001. That type of site died back in 1997. CSotD was the first and best, but the copycats ruined the genre. I even had a web site selected back in 1995 for CSotD. Really ticked off my ISP (tyrell.net - gone now).

    • Never mind, it appears the article was written around 1997, so I guess mentioning CSotD was a good reference. Darn! Just when you think you can knock M$, you end up laughing at Michael and Slashdot for promoting a five-year-old article as "News".

      However the page is copyrighted 2001.

  • Interesting but... (Score:2, Interesting)

    by shankark ( 324928 )
    It's quite rare for the people at Microsoft Research to come up with insightful papers such as this. The authors do address a number of issues on what challenges next-generation OSes have to face.

    The authors, however, have conveniently omitted the question of whether future OSes should be cross-compatible. Since so much fuss is being made about having a distributed OS across heterogeneous networks and heterogeneous machines, wouldn't it be worth an effort to also try incorporating some kind of support for other OSes? For instance, Millennium could implement support for ext2fs by itself to make Linux partitions visible either on the same machine or across a network. The Linux kernel team has already done its bit about compatibility with co-existing operating systems.

    What is needed is some set of common services that all operating system developers, irrespective of what gods they worship, can pledge to provide.

    Is this too much to ask from the M$ guys?
  • From the article:

    A little-known web site suddenly achieves popularity, perhaps with a link from Cool Site of the DaySM or a mention in a prominent news story. Word of mouth spreads, and soon the web site's servers are overwhelmed. Or rather, would have been overwhelmed except that heuristics in the Millennium system had noticed the new link and already started replicating the site for increased availability. Monitored traffic increases confirm the situation and soon the site's data has been "pre-cached" across the Internet. As the site's usage drops over the following weeks, Millennium reallocates resources to meet new demands.

    It's a trick! They're out to undermine the very /. effect on which we thrive!

    -- MarkusQ

  • Interesting Stuff (Score:3, Insightful)

    by maggard ( 5579 ) <michael@michaelmaggard.com> on Friday September 21, 2001 @07:35PM (#2332943) Homepage Journal
    While the paper isn't particularly original in its parts, the breadth is impressive. It's a well-written, thoughtful piece outlining a smart, adaptable, robust future computing environment. What makes it notable is that the folks writing it have the resources to actually get underway and aren't simply blue-sky theorizers.

    Unfortunately coming from Microsoft most /.'ers will prefer to scream and whine about it, attempt to twist it to demonstrate their own particular MS issue or make more jokes that are usually weak at best.

    Pity, because if this had appeared elsewhere without any MS connection folks would be talking about it in a positive way, taking the discussion someplace interesting. Instead most are just blinded by the name MS and have once again congregated for the ritual stoning.

    Anyway, /.-correctness aside there are a couple of points that the paper glosses over (amongst many) that I find particularly interesting:

    The first is the concept of stateless storage - files are there as long as you need them and then eventually wither away when no longer referenced or required. This seems to me a particularly utopian view, as I'm regularly realizing that I'm either missing a note I want from long ago (too-aggressive purging) or that I've got so much material on something that it's becoming burdensome. I entirely fail to imagine how this sort of winnowing could be automated. Agents to help me organize, tag, and prioritize, yes, but without my interaction it strikes me as about as reliable as a computer consistently distinguishing pr0n images from others.

    The next is the internal intelligence of a system. This has been an area of much research for many years. The current-state information should be almost all available from within the system and with a few supplied metrics (costs, resources, constraints, priorities) "intelligent" decisions should be possible to make. Surprisingly there seems to be little of this actually available on the market already, at least not much available for general server/desktop management (that I've heard of.)

    Finally, the lack of references to directory services and the role PKI/encryption would play in this future scenario is interesting. Clearly these will be key elements in the ubiquitous seamless environment the authors are talking about, yet their mention is notably absent. Is this a reflection of MS's appreciation of these as areas of strategic importance in which it hasn't yet a firm foundation and doesn't wish to draw attention to, or is it something that the authors think will be so established by the time they're envisioning that explicit reference isn't necessary? Either way it's an interesting omission.

  • by Captain_Frisk ( 248297 ) <captain_friskNO@SPAMbootless.org> on Friday September 21, 2001 @07:58PM (#2332999) Homepage
    Come on guys, you've all seen Star Trek. Do you think the Enterprise's computer system is much different? You don't see anyone in there with a PC.

    These guys were really looking forward to the future. And I don't think the standard MS bashing applies. Not everyone who works in Redmond behaves like MS's business unit.

    I'm sure for the most part, the coders are great people. It's the businessmen upstairs who we should really have beef with.

    Seriously folks, can't you see that indiscriminate MS hatred is no different from other forms of bigotry like racism and homophobia? MS does put out some quality products. I'm told their games group is very good (Age of Empires) and their input devices are top notch.

    Captain_Frisk... wishing everyone would think before flaming.
    • I'm told their games group is very good (Age of Empires)..

      Age of Empires was written by Ensemble Studios; Microsoft was just the publisher. IMHO your example only demonstrates the barriers to entry for smaller companies in that they are forced to band with a larger distributor in order to get their product into market.
  • First, let me say that it's disappointing to see so many people nitpick and try to come up with reasons that this won't work. I'll try to point out some reasonable goals that do not have to be dependent on one proprietary software vendor - but would benefit from open protocols.

    The abstraction of data and computational location is cool. They're not saying that we should blindly start distributing our data across network devices without any attention to latency, reliability of links, security, etc. Ever heard of 'quality of service'? Or authentication? Or authorization? Or resource limits?

    In the case of computation, sometimes you can break a program up into blocks that take a long time to execute; if it takes much longer to execute the code than it does to move the code across the network to a faster/less loaded CPU, then it makes sense to do it. On the other hand, if the computation will take only a little time, or if the result is required ASAP, you wouldn't want to move it. If it's unknown, let the user pick a default or let the system make a good guess based on what the code looks like. And they're not saying that we should send our data to MS to be worked on, or even to someone down the block - maybe you have some of your own computers lying around that don't get used much. The goal here is to turn your private LAN into a cluster that only acts as a cluster when it makes sense to do so.

    In the case of storage distribution, they're not saying that others on the net should be able to use your storage space without your permission, or that you should have to store anything on anyone else's storage space. Let's consider three cases: a swap file, a master's thesis document, and an mp3 file. You would want to keep your swap file on your local drive; the swap manager would request this type of low-latency storage from the file system. You'd want your thesis document copied to every available storage device that you could (maybe encrypted and signed to ensure that it's secure); you'd tell your word processor to save it with this quality of service. You wouldn't likely care to encrypt your mp3 files, but you don't need to keep them on your drive when there's lots of space available elsewhere on the network (think next-generation storage area network). You wouldn't want to store the mp3 too far from your network, but as long as it came back at more than the bit rate that the song plays at, you likely wouldn't care too much (unless your friends often download mp3s from you). (A rough sketch of this kind of per-file quality-of-service hint appears at the end of this comment.)

    If some device on the network runs out of space, it could shuffle stuff around. It might make sense to elect a storage manager system on your network and replicate your file allocation table/inode table/whatever around to each box on the network, so that if the distributed file system server (really just something that keeps track of locations) goes down, something else could come up in its place. I mean, I haven't really thought about this for too long - I'm sure that there'd be some problems, but nothing that can't be fixed during the design stage.

    Self-tuning is also cool. It'd be great for all of those sites that get slashdotted. It makes sense to do expensive things on a website (server side) to provide more features when the load is light; when the load is heavy, it makes better sense to hold off on those expensive features and concentrate on the content instead. This might mean auto-tuning Apache's caching and such, or automatically re-indexing a database to better serve the kinds of requests that are popular. Some of this means a lot of application-programmer work - like deciding which features to sacrifice under heavy load - but other things, like automatic indexing, can be done with varying degrees of administration. (There's a toy load-shedding example at the end of this comment, too.)

    It's not all evil, and some of it is really cool. The idea is that we should be ABLE to make the most use of the resources available, and not be limited by things like physical location.
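
    Something like this, roughly, for the ship-it-or-run-it-here decision. Purely a sketch: every field in the struct is a figure the system would have to estimate from past runs and current link speed, and all the names are made up for illustration, not taken from the Millennium paper.

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical cost model - every field is an estimate the system
         * would have to measure or guess (previous runs, link speed, etc.). */
        struct task_estimate {
            double local_cpu_seconds;    /* expected runtime on this machine        */
            double remote_cpu_seconds;   /* expected runtime on the candidate node  */
            double code_and_data_bytes;  /* how much has to cross the wire          */
            double result_bytes;         /* size of the answer coming back          */
            double link_bytes_per_sec;   /* usable bandwidth to that node right now */
            bool   result_needed_asap;   /* interactive / latency-critical result?  */
        };

        /* True if shipping the work elsewhere is expected to pay off. */
        static bool should_offload(const struct task_estimate *t)
        {
            if (t->result_needed_asap)
                return false;    /* don't gamble on the network for urgent work */

            double transfer_seconds = (t->code_and_data_bytes + t->result_bytes)
                                      / t->link_bytes_per_sec;

            /* Offload only if remote compute plus transfer beats running locally. */
            return t->remote_cpu_seconds + transfer_seconds < t->local_cpu_seconds;
        }

        int main(void)
        {
            /* Example: 30 s of work locally, 10 s on a faster box, 50 MB to
             * move over a 10 MB/s link, tiny result - worth shipping. */
            struct task_estimate t = {
                .local_cpu_seconds   = 30.0,
                .remote_cpu_seconds  = 10.0,
                .code_and_data_bytes = 50e6,
                .result_bytes        = 1e3,
                .link_bytes_per_sec  = 10e6,
                .result_needed_asap  = false,
            };
            printf("offload? %s\n", should_offload(&t) ? "yes" : "no");
            return 0;
        }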
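
    And for the self-tuning bit, a toy version of "shed the expensive extras when the box is busy". getloadavg() is a real call on Linux and the BSDs, but the 4.0 threshold and the feature list are pulled out of thin air for the example; a real system would adapt them.

        #define _DEFAULT_SOURCE               /* for getloadavg() on glibc */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            double load[1];

            /* Ask the kernel for the 1-minute load average. */
            if (getloadavg(load, 1) != 1) {
                fprintf(stderr, "couldn't read the load average\n");
                return 1;
            }

            /* Completely arbitrary threshold, just for the example. */
            int busy = load[0] > 4.0;

            printf("personalized front page  : %s\n", busy ? "off (canned page)" : "on");
            printf("full-text search         : %s\n", busy ? "off" : "on");
            printf("background DB re-indexing: %s\n", busy ? "deferred" : "allowed");
            return 0;
        }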
  • Just today I noticed an old Byte magazine. I've thrown most of them out, but I especially try to keep technical magazines that attempt to predict the future.

    This one has articles on Cairo and Copland, so I'm glad I kept it.

    But it's at work now, and I'm not. When I noticed the magazine today, under my box of microwave popcorn, I didn't realize how timely re-reading those articles would be until I saw this thread.
  • ..why Microsoft's Acquisitions department is making technical decisions about the future of Operating Systems?

  • The most complete realization of the goals of the MS white paper, currently in existence, is Mozart [mozart-oz.org].
  • Abstraction is great. For me to poop on.

    Has anyone been following the trends of viruses over the past decade? As computers become more user-friendly, we allow more dumb people onto the internet. Each new, more abstract version of Windows brings teeming waves of imbeciles onto the internet's sandy shore. Beached, they lie idle in the bile-soaked sand, stinking up the coast. We can clean up the mess by requiring that people take a simple test before they are allowed to use their computers. It could be part of the license agreement. We could call it the Garbage Prohibited License.
    The dumb people can have a separate dumb internet using a proprietary protocol developed by Microsoft.
  • So the evil goons at the evil company are trying to extend their evil methods to innocent computers.

    But wait--are they? The question that comes to mind for me is WHY are they experimenting here? On the one hand, there's the standard MS approach--anything to make a buck, and gain market share. The Borg approach, in other words: Rewrite the definition of the OS or the internet, until you own it all.

    But then you see this statement:
    "We do not harbor the conceit that it will be possible to be fully successful in such an endeavor, but we do feel that the time is right for radical experimentation."
    The first part sounds like honest programmers, and the second part sounds like geeks. Could it be that (gasp!) MS has some good people working for them? Some people who really _do_ want to push the envelope a bit, regardless of the corporation's intent?

    At any rate, I find it interesting and slightly ironic that this is coming from the company that first made >90% of the population aware of (or care about) what their OS actually was.
  • In many ways, this is Yet Another Thin Client Model (YATCM). Five years from now, we'll move away from it again, and then five years after that we'll be back to the thin client du jour.

    I can see it now: "Microsoft: The fattest thin client ever created!"
  • Shouldn't your main goals be security and stability, especially with the kind of stuff they're proposing here? Security is next to last and stability isn't even listed. Instead, the first goal is easy distribution.

    This reminds me of MS Press' book Code Complete. All the way through it they harp on stability and design and usability, then they go off and release some of the buggiest code this side of my first ZX81 programs.

    MS is doomed, folks.
  • Distributed computing? Automatically deciding if a program should run locally or on a remote machine? Fault-tolerance? Dynamic load-balancing? Resource controls? Near-infinite scalability?

    Sorry, Microsoft, but you're the one playing catch-up here. Linux already has 98% of your vision working today.

    It's called MOSIX [mosix.org].

    Frankly, it's the most jaw-dropping bit of Linux development I've ever seen. On a local network, create your own supercomputer out of idle workstations. Across the internet - well, .NET should go hang its head in shame. As a programmer, all you have to do is write an ordinary application that forks off worker processes, and you magically benefit from the processing power of tens, hundreds, even thousands of machines; MOSIX does all the hard migration stuff (see the little example at the end of this comment).

    Truly an amazing piece of work.
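
    To make the "ordinary code" point concrete, here's the sort of program that benefits - a completely plain fork()-based job farm with nothing MOSIX-specific in it. If migration happens, it's the kernel's doing, and the busy loop is just a stand-in for real work.

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define NWORKERS 8

        /* Stand-in for a real CPU-bound job. Under MOSIX each child process
         * is a candidate for transparent migration to a less loaded node;
         * the code neither knows nor cares where it ends up running. */
        static unsigned long burn(unsigned long n)
        {
            unsigned long acc = 1;
            for (unsigned long i = 0; i < n; i++)
                acc = acc * 1103515245UL + i;
            return acc;
        }

        int main(void)
        {
            for (int i = 0; i < NWORKERS; i++) {
                pid_t pid = fork();
                if (pid < 0) {
                    perror("fork");
                    return 1;
                }
                if (pid == 0) {                     /* child: one chunk of work */
                    unsigned long r = burn(200000000UL);
                    printf("worker %d done (%lu)\n", i, r % 1000);
                    _exit(0);
                }
            }
            for (int i = 0; i < NWORKERS; i++)      /* parent: reap the workers */
                wait(NULL);
            return 0;
        }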

  • This is very much like Jerry Popek's LOCUS OS [ucla.edu], from the 1980s. Way ahead of its time, it looked like UNIX, but you could fork a process and have the new fork run on another machine, with the pipes over the network, without any help from the application. The file system was distributed, and had some nice database-like commit/revert functions, so if you lost a network connection while updating something remotely, the file reverted to the old state.

    As for the automated network administration thing, AppleTalk networks did that from day one. That approach didn't scale (too much broadcasting), and the security was lousy (a more fundamental problem with plug-in and go).

  • That vision of operating systems is about as stale as a copy of MS-DOS 3.3. Distributed, fault-tolerant, object-oriented, scalable, self-tuning, self-configuring, secure systems have been the goal of operating system design for decades. There have been some reasonable attempts at this, but the problem is software engineering and abstraction, not visions or feature lists. I see nothing in Microsoft's paper that proposes to address these issues.

    I think it's unlikely that Microsoft will do better with Millennium than an open source operating system that already exists: Plan9 from Bell Labs [bell-labs.com]. Plan9 already supports location independence, aggressive abstraction, introspection, and all the other stuff that is in Microsoft's vision (Inferno, which the paper cites, is somewhat based on Plan9). The limitations Plan9 has (and it has many) are, I think, intrinsic to this vision, and I doubt traditional operating system designers are equipped to deal with them; otherwise, they would have already done so over the last few decades. And nothing in Microsoft's paper suggests that Microsoft is straying outside this well-grazed field.

    Altogether, it looks like Microsoft is going to do what they always do: they are 10-20 years behind the curve, and they are working on another unimaginative, outdated operating system.

  • Treat the entirety of the WWW and computing at large as a single database. Then normalize it.

    There, you have the future of computing.

    How long will it take? Depends on how many glasses of wine engineers the world over drink between now and then.
