Interview With Original NT OS/2 Developers

leddhead writes "Was browsing over at the Microsoft site when I ran across this interview with Dave Cutler of VMS/WinNT fame. It is interesting to note how he stresses reliability over fancy graphics..." It's actually kind of an interesting interview if you ignore its PR-ish feel -- and the MS Word "?" problem if you're reading it in a Linux or Unix browser. The writer says the first NT OS/2 (NT's original name back in 1989) specs will be displayed at the Smithsonian soon. I wonder if this means Linus's first notes will be there someday. One can hope.
  • I really don't know what the Smithsonian is. Is it _the_ Museum of Technology or something?
  • It's not that we (speaking in the FreeBSD project sense, since that's all I can speak for) are less willing to accept patches from outsiders. Heck, it's wonderful for people to report bugs _and_ include a solution! It's just that much less work.

    The issue, which has nothing to do with originating at Berkeley, is that coding is largely a matter of correctness. Things are held to a high standard. They can't just "work," but they also have to be coded well and deemed "proper."

    I suppose it seems to a lot of people that FreeBSD's developers are "stuck up," but that's not the case. You have to consider that unlike Linus, many of the developers have decades of experience. Despite the fact that I'm very young and inexperienced, I managed to become part of it all. I feel privileged to work with those that I can learn so much from, even if I may disagree with them sometimes.

    All in all, I think many people get the wrong ideas about FreeBSD, NetBSD, and OpenBSD. I hope to be able to dispel any uncertainties, because the project can come off as being somewhat closed toward outsiders, when really it's just "wary" of everyone.

  • You are right about portability. NT can run in 32 bits on the x86. It used to run in 32 bits on the PowerPC, the Alpha, the MIPS, and possibly others. It will run in 64 bits on Merced.

    NT is NOT the basis for Windows CE. Windows CE is very much a different product from Embedded NT.
  • Blame Apple. They killed GEM with one of their damned "look and feel" lawsuits. They made DRI take all of the "good" stuff out of GEM.
  • This is the funniest thing that I've read all week.
  • If you're getting the BSOD running a fractal zoomer, I'd suggest getting some new video drivers. The NT kernel really is pretty stable; it's more likely there's a bug in the video driver crashing the system than in NT.
  • Banky (and great url, btw) wrote:
    If we would just install NT, and only NT, then leave the box sitting in the corner, we'd be OK and have no stability problems? Uh, ok.

    So, some clarifications to my point about NT server apps w/r/t stability.

    a) NT file and print services work remarkably well.

    It is easier and quicker to set up a stable file and print sharing box for a common LAN with Win95 clients using NT than to do the same thing with Linux, especially for Joe Q. Public. It may not be better, but it is surprisingly stable. At a former job, we had 20-plus machines (IBM Netfinity: thumbs up from me. I'd like to put Linux on one of those and see how many years it could stay up.) that were up for periods of over three straight months. One was up for over 7 months! We also had a server running Exchange and IIS (for Outlook Web Access, for Mac clients) that was similarly bulletproof.
    *however*
    We also had Exchange and SNA Server running on one box, and it was the devil's own. SNA Server on its own box? No problem. All of these machines were loaded, too, with plenty of RAM and storage and processors.

    Is it reasonable to expect to have to use a box (or two, for redundancy) every time you want to implement an app, with two for Exchange, two for SMS, two for SQL, rather than two mongo servers running all three apps? No, but I guess if you're a corporation it's only a drop in the bucket of IT costs.

    b) Machine origin, and additions to the base NT drivers, play a part in NT success.

    In spite of the fact that those IBM servers are fantastic, the extra software IBM throws in to manage them is horridly flawed security-wise. One of the factors relating to their stability may be the minimal amount of whizzy stuff I allowed on them. Too many people when installing NT belong to the "oh, why not?" school of installing. It is easy to let it get crowded with useless bells, and those cause problems at startup etc. (NT bootup and shutdown does work uncomfortably like a horse race, and sometimes the services break their legs coming out of the gate. Exchange has been notorious for gumming up the works at shutdown, too.)

    Also, let's not forget that awful thing, the HCL. With that giant caveat sitting there, you apparently can't expect NT to run on just anything. I don't think that's an unreasonable expectation, since Linux seems to run on anything down to pocket watches now, but apparently Microsoft does. Would I trust Linux for a commerce server built on a clone? Sure. NT? Ha. The fact that you can't get under the hood to solve problems when by all rights NT should be working is why Linux will win.

    Base rule (and it's the same as with any other server, even Linux): don't have on there what you don't use. I wish there were a way I could have my NT servers running CLI-only. I managed Exchange remotely anyway.

    There are also machines that are just plain ornery. It is easy to forget that RAM still has errors, that heat problems are occurring, that the new NIC drivers are buggy, etc., and to blame it on NT instead.
  • It's startling to look at their methodology: these guys are merging thousands of changes a night into builds, then they hand it off to a very intense team of testers, who then test it for SIX HOURS???
    To put this into context, when the original Mac Finder, TeachText, MacWrite, etc. were being written, the coders would burn all night and then turn the program over to another computer program that would run _all_ _night_ making completely random and senseless GUI inputs all over to try and confuse and jockey the software into collapsing (a toy sketch of the idea appears at the end of this comment).
    These Microsoft guys: it sounds like they are _manually_ trying out programs. For six hours. Do you have any idea how pathetically inadequate human input is to test such a program for six hours? How prone humans are to falling into patterns that don't cover all the inputs? How prone humans are to skipping _stupid_ inputs that might crash the machine?
    This is a recipe for huge amounts of completely untested code to get out there. I'd suspected something like this, but reading Keepers Of The Build really drives it home forcibly: the project is TOO BIG to test. There is no way in hell they can jockey that software into all possible failure modes in six hours even _with_ "virtual user" software -- and how many sorts of machines are they testing on? Hell, even Apple ended up having to stop testing new software on every instance of machine they ever made; it got too expensive, as there were hundreds of Macs and the logistics were impossible with so many software projects lining up to use the labs. It sounds like Microsoft is not even trying.
    Instead they are doing things that _seem_ like they would be effective. They get people who _look_ really intense, they set up a combative situation so people will think "Boy, they're really trying!". They use the latest PCs (oh, but "most powerful multiprocessor systems in the world", hell...) so people will think, "Wow, they must really be able to debug much more than they could on _my_ machine!". They are using a flatly ludicrous one-day cycle for the fastest builds, with the peculiar notion that they can track bugs better if the whole build is changing _faster_ than anybody else's development process... I presume that when they can't debug it this way, and new bugs keep appearing faster than old ones leave (I bet NT 3.51 would have stood up to the w2000test.com load for longer than w2k), they just throw more programmers at the problem....
    How many other people have read this seemingly impressive picture of their build situation and gone "...hey... ...hey, _wait_ a minute!"? It's really wild and kind of scary to consider that not only are they doing this, they still think it will work.
    "For every 5 bugs that we squash, 7 more appear- so let's step up the pace and make the process happen five times as fast as it did! That'll help."
    *shudder*
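    Just for flavor, a toy version of that random-input idea in Win32 C might look like the sketch below. It is entirely invented (Apple's tool ran on the classic Mac OS and worked quite differently), and it should only ever be pointed at a throwaway session, since it clicks wherever it lands.

        #include <windows.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void) {
            srand((unsigned)time(NULL));
            for (int i = 0; i < 1000; i++) {
                INPUT in = {0};
                in.type = INPUT_MOUSE;
                /* SendInput takes absolute coordinates scaled to 0..65535 */
                in.mi.dx = rand() % 65536;
                in.mi.dy = rand() % 65536;
                in.mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
                SendInput(1, &in, sizeof in);
                if (rand() % 4 == 0) {          /* every so often, click too */
                    in.mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
                    SendInput(1, &in, sizeof in);
                    in.mi.dwFlags = MOUSEEVENTF_LEFTUP;
                    SendInput(1, &in, sizeof in);
                }
                Sleep(10);
            }
            return 0;
        }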
  • Not true. /. is an implementation of competing technology (and a very visible one at that), and consequently frequent downtime does not reflect well on the OS this site is using.

    Don't you think a conclusion like that could be drawn?
  • Well, as a current OS/2 (and Linux) user, the WPS and SOM are still ahead of Gnome/KDE etc in my book.

    And as for stability of OS/2 apps relative to Win 3.1 apps running under Win-OS/2, well, I've never found one that does anything other than throw an exception and terminate when it breaks. OS/2 apps are generally stable, I don't know where you got yours from, although I'll grant that it took until v2.1 was released for many system and WPS bugs to be worked out.

    Now, if SOM and the WPS could be on Linux, and my OS/2 apps ported, then I'd be ready to move for good!
  • Get real. If the application crashes the OS, then it's not the application that needs a "supported platforms" list, it's the OS that needs an "approved applications" list.

    If a user application crashes the operating system, the operating system is to blame: It's not the role of an application to do the operating system's job.

  • >Hell, even Apple ended up having to stop testing
    >new software on every instance of machine they
    >ever made- it got too expensive as there were
    >hundreds of Macs and the logistics were
    >impossible with so many software projects lining
    >up to use the labs.

    I worked in one of the Nortel testing centers for a few months, where they tested telephone switch h/w and s/w.

    They had one of everything that was still in use by any customer, anywhere in the world. The lab ran tests 24/7; it was Very Impressive.

    There was this one old piece of kit that was still being used by only one customer. They were apparently seriously considering giving them something more modern just so that they didn't have to worry about keeping compatibility with the old thing.

    Oh, and forget your piddly little UPS; those guys have a battery room bigger than my apartment, and the leads coming off the batteries are as thick as your wrist.

    --
  • Come on! When evaluating stability of an operating environment, you have to look at how it behaves with misbehaving hardware and software. The reality is that most machines are not locked-down boxes that are going to be using only "approved" hardware or software. More to the point, even hardware on the HCL can cause problems, as the tests required for inclusion can't root out every possible situation.

    Part of stability is how you handle problems. If user-mode stuff acts up, that should _never_ cause a kernel panic or a full system freeze. If hardware acts up, the drivers/OS should be designed in a manner that minimizes the effects. If we still have problems, then there should be a mechanism to debug and solve them. These are all areas where NT is _clearly_ not (yet?) stable enough for enterprise applications.

    At least with free OS's, you have a mechanism for finding and fixing the problem. Further, I would suggest that at least with BSD, Linux and other UNIXes, you won't find user-mode apps creating serious problems with the kernel, nor will you find misbehaving hardware killing the system.
  • Has anyone ever sat down and read the two NT books? The ones I'm referring to are "Inside Windows NT" by Helen Custer and "Inside Windows NT: 2nd Edition" by David Solomon.

    The first one is more of a layman's book, but it does describe the goals of the NT team in the beginning, whereas the second book goes more in depth into the inner workings of NT and things you can prod it with to, I don't know, make you say, "Hey, that's kinda neat!" :)

    I have them both and am trying to get through them. ^_^; Anyone have any recommendations for good general Linux books? Thanks in advance!

    FeiYen

    P.S. I still wonder about those of you who have BSODs with NT. The only time I ever have those is if I'm using non-HCL approved drivers for video or some other peripheral. So what are you guys running in your boxes?
  • by Anonymous Coward

    "The microkernel approach was essentially a dishonest approach aimed at receiving more dollars for research."

    Linus' essay from Open Sources [linuxworld.com]

    Linus apparently ain't fond of microkernels. The essay talks about portability too.

  • Pardon my ignorance, but I distinctly recall reading that for NT 4.0, the graphics subsystem was moved into kernel mode. This apparently had the effect of improving graphics performance at the expense of stability.

    Can someone correct me if I'm wrong? Otherwise, this is inconsistent with the goals MS claims to hold for NT development.
  • So, NT was developed primarily on MIPS to aid cross-platform portability?
    Or maybe they just tried running it on a 486/33 with 8 Meg of RAM
  • I guess logging out is a bit too complicated for me then, because every time I log out the box blue-screens. Not that it bothers me much, since I only log out when I head home for the day, and for all I care the box could burst into flames at 5pm, as long as by 8am the next day it's magically put itself out and works. My problem with NT is that such simple operations can cause a crash. I don't know if the fault lies in the Win32 API, or the kernel, or what. I really don't care. I've NEVER seen a Unix system go completely belly-up when a user logged out of it. Come to think of it, I've never seen any system do it before mine. And before you go blaming faulty hardware, we have two other machines in my department alone that exhibit the same behavior; then again, they are the exact same machines using the exact same software, so it could be a defective driver.

    BTW, I'd prefer not to reevaluate my NT-related knowledge; I'd rather just stop using it. Wonder if I can convince my boss that my machine should be a Linux box...
  • Actually, I think the problem is with the DOS virtual machine. XaoS includes support for *text-mode* realtime fractal zooming, using AA-lib [ta.jcu.cz], and it is this I was trying to use. I also bluescreened just by asking for the usage message (which worked the first time).

    I think I have finally found a use for Microsoft's obnoxious 'policy manager' - fix this security hole by stopping users from running DOS applications.
  • Look at this [widomaker.com]. But anyway....

    Chuck
  • I see that some moderator confused sarcasm with flamebait again.
  • I hate their damned web pages. They are always giving errors.

    When I have a bug in an MS product, I've found that they have a giant database of all bugs and all fixes in Support and Knowledge Base on their search page [microsoft.com].

    Their search site's ASP pages always give errors or sometimes say that no data is available. Hit Refresh, and there are 7 or 8 links. Gotta love IIS + SQL 7.0

    Micro$oft is to Ford (or Renault rather) as Linux is to Toyota

  • > NT is an interesting design, but I'm not sure I'd call it a "microkernel" - more of a hybrid.

    Right. The technical term is, I believe, "modified microkernel". Since the NT kernel-mode system services share the same image (ntoskrnl.exe), their intercommunication is in-process, obviating the need for the message-protocol-type cross-process communication used by typical microkernel designs. Apparently this was done for ease and performance:

    • A disadvantage to pure microkernel design is slow performance. Every interaction between operating system components in microkernel design requires an interprocess message. For example, if the Process Manager requires the Virtual Memory Manager to create an address map for a new process, it must send a message to the Virtual Memory Manager. In addition to the overhead costs of creating and sending messages, the interprocess message requirement results in two context switches: the first from the Process Manager to the Virtual Memory Manager, and the second back to the Process Manager after the Virtual Memory Manager carries out the request.

      NT takes a unique approach, known as modified microkernel, that falls between pure microkernel and monolithic design. In NT's modified microkernel design, operating system environments execute in user mode as discrete processes, including DOS, Win16, Win32, OS/2, and POSIX (DOS and Win16 are not shown in Figure 1). The basic operating system subsystems, including the Process Manager and the Virtual Memory Manager, execute in kernel mode, and they are compiled into one file image. These kernel-mode subsystems are not separate processes, and they can communicate with one another by using function calls for maximum performance. NT's user-mode operating system environments implement separate operating system APIs. The degree of NT support for each environment varies, however. Support for DOS is limited to the DOS programs that do not attempt to access the computer's hardware directly. OS/2 and POSIX support stops short of user-interface functions and the advanced features of the APIs. Win32 is really the official language of NT, and it's the only API Microsoft has expanded since NT was first released.

      NT's operating system environments rely on services that the kernel mode exports to carry out tasks that they can't carry out in user mode. The services invoked in kernel mode are known as NT's native API. This API is made up of about 250 functions that NT's operating systems access through software-exception system calls. A software-exception system call is a hardware-assisted way to change execution modes from user mode to kernel mode; it gives NT control over the data that passes between the two modes.

    (from Inside NT Architecture, Mark Russinovich, Windows NT Magazine, March-April 1998).

    See http://www.sysinternals.com/ntdll.htm [sysinternals.com] and Mark Russinovich's other publications here: http://www.sysinternals.com/publ.htm [sysinternals.com] (Note: Unfortunately this page simply points to Windows NT Magazine's article database, which requires a valid subscription to view. If you can overlook the often ignorant, opinionated, partial, pro-NT, pro-Microsoft editorial content -- it's nowhere near the Nazi attitude of the amateurish "Dr. Dobbs"-wannabe Windows NT Systems Journal -- the subscription is almost worth the price, I think).
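    To make the quoted distinction concrete, here is a minimal C sketch of the two call styles. Every name in it is an invented stand-in, not actual NT internals; it only illustrates why the in-process style avoids the two context switches the article mentions.

        /* Invented illustration of the two styles described above. */
        #include <stdio.h>

        /* In-process style (NT's kernel-mode subsystems): the Process
         * Manager asks the VM Manager for an address map with a plain
         * function call; no context switch is involved. */
        static int vm_create_address_map(int pid) {
            return 0; /* pretend the map was created */
        }

        static int spawn_inprocess(int pid) {
            return vm_create_address_map(pid); /* direct call */
        }

        /* Message-passing style (pure microkernel): the same request is
         * marshalled into a message and handed to a separate VM server
         * process, costing two context switches per round trip. */
        typedef struct { int opcode; int pid; } msg_t;

        static int send_and_wait(const msg_t *m) {
            return 0; /* stand-in for a real IPC primitive */
        }

        static int spawn_via_ipc(int pid) {
            msg_t m = { 1 /* CREATE_ADDRESS_MAP */, pid };
            return send_and_wait(&m);
        }

        int main(void) {
            printf("in-process: %d, via IPC: %d\n",
                   spawn_inprocess(42), spawn_via_ipc(42));
            return 0;
        }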

  • by jonm ( 13708 )
    Strangely I also seem to have the ? problem. My machine: NT4. My browser: IE5.
  • Cutler named his new OS "WNT" by adding one letter each to VMS:

    VMS
    +111
    ----
    WNT

    It wasn't until two weeks after WNT was the official name that Gates learned the truth, and quickly came up with the "New Technology" line to cover it up.
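    If you want to check the letter arithmetic yourself, a throwaway C snippet does it; it's the same one-letter shift that famously turns HAL into IBM.

        #include <stdio.h>

        int main(void) {
            const char *vms = "VMS";
            for (int i = 0; vms[i] != '\0'; i++)
                putchar(vms[i] + 1);   /* V->W, M->N, S->T */
            putchar('\n');             /* prints "WNT" */
            return 0;
        }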
  • The current Windows 2000 SDK and DDK already have the APIs for Win64 in them... Just read the code, and look for the lines that say:
    #if defined(_M_IA64)
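    To give a feel for the pattern (this is a guess at the style, not copied from the actual SDK headers, though the real headers do define a ULONG_PTR type for exactly this purpose), such a conditional typically guards pointer-sized type definitions:

        /* Illustrative only, with an invented MY_ prefix to avoid
         * clashing with the real SDK names. */
        #if defined(_M_IA64)
        typedef unsigned __int64 MY_ULONG_PTR;  /* pointer-sized on Win64 */
        #else
        typedef unsigned long MY_ULONG_PTR;     /* 32 bits on Win32 */
        #endif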
  • Call me stupid (or just not interested enough in NT to do a whole lot of research), but I always kind of wondered what NT stood for. Apparently I was under some kinda rock when this was announced. "New Technology". Oooooohhhhhh......

    RP
  • I have the impression that many Win32 APIs are implemented in NT as user-mode library code atop the NT system call interface, i.e. not all calls to Win32 system services involve the Win32 subsystem process (the first edition of Inside Windows NT says that "In addition to a flexible, optimized message-passing facility, the Windows NT developers established some 'tricks' that reduce the number of interactions a client [e.g., a Win32 application, or an OS/2 application, or a POSIX application] must make with a server [e.g., the Win32, OS/2, or POSIX subsystem processes]: ... Using client-side DLLs to implement the API routines that don't use or modify global data.").
  • What are Linus's notes?
  • I was using SP3, and when I installed a driver for my webcam the system rebooted and gave me a blue screen. Installing a driver should not do that. I believe you are supposed to remove the service pack, then install the driver, and then reinstall the service pack? But that seems like a lot of work to install a piece of hardware. I hope that W2K is not like that with service packs. Of course, I am not touching W2K unless someone pays me to, or I get a copy for free (like that's going to happen).
  • I must say the "interview" wasn't all that great. It seemed to be more of a marketing piece than an informative interview. The whole thing was spent going on about NT's superb design and how it's completely crash-proof (I guess that's why I've never been able to keep my box at work up for more than 2 days). The concept for the interview had potential; it ended up sounding more like a 5th grader's report on the day he met ... A little less fluff and a lot more content, and it would have been an excellent piece.
  • That's what I was told. The one server was performing the following jobs:

    * print spooling (eventually printing the jobs to the 3 HP LaserJet 5si printers, the ones that look like Xerox machines)
    * File sharing (individual files and apps that are not stored locally on the W95 client)
    * Authentication with a Kerberos server upstream
    * Re-imaging of the W95 clients (essentially, when a user logged off on the client, an app would query the server to do a comparison of the hard drive image stored on the server with the W95 client hard drive. If they differed, the server would re-image the client drive to match the image on the server. This was done to keep the machines in stable working order.)

    This was all done on a dual Pentium II PowerEdge server from Dell. I don't really remember the specs beyond that, other than that the system had only 2 SCSI drives, 2 NICs, and something like 128 or 256 MB of RAM.

    What basically happened was that the server would spool an average of 20-30 print jobs, closer to 60+ at peak usage, constantly through the day; several machines would be authenticating; several machines would be re-imaging; and several would be asking for apps off the server. In the end, all of the disk I/O (from the print spooling and imaging) would bring the system to a grinding halt. Mind you, there were about 200 workstations hitting this guy.

    Eventually, the managers of the lab procured another server to handle printing and let the other server handle everything else.

    So, yes, the hardware wasn't up to par for the workload, causing the system to crash. From what I recall, the print spool eating up the hard drive was a major issue.

    Frankly, if management insisted on only using one server, I wonder if using a RAID of Cheetah drives would have helped at all. More RAM wouldn't have hurt either.

    Curious, has anyone ever had a Linux box perform a similar job?

    FeiYen
  • They said they got portability and extensibility... um, what about reliability, compatibility, and performance?

    And why does it make me feel unsettled when a chief engineer is surprised and grateful that the operating system he just designed and coded actually *worked*?

    I don't get what the big deal is. Are specs and design something new? Or just something new to Microsoft? I'm not an anti-MS troll, but this article really sounds like something fabulous happened. Wow... a decent design leads to a decent product, spec first, then code, avoid premature optimization... these don't sound amazingly novel to me. Maybe they were in '89.
  • Who the heck is the author? The developers seem okay, but she seems quite daffy.

    -awc
  • NT does run on a 64 bit cpu (like Linux and NetBSD) but it only uses 32 bits (like Linux)

    Linux is fully 64-bit on Alpha! (The UltraSparc kernel as well; I'm not sure about apps.) The kernel is 64-bit. The C libraries are fully 64-bit. The apps are fully 64-bit. Linux is 64-bit clean!

    In fact, the reason there's no Netscape for Alpha is precisely because it's all 64-bit and NS isn't 64-bit clean.

  • The original OS/2 was intended for the 80286 CPU.
    If you ever tried CCP/M (Concurrent CP/M) you would have seen a pre-emptive multitasking 16-bit OS for the 8086 (I ran it on an 80186-based machine, but those two were pretty close, including the 20-bit address and 16-bit data bus). And what about Coherent, from the Mark Williams Company? It was a simple Unix clone that also ran on the 8086, and it was originally pre-OS/2 I think. At least I had version 3.2 in 1991.

    You see, OS/2 was but one small stepping stone in the history of OSes, and not very new or innovative at all (should I put on my asbestos suit? ;-)

    The most innovative thing about Linux also isn't the technology, but the development model (open source on the Internet) and the management of it (Linus delegating and collecting the threads), which at least hasn't been done on this scale before. And if you think scale doesn't matter in a software project, I can only say that you are wrong.
  • Just because the original OS/2 NT specs are being donated to the Smithsonian doesn't mean that they will be displayed. Most of the Smithsonian's treasures are stored in the basement and will never see the light of day. Many basement treasures are of questionable value since the Smithsonian will accept most any donation that has any chance, however remote, of someday being significant.
  • The Smithsonian is actually a collection of museums, mostly situated on Washington, DC's Mall between the Washington Monument and the Capitol. There's (let's see what I can name off the top of my head): American History, Natural History, the Museum of Art, Air and Space, American Art (I think), plus a few others.

    My two favorites are Air and Space and American History. Air and Space has some of the Star Trek models, a V2, moon rocks, and so on. American History has stuff like Kermit, Archie Bunker's chair, Mr. Rogers's sweater, and a very early US flag. Neat stuff.
  • by cje ( 33931 ) on Wednesday August 25, 1999 @05:21AM (#1727047) Homepage
    WINDOWS NT MUSEUM EXHIBIT DEBUT LESS THAN IMPRESSIVE
    Exhibit "Looks Pretty, But Offers Little Substance"


    WASHINGTON, DC (AP) - A new exhibit unveiled this week in the famed Smithsonian paid tribute to the Microsoft (NASDAQ: MSFT) operating system originally known as "NT OS/2." Currently known as "Windows 32/64 2005+ SP8R12", the operating system has also been known as "Windows NT", "Windows 2000", "Windows Memphis Moscow Beauregard 2001 Plus", "Windows For E-Commerce And Other Buzzwords", "Windows 16/32/64 PlusPack 312 With FrontPage Extensions", and "Steve". Although expectations were high among Smithsonian officials, the exhibit's debut was not without its problems.

    "It was a real mess," said museum curator Steven Fleischmann. "The exhibit had only been open to the public for about twenty minutes when one of the curtains came crashing down, landing on some tourists from Guam. "Oh, it was awful," recounted one of the tourists. "The curtains fell, and after we got out from underneath them, all we could see was the wall that was behind the curtains. The wall was this mesmerizing deep blue color." Also painted on the wall was a series of alphanumeric characters used by the Smithsonian to track exhibits.

    "Then one of the legs on the table that was holding the specs broke," Fleischmann continued, "and it sent the plastic case sliding on down to the floor." Security guards swarmed onto the scene to repair the table and re-hang the curtain, to minimize the amount of time that the exhibit was unavailable to the general public. "I was having a picnic with my kids," complained security guard Jeff Fenner. "It would have been nice if I could have fixed the problem without actually having to come here, but it's awfully damn hard to repair a table remotely." Fenner spoke on condition of anonymity.

    Museum exhibit construction experts are blaming the embarrassing incident partly on the exhibit's design. "Look at this," pointed out expert Louise Smith. "They've designed the exhibit so that the curtains are attached to the table. Apparently, the only reason they did this is to increase the visual attractiveness of the exhibit. But this is a dangerous design, as we found out this morning. If the curtains go down, they take the table with them. I'm not sure that this is a model that should be imitated by future exhibits."

    For its part, Microsoft is downplaying the exhibit fiasco. "Look," said an annoyed Ed Muth, "it's not our fault that the Smithsonian was unable to properly configure its table and curtains. If they had set things up correctly, the exhibit never would have gone down. In short, it's their fault, not ours. Our recommendation is to upgrade to a newer table and more durable curtains." Muth, a Microsoft project manager, also added a recommendation that the Smithsonian purchase "a large support contract from Microsoft."

    Although Fleischmann remains optimistic about the exhibit's future, he still has some reservations. "Look at it," he said, gesturing. "It's very pretty. I think that people will want to look at it. I just have some very real concerns about the whole foundation of the thing, and I don't want to have to maintain an army of custodial staff to rescue the exhibit every time it collapses."

    Nicholas Petreley contributed to this story.
  • Why on earth should a graphics problem affect the kernel?

    Can we say, crap design?

    My friend installed NT on a system with an ATI video card... Well, that caused consistent blue screens, too.

    How to avoid said problem?

    --> Boot into a nice command line.
    ... Oops. OS/2 could do that. NT can't.

    I guess cleaner OS abstraction really is smarter.
    I LOVE how Linux lets me have my X session over on virtual terminal 7. In the meantime, I do whatever I want in the five available text terminals (the sixth is blocked with stderr messages from X).

    It's also kinda nasty that he can't telnet in or otherwise remotely manage the system to remove the drivers and schedule a reboot. But I guess that's "MS clean design" for you. Reminds me of a car with one seat. Sickening.
  • Funny.

    I distinctly remember a co-worker tearing his hair out over BSODs he was getting while installing NT on a right-out-of-the-box HP NetServer. I guess HP makes crappy hardware, or they were fibbing about the server being capable of running NT.

  • Meanwhile, NT runs on TWO platforms. and 64 bit NT will be a complete rewrite! Dear God man, if you can't even port to a 64 bit CPU, how transportable can your OS be?
    By the time NT 3.5 was released, it ran or had run on the i860, MIPS, x86, Alpha, and POWER. SPARC V9 and PA-RISC ports were in progress. NT also serves as the basis for WinCE (though this is not widely known), which runs on several more.

    He's right: portability is enhanced by targeting more than one architecture from the outset. NT has done quite well in that respect. Where NT has not done so well is in maintaining that support, perhaps because most of those other architectures are not commercially viable markets for Microsoft (for various reasons).

    Now, the fact that Linux did pretty well at that despite its x86-originated design is largely a matter of emulating the design of UNIX -- which had been designed for portability back in about 1970.

    Another interesting fact is that Cutler's NT design was strongly (*very* strongly) related to the design of PRISM, a VMS follow-on. DEC cancelled PRISM, so Cutler walked out the door with his whole team, and they formed the original NT team.

  • Perhaps you're unfamiliar with the role Cutler had (or didn't have) in the implementation of NT 4, and the differences between the 3.5 and 4.0 driver architecture.
  • Linux only uses 32-bit addressing on a 64-bit chip?! What an insult to the owners of real chips! Linux certainly does use all 64 bits of a 64-bit Alpha!
  • Whoooops!!!
    Sorry about that, guess I made the mistake of being stuck in the 1.x.x.x days.

    Ah well, a good rant does consist of a healthy dose of FUD and the daily allowed dosage of false facts...
  • So pray tell, how would _you_ respond if you were to meet up with Mr. Torvalds in a coffee shop?

    Wouldn't you even be in awe for a split second? Or would you just ignore your opportunity to thank the man behind the kernel?

    Ta ta for now,

    FeiYen
    "Judge not others, lest you be judged."
  • Linux has excellent threading support, and has had it since at least 1.3, maybe even 1.1. And it is not microkernel-based, though MkLinux is a Linux port to the Mach microkernel. And real threading can quite easily be achieved on top of a microkernel, cf. NeXTSTEP, WinNT, QNX, MkLinux, etc.
    So I'm afraid all your assumptions were wrong (including Linux being (originally) designed with portability in mind). :)
  • Linux _was_ designed with portability in mind

    No, actually it wasn't. I think Linus once said that it would never run on anything other than the Intel x86.



    It was designed so that all that needs to be ported is the micro-kernel, on top of which sits the rest of the OS

    Heh heh. Do a search for "Linux is obsolete". This is an old Usenet thread from the early '90s where Linus argued against microkernels with Andy Tanenbaum (the author of Minix).



    The irony, as I see it, is that one of the merits of a microkernel design is portability, yet Linux has become more widely ported than most microkernel OSes.

  • Would there be any chance at all of a Slashdot-style (highest-moderated questions win) interview with these guys? I for one would be very interested in their take on the OS market as it stands today, and the technical merits and problems of NT and its competitors (esp. Unix, BeOS, Mac).

    I use NT daily at work and (IMHO) it is very good most of the time. My problems mainly stem from what has been tacked onto NT (or, in MS speak, integrated). From what I hear about 2000, more integration is on the way...

    MS gets bashed here all the time (often well deserved). But I know that they also happen to employ a lot of exceptionally talented people, especially in their R&D labs. Unfortunately, I've also heard that the minions charged with the task of translating spec into code often are not so talented. At an MS Dev Days event I attended a couple of years ago, the MS speaker said they suffer from the "guy in a room" syndrome, i.e., their projects too often depend on two or three exceptional people. At the time he was specifically talking about Excel.

    Enough rambling...

    EC
  • I haven't done this myself, but I would guess most of the code in the Linux kernel is written in the USA. Remember, Linus is just the coordinator of the project. I'm sure somebody has calculated contributions based on the domain. Perhaps someone could dig up a URL? Hmm, well now. I did a very quick and dirty little look in the CREDITS file and grepped out all the @ addresses. Firstly, Linux is a pretty damn international piece of kit, so it doesn't matter a toss which way you want to call it, but nonetheless there are 310 email addresses. Some are duplicates; many are .com, .org, and .net, so they could be from anywhere; and some are radio ones.

    Nonetheless, doing a grep for .edu, which is definitely the States, along with .com, .gov, .org, and .net totals up to 143 email addresses, so that's less than half the CREDITS file.

    Allowing for a huge degree of error (and I imagine the error weighs against the idea that all the .coms, .orgs, and .nets are US), it looks pretty reliable that most of the contributors are not from the States.

    Considering the States the single biggest contributor might be plausible, but it's ultimately unprovable one way or the other. That is probably a good thing, as weighing the issue is a pretty irrelevant thing to be doing anyway; a sense of community is what we should be striving for. But for the goal of pointing out that no one nation has any ownership of the kernel, it might be worth my while.

    A quick totaling up of some of the EU endings gives 99 credits for .ie, .de, .uk, .fi, .dk, .nl, .fr, .se, .be, and .it. Anyhow, those Germans love Linux: they come in at a staggering 46 contributors compared to a measly runner-up 16 for .uk, though Wales did produce Alan, so all is forgiven. Fair credit has to be given to the impressive showing for .nl with 13.

    Some guesswork would make me suspect that there is a half to one credit per million inhabitants of a country, a half for the big ones :-). Germany is 80 million, .nl is approx 12 (methinks), .ie and .be are approx 3, and so on.

    More mad meanderings lead me to predict that the States might have 100 legit members in the credits list :-)
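    For anyone who wants to reproduce the tally, here is a rough C version of that quick-and-dirty grep. It is invented for illustration (the original count was done with grep, not this program): it takes the kernel CREDITS file on stdin, grabs the part after the last '.' on each line containing an '@', and counts per ending.

        #include <stdio.h>
        #include <string.h>

        #define MAX_TLDS 64

        int main(void) {
            char line[512];
            char tlds[MAX_TLDS][16];
            int counts[MAX_TLDS] = {0}, ntlds = 0;

            while (fgets(line, sizeof line, stdin)) {
                char *at = strchr(line, '@');
                if (!at)
                    continue;                     /* no email on this line */
                at[strcspn(at, " \t\r\n>)")] = 0; /* cut off after the domain */
                char *dot = strrchr(at, '.');
                if (!dot || !dot[1] || strlen(dot + 1) > 15)
                    continue;
                int i;
                for (i = 0; i < ntlds; i++)       /* look up this ending */
                    if (strcmp(tlds[i], dot + 1) == 0)
                        break;
                if (i == ntlds && ntlds < MAX_TLDS)
                    strcpy(tlds[ntlds++], dot + 1); /* first time seen */
                if (i < MAX_TLDS)
                    counts[i]++;
            }
            for (int i = 0; i < ntlds; i++)
                printf("%-8s %d\n", tlds[i], counts[i]);
            return 0;
        }

    Compile and run it as something like "cc -o tldcount tldcount.c && ./tldcount < CREDITS".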

  • /. is an implementation of a competing technology with JUST SLIGHTLY FEWER SERVERS and bandwidth than microsoft.com

    Doesn't matter how good Linux/Apache/whatever is if there's not enough bandwidth and CPU to serve everyone who's visiting a site.
  • I agree that user-mode "stuff" should not cause a kernel panic. Drivers are not user mode, though, and most are not written by Microsoft. On the occasions my system does crash (once every couple of months), it usually appears to be the fault of the crappy Novell client running on my machine. This does not mean that NT is unstable; this means that Novell wrote a bad networking client.

    I write NT device drivers for PCI cards. When I'm testing my driver I hammer the hell out of the IO subsystem. I've had my drivers producing 20000 interrupts a second for days on end without a glitch. When my test systems crash, it's because I screwed something up in my driver, not because NT is at fault.

    NT's benefit and curse is that it's supported by a lot of third party vendors. The benefit is that consumers like to have a lot of choices in what they buy. The curse is that a lot of companies release a lot of immature drivers to support that hardware.
  • Are you running NTFS? It doesn't keep it from crashing, but it'll likely save you from having to reload.

    I'd also recommend looking for driver updates, especially for the video card. Video card manufacturers seem to be much more concerned about benchmark performance than stability lately.
  • Maybe she bites the head off of Microsoft in front of a carnival audience?

    ----
  • Yup, and OS/2 for PowerPC was actually quite a big rewrite.
    From what I've seen of the architecture and specs, it would've been nicely portable (that _was_ one of its goals).
    Too bad they never finished the work...

    If any geek wants a read, IBM has a big .PDF on-line for free at:
    http://www.redbooks.ibm.com/abstracts/sg244630.html

  • Yes, VMS is monolithic and very definitely hardware-dependent. If you compare the VAX architecture manual and "VMS 4.4 Internals and Data Structures", it's hard to see which was designed around which. Yet Digital managed to port the darn thing to Alpha.

    And as for NT's microkernel design, now that's a thing which could be classified as odd. Sure, it seems to work, but what was the original point of microkernels? Small size? Flexibility? Reliability? Having graphics drivers in ring 0?

    (Hey! we could imitate the Torvalds vs. Tanenbaum microkernel debate...:)
  • yup, graphics are at kernel-level now, as illustrated by the blue screen I was faced with yesterday (perhaps an interrupt conflict? NT doesn't have plug-and-pray features)
  • And NT does run on a 64 bit cpu (like Linux and NetBSD) but it only uses 32 bits

    This is correct; NT is pretty much hard-coded for 32 bits.

    (like Linux).

    This is incorrect. Linux is 64-bit on Alpha. The original port to Alpha was 32-bit, but that was quickly replaced with a true 64-bit port. Linux will be 64-bit on Merced when it ships. Off the top of my head, I don't know for sure about Linux on 64-bit PPC processors or what the status of Linux on 64-bit MIPS is.

  • Yes, I bought both because the 2nd Edition had more exercises/experiments with NT that you can try out.

    I like both. =)

    FeiYen
  • "But as the technology matures, playing fast and loose isn't acceptable anymore. This is characteristic of the maturing process for a product like Windows. People will put up with more from the bleeding edge."

    Uh... I'm not sure that anything MS has ever done could be considered "bleeding edge". I mean, they were trying to build a Unix-type OS with all of the Windows look and feel... I would hardly call that bleeding edge. Phlah... Now, with all the resources that any company could ask for, they still aren't doing anything bleeding edge... phlah. Bunch of friggin sheep.

    What a friggin load. Ya know, I would at least have a little respect for them if they didn't assault my intelligence at every waking moment.
  • Because at my last job, my video card crashed my computer 2 or 3 times a day.

    Today's English Lesson: Oxymorons

  • I honestly don't see how bugs can be eradicated from the end-user experience. NT is much more stable than is posited here IF you can keep the number of Microsoft server apps to a minimum, preferably one.

    So: /. users bash NT as an unstable piece of dung. This is because most /. user (or admin) experience is with NT running multiple services. If we would just install NT, and only NT, then leave the box sitting in the corner, we'd be OK and have no stability problems? Uh, ok.

    Actually we have one machine at work that crashes for no apparent reason and it doesn't do anything but sit there.

  • who wanted to carry on their work when the 'Multics' project w/ GE was canceled. Soon, I'm going to put up a site with an article from Scientific American from, oh, the September issue, a 'special microcomputing issue' from circa '77 or so, which documents the work with 'windows' at Xerox quite well as 'prior art'.

    [beware, an inverse slam tain't necessarily true, heheh]

    Chuck
  • written by G. Pascal Zachary. Amazon has it at:

    http://www.amazon.com/exec/obidos/ASIN/0029356717/qid=935594240/sr=1-37/002-8663731-7518452
  • The page cannot be displayed

    There is a problem with the page you are trying to reach and it cannot
    be displayed.


    Please try the following:

    Open the www.microsoft.com home page, and then look for
    links to the information you want.
    Click the Refresh button, or try again later.

    HTTP 500 - Internal server error
    Internet Information Services

  • For an insight into Cutler's infantile character (which so well matches that of his boss, or so they say) and insane hatred of Unix, read Peter Salus' book "A Quarter Century of UNIX".
    Did someone say professional jealousy?
  • Ohh yeah... I've run a bank of 8 NT machines with only these two things: NT (no extra drivers, just vanilla NT + SP3) and SQL Server.

    Crash, crash after crash. Tried many a SQL Server version (from 6 to 7, patch by patch). Not that the systems wouldn't stay up for a week or two, but eventually they'd crash.

    I'm sorry, but even the WORST builds of MySQL have never ever panicked my kernel.

    Heck, I do XFree86 programming, and even when I trash the display adapter registers, I have never locked my machine.

    NT sucks. Good ideas, too many KERNEL bugs.
    pan
  • AHH! You just put shivers through me. I haven't had that problem for a couple of months now. I suppose since it hasn't happened for soooo long, it will probably happen tonight. Thanks a lot! :)

  • Dave Cutler joined Microsoft from Digital before it became obvious that MS was the Evil Empire. At that time (pre-1987, I realize that's prehistoric to many /. readers) IBM was the Emperor to fear. Subsequent to the 1987 announcement of PS/2 systems with MCA and OS/2 the users eventually realized that the Emperor had no clothes.

    Microsoft only came out from under IBM's shadow after the market success of Windows 3, when MS broke off development of OS/2 with IBM in favor of developing NT.
  • So this is where NT belongs, among the relics of history.

    Oh, BTW, if any NT advocate knows how to stop NT from changing the positions of icons in the bar at the bottom, please let me know. My problem with this is that if I have a couple of xterms open, their icons in the bar flip positions depending on which is opened when. If more apps are open, icon behaviour is practically undefined. Truly dumb.
  • This debate has been culled and included in the ORA book Open Sources [amazon.com]. It's a great discussion/debate and a fascinating book.
  • You need to reevaluate your NT-related knowledge if you have problems keeping an NT box up more than two days...
  • Hi FeiYen, you appear to be very knowledgeable regarding NT. I have read (in the context of BO articles) that Internet Explorer runs not only with administrator privileges, but even partially in kernel space. Can you confirm or deny this? Thanks
    They said they got portability and extensibility... um, what about reliability, compatibility, and performance?
    Precisely! That's just it! They didn't get the other three, and boy, can't we tell?

    Although I am an anti-Microsoft troll (but not as qualified as I could be), I still use/have to use M$ because I'm not as versed in Linux as I want to be, I don't have a Mac, I'd like BeOS but it's more $$ to pay, etc., etc. So I still use M$ much of the time. Especially since I have/had a WinModem, so that kinda sucks right there. (The had/have comes from the fact that it's in my laptop, which I don't have right now because it's temporarily out of service. Oh well, c'est la vie.)

    Anyway, I thought it was interesting just to read and try to get more insight into what happened back then, although it seemed more like a feature story for some newspaper than an interview, because most of it was just quotes from the 2 developers, not people asking questions. Odd, I must say.
  • Anyhow, those Germans love Linux, they come in at a staggering 46 contributors compared to a measly runner-up 16 for .uk, though Wales did produce Alan, so all is forgiven.

    I'd love to see some statistics on the contributions of people from various countries to various free software projects, relative to the populations of those countries - or to the number of people involved in software development in those countries. I suspect that there are some that have significantly higher per-capita contributions than others (and that Germany'd be one of the ones with higher levels of contributions); if so, I'd be curious what the reason(s) are....

  • What I found interesting was Cutler's comments that he'd rather have stability than the latest whiz-bang features.

    So... if he's the architect, why aren't they listening to him and concentrating on stability?

    Must be those marketing guys who want to make sure we are running Insecure Exploder everywhere. Yeah, yeah, that's the ticket!

    Scott
  • by millia ( 35740 ) on Wednesday August 25, 1999 @05:46AM (#1727110) Homepage
    (On a totally unrelated note, is anybody else interested in taking up a collection to get the 'Dave' banner taken out of circulation? That guy gives me the creeps.)

    As was pointed out earlier, the first thing everybody should do (if they're interested in the subject, of course) is to go find a copy of 'Showstopper,' written about the birth of NT. I found mine remaindered. You do get the anecdotes about Cutler's boorishness, but you also get a balanced look at the development of a modern operating system.

    Now, I'm not enough of a kernel guy to argue about design specs, capabilities, etc., but after re-reading the book there are many things that pop to mind about Linux/NT:
    0) Cutler's goals are/were initially very Unix-like. When you get right down to it, NT in its initial design sounded (again, to unqualified ears) remarkably similar, with its ways of isolating the kernel, etc. I cannot imagine HOW much fun they have had working DirectX into NT, and I can't even begin to imagine how much Cutler hated it (based on his vision as seen in the interview and in Showstopper).
    1) Cutler was adamant about getting rid of bugs. The surprising stability of W2K beta 3 was frankly shocking to me, until I remembered that he was back in the fold. Yes, we are talking about a prime example of bloatware, but even so. Unfortunately, since the code is only reviewed by a limited number of people, they're always going to be behind the curve. Furthermore, Cutler is (probably) not responsible for the behavior of other programs from Microsoft, such as Exchange, SQL, etc., and from my experience those are the primary starting point for failure on NT boxes. Add to this the fact that there was again undoubtedly tremendous pressure on Cutler from Marketing to do those whizzy things that would compromise kernel stability, and I'm glad I don't have his job (or that of a programmer on Exchange). I honestly don't see how bugs can be eradicated from the end-user experience. NT is much more stable than is posited here IF you can keep the number of Microsoft server apps to a minimum, preferably one. But things like BackOffice pretend that you can have Exchange, SQL, and SMS all running on the same box; hell, I can't even get through installing such a combination without a crash. As long as Cutler is left alone and has sufficient authority, that *might* happen, but frankly I doubt it.
    2) Portability is gone now for NT. I never did quite understand how Microsoft wouldn't pony up the money to keep NT alive on PPC chips, and I'm even more confused about Compaq shutting down Alpha NT support and development. Isn't having a valid counterweight to Intel more important than, say, $250 million to keep NT alive on those 2 platforms? Does Microsoft trust AMD to survive? In contrast, Linux is ported to gosh-knows-how-many machines already, and will continue to be. Seems to me keeping it alive on multiple platforms would be an investment in the future.
    3) Graphics. The bane of Cutler's existence. I think had his crew been left alone to create a text mode only NT for v1, we would be looking at a totally different situation. Novell did just fine with text only screens, and Linux did too in the beginning. Trying to force NT to run before it was stable enough to walk was a mistake. It would have been better to layer it on later AFTER stability was worked out.

    Ultimately, I think NT is doomed to failure against Linux- there are simply too many people using Linux now. But more, I think anything manmade is either made for money or art. And things made for art (or love, if you will) endure. Cutler has the artist in him, but his painting keeps getting smeared by people above him. Gates doesn't have art in him and never will- he may be a nerd, but he's no geek.
  • Strange to see that they at one time were creating a POSIX-compliant windoze. Geez, everything that they stress in this interview (security, reliability, optimization...) is the exact opposite of what Windows is. Are these the people that are brainwashing the masses, or are these people brainwashed themselves?
  • Interesting, especially in light of comments like:

    "I'd much rather see the most reliable and usable operating system than the most whizzy-bang operating system," Cutler says. "To increase reliability we have to make choices. For every 10 bugs we fix, we may introduce three more.

    But do you want to ship with 10 bugs, or do you want to ship with three?"

    Funny how the reality is somewhat different...
  • I think I'd respond to him how he'd want me to respond to him...as if he were a *normal* person.

    I certainly wouldn't be all dewy-eyed and beside myself with awe.
  • If anybody, Mac heads should be worshipping Steve Wozniak...remember...the guy that designed and invented everything and got screwed over and over again by Steve Jobs?
  • .edu is *not* necessarily the States.

    When I was a student at the University of Toronto in the early 90s, all my e-mail addresses ended with .edu.

    Most Canadian universities were doing the same thing then, as the .ca domain wasn't as widely used yet.

    Now most addresses are utoronto.ca, but in previous years many unis outside the US used the .edu domain.
  • I doubt all you're running is Exchange, Outlook and IE. And you don't know what the HCL is yet you claim your drivers are fine.

    95% of blue screens are due to hardware, either the equipment itself or the drivers. When I started using NT, I stared at blue screens all the time and couldn't figure out shit. MS has some resources for debugging them, but not much. Then I started using only hardware on the HCL, from vendors with solid drivers (like Matrox for video, Intel for NICs, etc.). All of a sudden, no more blue screens...


    -witz
  • Strangely I also seem to have the ? problem. My machine: NT4. My browser: IE5.

    Are you using a special style sheet, or your own font? The nonstandard quote characters that Microsoft likes aren't in all fonts, even on Windows.

  • It already exists, and I believe has been published in part.

    Berlin-- http://www.berlin-consortium.org [berlin-consortium.org]
  • Yeah, we get similar crap quite often here on /., the difference being that /. doesn't even give you an "Internal Error"; it simply times out. I don't know which is worse...
  • As a Microsoft geek, I feel like I'm holding a piece of history.

    I'm sure she's referring to her Microsoft stock. At least she can admit that Microsoft is history. (:

    ---

  • Makes you wonder why NT runs on only two platforms, and not that well on either.

    Great chunks of both OSes (NT & Linux) can be considered kludges and hacks; the difference with Linux is that these hacks are open for peer review.

  • If this was intended to be funny, I'd say you failed.
  • Remember that 1989 was before Windows 3.0. In 1989, Windows was still a nonentity.

    Yep, and Al Gore invented the Internet and Gates is an innovative visionary. Uh-huh, yeah, right.
    Guess those holes on the Mac screen are 'doors' or something. Just can't stand the solipsism [geocities.com] :))

    Chuck
  • What about portability? The kernel itself might be portable, but Win32 is very unportable, at least to 64-bit architectures. That's what you get for using things like DWORD and WORD for uid and pid instead of uid_t and pid_t.
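    A small invented example of why that matters: DWORD is nailed to 32 bits everywhere, while a typedef in the style of the POSIX uid_t/pid_t (my_pid_t below is a made-up stand-in, not the real type) can widen with the platform.

        #include <stdint.h>
        #include <stdio.h>

        typedef uint32_t DWORD;     /* fixed at 32 bits on every platform */
        typedef long     my_pid_t;  /* abstract: 32 bits on ILP32, 64 on LP64 */

        int main(void) {
            printf("DWORD:    %zu bytes everywhere\n", sizeof(DWORD));
            printf("my_pid_t: %zu bytes (tracks the platform)\n",
                   sizeof(my_pid_t));
            return 0;
        }

    On a 32-bit build both print 4; on an LP64 build the typedef'd type grows to 8 without any caller changing.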
  • by Sxooter ( 29722 ) on Wednesday August 25, 1999 @04:25AM (#1727132)
    In the interview, Cutler says:


    But the only way to achieve portability is to develop for more than one platform at a time.

    Funny. Linux was NOT designed with portability in mind, and it uses a monolithic kernel. Yet by making a fairly bland set of assumptions about microprocessor design, it runs on nearly everything (didn't I just see something about it being on the Dreamcast machines now?).

    Meanwhile, NT runs on TWO platforms. and 64 bit NT will be a complete rewrite! Dear God man, if you can't even port to a 64 bit CPU, how transportable can your OS be?
  • by Anonymous Coward
    The article does not reveal very much from the original spec; however, I am curious how members of the Linux kernel development team feel about what was said. For instance, they mentioned not worrying about the memory footprint, and the importance of sticking to the ideas of the few key people who write the spec. This seems somewhat different from the Internet development of Linux. Any thoughts on the two different development models, and on kernel planning in general?
  • Now, the fact that Linux did pretty well at that despite its x86-originated design is largely a matter of emulating the design of UNIX -- which had been designed for portability back in about 1970.

    Not exactly. UNIX was originally done in PDP-7 assembler (for the benefit of the younger members of the audience, that "7" is not a typo), redone in PDP-11 assembler, and then redone in C, but that wasn't for portability; the first porting work at AT&T, at least, was done in the mid-to-late '70s with a port to the Interdata 8/32 (various ports had been done by other folk outside AT&T) - V7 was, I think, the first UNIX released outside AT&T that included the results of that work.

    Linux's API was a UNIX API, and the UNIXes of that day (and of the present) have an API that's basically a V7 superset (the V7 API fixed a pile of somewhat ugly non-portable bits, e.g. stat() filled in a structure rather than an array of ints), so at the API level perhaps it inherited portability from UNIX, but the kernel implementation, at least, wasn't based on an AT&T implementation - and I have the impression that the original Linux kernel directly used a number of x86isms.

  • The interview is a no-win situation. On the one hand, it would have been great to hear the view of a couple of NT developers on how MS corporate culture opposes their stated goals of portability, etc. (as was hinted at). But such an interview wouldn't be permitted by the culture, which means it would have taken a third-party interviewer, and the interview would have been posted somewhere other than www.microsoft.com. And it's unlikely that such a third-party interview would have been permitted...

    So it's a shame.
  • The continual claim that NT only BSODs on "non-approved hardware" is utter bunkum. Here we are running, or rather attempting to run, Procomm on NT4 on brand-new PIII Compaq Deskpros, with absolutely no additional hardware over the base spec.

    Every time we try to run Procomm32, NT BSODs. Please note that I understand the difference between an application crash and an OS crash. Note also that I am explicitly stating that NT itself is crashing.

    It seems that Procomm on NT4 SP5 causes NT to die horribly. It's fine with SP4. Of course, SP4 isn't Y2K compliant, and as a financial institution we have to have ticks in all the Y2K boxes.

    In any case, there's no way an app should be bringing down the OS. There's nothing dodgy or special about the hardware. There is no possible reason for this to happen, except that NT is basically not robust.

    (I personally think NT3.5 was pretty robust; each subsequent release has been less so.)

    There are many experienced Micros~1 engineers in the team here, some of them MCSEs. None of them has a good word to say about NT. They've had to make the long walk to the other side of the building too many times to fix a broken NT server. They just accept it as an unpleasant reality of life; NT crashes, but managers love NT and NT pays the bills.
  • I really feel for these poor guys. Despite the breathless tone of the "interview", these guys sound dissatisfied to me. Anybody who rates his last ten years work a 2/5 in a PR interview can't be a happy camper.

    The lofty goals they set for themselves were collectively impossible to achieve at the time on commodity hardware, although 3 or 4 out of 5 were probably doable a few years ago, and 5 out of 5 is a possibility today. I actually thought NT 3.51 was pretty good, which surprised my friends who knew that I despised most Microsoft products. You just had to set aside a fixed amount of processor bandwidth and memory for the OS, and it was pretty stable. The GUI management was terrific in small workgroup settings. NT just wasn't viable on a 486 DX2/66 with 16MB of RAM. I used to tell people that NT 3.51 was pretty good, but that you had to start from a base of a Pentium and 32MB RAM and figure your application requirements above that. By today's standards that's like saying start with a Celeron/400 and 128MB RAM -- quite doable, just a little extravagant for the OS alone.

    When I first read that NT 4 moved more stuff into privileged space for performance and memory, I knew it was a bad sign. It must have been foisted on the engineers, because it undermined the architecture of the system. The marketing types probably got tired of hearing that NT was a good product but too RAM-hungry to run on a typical business machine. If they'd just waited a year, the typical business machine would have caught up.

    Of course, now, to add insult to injury, NT's thunder is being stolen by an OS with a monolithic kernel which is more portable, reliable, and faster in most configurations.
  • "Portability, reliability, extensibility, compatability, performance."

    Well, three out of... wait... no, no, no, two out of, no, wait...... Argh! Nevermind. :)

  • I got the same error... but kept trying and got in. Try it again, just like with Win95. ;)
