Inside The Development of Windows NT
mrpuffypants writes "Winsupersite has a three-part series this month about the history and development of Windows NT, all the way up through Windows Server 2003. The author goes fairly in-depth describing how Windows is developed and managed, and how all 50-million-plus lines are compiled daily. Part One covers the history of NT from its early days at Microsoft, and Part Two discusses how the deployment of the forthcoming server version of Windows is coordinated daily." *shiver*
NT & VMS (Score:5, Informative)
Re:NT == VAX OS? (Score:5, Informative)
Re:NT == VAX OS? (Score:5, Informative)
That would be VMS (some VAXen ran Ultrix, poor things). IBM and MS started a collaboration called OS/2, then later decided to part ways. Whatever MS's other motives were in the split, MS was staking its entire future on what was, to IBM, a toy project, so MS wasn't entirely enthusiastic about developing at IBM speed. IBM kept the OS/2 name, MS hired Dave Cutler from DEC, and Cutler dubbed the new fork WNT: each letter is one past the corresponding letter of VMS, and any expansion of the name is entirely a backronym.
NT does include some of VMS's heritage, including strong async I/O support throughout. The DOS stuff is really a matter of emulating the interface -- a whole lot of work went into making drive letters and backslashes work everywhere, believe it or not. Not surprisingly, it has more in common with OS/2, starting with the supervisor design and the object manager.
Re:NT == VAX OS? (Score:3, Informative)
So saying that NT is just VMS part II isn't really accurate, but the same guilty parties are involved. If you can find it, there was a book called _ShowStopper! The Breakneck Race to Create Windows NT_ that does a pretty good job of chronicling the history of NT during its early days.
Know thy enemy, and all that.
Re:Alpha (Score:5, Informative)
You have to remember that NT4 was a 32-bit operating system, even on the Alpha. Therefore, you didn't really gain much by going to the Alpha, except for some nice speed boosts (it was definitely the fastest CPU on the market for years).
It was similar to running NT4 on a Pentium Pro 166 or 200.
The biggest problem I had was finding software. However, everyone's favorite telnet app, PuTTY, comes compiled for NT4/Alpha.
The Register previously offered Windows 2000 for the Alpha, if you asked them for it. I never did, since my UDB was seriously underpowered (128MB RAM, 166 MHz).
Re:Why do Microsoft reviewers always sound... (Score:3, Informative)
Rest of the gaming industry? From my viewpoint it was Carmack alone.
Clearly superior OpenGL? Depends what you're using it for. OpenGL certainly wasn't (and still isn't, in many cases) faster on consumer-level cards. Direct3D was developed alongside consumer-level hardware, supporting features that actually exist; OpenGL was designed on paper.
By and large, 3D games were being written for Glide, and developers absolutely loved an open API specifically targeted at game development.
Re:Incremental build? (Score:5, Informative)
Re:Dave Cutler's "Vision" (Score:3, Informative)
Re:Security? (Score:2, Informative)
NT was designed in a pre-internet era, when security for office PCs was still mainly the guns-and-guards model. Users and passwords were there (from your average manager's POV) to keep Jim from accidentally deleting or overwriting Sally's spreadsheets.
None of the 'holes' can be exploited if there's no access to the system whatsoever. It was a non-issue until the internet connected those systems to the outside world.
It lost its independence with 4.0 (Score:5, Informative)
They moved the graphics subsystem into the kernel, and it ceased to be a microkernel. When pretty much everything lives in userland, portability is pretty easy. In fact, you can essentially write a new kernel (with the same external interfaces) for each architecture if need be. You also get neat features like being able to restart networking or the graphics system if they crash, without bringing down the system.
The problem that you have on i386 is that context switching is expensive (read: slow as a dog). On other platforms (sparc, ppc), it's not that big a deal.
Now, Windows doesn't look like a microkernel at all. And it's not at all portable, either. From what I understand, the Itanic port is giving them big headaches, and Intel is none-too-pleased about it.
Re:best quote from the article (Score:1, Informative)
Re:Hmm (Score:5, Informative)
There was a simmering fight over whether OS/2 should be "Protected Mode Windows" or whether Windows should be "Presentation Manager for DOS". Since neither platform had that many users or developers at the time, it could have gone either way.
They make this sound like a gee-whiz revelation, but in fact Microsoft wanted Windows compatibility in OS/2 from the beginning and IBM wanted a unique API.
Since IBM wore the pants, they won the day originally. However, this really bit them in the ass with the subsequent popularity of Windows 3, because it was difficult to target OS/2, so software was either missing or dismissed as a poor Windows port.
Not that Win32 was a huge success in the early years either -- most software had to be run in Win 3.1 emulation, and even MS themselves only belatedly produced a 32-bit version of Office, and not much other software.
Only the Linux KERNEL is 5 mln lines (Score:4, Informative)
The WinNT kernel is nowhere near 5 million lines of code; I believe it is well below 1 million lines.
WinNT is also compiled only for the Intel platform, so it does not include code for other platforms.
Re:Michael Landon Is My Cousin (Score:3, Informative)
So did NT4, and so does 2K (it seems, at least). When you start the installation, the message about loading it into memory still flashes on the screen.
Agree with you here 100% -- especially considering that at the beginning they spend quite some time praising modularity, etc. -- how easy it was to port to MIPS/x86/Alpha! Then they seem to have realized that maintaining all of that is not as easy as porting...
Re:Incremental build? (Score:2, Informative)
The build process is divided up as well: each major group has a build lab (e.g. COM, networking, kernel, shell) where they build their part of the product daily. Building means generating free and checked builds, for all SKUs (server, advanced, datacenter, etc.) and all CPUs (x86, ia64, amd64). When each group's build achieves a level of stability/quality, they integrate their changes into the master build lab, and eventually there is a reverse integration from master to group to pick up other groups' changes which might affect your area.
There's a morning and an evening build, but we just started to ignore one of them because it was way too much time to be installing and/or upgrading twice a day.
So "they" aren't just building from scratch every day: the master build lab is building 5+ SKUs, free and checked, for 3 CPUs every day. And the group build labs (called virtual build labs) are doing the same, but just for their area.
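To see why "one daily build" is really dozens of builds, here's a minimal sketch of that SKU x flavor x CPU matrix. The SKU names are illustrative (taken loosely from the comment above, not from any actual build script):

```c
#include <stdio.h>

/* Illustrative build matrix: every SKU is built in both free (retail)
 * and checked (debug) flavors, for every target CPU. The names below
 * are assumptions for illustration, not real build-lab identifiers. */
static const char *skus[]    = { "server", "advanced", "datacenter", "web", "standard" };
static const char *flavors[] = { "free", "checked" };
static const char *cpus[]    = { "x86", "ia64", "amd64" };

int count_builds(void)
{
    int n = 0;
    for (int s = 0; s < (int)(sizeof skus / sizeof *skus); s++)
        for (int f = 0; f < (int)(sizeof flavors / sizeof *flavors); f++)
            for (int c = 0; c < (int)(sizeof cpus / sizeof *cpus); c++) {
                /* One full OS build per combination. */
                printf("build %s/%s/%s\n", skus[s], flavors[f], cpus[c]);
                n++;
            }
    return n;
}
```

With 5 SKUs, 2 flavors, and 3 CPUs, that's 30 full builds a day from the master lab alone, which is why the virtual build labs only do their own slice.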
Re:Security? (Score:3, Informative)
Actually, it's more like 'the fact that your vest isn't bulletproof doesn't matter until somebody invents the gun.'
UNIX had the exact same problem; the entire point behind UNIX was that it was MULTICS with a bunch of the security stuff REMOVED. Go look on some old security lists; many daemons, such as sendmail or lpd, would give you root just for the asking.
Re:Branding Issue Bugs? (Score:3, Informative)
A 'branding issue bug' is when you have morons who hard-code the name of the OS into the source files instead of referring to a single variable or a fixed file.
As a quote from the interview says: "I went out and handpicked the three best developers on the team and said, 'just go and fix it.' One developer fixed over 7,000 references to [Windows] .NET Server. Let's just say that there are people I trust, and people I don't trust. I told these guys, 'don't tell me what you're doing. Just do it.'"
So clearly a lot of the developers are hard-coding certain things into the code rather than relying on a solid design document. Sloppy, very sloppy.
dave
Re:The NT Kernel Is Good (Score:5, Informative)
Well....... maybe.
I seem to recall an MS employee claiming that it was entirely Microsoft's fault Windows was so unstable, even though crashes were normally caused by faulty drivers. His theory was that if MS were more open with the kernel code, driver writers could work more closely and more easily with them, and overall stability would go up. Instead, what happened (he claimed) was that MS would investigate a crash, find that some dodgy driver was screwing about with the kernel, and so they'd tighten up the interfaces and get even more secretive with the code. The driver developers, faced with a brick wall, would then invent even more elaborate (and fragile) hacks to do what they wanted, so stability went down, not up.
So you can't really blame the kernel itself, but perhaps you can blame the management of it. Linux is now facing a similar problem with the growth of binary-only drivers -- they tend to hook into the ksyms and cause extremely hard-to-track-down bugs, which is why they are no longer allowed to use those hooks.
Re:How MS "punishes" bug meeting truants (Score:3, Informative)
The NEW FEATURE that the group was working on gets moved to the next release.
Re:WinNT development cycle. (Score:3, Informative)
Used properly, gotos are no more harmful than any other construct.
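The canonical "proper use" is the single-exit error-unwind pattern found all over kernel-style C, where a forward goto releases resources in reverse order of acquisition. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* Disciplined goto: forward jumps only, to a cleanup ladder at the
 * bottom, so each resource is released exactly once no matter where
 * the function bails out. Returns 0 on success, -1 on failure. */
int process(const char *path)
{
    int rc = -1;
    char *buf = NULL;

    FILE *f = fopen(path, "rb");
    if (!f)
        goto out;            /* nothing acquired yet */

    buf = malloc(4096);
    if (!buf)
        goto out_close;      /* file open, buffer not yet allocated */

    if (fread(buf, 1, 4096, f) == 0)
        goto out_free;       /* both acquired; empty/unreadable file */

    rc = 0;                  /* success */

out_free:
    free(buf);
out_close:
    fclose(f);
out:
    return rc;
}
```

The alternative -- deeply nested ifs or duplicated cleanup at every early return -- is where the real maintenance bugs come from.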
Re:How MS "punishes" bug meeting truants (Score:3, Informative)
Re:NT == VAX OS? (Score:3, Informative)
IBM quality
You never tried running DOS apps in OS/2 1.0, did you? To say it was buggy would be an understatement...
OS/2 development speed wasn't so bad
As long as you weren't waiting for applications which ran on your shiny new requires-4MB-of-RAM-just-to-boot OS/2? Whatever happened to OfficeVision, again? ;-)
The real problem is that MS wanted to push forward the stupid MS Win16 API inherited from MS Windows 1.0 for the IBM-PC/XT
No, the real problem was that the marketplace didn't want OS/2: even when Microsoft were fully committed to it (the period thru' mid-89) and pushing it as the best thing since sliced bread, no-one (outside IBM fiefdoms) bought it. If you want to blame anyone for this state of affairs, blame the clone makers who had no interest in handing control of the PC marketplace back to IBM. IIRC, at the time and with all of DOS's flaws, and with both IBM and Microsoft telling the market that DOS was dead and OS/2 was the way forward, DOS was outselling OS/2 by a ratio of about 200:1.
Oh, and targeting the 286 was a bad idea from the get-go, but IBM didn't want to eat into sales of their (expensive) AS/400 line.
while IBM would have made it interoperable
documented
As long as you were prepared to fork over $3000 for said documentation
and stable
I get it: you never used OS/2, did you? Because if you did, you'd know perfectly well that when OS/2 shipped it was so flaky it made Windows Me look like a beacon of system stability. Maybe by OS/2 3.0 it had become moderately usable, but OS/2 1.0, 1.1, 1.2 and 1.3 sucked in so many different ways and had uptimes measured in minutes. Datapoint: a customer at the time had about 500 PS/2 Model 60s running OS/2 1.3 w/ Presentation Manager, one in each store. At the time, this was an investment, in hardware alone, of about $3 million, plus about another third of a million in OS licences (yes, only ten years ago $700 was considered a very reasonable price point for a desktop operating system licence -- if you were IBM...). They were shockingly unusable -- as in, the store manager would be entering that day's sales into a dBase app and running the numbers, and they'd be able to enter stuff successfully about one time in three. The other two times, OS/2 or one of the apps running under it would crap itself and lock the queue, and once that happened it was goodnight Vienna.
Re:Michael Landon Is My Cousin (Score:3, Informative)
In 1994, they sold fewer than 400,000 copies of NT, out of a total NT market of fewer than 700,000. IIRC. Not a very large market, but they had just finished putting hundreds of millions of dollars into MARKETING Windows 95 (aka Chicago), and IBM said it was going to push OS/2 into the server market. Hundreds of millions of dollars were dumped into MARKETING Windows NT into the server space. They are very much like vultures. They feed on the technology and markets of others. Go throw $50 million at a market and you'll see Microsoft throw $100 million at it while they figure out what the market is and how to make it work only on MS Windows.
Yup, the reason a company like Microsoft left the MIPS/PPC/Alpha/etc. market was that there wasn't anybody in that market to kill. They weren't very large, and Apple killed the PReP and CHRP platforms of the IBM/Apple/Motorola partnership, and with them the last hope of an alternative DESKTOP hardware platform. When the $$ behind that market dried up, so did Microsoft's interest outside of x86.
There were very large companies putting $$$ behind those platforms and Microsoft had to be sure they'd be there to crush any other OS on those platforms. When the big $$$ leaves, so does Microsoft. They are not going after Linux because a bunch of developers like messing with it in their basements. It's because there is big $$$ behind it now and $$$ being paid for it as a solution base. IMHO.
LoB
Re:12 hour compiles!!! (Score:4, Informative)
Even as a dev, I try to run a clean build of my systems before I leave on Friday. I can't tell you how many times a weird bug has been caused by some missed dependency that the make system wasn't rebuilding.
Re:Availability (Score:3, Informative)
I hate percentage uptime stats: they're useless and, as you've just proved, hard to calculate correctly if you're a moron.
If you have a large server population of, say, 50,000 RC1 boxes all reporting back to MS (and they do), you can easily determine the MTBF of the fleet. But more importantly, the error reporting tells you about the bugs your customers are seeing and how many are seeing the same bug.
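For reference, fleet MTBF is just total operating hours across all reporting boxes divided by the number of failures observed. A minimal sketch (the function name and the figures in the comment are illustrative assumptions, not numbers from MS):

```c
/* Mean time between failures for a fleet: total operating hours
 * accumulated across all machines, divided by failures observed.
 * Returns a negative value if no failures have been seen yet,
 * since MTBF is undefined in that case. */
double fleet_mtbf(double total_hours, unsigned failures)
{
    if (failures == 0)
        return -1.0;
    return total_hours / failures;
}
```

E.g. 50,000 boxes running for a 30-day month is 36,000,000 machine-hours; 1,200 crash reports in that window would put fleet MTBF at 30,000 hours -- a single number, unlike a percentage, that gets more trustworthy as the fleet grows.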
Andrew