Windows 2000 to provoke domain game

According to this article found on PC Week, Microsoft Windows 2000 implements DDNS (Dynamic Domain Name System) in a way that makes it extremely difficult for administrators to integrate the operating system upgrade with Unix systems, which use the older, static DNS. Could someone here explain the difference between static DNS and dynamic DNS, and why it's barely implemented on any Unices, including Linux? I smell a fight brewing between Unix admins and NT/2000 admins at some corporations. Am I wrong?
This discussion has been archived. No new comments can be posted.

Windows 2000 to provoke domain game

Comments Filter:
  • It's okay to use an excerpt from the article, as long as you use quotation marks. Otherwise, it's just plagiarism.
  • You've obviously never seen a major corporation deal with a crack. No one but the admins know about it, unless enough damage was done to be obvious to everyone, or a web page got tagged. Slashdotters hear about less than 1% of corporate security problems.
  • you are allowed to use parts of copyrighted material in review, critique, or parody. I'd say this probably falls under review. Besides, I'm sure ZDNet is happy every time one of their articles gets posted to Slashdot: more hits, more banners loaded, more ad money.
  • What is it with slashdot that attracts moron posters?

    I've seen a few intelligent comments here, along with a whole slew of "Help me! Help me! Microsoft is out to get me, those evil dirty bastards!"

    Sheesh. Assign your root DNS to your Unix machines, and delegate the Win2K DDNS to a subdomain. It's that simple...

    They can coexist. How the hell do you think the internet works except for delegation of DNS duties to thousands of different machines and DNS implementations?

    The Microsoft DNS implementation is compliant with BIND 8. It may or may not allow dynamically allocated Unix machines, but it most certainly responds to DNS lookups from Unix machines, and it most certainly will use a Unix machine as an authoritative DNS for a different domain. I'll bet it even implements some level of security to prevent a machine from overwriting the DNS record for a server... in fact, I'm going to go experiment with this right now.

    Sheesh, what a bunch of maroons. This is a non-issue, the article was FUD, get over it.
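    A minimal sketch of the delegation suggested above, as a BIND zone file fragment (all names and addresses here are placeholders, not taken from the article):

```
; example.com, served by the Unix BIND servers
$ORIGIN example.com.
@           IN  SOA  ns1.example.com. hostmaster.example.com. (
                     1999082801 3600 900 604800 86400 )
            IN  NS   ns1.example.com.
ns1         IN  A    10.0.0.1

; delegate the dynamic Win2K namespace to its own subdomain
win2k       IN  NS   dc1.win2k.example.com.
dc1.win2k   IN  A    10.0.1.1   ; glue record for the Win2K DNS server
```

    The Unix servers stay authoritative for the root of the namespace; the Win2K DDNS server only ever writes records under win2k.example.com.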
  • Pardon, Mr. NT Admin?

    What's this I hear?

    YOU talking about SECURITY?

    Haven't been paying attention to recent news, eh? Remember the 7-second crack? Used a trojan installed on an employee's NT box to fetch their password though they were connecting via SSH.

    But no, a hacker can't do any harm on an NT box. Riiiight.

    An NT admin would be just as wise to apply patches as we are -- there's no less need. Except Microsoft only rarely distributes patches, so you CAN'T. Personally, I'd rather have the option to spend the time to be secure. You don't even have that choice.
  • Honestly, I don't know how you M$ employees sleep at night. You must have no conscience.

    "While we will eventually support a standard, the IETF is having problems coming up with final draft."

    Crap, why do you lie? You know as well as I do that M$ has no intention of supporting the standard. You will give some lame excuse, like you did with your HTML standard. Why can't you ship W2K supporting the current standards, and then implement the new standards when they get approved? Read the Halloween docs: it is the stated intent of M$ to break standards.

    RE JAVA.

    It does not matter if Java is a language or a platform, you dolt! You signed a contract and then violated it with malicious intent. M$ INTENDED to break Java. M$ signed a contract they knew they were going to break. Read the DOJ transcripts, read the depositions, before you go spouting off lame excuses.

    M$ lies, m$ cheats, m$ steals. You my friend are an instrument of unethical people. Clean up your karma before it's too late.

  • I agree with the concept of Office being the "killer app" that drives M$. I recently was working with the CTO of one of the largest cable companies (as in leased line) in Asia, who told me that if Office, or something that could guarantee 100% compatibility with M$ Office (both in read/write and user interface), were available, he would switch all 20,000 (yes, the number is correct) of his corporate desktops to Linux immediately. This, by the way, is coming from a man who has Linux on his desktop and notebook with StarOffice.

    Currently StarOffice comes close, but it doesn't do what is needed. Porting of Doc and Excel files is still a little iffy. It comes close and doesn't lose too much data, but it seems to usually require a cleanup, which drives secretaries and bookkeepers nuts and costs companies $$$ in lost productivity.

    Look at what M$ did to Lotus, Harvard Graphics, dBase, and MultiMate (yes, these were once the dominant forces in corporate software): by giving the user 100% compatibility with their current DB of information and archives, in one package with one licensing fee, they gained total dominance of the market. Dynamic DNS won't sell servers in and of itself. Most good sysadmins have already created a workaround for this problem. What drives M$ is Office and its lack of competition.
  • Hopefully some of the linux zealots will take your advice and do some research about NT before they start making wild claims that it crashes every 4 hours.

  • M$ lies, m$ cheats and m$ steals I can live with. It's the way M$ sucks that kills me.

  • Uhh, NetBSD and FreeBSD already have very useable USB drivers, and support for USB keyboards and mice. And yes Virginia, Linux isn't the only Open Source OS. That is if you consider the GPL open.
  • Correct me if I'm wrong, but hasn't BIND 8.x had this capability for some time? What is the difference in M$'s implementation? Are they "extending the specs" the way they did HTML?
    Stephen L. Palmer
    Just another BOFH.
  • At work (not my URL) even the NT admins were annoyed by MS' behavior, trying to ram Win2K DNS down our throats. So the NT guys--to their credit--decided to go with MetaIP from Checkpoint. The one thing I'd like to see from MetaIP would be a little less proprietary approach, then I could endorse them. They talk of a "one-time conversion" of DNS files from human-readable text to some funky proprietary format. As anyone who has administered DNS or mail will attest, you do NOT want your info to be a binary blob that you can't decipher if you start having problems. Anyway, at least Checkpoint pays lip service to standards, and since Checkpoint is an Israeli company, over time they'll be inclined to favor Linux for its technical elegance, accessibility, etc.
  • RFC status means nothing.

    A document becomes an RFC by:

    a) being written
    b) being sent to the IETF
    c) waiting in a 100-deep queue for some time
    d) getting assigned a number

    RFC-ness doesn't guarantee that it is official doctrine, only that "hey, here's the spec, get it at your local site."

    There are stronger levels of IETF document for official blessings.
  • by Anonymous Coward
    Dynamic DNS automatically takes care of name-to-IP mappings when IPs change due to, for example, DHCP. Static DNS systems need to run update scripts for this, or the mappings have to be updated manually. (Or, as in the company I work for, names change when IPs change.) The whole thing could be done with a DHCP server that knows how to update BIND's name info. The friendly people at Redmond have some sort of system available for this. Note, though, this is nothing new; it's available for current NTs if you look hard enough.
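    On the Unix side, the same kind of update can be driven by hand with BIND's nsupdate tool (RFC 2136); the hostname and addresses below are made up for illustration:

```
; nsupdate input: point laptop42 at its new DHCP-assigned address
server ns1.example.com
update delete laptop42.example.com. A
update add laptop42.example.com. 3600 A 10.0.5.23
send
```

    A DHCP server (or a lease-event script) can emit exactly this kind of update whenever a lease is granted or renewed.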
  • I'm fairly certain my school (USC) implements this, and it honestly works fine w/ the win9x/nt dhcp clients, but all the linux clients i've tried to use have screwed it up.

    Hopefully when a more complete linux dhcp client is working the problems will be solved.
  • Does anyone else find it ironic that someone named "HeUnique" would copy a headline word for word from another publication?

    BTW, you probably want to change that before someone sues.
    Put Hemos through English 101!
    "An armed society is a polite society" -- Robert Heinlein
  • Alright. I'm getting sick and tired of listening to all of these MS conspiracy freaks blow 'facts' out of their a$$es.

    For the record, get it straight: MICROSOFT DID NOT INVENT DDNS! In my (not so) humble opinion, this is a great move! Finally we are getting rid of WINS (which was TRULY a Microsoft-only thing) and replacing it with a decent 'standard'.

    Stop looking for reasons to berate Microsoft, especially when the lot of you haven't even tried to check on the facts. I have to be one of the few people here who knows what WINS was, and to realize that it deserved all of the negative feelings that DDNS is getting.

    Get a life. Go read Linux-Advocacy-HOWTO. Stop being a bunch of conspiracy-driven punks.
  • by scrytch ( 9198 ) <> on Saturday August 28, 1999 @12:57PM (#1720568)
    Have you just noticed this fact? Most slashdot headlines are taken verbatim from the original article. This isn't unusual in itself, but custom and courtesy dictate that the name of the publication or service be placed before the headline. This is what Linux Today does.
  • Yes, this could be a risk. To address this risk, you are allowed to limit who you accept updates from (in both BIND and WIN2K DDNS).

    A Win2K DHCP server can act as a proxy for its clients so that registration of both A and PTR records occurs via the DHCP server, NOT the DHCP client.

    Most installations that I've seen only accept updates from the DHCP server, not the individual clients.
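    In BIND 8's named.conf, that restriction is a one-line allow-update clause on the zone (the addresses below are placeholder DHCP servers, not from the comment):

```
zone "example.com" {
    type master;
    file "db.example.com";
    // accept dynamic updates only from the DHCP servers,
    // never from individual clients
    allow-update { 10.0.0.2; 10.0.0.3; };
};
```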

  • Dynamic DNS, if Microsoft is following the emerging RFC (yeah, right), gives you the ability to automatically update your DNS tables if a machine's IP address changes. So, for example, if your machines are on DHCP, and their lease runs out and they get a new IP address, the DNS server will be updated to reflect this new address so that other clients will be able to resolve its address.
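    Stripped of protocol details, the idea is just that a DHCP lease event rewrites the name-to-address table. A toy sketch in Python (nothing here is a real DNS or DHCP API):

```python
# Toy model of dynamic DNS: a DHCP lease change triggers an update of
# the name-to-IP table, so the name stays stable while the IP moves.

class ToyDDNS:
    def __init__(self):
        self.records = {}              # name -> IP address ("A record")

    def dhcp_lease(self, name, ip):
        """A lease grant/renewal dynamically updates the A record."""
        self.records[name] = ip        # any old address is replaced

    def resolve(self, name):
        return self.records.get(name)

dns = ToyDDNS()
dns.dhcp_lease("laptop42", "10.0.5.23")   # initial lease
dns.dhcp_lease("laptop42", "10.0.5.99")   # lease expired, new address
print(dns.resolve("laptop42"))            # -> 10.0.5.99
```

    With static DNS, the second lease would leave the table stale until an admin or a cron script fixed it up.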
  • by Anonymous Coward
    From the ISC homepage:

    DHCP Distribution: Version 3.0

    Current Version: 3.0b1pl0

    Version 3 of the ISC DHCP Distribution adds conditional behaviour, address pools with access control, and client classing. An interim implementation of dynamic DNS updates for the server only is included, but is not supported. The README file contains information about how to enable this - it is not compiled into the DHCP server by default.

    Features in upcoming releases, starting with 3.1, will include the final asynchronous Dynamic DNS Support, DHCPv4 16-bit option codes, asynchronous DNS query resolution, DHCP Authentication, and support for a DHCP Interserver Protocol and live querying and update of the DHCP database. I don't see why they say it doesn't exist on UNIX. There are also perl scripts that do the job.

  • by DNSDave ( 83153 ) on Saturday August 28, 1999 @10:00AM (#1720573)

    I'm working with DDNS both at home and at work using both Unix (Proprietary or Linux) and Win2K. They interoperate fine.

    The only issues I've seen are with IXFR implementations (incremental zone transfers) and some "noise" data for some subzones. The workaround is that you can delegate the "noise" zones back over to a Win2K box until the BIND 8.2.1 code is fixed.

    The REAL PROBLEM as documented in the story about Boe...oh, the "large aerospace firm" is that many large enterprises segment their IT structure along operating system lines rather than functional lines. It is much more efficient to LOSE operating system religion and use the "appropriate tool" for a job.

    The DNS folks where I'm consulting use both Solaris and Win2K systems as nameservers. Solaris hosts the root namespace and the IP management tools. Win2K hosts the Active Directory Integrated delegated zones. The same folks in THE NETWORK GROUP (a functional split not an OS-centric split) manage all of these zones. There is no pissing contest over OS machismo. If more companies were to split their IT into functional areas, rather than OS empires, they might see a better result.

    I'll get off my soapbox now. Just my two cents.

  • No, BIND does not implement Dynamic DNS. Several commercial DNS servers, some based on BIND, offer DDNS, but vanilla BIND does not.
  • And how many languages do you know? The world doesn't revolve around the US and Britain. Hell, if you're going to start getting technical on spelling and grammar, you're just being pedantic. How many people actually speak proper English anymore anyway? Learning the language is a good thing, and more than likely the person is. Just don't dock him for not being a master yet, and learn some compassion. Being a tightwad doesn't make you friends.
  • Let's see. This one is easy to debunk. I've made an expert system, I've made a distributed system, hell, I've even made a music composition program. All worked cross-platform, without JDirect stuff. I disagree that Java sucks ass without Microsoft. Java's pretty cool. Even if I feel it's a bit slow, you can write faster Java if you pay attention to what you are doing. I'm not some grandiose visionary, I just write the code to fit the specs of the people paying the checks.
    Now, please mark this as Off Topic, and let's get back to DDNS.
  • So I'm a corporate IT manager. I've had the misfortune to hire an NT bigot and a Unix weenie as sysadmins for their respective domains. Both refuse to work on each other's systems, and both demand control of DNS.

    They've swallowed the FUD about DDNS in this article, ignored the fact that it's substantially a technical non-issue, and now I have both of them in my office shouting at each other, both demanding control.

    What do I do?

    Yep. Sack 'em both, and get two (or one?) admins who are prepared to work on both systems and do what it takes to get the job done. The company will be a better place without weenies, OS bigots, or prima donnas.
  • This is very open to interpretation. Linux appears to have more security issues because the code is open to review by anyone.

    Besides the fact that I disagree with you, perhaps the reason that NT has fewer "security issues" is that the code is not open for such review.

    If the code were at least open for REVIEW (not development), a lot of unresolved bugs could be found before they pop up sooner or later and take big hits with them. At least if the code is there for review, an admin could take steps to prevent something from being exploited, even if that doesn't actually FIX the problem.

    I'm a full advocate for open source, but when security is an issue, the more you can see the better.

  • but why change it when, as the old expression goes "a picture says a thousand words"?

    or, at least I _think_ I got the old expression right
  • Well, are you using Visual Studio 6.0, IE 5.0 and Office 2000? I believe his point was that order very much matters on installing these things because some installations rewrite system files without checking to see if it's a later version, and without prompting you to replace them. My big problem is that they even replace these files through app installs. If you're just using NT with SP5, you can get it to work just fine. Try adding in a few random other microsoft products, though.
    Installations are permutable. Order matters.
    nPr vs nCr
  • man stat

        time_t st_ctime;    /* Time of last file status change  */
                            /* Times measured in seconds since  */
                            /* 00:00:00 UTC, Jan. 1, 1970       */

    and further down

        st_ctime    Time when file status was last changed. Changed by
                    the following functions: chmod(), chown(),
                    creat(), link(2), mknod(), pipe(), unlink(2),
                    utime(), and write().

    Yes, creating a hard link to a file, chowning it, or chmoding it will change its ctime. Creation time, my eye. Oh well, Slashdot doesn't respect pre tags anymore; deal with the formatting.
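    The quoted man page is easy to verify with a few lines of Python on any POSIX system (this just demonstrates the ctime semantics, nothing more):

```python
import os
import stat
import tempfile
import time

# Show that chmod() bumps st_ctime: ctime is the time of the last
# *status change*, not a creation time.
fd, path = tempfile.mkstemp()
os.close(fd)

before = os.stat(path).st_ctime
time.sleep(1.1)                             # ctime granularity may be 1s
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
after = os.stat(path).st_ctime

assert after > before                       # the chmod changed ctime
os.remove(path)
```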
  • Oh please, that is such utter piffle.

    Nobody can set anything up these days on NT with the click of a mouse, you need MCSEs, service packs, hotfixes, HUGE NT manuals, etc.

    I added 100 IP addresses to an NT box recently and it took more than one mouse click to do it.
  • "The NT kernel comes formt the UNIX just took the kernel developed it to there own fact winnt is posix compatible..."

    Huh? NT's design is influenced by a number of things, including early versions of OS/2, VMS and Mach, but it really isn't any of those things, and it certainly isn't a monolithic kernel.

    The POSIX API support you mention is separate from the OS core, so, for that matter, is the Win32 API.

  • Why not? Could it be that metadata can't ever be associated with an inode? tch, just too bad, isn't it.
  • "in fact winnt is posix compatible I am a unix guy but nt has its place too"

    Yes, I agree. At my work I have a very nice oak desk. For fear of wrecking the wood, I use the NT4 CD as a coaster. It's very effective.

    It's been working great for over 4 months. Who says NT is worthless.
  • Two possibilities:

    1. They are keeping up with their students, and just keeping a record of what MAC addresses are whose, that way if you do anything illegal, they can say "It was this guy".

    2. They are giving you a static IP (good thing), which is the way to go. That way, you get the benefits of DHCP and the benefits of a static IP. So, is your IP the same all the time? Or does it change?

  • First posting sucks,..
    Let me be the FIRST to announce,..

    "LAST POST!"

    Ok,. now no one else make comments please...

  • In a very very very long while.
  • MCP Magazine published a similar piece in the 9/99 issue. For various reasons, they claimed that NT's DNS was superior to Unix's. However, the uneducated author of that piece was really contrasting BIND 4 and BIND 8. I sent MCPMag a nasty-gram for that. Linux has had a BIND 8 compliant DNS out for a couple of years already. Unfortunately, many of the major Unices don't have a BIND 8 DNS out, AIX 4.3 being one exception to that. IBM was fairly aggressive in keeping up with the latest RFCs governing DNS/DDNS and DHCP.

    The key difference between static DNS and DDNS is that DDNS allows zone files to be updated via special nsupdate packets. Update packets are sent to the DNS by either a DHCP/BOOTP server or by each node. No current MS OS supports this latter type of DDNS registration, but there are 3rd-party tools to make it happen. One bummer about Linux is that its DHCP server does not yet support DDNS updates.

    Here's one area where NT's DNS/DDNS really stinks: you are forced to use the MMC GUI tool to admin it. Yucko! IMNSHO, vi & perl are the ONLY tools for DNS. =)
  • accountability is about saying "you know.. if I post this stupid comment someone might form an opinion about me" and then maybe reconsidering.
  • Cool, Astroturf has reached Slashdot...

    I don't know what experience you have on UNIX boxen, but I've used both UNIX workstations and NT workstations, and I can tell you you are full of shit. NT is a productivity destroyer, as the Windows interface just isn't designed to get work done. It may have been designed not to scare Joe Blow, with the dancing paperclips and the flying sheets of paper, but it certainly hasn't been designed to let people do what they want to do.

    Hell, even the bloody Macintosh is better in that respect, because at least it has a good graphical interface. Windows is just an ugly, unholy mess built on top of an unstable kernel.

  • I know it can be made to work, but most business users prefer something that can be enabled in a standard distribution, not some patch that can be pulled from a mailing list. Especially because there are so many of these loose ends.

    Umm...most business users don't admin a server at all. I've had to download patches for everything I admin, from the NT boxes to the Linux boxes to the Cisco routers. The PHB's just want it to work, and usually don't care how you do it.

    Leave the system admin to root, baby....
  • Please, enlighten me as to what I should use instead? Not Windows, surely. I don't hate Windows because it is successful, and I don't have a deep need to be elite. I hate Windows because it hampers my ability to do anything.

    I like Linux because of the plethora of modifications I can make to it, and the amount of customization I can make to the UI. I also really appreciate the online documentation, which is in a sane and easy to use format. Once I figured out how to use it, I fell in love.

    So, what computing paradigm do you love? I probably haven't tried it, but if you would help me get it installed, I'll be happy to give it a try.

  • I am running Office 97, VS 6.0 and IE 5 (it is my development machine/server, because I need to test stuff with MS SQL 6.5, which by the way is installed too, and all of this on an IBM ThinkPad with 128 MB RAM). So far it works perfectly...
  • And also Linux on another partition - works fine too, with the exception that the video drivers were so painfully slow I had to get AcceleratedX...
  • Practice what you preach there, pal. I've got no idea what you are talking about, and neither do you, it seems. NT4/2k stores the DNS data in a binary format; have fun admining that with
  • >>Are unix people all anti-capitalists?

    you know that is obviously false. i think his point was that microsoft is often associated with having fabulous amounts of money, perhaps more than people should be giving them. (oh god, i'm just setting myself up for an anti-trust debate, aren't i?)

    >>Micros~1 - Hello... I've used win95/98/nt and I've *never* had to type in an 8.3 file name.

    if you use any of your older 16-bit apps in windows, you will encounter these silly abbreviations (eg pictur~1.jpg) all the time. however, i usually run into these things in DOS, which turns long filenames into a nightmare, not a blessing.

    >>I think linux geeks are just bitter about MS's dominance.

    the complaints about ~1 filenames have nothing to do with MS's dominance. just because you've never had to struggle against its ugly side does not mean it's not a bad system. people complain because it sucks.

    >> Macrostupid - Surely the people who were fooled into running these macros are to blame.

    the people that passed on that virus are mostly newbies, and don't know how to wield the power of macros anyway! unfortunately, there are millions of newbies out there, and microsoft pushes their products into their hands more feverishly than any other demographic. if you don't believe me, i'm sure the talking paperclip can convince you otherwise.

    >>I get 15+ days uptime all the time on my machine.

    well cool, as long as we're sharing our experiences i might as well give mine! i've got win98 on a top of the line dell machine that i'm using right now. this whole summer it has not had an uptime of more than 5 days. hell, i don't even push this machine! i use it to browse slashdot and chat mostly. what causes it to crash after 5 days for no good reason? beats me. buggy coding i guess. oh sure, it made it to 8 days *once* but who wants to use a computer that has 194 meg of allocated memory with no apps running?
    and this is windows *98*, supposedly fixing those truckloads of bugs. when i had win95 on a diff computer (that i sold, hehe) i couldn't even make it through the *day* without it crashing. usually crashed 3 times a day. are you telling me this is 'acceptable'?
  • "...The DEFAULT authentication for Win2K is kerberos."

    There I was addressing the current roaming user, using a non-Microsoft platform. I have a problem with DDNS in general, not just Microsoft's.

    I don't think dynamic DNS solves the roaming machine problem, because of the TTL and security issues. The problem it does help solve is plug-and-play -- you can fire up 100 w2k boxes on a network using a promiscuous DHCP server, or even the client-autoconfigured range, and they will all get registered in the DNS based on the network settings on the *client* end. Just like the Macintosh Chooser. How useful that is depends on what you plan to do with the information and how scalable it needs to be. In our environment, out-of-band end-user access through a secure web server is better.

    To answer your question, as far as I've seen, the w2k DDNS client does not do kerberos or any other form of strong auth.

    Internet Explorer does not, and for the foreseeable future will not, do kerberos (according to the lead NT5 security engineer when I was in Redmond). The NT5/w2k version does some SID token-passing with proprietary headers, not SASL. You can't really fault MS for this, though, because there are no standards for kerberos over http. CMU keeps trying to get people to kerberise web applications, but Stanford gave up and went s/ident, and even MIT makes x509 client certs for users rather than force kerberos on an unwilling application.

    File & print service does do kerberos, and can be configured to refuse to negotiate non-kerberised connections if you know that all clients and all servers on your network are guaranteed to be running w2k. In the real world, I expect WINS and legacy NT domains to last through 2010. We still have 2 key production NT 3.51 servers, because the commercial off-the-shelf application they run is not reliable under NT 4.0. There are more architectural differences between NT4 and w2k than between NT3 and NT4.
  • by Anonymous Coward
    > *last I looked the RFC wasn't final yet, MS has
    > been updating the W2K code to follow the RFC.
    > drafts.

    Or vice versa.
  • Assuming you configure BIND 8.2 to accept DDNS requests with no authentication whatsoever.

    If you do that, you deserve what you get.
  • Yeah, and did you notice that msvcrt.dll changed between VC 6.0 and VC 6.0 SP2? That was to fix bugs. On my machine, installing VC 6.0 broke at least 2 non-MS applications. Go take a look at the bug reports.

    Man, I had to take care of a dozen NT boxes loaded with development tools. I know more about ways to destroy these systems with a wrong sequence of applying patches, fixes and errata than I ever wanted. Our Linux boxes are an order of magnitude easier to maintain. And I am no UNIX fan; I like good GUIs and IDEs. It's just a fact that UNIX style is much more stable for development use. The original poster insisted that the MS environment is stable. I think he is full of shit.

    AcceleratedX rocks, BTW...
  • I remember reading somewhere that although ipv6 is 128-bit addresses (iirc), only the last 64 bits are used for the actual address... the rest are country, state, etc... (not sure how it's split up in this part), but you automatically keep your ip, and routers can figure out from the first part of the address whether they've got to drop packets to your backyard or to China... I remember that it promised to make routing much simpler, iirc...

    Always the chance I could be wrong, of course.
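    For what it's worth, the 64/64 split is roughly how it shook out: a 64-bit routing prefix and a 64-bit interface identifier (the country/state part didn't happen). The split is easy to show with Python's ipaddress module (the address below is from the documentation range, chosen for illustration):

```python
import ipaddress

# Split an IPv6 address into its 64-bit routing prefix and
# 64-bit interface identifier.
addr = ipaddress.IPv6Address("2001:db8:1234:5678:0210:5aff:fe4b:1c2d")
n = int(addr)

prefix = n >> 64                      # top 64 bits: routing prefix
interface_id = n & ((1 << 64) - 1)    # bottom 64 bits: interface ID

print(hex(prefix))        # -> 0x20010db812345678
print(hex(interface_id))  # -> 0x2105afffe4b1c2d
```

    Routers only ever look at the prefix half, which is what makes the routing-table simplification possible.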
  • Ya I figured this might happen. Sentence and word optimizations can confuse people.

    I was talking about the concept of DDNS vs SDNS.

    The concept behind DDNS is that a device should always have the same name but the IP can change.

    The concept behind SDNS is that a device should always have the same IP but the name can change.

    That is what I meant by bindings.
  • If you need a GUI to do productive work, I feel sorry for you. I rarely use the GUI functions of windoze, and even then I use them only when a CLI solution doesn't readily exist. Point and click is for dry firing your gun, not for using your computer.
  • I just sat in on a several hour AD presentation done by MS. You can tie the DDNS in Win2k to Bind 8.x servers.
  • There's another way of doing that, of course, but it's evil as hell.
    If you don't have any REAL subnets / LANs, then you can tell the bridges that all conference rooms are really in all the vLANs of the organisation, and stuff will just magically work. I suspect the behind-the-scenes cost in data traffic is horrific, but I don't care :)
    This also lets you grab a server, complete with UPS, and run over to another building with it, and hardly anyone notices :)
    Or so I'm told (Do you read this stuff, Tim?)
  • I think dynamic DNS is a solution in search of a problem.

    You say, "It's nice to be able to connect w/ a laptop anywhere on a 100+ subnet network and get the same domain name to resolve every time."

    How many people besides you regularly connect to a server running on your laptop?

    Are you sure you control the TTL on your DNS server, every DNS server used by every client that talks to you, and every server you talk to?

    What do you do when a remote site's TCP wrappers refuse access because they cached your old PTR record?

    What assurances do you have that someone can't spoof your dynamic name and steal credentials? If you think you're authenticated by MAC address, try ifconfig eth0 hw ether de:ad:be:ef:01:23 (doesn't work with all enet cards, but does with the common ones). If you use kerberos, x509, or ssh host keys, and you actually bother to verify them, then you have less of a problem, but many common services, like unencrypted web pages, have no end-to-end server verification protocol. Interestingly enough, Microsoft's NT domain protocols do not strongly authenticate the server to the client. If an attacker puts himself at the server's IP address and generates a nonrandom nonce, you lose.

    Microsoft considered strongly authenticating DDNS to be too hard (and nonexportable), so they basically trust whatever you put in the Network Control Panel (or a packet manufactured with smblib) as long as the name has not already been taken. Taken names can probably be freed up with the same sort of games people play to take over IRC channels. Bzzt! Game over.

    Microsoft says it plans to get rid of WINS, but the initial implementation brings all the instability and insecurity of WINS to DNS. No thanks. The non-Microsoft solutions tend not to be much better at this time.

    Out-of-band authentication like MyIP or the old web page works, but that ain't DDNS, that's end-user access to static DNS... which can be a good thing. We provide something similar for our students.

    In case Deja URLs aren't permanent, search for "WINS" on Deja during January 1996.
  • whatever man...
    politics is politics...
    posix is posix...
    and personally i am not surprised that you feel that way, and i am not surprised MS has taken another... it's just unfortunate... oh well...
  • > If anyone deserves bashing here, it is PCWeek and/or this particular reporter, not MS.

    Get real. You don't actually think that MS didn't approve of this article before PCWeek released it, do you?

    You MS PR flacks are really quite stupid, you know.
  • How come all of these MS-praising posters are all Anonymous Cowards??

    As for the comment that "[Anyone making the] assumption that a Sys Admin that runs MS products is ignorant, is nothing more than tunnel-vision and narrow mindedness", the plain hard fact is that anyone voluntarily using an MS product for mission-critical (otherwise phrased as 'important') server applications is a little daft, as the tendency for MS products (in the vast majority) to be: 1. Crashware 2. Bloated 3. Slow/Inefficient 4. Insecure is notorious.

    While not every MS system admin does so of their own free will, I would have to agree that anyone who claims that NT is a better solution to everything else is being a little ignorant of the facts. (For actual references, just refer to the many past Slashdot articles and posts on similar subjects. This topic is getting old...

    I guess MS marketing really does get to some people...
  • This feature of ISC DHCPD and the DDNS features
    of w2k are totally unrelated.

    The ISC DHCPD 3.1 feature referenced, and the
    patches to 2.0 which have been around for over a
    year, does this:

    When a Windows 95/98/NT client, or a UNIX or any
    other client configured to send option 12, is
    assigned an IP address, the ISC DHCP server
    connects to the DNS server on the client's
    behalf to update its entry.

    This allows you to secure your DNS server to
    accept (possibly DNSSec'd) updates from your
    DHCP server only.

    The Microsoft DDNS solution does this:

    After a Windows 2000 client has been assigned an
    address by the DHCP server, it contacts the DNS
    server directly to update its entry.

    The Microsoft solution requires your DNS server to
    accept updates directly from your clients. The
    Microsoft solution does not attempt to support
    Win95/98/NT clients at all.
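
    A minimal dhcpd.conf sketch of that server-driven arrangement, assuming ISC dhcpd 3.x syntax (the zone name, key name, and secret below are placeholders, not values from the article):

```
# ISC dhcpd 3.x: the DHCP server, not the client, pushes DNS updates,
# authenticated with a TSIG key shared only between dhcpd and named.
ddns-update-style interim;

key DHCP_UPDATER {
    algorithm hmac-md5;
    secret "placeholder-base64-secret==";
};

zone example.com. {
    primary 127.0.0.1;
    key DHCP_UPDATER;
}
```

    With this shape, named only needs to accept signed updates from the DHCP server itself, never from arbitrary clients.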
  • "Khttpd is not in the kernel."

    It's in 2.3.15

    Paul Laufer
  • This is my current project, so here is my take on what micros~1 is doing.

    First, some background as to what Dynamic DNS truly is, because it's obvious most of the slashdotters are posting without a clue. Here's a clue, and it's free, as in free software :-) At the end is an opinion, which is not a clue, but can be ignored or countered as you see fit.

    What is Dynamic DNS?

    DynDNS is the result of putting together several RFC-documented techniques in a quite nifty way. Start with DNS [rfc 1034 & 1035], add DHCP [rfc 1531, 1532, 1533, 1534] and tie the two together with Incremental Zone Transfers and Notify [rfc 1995 & 1996], and call it DynDNS [rfc 2136 & 2137].

    Read rfcs 1995 & 1996 for a discussion of why full zone transfers [AXFR] are a bad thing (for bandwidth consumption), and see the elegant solution proposed with the incremental zone transfer [IXFR] extension. This is the basis for updating a primary name server with a new RR containing the hostname & IP pair (and IP->hostname reverse pair). You can also use this mechanism to remove a RR when the host is no longer associated with that address. There is also a discussion of security so that only pre-programmed IP addresses can do IXFRs, with extensions allowed for fully authenticated updates when someone gets around to writing the code someday.

    Read rfc 2132 to understand how a DHCP client does a DHCPREQUEST to a dhcp server, and how it can pass its hostname inside of option 12 (host name); option 61 is the client identifier. This is what win9x currently does with its client code, but only a patched version of some dhcp clients for linux do this.

    Now, to put it all together.

    A machine [win or linux] with a dhcp client boots up, broadcasts a bootp request (the transport mechanism for dhcp) with a DHCPDISCOVER message. A dhcp server on the network responds with its local address in a broadcast (because the client has no IP address at this point, all traffic must be broadcast), and then the client broadcasts a DHCPREQUEST to that specific server. Contained in the REQUEST packet is option 12, containing the hostname of the machine. In win9x, this is what is entered in the network control panel "computer name" field; in *nix it is the contents of /etc/hostname.
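
    The option encoding itself is a simple code/length/value layout; here is an illustrative Python sketch (hostname and MAC are invented). Note that per rfc 2132 the hostname travels in option 12, while option 61 carries the client identifier:

```python
import struct

def dhcp_option(code, payload):
    # DHCP options are TLV-encoded: 1-byte option code, 1-byte length, payload
    return struct.pack("BB", code, len(payload)) + payload

# Option 12 (host name): the name the client wants registered
opt_hostname = dhcp_option(12, b"laptop1")

# Option 61 (client identifier): type 1 = ethernet, followed by the MAC
opt_client_id = dhcp_option(61, bytes([1, 0x00, 0xA0, 0xC9, 0x12, 0x34, 0x56]))

print(opt_hostname.hex(), opt_client_id.hex())
```

    These byte strings are what you would see with a packet sniffer inside the options area of the DHCPREQUEST.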

    Then there is a whole bunch of communication between the dhcp server and client so they both agree on things (go read the rfcs, or sniff some packets off the wire, or both), with the end result that the dhcp server has now given the client a lease on an IP address for a certain amount of time.

    Now comes the DynDNS bit.

    The dhcp server now communicates to the primary name server with an IXFR message, sending a RR containing an A record (and a PTR to the reverse DNS server) with any and all information that might be contained in a RR, and the TTL is set to one half of the lease time given to the client. If the name and IP address are not currently in the DNS database, they are added. If they already exist, the IXFR message is refused, and the DHCP server must change the name to something unique. This is one mechanism to prevent overwriting your important servers' addresses with bogus info.
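
    The collision rule described above (an update for a name that is already taken is refused, and the server must pick a unique variant) can be sketched in Python; the dict stands in for the real DNS database, and all names and addresses are invented:

```python
def register(registry, hostname, ip, lease_seconds):
    # TTL is set to half the DHCP lease, per the scheme described above.
    ttl = lease_seconds // 2
    name, suffix = hostname, 1
    # A name already bound to a *different* address means the update is
    # refused; the server retries with a unique variant of the name.
    while name in registry and registry[name][0] != ip:
        suffix += 1
        name = f"{hostname}-{suffix}"
    registry[name] = (ip, ttl)
    return name

dns_db = {"server1": ("10.0.0.1", 86400)}
print(register(dns_db, "server1", "10.0.0.99", 3600))
```

    A client trying to claim "server1" gets renamed rather than being allowed to clobber the existing record, which is the protection against hijacking important names.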

    What micros~1 is doing.

    From what I can tell from some presentations I have seen, and playing with win2k beta, they have tied their DynDNS into ActiveDirectory as an attempt to shut out the *nix/OSS implementations until they get a foothold in the corporate door. I can't tell exactly what they are doing until I get a lab testbed set up and see if they interact correctly with BIND 8.2.1 or other rfc2136 compliant systems (someone mentioned cisco's registrar product; it's real nice, and real expensive, and not based on any bind code). There is something going on with rfc 2052, which defines SRV records for locating services, but I only read enough of it to give me a headache.

    Static vs. Dynamic

    M$'s strategy is to put all IP addresses into AD, making the entire network a big, dynamic mess. As a network guy, I want all the important services to have static IP addresses. This means servers, DNS machines, router ports, mail servers, and anything else that should be stable.

    M$ considers servers to be unstable (based on BSoDs and regular reboots), so they want the IP addresses to be dynamic. That's a bad way of thinking.

    The article in ZD is actually correct on a lot of things. There are already battles going on between the ultra-reliable-thinking *nix admins and the reboots-are-good ninnies who have realised they can't make M$'s win2k work in a unix based world.

    The only solution is for the OSS community to make a standard implementation of a dhcp client, one that by default passes /etc/hostname in option 12 of the DHCPREQUEST, and get that code into every major package out there. Then the FUDders will not be able to do any more than superficial damage.

    the AC
  • thanks for making me laugh. I suppose having microsoft deny the existence of a REMOTELY exploitable buffer overflow in IIS for two weeks until someone PUBLICLY released an exploit is a better security model? I wonder how many hosts got compromised by crackers privately trading exploits on IRC in those two weeks while microsoft denied the existence of the problem?
  • Good user interfaces can improve productivity. The Windows user interface is just about the most useless I've seen in my life. The SGI UI is the best I've used so far, followed by the Mac. KDE suffers from trying to look like Windows.
  • Rob - You may want to change the M$ icon. I think the troll kiddies get aroused whenever they see it.
  • by Jobe_br ( 27348 ) <bdruth@gmail.c3.14om minus pi> on Saturday August 28, 1999 @04:51AM (#1720668)

    DDNS is indeed implemented in the Unices - w/o a problem. The current version of Bind (8) supports DDNS and the development version of DHCP supports the DDNS updates.

    The difference in the two (Dynamic/Static) is that, as everyone knows, static DNS requires you to know the IP address of the domain name you're recording. In DDNS, the client requests an IP address from a DHCP server, then, as long as the DHCP server is configured to 'know' the client, it recognizes which client is requesting the IP (based on MAC addressing) and informs the DNS server that it is giving a certain IP address to a client for a particular domain name, and the DNS server accepts the information and adjusts its lookup tables accordingly.

    I've implemented this in Linux w/o a problem whatsoever - and I know of a school that has implemented it in a Solaris environment.

    It's been out there for a LONG time, btw - by that I mean at least 3 yrs. It wasn't pretty, at times, 3 yrs ago - but it was there. Now, it is a very well integrated solution.

    It's nice to be able to connect w/ a laptop anywhere on a 100+ subnet network and get the same domain name to resolve every time :).

    Btw - first? :-)


  • probably more, since installing NT doesn't automatically make you a sysadmin like installing Linux does (anyone know how to get fetchmail working over ppp with Debian slink?)
  • Just meant to say that MS has added a useful feature and Linux should do the same.
    The situation I described with their MCSE's has to be addressed if Linux is to keep their gains in the server market...
    Jim in Tokyo
  • It's Embrace, Extend and Extinguish all over again, but with a substantially different tactic.

    Last I checked, DDNS was already a set standard, albeit a very new one that most Unices don't use yet. So there's nothing inherently evil about including that in Win2000. But, M$ is breaking interoperability with Unix servers to do so, due to the poor design decision of making a lot of their stuff (although with "Active" in its name, you can tell it's going to be insecure/unstable/buggy/all-of-the-above) depend on a standard which isn't mainstream yet, even if it is probably an open one.

    Very clever, I must admit. A way to twist Open-Source to their advantage. Nonetheless, I'd say this ought to go into the 2.3 development tree now, so that it'll hopefully be ready before Win2k or at least not long after.
  • by Anonymous Coward
    Somehow, I don't think that Ziff-Davis is going to take Slashdot to court over something as simple as the title of an article. 'Sides, they'd fire a warning shot first. (Nastygram from the lawyer.) Even more so, why flip off Slashdot when it brings them so much revenue? Ziff loves Malda, and so does Davis. (But I don't think they're having a three-way.)
  • yeh, switch to linux! nope...that won't work either...linux doesn't like DHCP on my cable modem...never gets a default route...so much for mindless advocacy....
  • if that's your philosophy, why don't you just change your minimum threshold to be 1. That way you won't see stoopid posts by AC's. I prefer to read the AC postings because every once in a while an AC has something useful to say.
  • someone mentions 'Ignorant MS-using sysadmins' and everyone assumes they were the subject...
    i would read it as referring to the subset of MS-using sysadmins who are ignorant, not as labeling the entire group as ignorant...
  • Insiders get eaten up and spit out just as regularly as outside companies.

    Oh, please Bill! Let me work there! ;)

    The Halloween documents were nothing more and nothing less than the standard white papers developed internally at most businesses.

    Honestly, can you really think of another company that has enough power to even think of doing what the Halloween documents suggested? Remember, might != right. Being able to force your customers to buy something does not make a good long-term business plan. Eventually they come after you with pitchforks.
  • BIND 8 has this, and has for a while. Copying standards and then making them incompatible with the rest of the world is the only way microsoft pushes the envelope.
  • I'd say this ought to go into the 2.3 development tree now

    Except that DNS isn't done with the kernel, just as HTTP and SMTP aren't either.

  • wow, that really beats my linux box that's been up for ~100 days since that power outage...
  • M$ DHCP in NT5 betas does not work now. If you load W2kP (NT Workstation 5) Release Candidate 1 and then try to connect using a cable modem (using DHCP), you will never be assigned an IP address or be able to connect. Had to reformat hd and go back to beta 3 installation to get around this. Same result with W2kS (NT Server 5). Perhaps Release Candidate 2 of the betas fixes this.
  • by whoop ( 194 ) on Saturday August 28, 1999 @10:42AM (#1720688) Homepage
    A Windows interface does not a LAN Admin make.

    Networking, DNS/DHCP administration, network security, etc are things that should NOT be left to Windows dialog boxes and wizards. The person in charge of these should study, and learn about them before trying to use them. After that is done, compiling and configuring Bind and dhcpd to do these DYNDNS updates is trivial. My original point was that the technology exists for any mildly competent person in charge of DHCP/DNS on a Unix box, despite the PCWeek author's claim that it just does not exist.

    For adequate security models, I'll trust Bugtraq and the dozens of other mailing lists/newsgroups far over MS's little bug page which takes 3-4 weeks to acknowledge security problems, and another 3-4 to come out with a workaround like "don't use this option." If a business wants to protect their networks, they MUST hire a competent person to do the job (I'm available if anyone's looking :)), and not rely on the OS manufacturer to secure their systems.

    Running network services like these on Windows just doesn't promote the Unix concept of RTFM. Explaining to my brother the concept of mapping hostnames to an IP and likewise that IP to the hostname, or what an MX record is, was made terribly difficult because of what Microsoft has done.
  • I'm not sure what the whole issue is here - ISC's BIND supports dynamic updates now. And their DHCP client supports sending the hostname as part of the packet.

    In fact, if you look at this link, you'll see that I currently use a perl program to take entries out of my DHCPD lease file, and update my DNS with the new hostnames, DYNAMICALLY!
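
    In the same spirit, here is a rough Python sketch of that lease-to-DNS idea (the lease text, zone, and server address are invented for illustration, and a real dhcpd.leases file carries many more fields than this regex handles):

```python
import re

# Invented sample in dhcpd.leases style.
SAMPLE_LEASES = """\
lease 10.0.0.5 {
  client-hostname "laptop1";
}
lease 10.0.0.9 {
  client-hostname "printer2";
}
"""

def leases_to_nsupdate(text, zone="example.com", ttl=300):
    # Pull (ip, hostname) pairs out of the lease text, then emit a batch
    # of commands in the format the BIND nsupdate tool reads on stdin.
    pairs = re.findall(r'lease ([\d.]+) \{.*?client-hostname "([^"]+)"',
                       text, re.S)
    lines = ["server 127.0.0.1"]
    for ip, name in pairs:
        lines.append(f"update delete {name}.{zone}. A")
        lines.append(f"update add {name}.{zone}. {ttl} A {ip}")
    lines.append("send")
    return "\n".join(lines)

print(leases_to_nsupdate(SAMPLE_LEASES))
```

    Piping that output into nsupdate against a BIND server that allows dynamic updates gets you the same effect as the perl program described above.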

    - Kazin
  • For those of you who don't know, DDNS is basically DHCP with hostnames built in.

    Problem is.. it violates the real standards for DNS.

    To do DDNS requires that all upstream servers update excessively; AXFR's are performed on average every *FIVE MINUTES* in DDNS from what I've seen.

    Problem #2: Microsoft doesn't even know what an AXFR is. NT DNS follows standards for lookups, but if you need a secondary DNS server and your primary is NT, well, break out the checkbook. M$ DNS follows ZERO standards in zone transfers, not to mention file format! You *CAN'T* secondary with unix without more headaches than it's worth.

    DDNS is nothing more than another Microsoft attempt to gain more control over the internet through 'evolving' standards by blatantly ignoring them.

    I pity the fools who believe the hype.

    -RISCy Business | Rabid System Administrator and BOFH
  • It's all a buncha BS. it is 100% possible imho to have a unix box and an NT4SVR box competing for the same "highest" uptime. our WWW (AIX) and DNS (NT) boxes never go down. the real problem here is the stupid NOVELL-using sysadmins pulling the power cable on the boxes to restart them because they don't know how to restart a service. just use WHAT WORKS FOR YOU.
  • hmm, someone forgot to get a babysitter for the script kids tonight..
  • I am only guessing, but I imagine that W2K integrates DNS and DHCP in such a manner that a UNIX DNS server cannot be used (embrace and extend, anyone?). A FQDN would be assigned to a host regardless of its IP address; the DNS server would update the host's IP address dynamically whenever it changes.

    There is nothing that says that you need dynamic DNS in order to associate a FQDN to a specific workstation in a DHCP environment. With DHCP, you can reserve an IP address for a specific workstation simply by giving it the workstation's ethernet address. I set up a bunch of X terminals like this at my previous job. Works great. Less filling.
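
    That reservation looks something like this in ISC dhcpd configuration syntax (the hostname, hardware address, and IP below are invented for illustration):

```
# Hypothetical host reservation: this client always gets 10.0.0.20,
# so its FQDN can live in ordinary static DNS with no DDNS involved.
host xterm1 {
    hardware ethernet 00:a0:c9:12:34:56;
    fixed-address 10.0.0.20;
}
```

    Since the address never changes, a plain static A record covers name resolution for the workstation.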

    As a rule of thumb, servers (i.e. hosts that need to be accessed via a specific FQDN) ought to have a static IP address anyway, and it is unwise to create dependencies like this (for example, NIS server needs DHCP server in order to boot).

    In my opinion, Dynamic DNS is nifty, but if Microsoft does not keep the standard open, then it is useless.
  • > The Mailing List and Newsgroup just aren't an adequate security model.
    If you don't trust a patch floating around a mailing list/newsgroup, fine. They will eventually get looked at by the (trusted) maintainers, who will personally review the patch and likely include it in the standard distribution. It's not as if joe schmoe can magically write some code, post it on a newsgroup, and *bam*, it's in the distro. It doesn't work like that. Code has to go through an EXTENSIVE public review process before it gets merged into the main tree. That's a more than adequate security model, and better than most proprietary software vendors.
    If getting patches from an untrusted source in a newsgroup bothers you, then you can wait for them to get reviewed and either be rejected (and the functionality added in some other way), or make their way into the standard distribution. I don't see what's so hard about that.
    You obviously haven't actually had any direct experience with the way these projects work.
    Berlin-- []
  • I've been lurking for a bit and watching the discussion. Perhaps some of you could tell me what the MS DDNS means for the following implementation:

    We're currently installing an Oracle workflow system that relies on LDAP to grab user information from our e-mail server to populate the workflow system directory. The Oracle system is hosted on a Unix box, but most of the user information comes from our e-mail servers, which are all MS Exchange. We also use NetWare.

    If the directory services in Win2k are all one-way into the MS directory and we migrate to Win2k, will it prevent our Oracle WF system from pulling user data from the DDNS to populate its own LDAP directory?

    Thanks in advance. And if I've phrased the question incorrectly (or cluelessly), please give me a clue.

    (Pulling on reflective armor and awaiting response to my first-ever Slashdot post!)

  • by Anonymous Coward
    "starting with 3.1, will include" is just too late! this feature is required by so many installations that it should have been included long ago. Microsoft can be accused of many things, but this is just something that the Unix community had to do years ago and they just let it slip.
  • Have you considered that your cable modem may be the cause of your problems?
  • WINS is limited to M$ machines. Non-M$ machines will not appear. So reverse WINS != DDNS.
  • What about firewalls/NATs? If the internal server used DDNS and the NAT assignment was tied into DDNS, then it wouldn't matter what the world thought, and you could run your internal updates as fast as you like, with external updates set to a more bandwidth-friendly, longer setting.
  • As far as DDNS' usability goes, keep in mind that DDNS is an option in Win2k - not a requirement. On my win2k advanced server running Release Candidate 1 of the beta code, you can still choose to run the regular DNS service. The advantages of DDNS will likely make administrators want to move to it.
  • by Skidmarq ( 5462 ) <scott AT gicsm DOT org> on Saturday August 28, 1999 @05:29AM (#1720722) Homepage

    If anyone is interested in actually reading them, the RFCs MS is SUPPOSED to be following with this are 2136 and 2052.

    Also, no one I know who is testing this out (in the IT consulting firm who will be doing a great deal of this when it spills out upon the world) is fooling themselves about what a GIANT political battle this could turn into. To avoid this, you will probably see Active Directory Domains handling their own DDNS, and forwarding to existing UNIX infrastructure for all other name resolution, if those doing the implementation aren't up to the fight. How other systems in the network will resolve systems in the DDNS zones is supposed to be worked out (with the use of some crazy zone magic), but I've not seen it work yet.
  • "Vanilla BIND" (i.e., the version release by the Internet Software Consortium) has supported Dynamic Update (as specified in RFC 2136) since version 8.1.

    However, the dialect of transactional signatures (TSIG) supported by Windows 2000 is *not* the same as that supported by vanilla BIND, and that will cause problems. Basically, you'll have to allow "unsigned" dynamic updates if you use BIND instead of the Microsoft DNS Server.
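
    Concretely, that fallback is a zone that trusts a subnet instead of a signature; a hedged named.conf sketch (the zone name and subnet are placeholders):

```
// BIND: accept unsigned dynamic updates from the client subnet.
// Without interoperable TSIG, this is the weaker fallback when the
// clients are Windows 2000 and the server is vanilla BIND.
zone "example.com" {
    type master;
    file "db.example.com";
    allow-update { 10.0.0.0/24; };
};
```

    Address-based trust like this is spoofable, which is exactly why the TSIG incompatibility matters.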
  • I have a laptop (usually running Linux, occasionally running Win98) that gets its IP address via DHCP (from a Linux server at home, an NT server at work).

    At home, because it's almost always the only DHCP client, my laptop always gets the same address (the beginning of my assigned DHCP range), so I can pretend it has a fixed IP address for local DNS purposes. At work, it gets a different IP address almost every day. WINS can resolve its name anyway; DNS can't because we don't have DDNS yet. MS supporting DDNS is good; my Solaris and Linux machines (which have clients for DNS but not WINS) would be able to look up my laptop by name, just like my Windows box (which has clients for both DNS and WINS).

    Yes, MS might screw up DDNS, through malice or incompetence, and provide something only 99% compatible with the RFC. Recall the pump DHCP client included with Red Hat 6.0, which worked great with most Unix DHCP servers but not with NT's. But note that it was quickly patched to work with NT. Open-sourced clients can quickly deal with a bit of incompatibility, whether malicious or accidental.

    The fact that MS supports a new open standard like DDNS before your favorite OS does is a reason to start working on an open DDNS client, not an excuse to bash MS. DDNS is good. NT becoming more standards-compliant is good. If at some point in the future MS starts changing their DDNS server around to deliberately cause problems with other people's clients, *then* bash MS, and suggest to your local sysadmin that he run DHCP and DNS from a cheap Linux/*BSD/whatever box instead of an NT server to maintain maximum compatibility with existing clients. But bashing MS in advance just for announcing the intent to support a good, new, open standard is counterproductive. Would you really prefer WINS?
  • With DDNS, the hostname is bound to a device and the IP changes. With static DNS, the IP is bound to a device and the name changes.
  • How is DDNS related to DHCP? I would think that the DHCP server implements DDNS... and I thought that clients and servers for DHCP were already available (for Linux). But I am not wise in the ways of Bind. Am I missing something?
  • "in a coperate envoiornment, this isn't an issue"

    I agree with you completely, because this strengthens my theory about MS's server strategy.

    DDNS may not be a compelling solution for a global, public network, but it sounds as though it's a very nice option for a local net, and that's where Microsoft is concentrating their efforts.

    It is important to remember that the Winxx platform is not the logical center of Microsoft's empire. MS Office is. MS Office is the "killer app" which makes most businesses buy Wintel boxes on the desktop, and Windows on the desktop is why those same businesses buy NT servers. The presence of MS Office for the Mac was a significant factor in Apple's resurgence in sales.

    Microsoft is leveraging this advantage very effectively, integrating Office with IIS, and with DDNS they are now making it even easier for any salesperson to plug their Windows laptop into any open ethernet port in the office and start working immediately.

    That, all by itself, is a good thing. What is not a good thing is for MS to specifically design their ActiveDirectory so that it requires DDNS. Novell's NDS doesn't require DDNS, and from what I've seen ActiveDirectory does less than Novell's solution. I'm sure that the programmers behind W2K are very good at their jobs, so I must assume that the decision to make W2K DDNS dependent was a conscious choice. If MS publishes a white paper stating the reasons for this, I will read it, (and the soon-to-follow slashdot commentary) and make my mind up then.

    PC Week deserves criticism for not doing their homework on this (no surprise there). To state that Unix does not offer this service, when it does, is terrible journalism.

    But then, any "news" article about Windows 2000 which is followed by a link titled
    "Check prices: Windows 2000" isn't actually journalism at all, it's an infomercial.

"If you lived today as if it were your last, you'd buy up a box of rockets and fire them all off, wouldn't you?" -- Garrison Keillor