The Internet

Sun no Longer the "dot" in .com 173

An anonymous reader writes: "Sun's claim to fame, namely being the "dot" in .com in all their TV spots, has been snatched by IBM. Their E10000, which was serving as the A root server, has been replaced by an IBM RS/6000 S80." OK, it's not the most significant news, but it was just funny to see that title. ;)
  • by Toddarooski ( 12363 ) on Thursday April 20, 2000 @09:50AM (#1120427)
    ...involving the phrase "Getting the dot, but missing the point."

    I just can't think of it.

    Damn.

  • The plan was for the .com, .net, and .org gTLD servers, not the root servers. Same thought pattern holds, though.

    (from the NANOG mailing list:)

    Date: 14 Apr 2000 20:04:52 -0700
    From: Sean Donelan
    To: tomn@netsol.com
    Cc: nanog@merit.edu
    Subject: RE: NetSol screwing the pooch?

    [snip]
    I'm a bit concerned when I read about a plan to install identical
    servers, with identical configurations, with identical software,
    connected to identical routers also with identical software and
    configurations, operated by a single human point of contact.

    [snip]

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • S-80's will include hot-swappable memory and CPU's...so that won't be an issue for much longer.

    Sun has been swapping processor boards on running systems for quite some time now. Designing a computer that can do this is _not_ an easy thing to do. The E10K is a second generation machine (the Cray CS6400 was the first generation). Dynamic Reconfiguration ("DR") requires all device drivers to be tested and stable during this operation. Memory has to be "drained" from the banks on the board being removed and processes have to be migrated off of the processors on the board being removed. The hardware on the system board has to support DR, the backplane has to support DR, the control board has to support DR, the operating system has to support DR, and the Service Processor has to control DR.

    Again, _not_ easy. I do expect that IBM, given its extensive experience making mainframes, could definitely provide this capability to a UNIX system if they put their minds to it. Heck, they put LPARs on the AS/400.

    BTW, the E10K has three times the system memory bandwidth of the S80. That is why IBM will never publish a STREAM benchmark for the S80.

  • OK. So there were a few issues with the report.

    1). NSI Registrar actually manages this, the "independent" part of NSI.

    2). Nobody cares that NSI chose IBM over Sun as it's only one machine. You guys fuss like this machine is actually important when it's about as significant as my home PC. If it goes down things might become a little slower for uncached queries, but the vast majority of users won't notice any change - it's called DynamicNS for a reason, folks; stop fussing over one particular box.

    3). NSI are being twerps choosing to standardise on certain stuff when in reality I'd trust (no VeriSign pun intended :) the root servers that are based in universities and other educational environments more than I'd trust any closed source limited setup. Face it, those "educational" root servers are probably running BSD or Linux, are probably using the latest versions of BIND without sh*te loads of other processes, and are probably kept up to date. I wouldn't trust NSI further than I could throw them.

    4). NSI do ___NOT___ maintain the domains for other countries. They may own the box that is A.root-servers.net, but that only takes you from the "." to "com." or "uk." - the actual country DNS servers cope with registrations - so NSI are trying to claim responsibility for something they know crap all about and that they don't own or run.

    5). The articles are so badly written that they might as well have not been written at all.

    In short:

    "NSI today purchased a new box to replace A.root-servers.net, which used to be a SUN E10000 box. The "A" root server is responsible for resolving the top level '.' domain into subdomains such as .com .uk etc. and is used as a last resort when local dns caches do not have the information to hand, or it is out of date. There are many other root servers, an article was written about this, but this whole thing is fscking boring and about as interesting as me buying a new PC."

    Jonathan.
    --
    oh-go-on-spam-me-spam@easypenguin.com
  • Speculation:
    Automatic lookups of domains in order to find out if they're free? Fed by word databases which were themselves built by semi-automatic "buzzword generators"?

    Really, when the iMac came out I would guess that 1 million domain traders tried to catch everything from www.i-apple.com to www.i-zoo.com.

  • Well, the ArpaNET started on PDP-10s, no doubt about it.

    But hadn't the world pretty much gone to Unix by the time the Internet began?

    D

    ----
  • That and the shareholder meeting is this tuesday....don't forget to vote your proxy.

    Vermifax
  • Really, I thought the same thing. We just priced an E5500 with only 4 processors and 2 gig and a shitload of disk, and the total was $180k.

    If you have a Starfire for $80k, let me in on where to pick one up!
  • Actually, the article says 24, not 4
  • *One* Server holds the master file? One Server to rule them all One Server to find them One Server to bring them all And in the DNS BIND them
  • by Van Halen ( 31671 ) on Thursday April 20, 2000 @11:26AM (#1120437) Journal
    Screwed up the previous post after the preview (removed all my html tags... how am I supposed to check my html if I have to re-enter everything afterwards?) arggh... (I'll probably get -1, Redundant for this but oh well...)

    *One* Server holds the master file?

    One Server to rule them
    One Server to find them
    One Server to bring them
    And in the DNS BIND them

  • I meant to write that it was product of the year primarily due to the different benchmarks it shattered (yeah, yeah - contrived benchmarks)... DB2, SAP, Oracle, all sorts of goodness.

    I will now cease to rant, unless otherwise provoked 8^)
  • Well, for a brief summary, look here [ibm.com]. Briefly summarized: roughly the same horsepower with half the cost and 1/3 the processors.
  • Plus better reliability, better service and a more sophisticated operating system.

    That's all really.

  • by Matt2000 ( 29624 ) on Thursday April 20, 2000 @09:53AM (#1120441) Homepage

    Looks like after that they've decided to change to the "doh" in .com.

    [drum hit]

    Hotnutz.com [hotnutz.com] - Funny
  • by matticus ( 93537 ) on Thursday April 20, 2000 @09:54AM (#1120442) Homepage
    you've got to wonder who is using the old server now. maybe they'd give it to me...hmmm...it would be incredible to have the ex-A.root in my dorm room (i know they cost like 500K...used).
    Random Person-"you mind if i get a coke?"
    Me-"That's not any ordinary fridge. that's a.root!"
    Random Person-"huh?"

    that would be fun. but seriously, what do they do with the ex-servers? i mean, no matter if it is an E450 or the E10000 the article claimed, that's still some serious power. it's funny when technology you could never afford in a million years gets deemed obsolete. maybe i'll get a big alpha-200 server or something for cheap and pretend it's a.root. or something. isn't it great to be geek?

  • He's describing the behavior of the registry, not of the root-servers themselves... what gives?

    The root servers run BIND, and serve out names. Period.

    The registry facilities (internic, formerly) are on a totally different system.
  • Not four servers.. a four processor e450.
    And a quad processor e450 running solaris will eat you for breakfast.
    Some compaq servers? If it's a quad alpha.. yeah...
    but you can't beat solaris.
  • I dig sun... I love sun..
    but you know.. some sun salesmen REALLY piss me off. VERY pushy. The worst thing you can do with me is get pushy.
  • I was wondering when you ancients were going to show up and start setting things straight.

    Oh.. thanks ;)
  • I'm not sure why you used the analogy you did. In the event of a natural disaster, a piece of Big Iron is just as fallible as a PC.

    Depends on the disaster. The S80 could probably fare pretty well in an earthquake. I remember an ad DEC used to run about their "High Availability" VAX/VMS systems: a picture of a machine room after an earthquake, machines that had ripped the bolts out of the racks and were lying on their sides, and the HA VAX had its disk lights going, even with part of the machine in a pool of water (I assume from something else's cooling). The S80 could have a lot of its CPU and memory boards unseated (or destroyed) and should keep on chugging (it might have to auto-reboot). No PCs I know of would. Unless you count the old Sequents as PCs just because they use 80386s and 80486s.

    This makes me curious -- what would happen if the root A server got totalled? What gets failed over onto? If the primary fails the secondaries can still give answers (I think secondaries can even give authoritative answers in most cases). The failure would have to last days before a Bad Thing (other than excess load) happened. Check your /etc/namedb/root.cache for details.
  • I am not a big fan of the old K systems, but the N class systems are pretty nice. We have a 6x440 CPU/16 GB system that screams, and they now go to 8x550/32 GB -- not bad for a "mid-range" server. The N machines have full hardware support for hot-swap components as well. Unfortunately, there is (as of yet) no software support for it at all (supposedly coming in 11.11, but as mentioned it will take at least a year to work right).

    Everything I have seen and heard says the hardware is more reliable than anything Sun makes. Unfortunately, there are 6 times as many patches you have to apply to make the software run at all (all of them triggering an auto-reboot after install. Damn, Toto, I think we are in Windows again...)

  • Go Big Blue! (shameless plug).

    Hey... they pay my salary, what can I say 8^)
  • by devphil ( 51341 ) on Thursday April 20, 2000 @09:34AM (#1120450) Homepage

    Insert "Big Blue Dot" jokes here.

    (Odd, too -- Sun's E10K, or "Starfire" box, kicks ass. Copious amounts of ass. I'm surprised they switched.)
  • Which - the lame Sun dot ads, or any of the lame IBM ads...

    If they all stopped, or actually made sense (smelling a Thinkpad?!), I'd feel a lot better.
  • Wow, somehow I broke the 'preview' options.

    No, CmdrTaco broke it, you just stepped on the pieces...

    Yup, I got suckered by the bug here too [slashdot.org] (corrected non-previewed post here [slashdot.org]). Seems this guy [slashdot.org] did too (and had to correct [slashdot.org] it as well). Just venting my (offtopic) frustration, that's all... ;-) (fingers crossed without preview...)

  • Sylvania should be the one with the copyright on "blue dot" from their flashbulbs.
  • ..how could I have been so mistaken =P
  • Could this be in any way the reason why the root server database has not updated for 2.5 days? Last updated 19-Apr-2000 22:22:07 EDT.
  • Just from a theoretical point of view, how difficult do you think it would be to take those servers down with terrorist activity? I mean, could the Internet be taken down if 12 explosions at the right time/place were detonated?

    Assuming you can figure out where they all are from the IP addresses in the root.cache file, and traceroute, or other similar tools, and maybe a bit of social engineering, it shouldn't be any harder than any other 12 randomly selected machines. (i.e. you may get unlucky and some are in phone COs and you need to get into a somewhat secure area, or blow through a lot of concrete in the internal walls behind the office building facade).

    That wouldn't take out "the Internet", just much of name service. It would suck a lot. As caches started timing out things would start to suck a lot more.

    However, there are unofficial secondaries (not listed), and I assume other backup sets of the data. "All" that would be required would be to set up another root server (or 12), and route the old root servers' IP addresses to the new ones. Wait less than five minutes for routing to converge, and all is right with name service again. Regrettably, the loss of life involved in "12 explosions" would be far harder to "correct".

    Beats me how long it would take to fix. If there is a real drill for it, maybe under an hour. If there is no drill for it, it could be much longer since the "12 explosions" would probably cause lots of confusion.

  • by Coward, Anonymous ( 55185 ) on Thursday April 20, 2000 @09:35AM (#1120457)
    The root servers are root-servers.net, so IBM can be called the dot in .net and Sun can still claim to be the dot in .com.
  • No, it's IBM. Not Blaupunkt.

    Although I'm sure a Blaupunkt server would look better in a VW New Beetle.

  • BURN THE HERETIC! We worship Linux here! `8r)

    I felt someone had to stand up for AIX, cause well, it got me a job at one point, and you're the only one who will! `8r) but I still say I was dead on about the 'smit' crack. heh

    As far as a brand new IBM box beating a Solaris box goes... that's not bad for a box that first [sun.com] started shipping in March 1997. It just got leapfrogged 3 years later for some odd reason... `8r)

    --
    Gonzo Granzeau

  • the dot shortage [segfault.org] may quickly become unbearable...
  • That may be true of McNealy, but look at some of the other Sun heavyweights like Bill Joy! This guy wrote vi and BSD networking for goodness sake!

    --hunter
  • Here's the real info:

    The A root name server has doubled transaction growth in the past quarter to over 5000 queries per second with peaks up to 8000 queries per second.

    Which comes out to ~430 million queries/day - as the article states...

    Though several other sources seem to agree - it was an E10K...
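
A quick back-of-the-envelope check of those figures; a minimal sketch, using only the 5,000/8,000 queries-per-second numbers quoted in the comment above:

```python
# Rough arithmetic behind the "~430 million queries/day" figure.
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400

sustained_qps = 5000                      # sustained rate quoted above
peak_qps = 8000                           # quoted peak rate

print(sustained_qps * SECONDS_PER_DAY)    # 432,000,000 -> "~430 million queries/day"
print(peak_qps * SECONDS_PER_DAY)         # 691,200,000 if the peak rate held all day
```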
  • by jabbo ( 860 ) <jabbo@yahooMOSCOW.com minus city> on Thursday April 20, 2000 @09:59AM (#1120463)
    I'm not sure why you used the analogy you did. In the event of a natural disaster, a piece of Big Iron is just as fallible as a PC.

    Which is one reason IBM sells clustering solutions for just about everything they make.

    This makes me curious -- what would happen if the root A server got totalled? What gets failed over onto? I know I should RTFM, and I will, but my Stevens books are at home.

  • I was watching the late evening business news on CNBC yesterday, and they interviewed the CEO, is it, of HP - the very sexy-looking lady, Fiorina, Carly Fiorina? Man, I'd like to be her personal assistant :) Anyway, the interviewer was asking about HP earnings, and the debut of the "new" MS-based PocketPC, and Ms. Fiorina also started in on Sun. Seems HP's got their server sights set on ol' Scott & Co. Big announcement that eBay replaced its Suns w/ HPs. Ms. F. said to look for future announcements in the same vein.

    Guess Sun better check its six, huh?
  • by Tony Hammitt ( 73675 ) on Thursday April 20, 2000 @10:00AM (#1120465)
    You can't put 'just' 4 processors in an S80, it comes in multiples of 6 up to 24.

    The biggest advantage to an S80 is the price/performance ratio. The big disadvantage is that it has to be shut down when a CPU or a memory card fails. E10K's can hot swap CPUs and memory, but E450's can't...

    Just clarifying.
  • ibm likes linux more than sun [...] they probably upgraded for performance reasons.

    Well, of course ! The whole reason it performs better is because of Linux. Imagine millions of Linux developers coding and sweating, saying "IBM is cool". Their effort then will naturally turn into CPU power, making all IBM CPUs magically run faster. The box itself doesn't have to run Linux (of course, it would be *at least* 10 times faster if it did).

    It certainly is because of Linux. Anyone suggesting any other alternative is deranged.

  • Or Cray - they put the dot in .mil :)
  • Yeah... also, look at my sig.

    --

  • Read it again... the exact quote is "He said transactions at that registry--which includes people looking up names to see if they are still available, as well as changes made to domain-name registrations--jumped from 1.5 million a day to 25 million a day in the first 12 weeks of the year." In other words, that number includes look-ups and changes to existing domains.

    --

  • IBM has announced that the next rev of the S-80's will include hot-swappable memory and CPU's...so that won't be an issue for much longer. As a sysadmin who helps run a pair of S-80's, I can definitely say that they are some *sweet* machines :)
  • Kinda like it's arguable that Cisco is the . in .com. Their routers and hardware run all over the place, but most end users don't even know it's there. Just like that lowly '.'.
  • by Josh Guffin ( 43687 ) on Thursday April 20, 2000 @12:06PM (#1120472) Homepage
    Last time i checked, RFC 882 put the dot in .com
  • Half the cost? Maybe, but they already *BOUGHT* the Sun.

    So they have two choices:

    1) The Sun, total expenditure 100% of the cost of one Sun.

    2) The IBM, total expenditure 150% of the cost of one Sun.

    All of my co-workers on projects using IBM are wishing like hell they'd picked Sun, and meanwhile my Sun servers are happy as clams, chugging along, unaffected by the crashes over on the Blue side of the data center.
  • I thought it was Bill Gates... ... You mean the dot in dot com isn't a windows 2000 box ? TastesLikeHerringFlavoredChicken
  • A poster asks:
    Just from a theoretical point of view, how difficult do you think it would be to take those servers down with terrorist activity? I mean, could the Internet be taken down if 12 explosions at the right time/place were detonated?

    Stripes starts his reply:
    Assuming you can figure out where they all are from the IP addresses in the root.cache file, and traceroute, or other similar tools, and maybe a bit of social engineering, it shouldn't be any harder than any other 12 randomly selected machines.

    Define "explosions"

    Stripes, the poster to which you responded did not specify what type of explosions were available to them. If they're nuclear explosions, they'd probably need only 8-10 strategically placed explosions to wipe out all of the current nameservers (with or without social engineering). If they're lucky, they might take out the "shadow root servers" as well. Given the location of some of the root servers, they'd probably cripple a lot more than just DNS. They'd effectively take out a good deal of infrastructure as well as the Internet engineers necessary to repair it, not to mention start a worldwide panic.

    The Internet would still recover, though, much as you described in your post. Anyone can set up a redundant server cluster within a matter of minutes given a set of pre-staged root and first level zone data.

    The more interesting problems are due to corrupted data rather than denial of service attacks on nameservers. Some bad data in Network Solutions' database can make various interesting parts of the Internet suck really badly. When one root server has data corruption, the whole net feels it. Imagine if some NSOL staffer garbled the nameserver data for "Yahoo.COM." or "IN-ADDR.ARPA." to point to 255.255.255.255 instead of the real servers?

    For anyone else interested in DNS DoS...

    An easier method

    One of the easiest ways to kill DNS is to try a coordinated DoS attack against all of the nameservers. Each of the world's hundreds of thousands of resolvers is configured to use any of 13 root nameservers. Just like a 15-year-old kid did with HTTP requests, one could probably start a distributed DoS attack against DNS. The "heftiest" root nameserver is rumored somewhere in this discussion to be able to handle 6000-8000 hits a second. With 13 published nameservers, one needs only about 100,000 hits per second to saturate the current capacity of all of the servers.

    Let's say that I was a bright hacker (which I'm not) and that I could find my way into 1000 machines around the world that each had a T1 connection or better. Can we agree that this is a difficult but not unreasonably impossible thing to do? If one were not smart enough to do it oneself, one could perhaps go to a hacker convention or local user group and bribe a script kiddie seeking infamy and fortune to go forth and find 1000 machines to hack. Another way is to unleash a time-dated virus onto the net that will do your bidding at a specific time.

    Each machine would gather a list of 100 addresses, perhaps starting with the history file of a user's browser to get a list of second-level domains. It could also look for addresses using a popular portal directory or search engine and interpret the results to get domain names. With 100 domain names, it would query 100 names per second (less than one megabit) against each of the few registered root nameservers. While the traffic isn't overwhelming, it will overload the root servers for the number of transactions per second, and nothing short of hunting down and killing half of the query servers would reduce the effectiveness of the attack. To make the attack harder to stop, one could double or quadruple the number of query servers or use methods of masquerading the attack (I won't go into detail here) to keep network administrators from being able to shut down query servers. Another way to scale the attack is to use the heavier TCP protocol for most of the queries instead of lightweight UDP.

    fin.

    The technology needed to exponentially increase the ability of the root servers to perform is not out of reach. With the proper motivation (a DoS like I described), one million dollars of capital (compare $1m to the current valuation of NSOL), and perhaps 30 man-weeks of time, one can make a farm of servers able to handle two orders of magnitude more requests than the current set of servers.

    The IBM server announcement by Network Solutions disappoints me. It's sad.

    Any of the following are good candidates that I know about for scalably solving root DNS infrastructure problems...

    • UltraDNS [ultradns.com] - DNS service provider with an interesting spin on distributed scalability
    • Nominum [nominum.com] - the knowledge and knowhow to make fast scalable DNS servers and software
    • Akamai [akamai.com]/Sandpiper [sandpiper.net] - a distributed operations infrastructure onto which one can install root clusters.
    Hint: If one can make an application layer proxy host that takes inbound DNS requests and routes them based on a hash table of domain names to a set of back end nameservers (with only a fraction of the domains loaded on each), one could have the start of a scalable solution (see the sketch after this comment). One can make a fast cheap BSD box do this at up to 5000 ops per second or better. I wonder if the skunk works at Novell can do this faster. One can use some router technology (OSPF, trunking, or L4 switching) to spray UDP requests to a number of these application load balancer / DNS proxy servers.

    One can also implement interesting filters on such a proxy server to reduce the effect of stupider resolvers or lame DoS attacks.

    --
    Eric Ziegast

    PS: Slashdot probably isn't the best forum for this, but if you know a better forum, feel free to point them toward this post.
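
The hash-routing proxy hinted at in the comment above can be sketched in a few dozen lines of Python. This is only an illustration of the idea, not anything NSI or the companies named actually run: the back-end addresses and listening port are made up, and a production front end would parse the full DNS wire format (compression pointers, EDNS, TCP fallback) rather than just the plain question name.

```python
# Sketch: hash the queried zone and forward the raw UDP packet to one member
# of a back-end nameserver pool, so each back end only needs a slice of the data.
import hashlib
import socket
import struct

BACKENDS = [("10.0.0.1", 53), ("10.0.0.2", 53), ("10.0.0.3", 53)]  # hypothetical pool

def qname_from_query(packet: bytes) -> str:
    """Extract the query name from a raw DNS request (first question only)."""
    labels, pos = [], 12                      # the DNS header is 12 bytes
    while packet[pos] != 0:                   # walk the length-prefixed labels
        length = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + length].decode("ascii"))
        pos += 1 + length
    return ".".join(labels).lower()

def pick_backend(qname: str):
    """Hash the second-level zone so the same zone always lands on the same back end."""
    zone = ".".join(qname.split(".")[-2:])    # e.g. "yahoo.com"
    digest = hashlib.md5(zone.encode()).digest()
    return BACKENDS[struct.unpack("!I", digest[:4])[0] % len(BACKENDS)]

def serve(listen=("0.0.0.0", 5353)):          # example port, not the real :53
    front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    front.bind(listen)
    while True:
        query, client = front.recvfrom(512)   # classic UDP DNS message limit
        backend = pick_backend(qname_from_query(query))
        upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        upstream.settimeout(2.0)
        upstream.sendto(query, backend)        # forward the query unchanged
        reply, _ = upstream.recvfrom(4096)
        front.sendto(reply, client)            # relay the answer back to the client

if __name__ == "__main__":
    serve()
```

As the comment says, router tricks (OSPF, trunking, or L4 switching) would then spray inbound UDP across several such proxy boxes, each fronting only a fraction of the namespace.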

  • by jafac ( 1449 )
    Proving the unstoppable superiority of the PowerPC architecture.

    Just wait, tomorrow, we'll hear about them replacing the RS/6000 with a warehouse full of water-cooled quad Xeons running Windows 2000.
    We won't hear, of course, that MS fronted the money for the HW.

    I wish I had a nickel for every time someone said "Information wants to be free".
  • There's plenty of load balancing among the root servers. If you have an adequately recent distribution of BIND (4.x+ will do fine), you have a hint file (named 'root.hint', or 'named.ca', or whatever) listing all of the root servers (I guess) and the original names. My 'root.hint' file lists 13 of them (from a.root-servers.net to m.root-servers.net).
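
The hints file is plain text and trivial to read programmatically; a small sketch, with the path below only an example (yours may be root.hint, named.ca, or /etc/namedb/root.cache depending on the system):

```python
# List the root server names and glue addresses from a BIND hints file.
HINTS_FILE = "/etc/namedb/root.cache"              # assumed location

servers = {}
with open(HINTS_FILE) as fh:
    for line in fh:
        line = line.split(";")[0].strip()          # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        # e.g. "A.ROOT-SERVERS.NET.  3600000  A  198.41.0.4" (class IN may also appear)
        if fields[-2] == "A":
            servers[fields[0].rstrip(".")] = fields[-1]

for name, addr in sorted(servers.items()):
    print(f"{name:24s} {addr}")
```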
  • I would imagine that eBay moved to HP servers because HP has the closest alliance out of all the UNIX hardware vendors with Zeus technology, the company that makes the web server that eBay uses. For a company such as eBay the downtime reduction that that alliance might yield would be worth the transition cost.

    just a note, in case anyone is wondering what I am talking about when www.ebay.com is shown to be running IIS by netcraft. They run IIS/NT for the pretty Frontpage stuff, but have a look at the guts of the site: search.ebay.com . That's running Zeus 3.3.

  • But the real story in that article is down near the bottom of the page:

    Millions more names have been registered by competing companies and registrars outside the United States. Network Solutions will disclose exactly how many next week when it reports quarterly earnings.

  • by zCyl ( 14362 ) on Thursday April 20, 2000 @12:42PM (#1120489)
    One dot, slightly used.
  • by sludg-o ( 120354 ) on Thursday April 20, 2000 @12:50PM (#1120492)
    Known as the A.Root server, the big black IBM computer holds the authoritative files for matching domain names--such as www.marthastewart.com or www.yahoo.com--...

    Actually, this is not true. This server only translates the field directly before the TLD extension. That is, only yahoo.com and marthastewart.com are served. The www part is supplied by yahoo and martha's respective root servers.

    I realize that the author of the article probably knows this but did not include it in his article so that my mother would understand it; still, I feel /. readers who are new to network hierarchy should get the facts.

    Sludgie

    and what's up with my tags being removed in the editing field when I preview? That's annoying.

  • Hey there..

    Isn't it arguable that SRI & ISI put the . in .com?

    RFC830 put the . in .ARPA (an SRI publication)

    Then, a little later, RFC 881 defined the domain name hierarchy.

    And RFC 920, an ISI publication, "Domain Requirements", actually lays out the top level domain structure, separating 'education', 'commercial' and 'government', i.e., the first definition of .COM.

    So I'd say that RFC830 put the . later used in the RFC920 COM.

    Oh well..
  • Not to rain on your joke but...

    The sad part is he was almost right; you just have to know your history. He sponsored the bill that got the Internet started, back when it was just ARPANET and a couple of researchers.

    It's great to have a good laugh at politicians talking out their ass, but the scary part is he was there at the beginning, even if only as a politician. 'Course, he still can't debug a TCP/IP stack. `8r)

    --
    Gonzo Granzeau

  • the "/" that you see between the top-level-domain of an address and its subdirectory is no longer being served by Slashdot

    Slashdot has never been the slash between the domain and directory; slashdot is the second slash in http:// and, as of March 18 of this year, it's the first slash in ftp://. A currently pending deal will make it both of the slashes in gopher://.
  • by puddles ( 147314 ) on Thursday April 20, 2000 @10:22AM (#1120499)
    Cheap! Slightly used Sun Ultra Enterprise 10000 for sale. Like-new condition. Every home network needs one of these.
  • by xtheunknown ( 174416 ) on Thursday April 20, 2000 @10:25AM (#1120500)
    Sun may not be the dot in .com anymore in the literal sense, but they are far from losing their standing as premier provider of hardware for dotcom sites.

    If you look at the Fortune 100 corporate web sites, 52% of them are running Solaris with various web servers. Now, this is certainly flamebait to most /.ers, but the runner-up was Windows NT (2000) with 29%. Interesting fact: Linux runs only one of the Fortune 100 web sites.

  • Perhaps that's why IBM's revenues fell 5% from the same quarter last year, with most of their business segments showing flat or negative growth, while Sun hit a home run with their earnings report, showing record revenue of $4 billion, a 37% increase from the same period last year. Hey, no shame, IBM wouldn't be the first company ruined by pandering to the open source community (see SGI, Netscape, etc.)! :)

    Cheers, ZicoKnows@hotmail.com
  • Hey, no shame, IBM wouldn't be the first company ruined by pandering to the open source community (see SGI, Netscape, etc.)!

    IBM's drop in revenue originates in their consulting arm (IBM Global Services). It has nothing to do with their OS division - altho', given their current enthusiasm for Linux, that's probably about to change (think about it... the only way to make money on Linux is on, yep, services).

  • Well, I have a hard time believing they actually managed to install W2K on 29% of Fortune 100 webservers in the 2 months that W2K has been out. Don't those guys believe in testing, etc.? Do they trust a corporate website to a 2-month-old platform from the day it hits the stores? I can hardly believe this.
  • by Jon_E ( 148226 ) on Friday April 21, 2000 @03:22AM (#1120518)
    I haven't seen an accurate press release yet ..

    from an inside Sun source at NSI:

    1) There is no E10000 that was replaced .. there are no E10K servers at NSI. The old a.root-servers.net ran on an E450 (4 proc) with 4GB of RAM, and of those four processors their single-threaded BIND process consumes one.

    2) a.root-servers.net is the top authoritative server for the .com, .net and .org zones, and I think they also load the .mil, .edu, .gov, and .arpa zones on a.root .. that's it. The internal press release claims that they hold zones for all the ccTLDs (country-code specific Top Level Domains). This is incorrect, but they do point to the correct authoritative servers for each of the country codes.

    Surprising to find that much of NSI isn't aware of what exactly they do ..

  • He wasn't even close to being there at the beginning. The 'net (okay, darpanet back then) started life in 1969, the year Gore was graduating from college (with a degree in government). He didn't even run for Congress until 1976, by which time the net was far more than just "a couple researchers".

  • Actually, this is not true. This server only translates the field directly before the TLD extension. That is, only yahoo.com and marthastewart.com are served. The www part is supplied by yahoo and martha's respective root servers.

    Not even that is served from the root servers. All the root servers serve is the IP addresses of the nameservers for the domain of the host being looked up; it's up to the domain's nameservers to deal out any actual IPs, including for their own domain.

    You look up marthastewart.com, your nameserver asks one of the root nameservers where the nameservers for marthastewart.com are, then it asks them for the IP of marthastewart.com.

    -- iCEBaLM
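
A minimal sketch of that two-step walk, assuming the third-party dnspython package is available; 198.41.0.4 is a.root-servers.net, and marthastewart.com is simply the example used in the comment above:

```python
# Walk the delegation chain by hand: the root server answers with a referral
# (NS records plus glue addresses), and the referred-to servers give the answer.
import dns.message
import dns.query
import dns.rdatatype

A_ROOT = "198.41.0.4"   # a.root-servers.net

def referral_addresses(response):
    """Pull nameserver IPs out of the additional (glue) section of a referral."""
    return [item.address
            for rrset in response.additional
            if rrset.rdtype == dns.rdatatype.A
            for item in rrset]

# Step 1: the root doesn't know marthastewart.com's address; it only refers us
# to the nameservers responsible for the next level down.
query = dns.message.make_query("marthastewart.com.", dns.rdatatype.A)
referral = dns.query.udp(query, A_ROOT, timeout=3)
next_servers = referral_addresses(referral)
print("referred to:", next_servers)

# Step 2: ask one of the referred-to servers for the actual record
# (a full resolver repeats this until an answer section appears).
answer = dns.query.udp(query, next_servers[0], timeout=3)
print(answer.answer or answer.authority)
```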
  • by 3247 ( 161794 ) on Thursday April 20, 2000 @10:10AM (#1120525) Homepage

    The root servers are, as everyone who has ever edited a nameserver zone file knows, the dot in "com.", not in ".com" (which actually is ".com." and is invalid without a proper 2nd level domain).

  • Well, there's A.root-servers.net through M.root-servers.net, which are hosted all over the world. Usually only DNS servers contact them, and there's already built-in "round robin" and retries. So, if A.root-servers.net were to go down, at worst 1 out of 13 queries for domains that hadn't previously been queried would get delayed by a short period of time. (IOW, if you do a lookup on foo.domain.com, your DNS server would cache domain.com's NS info and your query for bob.domain.com would use that instead of hitting the root nameservers.) However, I think the DNS servers would cache the information about the failure talking to a.root-servers.net and stop asking it things for a while.

    In other words, DNS has failover built in.

    However, if the server stayed down for an extended period of time, it would probably cause updates not to happen. I suspect they could get a new server in place for that purpose within a reasonably short period of time, though.
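
A rough illustration of that built-in failover, again assuming dnspython. The addresses are the a/b/c root-server IPs from a circa-2000 hints file, hard-coded only for the sketch; a real resolver works from its full root.hint/named.ca list:

```python
# If one root server doesn't answer, just move on to the next one in the hints list.
import dns.exception
import dns.message
import dns.query
import dns.rdatatype

ROOT_HINTS = ["198.41.0.4", "128.9.0.107", "192.33.4.12"]  # a, b, c (assumed here)

def query_roots(name):
    q = dns.message.make_query(name, dns.rdatatype.NS)
    for ip in ROOT_HINTS:
        try:
            return dns.query.udp(q, ip, timeout=2)
        except (dns.exception.Timeout, OSError):
            continue                 # that root is unreachable -- try the next one
    raise RuntimeError("no root server answered")

print(query_roots("slashdot.org."))
```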
  • In the first 12 weeks of this year, the number of requests for information--or hits--on the master server for all Internet addresses jumped from 220 million to 420 million a day, [...]

    Could this possibly have anything to do with the "hot property" domain mindset that means every acme.com also registers acme-widgets.com, acme-foo.com, and acme-bar.com, instead of using the DNS hierarchically as it was designed, by registering widgets.acme.com and so on within their own domain?

  • by InitZero ( 14837 ) on Thursday April 20, 2000 @10:34AM (#1120530) Homepage

    *One* Server holds the master file?

    One server holds the master file, yes. That master file is mirrored among many other servers which are located not only in different parts of the country but also in different parts of the world.

    No load balancing/[obligatory beowulf]/Round Robin? I would like to think there is some redundancy in there...

    {sigh} Spoken like a true PC server user.

    I've got four S70s which are almost identical to the S80 but max at 12 processors instead of the S80's 24.

    When you think server, you see a tower or maybe even a rack-mount PC. The S80 is no such beast. It is literally the size of an industrial refrigerator. And that's just for the processors. Right next to it is another cabinet of a similar size which holds the IO drawers, drives and everything else.

    The only parts of the S80 that are not redundant are the processors and memory. Since both are non-moving, non-mechanical parts, they have an ultra long MTBF. If either fries, the machine takes itself down, 'deconfigures' the failed item and then brings itself back online. Try to get any PC server out there to do that.

    (Our S70 lost one of 12 processors three weeks ago at threeish in the morning. It was down and up so quickly no one even noticed it. A few days later, I was reviewing some logs and noticed that I was short a processor.)

    Yes, no system is failure-proof. However, the mindset that the S80 suffers from the same problems as a PC server is as silly as thinking a Piper Cub is in the same league as Air Force One (the president's plane).

    Internally, the S80 is redundant and can support an amazing load; externally, the DNS system will outlive us all.

    InitZero

  • He said he "took the lead in creating the Internet." In fact, although he was around, his participation was itself negligible, as were its effects in creating the Internet. He took no such lead.

    The statement is also bogus in that even if he had authored the bill and pushed it through all by himself, he could not have claimed credit for anything other than an accidental success, since the original project was merely an inter-university research network, a make-work project for a soon-to-be defunct government organization (DARPA). He implies that he was some sort of visionary, when he had no idea what arpanet would evolve into. The Internet as it exists today became that way because of the ideas and work of people entirely unconnected with the government.

    What he should have said was: "I voted yes to a project that I was not actively involved with, and that changed the world completely after it was handed off to commercial interests and revamped."

    -JD
  • by mindstrm ( 20013 )
    You are correct.
    The . in .com is not separable from .com... it's all one zone, just as the trailing dot is a zone.
  • Sun no longer the "dot" in .com

    (April 20, 2000) Until recently, Network Solutions Inc. (NSI) used a Sun E10000, one of the powerhouses of the computer world. But recently, they've moved to a brand new IBM RS/6000 S80. What brought on this startling change? The Dalai Lama caught up with someone from NSI recently and here's what went on.

    "Well, it all started with Comdex last year." says J.R. Bob Dobbs, VP of Sales at NSI. "Sally over in Marketing talked to this really cool guy at the IBM exihibit. Anyway, he said he could get this really great deal on this new equipment they had coming out. and she said to me 'Wow, think of the free publicity...' and we just knew we had to move. Besides, the old E10000 allows you to do maintance while part of it isn't working, and I'd rather it just stop working while someone is fixing it! I mean, when you blow a tire on your car, do you want it to actually keep driving instead of forcing you to pull over! Come on, that's dumb!"

    But what of the costs of migrating to an entirely new Unix platform? And the support costs? Dobbs commented, "Well, the migration wasn't very easy, but after calling IBM technical support every day for the past month, hiring IBM Global Services to come out and fix it repeatedly, and firing our entire Solaris-loving admin staff, we're through the migration already! I don't care if the new Sun processors and the new 128-processor machine are coming out in six months, I want to spam the domain owners now! Besides, IBM assured us that they would install this great tool called 'smit' on the machine. Hell, I'm the Systems Engineer now! I don't even know what it's doing, I just point and click and it does stuff! Think about the huge savings on administrative staff! Besides, IBM assures me I won't need anything but smit! I'm even IBM certified!"

    And what of the older processes still in place, like mail forms for registration names, and sending 'CRYPT-PW' via mail? Bob quickly snarled back with "Oh, you want security? wah, go cry in your milk, you linux pussy. I got the root server, fuck off."

    Obviously, great things are in store for NSI in the future.

    [note: Sorry if I'm a little biased, but how probable is this scenario? Anyone else ever dealt with NSI or IBM on a 'professional' level? And yes, it's all a joke. J.R. Bob Dobbs is entirely too cool to talk to the Dalai Lama.]

    --
    Gonzo Granzeau

  • Yeah, but... that summary was written by IBM. Of /course/ it's going to give more for less. :-)

  • You see, outside of the WinNT server world, you have mainframes capable of huge amounts of processing by themselves... when you have 24 processors in one box, who needs load-balancing?

    (and DNS has so many hot backups worldwide, redundancy is, well, taken care of)
  • And all of what, six, that they sold last year?

    Then again, I still have a stash of the pre-cube single little blue bulbs, a handful of flashcubes (not magicubes; they needed a battery), and even some #5 bulbs (or are mine 25s? I forget)--the ones nearly the size of a golfball.

    And I have the cameras to go with them. What I *don't* have is the 120 and 620 film (but you can still get at least the 120) that the cameras take . . . ooh, and one that takes 127 . . .
  • by loki7 ( 11496 ) on Thursday April 20, 2000 @02:39PM (#1120544) Homepage
    There are still tons left.

    slashslashdot.* is still available. Somebody could turn that into a good "News for Serial Killers. Stuff That Splatters" web site.

    antislashdot.* is available too. The site for people who think /. sucks.

    Or you could just take suckdot.org. I'm surprised nobody took this one after the suck.com parody.

    But dot[dot[dot[...]]].* are all taken up to 5 dots. So's quux.net. You can't have that one.

    If anyone uses one of these and IPOs and makes a fortune, can you buy me a sports car? Thanks!

    /peter
  • The S80 has some pretty phat specs. According to the official homepage it's got 53 PCI slots (yipes!), 48 drive bays, and can fit up to 64GB of memory. Cost for the "base configuration" (that's a 9.1GB HDD, 6 450MHz RS64 IIIs, and 2GB of memory)? $294,096.00. Whew. Hate to think what the pimped-out version costs...

    David E. Weekly [weekly.org]

  • by hawk ( 1151 )
    And let us not forget microsoft, who put the . in .borg . . .

    :()

    [I hope this doesn't appear twice; it looked like the message that flashed as I was killing the box said something like Slashdot requires 70 seconds between comments . . .]
  • *One* Server holds the master file? An old legend... One Server to hold the file One Server to find them One Server to serve them all And in the darkness BIND them...

    --

  • BTW, how do you know those are W2K? Microsoft's HTTP response on www.microsoft.com has no name of OS...
  • The server holds the master file nicknamed dot (or ".") that has the central database of domain-name information. Copies of the information are distributed regularly to other top-level domain servers around the world.
    *One* Server holds the master file?
    No load balancing/[obligatory beowulf]/Round Robin?
    I would like to think there is some redundancy in there...
  • Oops, pardon the double post...
    I'm at work on an old SPARCstation IPX running Netscape 3... Anyway, I previewed, but when I submitted I didn't notice that NS had stripped out the HTML tags from the text box. Anyway, here it is again, properly formatted:
    *One* Server holds the master file?
    An old legend...

    One Server to hold the file
    One Server to find them
    One Server to serve them all
    And in the darkness BIND them...

    --

  • Furthermore, ARPA wasn't the only game in town. Federal funding certainly let it grow into what is now the Internet, but the seeds had also been planted elsewhere. Had it not been for federal funding, FidoNet (or possibly something else) could have grown into what we now know as the Internet.

    It was going to happen; the question is merely when and from what roots.

    Hmm, and I'd bet spam would be significantly less of an issue had it grown from fidonet, but that's a completely different issue . . .
  • Our Root server (not NSI, one of the others) is a dual-processor Sun 450 with 4 Gigs of RAM.

    BIND 9 does load balancing between two or more processors; BIND 8... well... doesn't. Run top on the root server while it's running and you see CPU 3 with high utilization and CPU 1 at like 1% (only from top and the shell).

    I don't really see the point of going to multiple processors until they use BIND 9.

    FWIW, the 'A' server really isn't the master of the root domain anymore, since ICANN has control over what goes in, and what stays out of the root zone.

    As for the single point of failure, if A blows up, destroyed by fire, destroyed by quake, etc., the others just simply will have to pick up the load of the missing 'A'.

    If the mechanism of downloading the zones fails, we have a while (a few weeks) to make up our minds about what to do before bad things happen -- like internet not working anymore.

    And I know at least one Root Server Operator (well, me...) who checks out slashdot daily. I bet more do.

  • by mindstrm ( 20013 ) on Thursday April 20, 2000 @10:15AM (#1120557)
    Actually... they are not the . in .com; the article misrepresents the truth.

    The . is actually the trailing dot, i.e. '.com.'. The top-level zone in DNS, that all other records are part of, is simply '.'. It's assumed, and not normally written with a domain name (anyone working with BIND sees this constantly).

    The dot in .com is not separable from the domain.. as every domain begins with a dot and ends in ... whatever..
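
A one-line illustration of that: split a fully qualified name on dots and the root zone shows up as the empty label after the final dot.

```python
fqdn = "www.sun.com."          # fully qualified, trailing dot included
labels = fqdn.split(".")
print(labels)                  # ['www', 'sun', 'com', ''] -- the final '' is the root zone
print(labels[-2])              # 'com', the zone the gTLD servers actually hold
```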
  • A well loaded E10K is several million. $80K is probably the cost of the empty chassis if you qualify for some kind of special deal from Sun.
  • by Pike ( 52876 )
    One thing we all hate about Sun boxes around here is that they suck power like nobody's business. Man those things run hot. I never saw a server use sheer wattage like a Sun-based server.

    -JD
  • Reasons Al Gore should be replaced with an RS/6000:

    It is much more expressive.
    It doesn't require $500 haircuts.
    It doesn't come with Tipper Gore chained to it.
    It doesn't say nearly as much dumb stuff.
    Give it a 'net connection and it can attend global events virtually! Saves on $70,000 joy rides in Air Force Two.
    There is very little chance the RS/6000 could be swayed by Microsoft into calling the DOJ off. Now if IBM were to offer a couple new CPU's, we'd be in trouble.
    It doesn't waffle. Everything is yes or no, 1 or 0. No more bullshit answers.
  • It was shipping long before that as the Cray CS6400. This is technology bought from Cray Research, Inc. in 1997. They were being acquired by SGI and wanted to unload technology that competed directly with SGI's Origin2000.
  • by Anonymous Coward on Thursday April 20, 2000 @09:44AM (#1120568)
    1. a.root was a Sun E450 with quad 300MHz sun4u processors and 4GB of RAM until ~1 month ago.
    2. The root servers have never answered "millions" of queries per second; more like 6000 queries per second.
    3. The IBM incarnation of a.root also has quad (323MHz?) processors, not 24 as the article states.

    All in all, a lot of blather with little technical or reality basis.
  • by Tower ( 37395 ) on Thursday April 20, 2000 @09:45AM (#1120571)
    Well, the E10K is a pretty kickin' box, but it doesn't kick nearly the amount of ass that the S80 does. Of course, the E10K is a little old now, and should have been supplanted by Sun's latest stuff, but they've been having a lot of problems with the UltraSPARC III (fab problems @ TI, among other things...).

    The price point of the S80 also makes it an amazing bargain compared to the E10K... and the S80 sold 1000 units in 4 months - the E10K took over a year to reach the same sales... and the S80 was named '99 product of the year by several reports. Not too surprising. I am interested to see how well the USparc III does... it'll be a while, though...

    #inlcude
  • I wonder if there were any technical reasons for the switch of platform... i.e. Solaris to AIX... or if it was a corporate agreement... especially since NetSol was bought by VeriSign.

  • I've always been annoyed at Sun saying this. It was I who suggested that dot be the character to divide the multilevel domains in an ARPANET 2-level domain, and Jon Postel who later drafted it. We gotta stop Sun from saying this. And no, I'm not making this up. The record is at this page, with archives from the tcp-ip digest of January, 1982.
  • by snubber1 ( 56537 ) on Thursday April 20, 2000 @10:55AM (#1120584)
    F.root-servers.net claims [isc.org] to be the busiest, with 260 million queries/day running on twin ES40 Compaq Alpha servers.

    Sounds like a whole lotta 'dot' to me.

    ----------------------------------------------
  • This looks like AIX system configuration output.

    How did you get this?

    BTW, the proc[0-3] represents the processor card, each of which holds 6 processors and is hooked to the backplane (thus the 00-)
  • But... he did popularize the phrase 'Information Superhighway', which I'm sure he doesn't want to take credit for now.

    But there are far more credible quotes to make fun of than the same one OVER AND OVER AND OVER again - you know, ones where they said what they meant and it still came out wrong...

    --
    Gonzo Granzeau

  • by Wakko Warner ( 324 ) on Thursday April 20, 2000 @11:05AM (#1120590) Homepage Journal
    ...but the plan at NSI is to standardize on ONE PLATFORM -- both hardware and software -- for the root servers. I'm sure you can all grasp the sheer stupidity of such an idea. Let's say there's a documented hole in BIND or another program on AIX. Suddenly, instead of a single root server (or a couple of root servers) being down, *they're all gone*.

    Scary, huh?

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • I'll trade my new Laserdisc of "Phantom Menace" for it.

    Pope
  • Calling a PC running *nix and Apache and an RS/6000 both "servers" is like calling your house and the Empire State Building both "buildings": technically correct, but completely missing the scope.
