Networking

Where's Our Terabit Ethernet?

carusoj writes "Five years ago, we were talking about using Terabit Ethernet in 2008. Those plans have been pushed back a bit, but Ethernet inventor Bob Metcalfe this week is starting to throw around a new date for Terabit Ethernet: 2015. He's also suggesting that this be done in a non-standard way, at least at first, saying it's an opportunity to 'break loose from the stranglehold of standards and move into some fun new technologies.'"
  • Stranglehold? (Score:5, Insightful)

    by Brian Gordon ( 987471 ) on Thursday February 28, 2008 @11:55AM (#22589558)
    I'd like to see the internet held together by his fun new technologies. See how well machines communicate without basic protocols.
    • Re:Stranglehold? (Score:4, Insightful)

      by KublaiKhan ( 522918 ) on Thursday February 28, 2008 @12:00PM (#22589620) Homepage Journal
      I see it as an opportunity for a new standard to evolve in a more natural fashion. Consider HD-DVD vs. Blu-ray: two competing formats came out, neither compatible with the other's standard, but after a while it became apparent which one was going to be used.

      Besides, it's not like this is going to affect TCP or IP or whatnot--this is way down at the bottom of the OSI model at level 1.
      • Re:Stranglehold? (Score:4, Insightful)

        by Jarjarthejedi ( 996957 ) <christianpinch AT gmail DOT com> on Thursday February 28, 2008 @12:05PM (#22589692) Journal
        Right, because a corporate competition in which two big companies do their best to ensure that their format wins the battle, while individuals worry that their purchases will become obsolete, is so much fun.

        Standards should be decided on BEFORE the material comes out. In this case it's not such a big deal, as the only people who are going to want terabit Ethernet are geeks (or companies) big enough to support whatever standard they choose, but for the most part a lack of standards hurts everyone (just look at IE/Office, those are 'competing' standards... would you call them a good thing?)
        • Re: (Score:3, Insightful)

          In this case it's not such a big deal, as the only people who are going to want [HD players] are huge enough geeks (or companies) to support whatever standard they choose

          Your quote applies equally well to his example as to what you were saying.

          just look at IE/Office, those are 'competing' standards...would you call them a good thing?

          They're not standards at all, that's the problem. IE's supposed to be compatible with the standard and it's not, so your example seems moot. Office has no standard at all, which would seem to be compatible with the discussion, but the big difference is that it's gone well beyond the point where there should have been a standard.

          However, I don't think any products should make it to the market before there's a standard developed. C

          • Re:Stranglehold? (Score:5, Interesting)

            by KublaiKhan ( 522918 ) on Thursday February 28, 2008 @12:33PM (#22590038) Homepage Journal
            Not entirely.

            There are still a few token rings and other such mesozoic cruft wandering around in the wild out there, but they still work--because some clever folks invented a way to get from one kind of network to another.

            Keep in mind, also, that it's really only the early adopters--those who are willing to buy 1st-generation equipment--who would get 'screwed over', and they have, by definition (as the first generation of a given kind of thing is always several times more expensive than the 'production' generations), the money to waste on this sort of thing.
      • Re:Stranglehold? (Score:5, Insightful)

        by Anonymous Coward on Thursday February 28, 2008 @12:06PM (#22589722)
        Yes, but waiting for competing standards to shake out can be a huge waste of time and money.

        Doesn't anyone remember the bad old days before TCP/IP over Ethernet became standard?

        How many organizations are still laboring to expunge the last remaining vestiges of Token Ring, IPX, Netware, etc.?
      • Besides, it's not like this is going to affect TCP or IP or whatnot--this is way down at the bottom of the OSI model at level 1.

        Therein lies the rub, I think.

        In spite of the OSI model (which TCP/IP doesn't map to very neatly BTW) - if one layer sneezes, they all catch a cold (with severity decreasing by distance). You screw with one layer, odds are good that you're gonna screw with its neighbors.

        A good parallel of standards and what happens to them when folks try to create new paradigms? It can be found as close as your nearest fiber-based SAN installation (in its early days, anyhow)... "doesn't play well with others" is the

      • X2 vs Flex56.

        • X2 vs Flex56.

          I think that's more of an exception than a solid example of what the GP was referring to.

          Both sides were rolling out firmware-based modems near the end of the "high speed" modem wars. In the end, a standard protocol was agreed upon (V.90, or just "V.Everything") and firmware updates for the new standard were provided to everyone, so nobody but the very earliest adopters (no flash) had to replace their hardware. It was also resolved *very* fast - it took about a year after Flex was introduced, i

      • Re: (Score:2, Informative)

        by waterlogged ( 210759 )
        "Besides, it's not like this is going to affect TCP or IP or whatnot--this is way down at the bottom of the OSI model at level 1"

        Media Access Control and Logical Link Control are Layer 2
        IP is Layer 3
        TCP is Layer 4

        Geek card....give it here.
        • Re: (Score:2, Informative)

          by KublaiKhan ( 522918 )
          The physical layer is level 1. It covers the hardware--the cables and interface cards that would need to be modified to operate at terabit rates--and that is the primary concern in this case.

          Layer 2 will be involved, of course, but the primary difficulties in this endeavor are going to be at layer 1.
      • Re: (Score:3, Interesting)

        by GooberToo ( 74388 )
        Besides, it's not like this is going to affect TCP or IP or whatnot--this is way down at the bottom of the OSI model at level 1.

        Early research indicates IP protocols will not scale well to high speed links. CPU load goes through the roof, and because buffer sizes are limited relative to line speeds, retries and fallbacks plague applications. The end result is a slow high-speed link.

        In a nutshell, for high speed links to become useful to a large category of users, IP, and especially TCP, must be revamped. S
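
        A back-of-the-envelope illustration of the buffering problem described above: the bandwidth-delay product is the amount of data that must be in flight (and buffered) to keep a link full, and at terabit rates it gets large fast. A minimal Python sketch, with assumed round-trip times for illustration:

            # Bandwidth-delay product: bytes in flight needed to saturate a link.
            # The RTT values below are assumptions for illustration, not measurements.
            def bdp_bytes(bits_per_second, rtt_seconds):
                return bits_per_second * rtt_seconds / 8

            for rtt_ms in (0.1, 1, 10, 100):
                mib = bdp_bytes(1e12, rtt_ms / 1000) / 2**20
                print(f"1 Tb/s at {rtt_ms} ms RTT: {mib:,.0f} MiB in flight")

        Even a 1 ms LAN round trip implies over a hundred megabytes in flight, far beyond typical default socket buffers, hence the retries and stalls.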
    • Re:Stranglehold? (Score:5, Insightful)

      by geekoid ( 135745 ) <dadinportland@y[ ]o.com ['aho' in gap]> on Thursday February 28, 2008 @12:18PM (#22589870) Homepage Journal
      I think you don't understand where he is coming from.

      You would need to use the existing protocols on some level, but the protocols needed to hit terabit speeds might have to be different. So he is saying: think about how to reach the goal first, then delve into the protocol arena. If the new approach is superior, then eventually we would discard the older protocols and use only the new ones.
    • Re: (Score:3, Funny)

      by houstonbofh ( 602064 )
      I agree. After all, this worked so well for the American cell phone network.
  • Um, they just made an announcement that they reached 16Tbits/sec on Wednesday, sheesh. Use the bandwidth you have for something useful.
  • but but but (Score:4, Informative)

    by grasshoppa ( 657393 ) on Thursday February 28, 2008 @11:56AM (#22589570) Homepage
    we LOVE our standards. Without standards, where would we be?

    K, just RTFA, and let me save the rest of you folks the suspense: There isn't one. It's a blurb about breaking standards and terabit ethernet. The slashdot summary just about nailed it.
    • by plague3106 ( 71849 ) on Thursday February 28, 2008 @12:14PM (#22589816)
      The slashdot summary just about nailed it.

      So, are we at the start of the end times now?
    • Re:but but but (Score:4, Informative)

      by milsoRgen ( 1016505 ) on Thursday February 28, 2008 @12:21PM (#22589902) Homepage

      Metcalfe says that the current approach being taken in the standards bodies won't get us to terabit rates. So, without going into too much detail, he said he expects a technology revolution, during which proprietary and innovative approaches to Terabit Ethernet will rule, at least at first. He said he sees it as an opportunity to "break loose from the stranglehold of standards and move into some fun new technologies."
      Ahhh, the struggle to stay relevant, I suppose. Especially considering this guy has won awards from the IEEE [wikipedia.org], a standards body. It almost feels like he has an axe to grind from that short statement, at least in regards to the process. But then again, he is a venture capitalist [wikipedia.org]; perhaps he is laying down some good press for startups he might have dumped some cash into? He has also made some incorrect predictions [wikipedia.org] before, my favorite being that Windows 2000 would crush Linux.
    • Re: (Score:3, Funny)

      by glittalogik ( 837604 )
      Without standards, where would we be?

      I don't know, but we probably wouldn't be getting laid there.
  • Tag this story with "Scrolldownyouwhiner" ...
  • 7 years is a long time. Wouldn't it make more sense to work toward a new Ethernet technology that has larger capacity? Think of the amount of data we currently send over the web, etc. That's only going to increase. Those using Ethernet on their networks, I'm sure, would prefer something that could deal with their daughter watching YouTube while their son plays his friends at Duke Nukem Forever (haha!) on the LAN. Petabit Ethernet sounds more useful.
    Meh, it's a shitload of data either way...
    • Re: (Score:3, Insightful)

      by sm62704 ( 957197 )
      7 years is a long time.

      Seven years is the blink of an eye, kid.
      • Seven years is the blink of an eye, kid.
        You might want to get that checked out.
        • by sm62704 ( 957197 )

          Seven years is the blink of an eye, kid.
          You might want to get that checked out.
          I did. [slashdot.org] I go back to see Dr. Odin about my eye on the fifth [slashdot.org].

          Thanks for asking.
      • Re: (Score:2, Informative)

        by hraefn ( 627340 )
        Seven years is closer to 25 million blinks. Which is about 115 days with our eyes in the process of blinking.

        Does this mean that four percent of our lives pass in the blink of an eye?
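
        The arithmetic roughly checks out. A quick check in Python, assuming about 10 blinks per waking minute and 0.4 seconds per blink (assumed figures, not the poster's):

            # Blink arithmetic: assumed 16 waking hours/day, ~10 blinks/min, 0.4 s/blink.
            blinks = 7 * 365.25 * 16 * 60 * 10        # ~25 million blinks in 7 years
            days = blinks * 0.4 / 86400               # ~115 days spent mid-blink
            print(f"{blinks / 1e6:.0f} million blinks, {days:.0f} days blinking")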
  • Who needs it? (Score:5, Informative)

    by mollymoo ( 202721 ) * on Thursday February 28, 2008 @11:59AM (#22589604) Journal
    One terabit per second is roughly:

    6 x as fast as 32-bit 2.8GHz HyperTransport
    16 x as fast as x16 PCIe 2.0
    60 x as fast as 20GFC fibre channel
    400 x as fast as SATA-300
    700 uncompressed 1080p HDTV streams (24bpp, 30fps)
    15 million telephone calls

    Other than the LHC, who the hell needs that kind of bandwidth?
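
    For what it's worth, the last two figures survive a back-of-the-envelope check (the per-stream and per-call rates below are assumptions, not from the parent):

        # Rough check of the parent's last two figures.
        link = 1e12                              # 1 Tbit/s
        hdtv = 1920 * 1080 * 24 * 30             # uncompressed 1080p, 24 bpp, 30 fps
        call = 64e3                              # one uncompressed G.711 call, 64 kb/s
        print(f"{link / hdtv:.0f} HDTV streams")          # ~670, i.e. roughly 700
        print(f"{link / call / 1e6:.1f} million calls")   # ~15.6 million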
    • Re:Who needs it? (Score:5, Insightful)

      by Alioth ( 221270 ) <no@spam> on Thursday February 28, 2008 @12:03PM (#22589670) Journal
      640k is roughly:

      10 Commodore 64s
      20 BBC Micros
      640 ZX-81s
      6 times a SDSS floppy disc

      Who needs that kind of memory?

      We might not need terabit Ethernet *now*, but in 25 years' time it may be the basic expectation of your LAN's speed.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        640k is roughly:


        You miss the point.

        There is nothing you can do with a big-ass pipe except move bits.

        Plugging your firehose into the neighborhood drip irrigation system isn't going to get your lawn watered any faster. In situations where insane bandwidth can be installed end to end and there are insane amounts of data to move, this would be a great thing. However, the GP's point was that this really isn't the most common situation.

        Most LANs have TONS of bandwidth to spare today. Work on an Internet (both
        • Re:Who needs it? (Score:5, Insightful)

          by Anonymous Coward on Thursday February 28, 2008 @02:24PM (#22591582)
          There is nothing you can do with a big-ass pipe except move bits.

          Free clue: 10Gb ethernet is currently used mostly in clusters and as backbones for large network installations to move lots of data around very fast. It's a long way off being a LAN technology. In seven years time, Terabit ethernet will be used mostly in clusters and as backbones for large network installations and 10Gb ethernet will be a LAN technology.
      • Re:Who needs it? (Score:5, Insightful)

        by jandrese ( 485 ) <kensama@vt.edu> on Thursday February 28, 2008 @12:26PM (#22589962) Homepage Journal
        Maybe. One of the things that I've noticed is that as bandwidth increases it becomes harder and harder to fill it up. Back in the Commodore 64 days it was not hard at all to run your machine out of memory just by typing a paper that was too long, and that's without graphics/charts/etc... These days there is no way a person would be able to type enough text to even make a noticeable dent in the main memory of any commodity machine. When everybody used 56k modems and serial lines it was trivially easy to fill up the link. When they moved to 10Mb Ethernet it got harder, but not impossible. Suddenly compressed music files were not a problem, although compressed video (DivX) still was. Then we went to 100Mb Ethernet and compressed video is no longer much of a bottleneck. Even now most modern machines come with Gigabit Ethernet ports that your average person can't fill with anything. Without new and bandwidth-intensive applications, people won't be inclined to improve their bandwidth.

        That's not to say someone won't come up with some application that requires a ton of bandwidth (distributed neural nets?), but none of our current applications would even really scale up to requiring 10GbE. The only realistic thing that comes to mind is some sort of Super HD video format, but anything like that is at least a decade away.
        • If bandwidth speed (along with encode/decode time on your local machine) were trivial, we could be doing things like centralized video processing. Instead of buying video cards for each computer, we buy a central video processor, and have everything routed through that. 100 computers on a LAN running graphic design programs rendering at full-tilt could chew up that bandwidth really quickly. And would save the company money on having to buy good graphics processors for each system.
        • Re: (Score:2, Insightful)

          by FrzrBrn ( 651892 )
          It's not a question of your end points needing that kind of bandwidth; it's a question of the links between ISPs and such. Look at any of the large router manufacturers today and see what kind of interfaces they're putting on their high-end gear: multiple 10Gbps ports. You can safely believe that if 100Gbps were available now, people would be using it. The step to 1Tbps is a large one, but there's no such thing as "too much" bandwidth.
        • Maybe. One of the things that I've noticed is that as the bandwidth increases it becomes harder and harder to fill it up.

          Really? I notice a very, very clear trend in the *other* direction.

          Web pages used to be 5k simple text, maybe a pic or two. Now they're routinely a 300k flash animation doohickey - for the HOME PAGE. Once upon a time, I didn't use my computer as a replacement for television. Today, it's normal to have 2 or 3 computers in my house watching different shows (a la Youtube, etc) concurrently.
        • by LWATCDR ( 28044 )
          I think you will see the migration to this the same way that a lot of people migrated to 1000Base-T.
          They will use it for connecting switches and for SANs.
          You may also see it replacing SATA or even for other internal links.
          I don't think the applications are the limit. The limit on how useful this is will be the machines hooked to it. Right now I wonder how many machines could saturate a 10GbE link?
        • you clearly aren't downloading nearly enough pr0n.
      • by Surt ( 22457 )
        In 25 years we better have 10 terabit wired speeds. I would be surprised if 100 terabit wasn't becoming common.
      • by phorm ( 591458 )
        In 25 years, it will be fine. But 2015 is well within that deadline.

        I think the grandparent meant "who needs it *now* or in the near future"?
      • Nobody is saying that we will never need Terabit Ethernet. The issue is that we don't need it now... and our computers would need to run at full performance just to handle the data. Sure, we could probably use Terabit Ethernet for things like distributed computing or speedy downloads. However, for most tasks Gigabit is fast enough and still isn't being fully utilised.
        • I would think terabit Ethernet would become mighty handy on the backbone, although you certainly wouldn't need it in the home. Distributed computing can and eventually will take up this bandwidth. You're right, for most tasks gigabit is enough. Of course, with my 72-port gigabit switch connecting to my backbone, I'd now need 72 gigabits at least to sustain full bandwidth, which I can't achieve. So I bottleneck it at the IDF with dual 10-gig links. Of course the main bottleneck is the server then, as dual gig NICs can'

      • Re: (Score:3, Insightful)

        640k is roughly:....

        - over 10 DEC PDP-11/45s running the RSTS time-sharing system

        The maximum memory on these things was 28K words (16-bit) without memory extension hardware. In the 70s we had 8 users on a system with 28K of memory sorting lists, printing reports, data entry, editing with TECO, batch runs in the background at low priority, with relatively few swap thrashing problems. I implemented an ultra-low priority batch mode that waited until there was nothing else running for 5 minutes before act

      • The thing is, this terabit standard was "supposed" to be available now. I think the point was that the standard was way too far ahead of its time. Gigabit is only now becoming a baseline standard. 10Gbit is barely on the horizon, 100Gbit still just a fantasy. Terabit will probably happen eventually, though I don't know if it would ever be used in a home, given the pressure to stick to wireless.
    • Re: (Score:2, Insightful)

      by L4t3r4lu5 ( 1216702 )
      Every ISP in the world, to meet the bandwidth allocations they've sold fraudulently.
    • Re: (Score:2, Insightful)

      by archen ( 447353 )
      Switches and machines that aggregate multiple saturated gigabit connections?
    • Re: (Score:3, Insightful)

      by EriDay ( 679359 )
      Regional ISPs. This is not a consumer product. Running ethernet on the backbone allows a homogeneous stack on all hosts from end to end.
    • Re: (Score:2, Funny)

      by kvezach ( 1199717 )
      Other than the LHC, who the hells needs that kind of bandwidth?

      Evil overlords who want to build their own Borg collective? If 10^10 bits per second bandwidth is required (comparable to the bandwidth of the bundle connecting the brain hemispheres), then you get 100 drones per wire. (On the other hand, wired Borg would be really limited -- for obvious reasons.)
    • Re: (Score:3, Funny)

      by Shados ( 741919 )
      Yeah yeah, who cares about all that abstract stuff. How many LIBRARIES OF CONGRESS is it?!
      • You're talking about data rate.

        So, make that LibrariesOfCongress/nanofortnight for example.
        • by Shados ( 741919 )
          Indeed. My short term memory had the last few lines of the parent post in mind, which were data quantities instead of rates, thus the mistake :)

          LoC per nanofortnight is full of win though.
    • by TeknoHog ( 164938 ) on Thursday February 28, 2008 @12:17PM (#22589852) Homepage Journal

      One terabit per second is roughly:

      81 lunabits per second.

    • by altoz ( 653655 )

      Other than the LHC, who the hells needs that kind of bandwidth?
      640k ought to be enough for everybody, too, right?

      if it's there, it'll get used, probably for a purpose that's just the twinkle in some person's eye right now. think innovation, man!
    • by Kjella ( 173770 )
      Haven't got a clue, but *my* Internet isn't fast enough so faster equipment all around means it'll trickle down. Do I need 4GB RAM in my box? No, but it's very nice to have when it's cheap. If they can get terabit connections for cheap, I'll happily settle for a GigE connection...
    • by sloth jr ( 88200 )
      It's needed as long-haul, not end-to-end. I would hope to see this implemented on the backbone. Then, standardize on last-mile internet connectivity of 100Mbit or more. I'd be just fine if YouTube supported decent video quality.

      sloth jr
  • by Anonymous Coward on Thursday February 28, 2008 @12:00PM (#22589626)


    For those of you playing at home, a TB is a lot more than you can ever use in a million years...unless you link off the pirate bay, then it's not quite enough.

    • by Macrat ( 638047 ) on Thursday February 28, 2008 @12:34PM (#22590052)
      1 Gb/s is a bit slow when backing up a 1TB hard drive to the network server at home. ;-)
      • Should have thought of that when you were buying your tape libraries...

         
      • by jcnnghm ( 538570 )
        And I'd be willing to bet you're IO bound, not bound by the speed of your network. It is hard to saturate a gigabit link at this point, at least with consumer equipment.
      • Really? I have a gigabit network with a NAS and it seems to be just as fast as local drives (both are 7200 rpm SATA 2). They both max out around 50MB/s. Unless you have faster hard drives, I think the bottleneck on a gigabit NAS would be the hard drives and not the network.
      • Re: (Score:3, Informative)

        by TubeSteak ( 669689 )

        1 Gb/s is a bit slow when backing up a 1TB hard drive to the network server at home. ;-)

        1 Gb/s is 128 MB/s. According to Storagereview.com [storagereview.com], the Seagate Barracuda ES.2 is the only terabyte drive with a transfer rate (104 MB/s) that maxes out high enough to even come near filling a gigabit pipe.

        The bottleneck is your hard drive.
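
        Put concretely (the drive figure is from the parent; the rest is unit conversion):

            # Time to move 1 TB at wire speed vs. at the drive's peak rate.
            tb = 1e12                                 # bytes
            wire = 128e6                              # 1 Gb/s ~= 128 MB/s, as above
            disk = 104e6                              # Barracuda ES.2 peak, per the parent
            print(f"wire-limited: {tb / wire / 3600:.1f} h")   # ~2.2 h
            print(f"disk-limited: {tb / disk / 3600:.1f} h")   # ~2.7 h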

  • Really, you mean one of these nebulous 5, 10, 20 years from now predictions actually hasn't come true? Amazing.

    By now, I'd have thought that with all the blown predictions like this, it would only be a story if one actually came true.
  • I'd sooner have... (Score:5, Insightful)

    by Channard ( 693317 ) on Thursday February 28, 2008 @12:04PM (#22589680) Journal
    .. a technology that lets homes receive fast internet no matter where they are. My area's not cabled up, and thanks to my being too far from the exchange (I just live in a normal street) I can't reliably get more than 512KB a second. Fix that, and you'd be laughing. Powerline networking, maybe?
    • Re: (Score:3, Insightful)

      by Anti_Climax ( 447121 )
      Well, if you're in the US, the greatest probability has you in the ATT/SBC/BellSouth regions. Regardless of your ILEC, did they stick you with shitty CPE like an Actiontec or SpeedStream? If so, switch it for something with a decent AFE, like a 2Wire, and you might be able to push that more toward the 1Mbit range. Not much of an increase, but better than nothing.

      Also remember that, even if you get a decent DSL modem, they may still have you allocated under a lower performance profile just out of average expect
    • by bazorg ( 911295 )
      well I'd sooner have teleportation or a vaccine for dental caries but what the hell...
  • by pla ( 258480 ) on Thursday February 28, 2008 @12:06PM (#22589712) Journal
    "He's also suggesting that this be done in a non-standard way"

    No, he suggested that five years ago.

    We don't yet have the technology described (wavelength-division multiplexing) in our homes because very, very few of us want to bother with fiber in our homes at all.

    You can push an amazing amount of data over glass, no one would claim otherwise. You can't, however, drape it across the floor and up the stairs to your switch for a quick LAN connection... Not only does terminating a fiber suck, the first time the dog steps on that little yellow wire, end of connection. By contrast, I've used Cat5 as a structural material (tied a PC to a hook on the ceiling with it) WHILE USING IT for data.

    So no, we won't see terabit ethernet anytime soon, unless someone figures out a way to push it over copper.
    • If you can lay fibre across the bottom of an ocean and drag it through conduits, you can sure as hell run it up your stairs. You're just using shitty fibre leads. I work in a telephone exchange and trust me, it takes more than a dog stepping on it to kill one of the fibre patch leads used there. Terminating it does suck, but most people don't terminate their own CAT-5 either.
      • by pla ( 258480 )
        If you can lay fibre across the bottom of an ocean

        You mean to compare 7+ layer armored "sharkbite" cable with the sort of single nearly-naked fiber you'd use as a patch cable? You have waaaaay more money than I do!

        I work in a telephone exchange and trust me, it take more than a dog stepping on it to kill one of the fibre patch leads used there

        You probably have much higher quality fiber (or more accurately, fiber with much stronger jacketing) than what average Joes would use. But that examp
    • Re: (Score:3, Interesting)

      by ChrisA90278 ( 905188 )
      I disagree. I bought a spool of fiber for some users who needed to deploy a temporary network and then roll it back up and use it again later. We bought a one-kilometer roll on a wooden spool, and they would, just as you say, pull it down stairwells, through doors, and toss it up in trees. Once they hung it over a freeway in Germany from some utility poles (had to hire local linemen for that one) and then after a few days rolled it back.

      I told the fiber cable sales guy I was going to test their sample by placi
    • by djp928 ( 516044 )
      Fiber is actually surprisingly tough. Yeah, it's glass, but as long as the light gets through, you've got a good signal. Where I work we once accidentally rolled a 700 pound cabinet over a fiber run. The fiber rolled up under the caster and got crunched a second time (once from the wheel going over it, once from it curling over the wheel and getting jammed in the housing) and we had to slowly back the cabinet off the fiber while gently tugging on the free end to get it out of the wheel housing. It came
    • "Ethernet" is layer 2, not layer 1. It can happily run over copper or fiber or any other physical medium.

      Yes, once you get beyond GbE then you are most likely going to need fiber. That means much more expensive equipment, considering I can get an 8-port GbE copper switch that supports jumbo frames for $100. You *might* be able to pump 10 GbE over copper for a few meters.
  • So, without going into too much detail, he said he expects a technology revolution, during which proprietary and innovative approaches to Terabit Ethernet will rule, at least at first.
    At first? With the way patent trolling is going right now, I wonder if anyone will see it at all, but I commend him for trying to break out of the stranglehold that corps have on the standards, in favor of innovation rather than profit and stifled competition.
  • by Bogtha ( 906264 ) on Thursday February 28, 2008 @12:12PM (#22589788)

    Has this guy done anything relevant in the past couple of decades? Here's a choice quote [infoworld.com] of his:

    Unix and the Internet turn 30 this summer. Both are senile, according to journalist Peter Salus, who like me is old enough, but not too old, to remember. The Open Sores Movement asks us to ignore three decades of innovation. It's just a notch above Luddism. At least they're not bombing Redmond. Not yet anyway.

    The hard part of being down on Linux and the Open Sores Movement is worrying about that menace hanging over us at year's end. No, not Y2K, but Linux's nemesis, W2K, Windows 2000, the operating system formerly known as Windows NT 5.0.

    W2K is software also from the distant past -- VAX/VMS for Windows. But it will overpower Linux. NT, now approaching 23x6 availability, is already overpowering Linux. NT and NetWare constitute 60 percent of server software shipments. All Unixes make up 17 percent, and Linux is a small fraction of that. When W2K gets here, goodbye Linux.

    • Re: (Score:3, Interesting)

      by anticypher ( 48312 )
      I had the pleasure to work on projects associated with Metcalfe at the beginning of my career, notably the migration of Ethernet I to Ethernet II standard. He was an autistic, anti-social, self-centered, egotistical curmudgeon from the start, and despite those charming qualities he nevertheless adopted an ivory-tower academic approach in his later life of hating anything created since his 15 minutes of brilliance.

      He can always point to DJB as a worse curmudgeon, so there is that solace in knowing he isn't t
  • by 1336 ( 898588 ) on Thursday February 28, 2008 @12:14PM (#22589820) Homepage
    As in the Robert Metcalfe whose Wikipedia article [wikipedia.org] has an "Incorrect predictions" section listing where he wrongly thought that "the internet would suffer a catastrophic collapse" in 1996 and this gem:

    Metcalfe is also known for his harsh criticism of open source software, and Linux in particular, predicting that the latter would be obliterated after Microsoft released Windows 2000:

    The Open Source Movement's ideology is utopian balderdash [... that] reminds me of communism. [...] Linux [is like] organic software grown in utopia by spiritualists [...] When they bring organic fruit to market, you pay extra for small apples with open sores - the Open Sores Movement. When [Windows 2000] gets here, goodbye Linux.
    Just because he did something really cool 35 years ago doesn't make him an expert on related matters now.
    • he wrongly thought that "the internet would suffer a catastrophic collapse" in 1996
      To be fair, IE 3 was released in 1996...
  • by Z00L00K ( 682162 ) on Thursday February 28, 2008 @12:15PM (#22589840) Homepage Journal
    For everyone who has been working with communications since the early datacom days, Shannon's law [wikipedia.org] has been important. It still is, and it means that you can't just push everything through; you have to consider the medium used.

    In a way it can be tweaked a bit, and that has caused confusion among those who aren't well versed in the difference between baud (modulation changes per second) and bps (bits per second).

    Anyway - classical phone modems can run at speeds up to 56kbps, but effectively they stay at 28 to 33kbps. And that on a line that actually provides only 3 kHz of bandwidth. The trick is that within that 3 kHz you can have a carrier with fewer than 3000 modulation changes per second, often 2400, and in each modulation change you transfer not just one bit but multiple bits. This is achieved by varying both the phase and the amplitude of the signal.

    So to utilize the cabling at the extreme speeds terabit Ethernet implies, you may have to resort to the same technique.

    There have also been other techniques in use, like multiple carrier frequencies, as in the Telebit Trailblazer modems. That technology was very resilient to interference compared to the CCITT standards, but it had other disadvantages instead.
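
    The quoted modem figures line up with Shannon's formula, C = B log2(1 + S/N). A quick sketch, assuming a 3 kHz line at roughly 35 dB SNR (an assumed, typical figure, not from the comment):

        # Shannon capacity C = B * log2(1 + S/N) for an analog phone line.
        import math

        def shannon_bps(bandwidth_hz, snr_db):
            return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

        print(f"capacity: {shannon_bps(3000, 35) / 1000:.1f} kbps")  # ~34.9 kbps
        print(f"bits per symbol: {33600 / 2400:.0f} at 2400 baud")   # 14 bits per change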

  • Progress! (Score:5, Funny)

    by DragonWriter ( 970822 ) on Thursday February 28, 2008 @12:25PM (#22589946)

    Five years ago, we were talking about using Terabit Ethernet in 2008. Those plans have been pushed back a bit, but Ethernet inventor Bob Metcalfe this week is starting to throw around a new date for Terabit Ethernet: 2015.


    So, 5 years ago, Tb-E was 5 years away, and now it's 7 years away. So by 2015, it should be about 10 years away, and by 2025 it should be about 14 years away, etc.
    • Re: (Score:3, Funny)

      by Hugonz ( 20064 )

      So, 5 years ago, Tb-E was 5 years away, and now it's 7 years away. So by 2015, it should be about 10 years away, and by 2025 it should be about 14 years away, etc.

      Talk about exponential backoff...

  • Let's see: 10 Gb Ethernet came out about 5 years ago. It started out only being able to do about 2 Gb/s, with tweaks to PCI-X. PCI-X, at best, I have only seen capable of around 6 Gb/s with a NIC. PCIe x8 does much better, and you can get very close to ~10 Gb/s. Theoretically, x8 can do 16 Gb/s... and there is a 16-lane version as well. Current standards being developed are for 40 Gb/s and 100 Gb/s. 40 isn't too bad since backplanes can already do that speed, of course, if you have multiple
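
    The x8 figure follows from PCIe 1.x lane arithmetic (published PCIe 1.x numbers, applied here as a quick illustration):

        # PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> 2.0 Gb/s usable per lane.
        lane_gbps = 2.5 * 8 / 10
        for lanes in (1, 4, 8, 16):
            print(f"x{lanes}: {lane_gbps * lanes:.0f} Gb/s per direction")  # x8 -> 16 Gb/s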
  • by s31523 ( 926314 ) on Thursday February 28, 2008 @12:45PM (#22590158)
    Forget terabit Ethernet. I will settle for full, actual rated speed (10, 100, 1000 Mbps, etc.) for both transmit and receive. Even on my home network, I rarely get full 100% utilization on my LAN. Some PCs are Linux, some are Windows. None of the machines ever really reaches its full potential. I have looked at other networks as well, even my work LAN, and they have similar issues. I am not a network guru and don't want to spend the time tweaking and configuring. The damn Gbps NIC and network drive I bought should just plug and go, and I expect to see speeds reasonably close to 1 Gbps, but nope. I see like 1% utilization. Seriously, let's make current technology work as advertised before we start claiming super-fast speeds on other vapor-ware technology. Please?
    • by ivan256 ( 17499 ) on Thursday February 28, 2008 @01:03PM (#22590360)
      Spend 5 minutes troubleshooting.

      Consumer grade copper gigabit in crappy low-end PCs (made in the last 4 years) should be able to give you at least 300 Mbit/s of transferred data over TCP, given 10 minutes of tuning and the correct cables.

      Don't use a USB NIC. Don't transfer your data to/from a 4000rpm laptop hard drive... Etc..

      You're not going to get 1 Gbps though, 'cause your hard drive probably can't go that fast. The average low-end desktop drive isn't going to give you more than 30 MB/sec. Depending on your system, the bus you have the NIC plugged into may not be able to do 1000 Mbps. Your network can handle the advertised speed just fine, though. If you've got high-end gear (motherboard, disk array) you can peg a gigabit Ethernet link in a point-to-point transfer... Right now it's not the Ethernet holding consumer-grade equipment back.
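
      To take the disks out of the equation, measure memory-to-memory: iperf is the usual tool, but a minimal stand-in looks like this (the script name, port, and buffer sizes are arbitrary placeholders):

          # Minimal memory-to-memory TCP throughput test (an iperf stand-in).
          # Usage: "python tput.py server" on one box, "python tput.py client <host>" on another.
          import socket, sys, time

          PORT, CHUNK, SECONDS = 5001, 64 * 1024, 10

          if sys.argv[1] == "server":
              with socket.create_server(("", PORT)) as srv:
                  conn, _ = srv.accept()
                  while conn.recv(CHUNK):          # drain and discard; no disk involved
                      pass
          else:
              with socket.create_connection((sys.argv[2], PORT)) as sock:
                  payload = b"\0" * CHUNK
                  sent, start = 0, time.time()
                  while time.time() - start < SECONDS:
                      sock.sendall(payload)
                      sent += len(payload)
                  print(f"{sent * 8 / (time.time() - start) / 1e6:.0f} Mbit/s")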
      • I've spent more time on this than it probably deserves and I have found that the order of performance can be the opposite of the advertised speeds. With all other hardware equal, I have found that, usually, FireWire 400 is faster than USB 2.0 which is faster than gigabit ethernet (with a cat 6 crossover cable).
        • by tknd ( 979052 )

          Tip: use FTP or something really low level to determine the actual bandwidth you're getting over the wire. There are also some programs that will specifically test bandwidth without other limitations (disk speed).

          I've found some protocols like SMB can be really flaky at high speeds, but FTP has reliably been able to hit the hard disk transfer speed limit, which is much lower than 1 Gbps.

        • by ivan256 ( 17499 )
          GigE is really much faster than the other two, but it is suited to different kinds of traffic. FireWire and USB mass storage do small block transfers. GigE performs optimally with larger packet sizes. You should use a protocol which takes this into account: not SMB on Windows, unless you want to do a *lot* of tuning... FTP with a client that supports a sliding window size (not Internet Explorer, but almost every other FTP client), NFS, HTTP, or SCP if you have a very low latency link and a fast CPU...
  • by scaryjohn ( 120394 ) <john,michael,dodd&gmail,com> on Thursday February 28, 2008 @12:47PM (#22590182) Homepage Journal

    I humbly submit that the R&D money that could have increased the upper boundary of Ethernet speeds was spent to bring wireless to the masses. Five years ago, if you'd told me WiFi would now be a year away from nominal speeds of 250Mb/s I might have thought you were talking about prototypes. The dorms where I was a tech had just finished upgrading from 10Mb/s to 100Mb/s Ethernet. The few laptops that were sold with external wireless cards had nominal speeds of 10Mb/s. But now we have 802.11g and next year we should have 802.11n on the store shelves.

    I think we've gained much more by pushing out the median speed of wireless than we could have gained from pushing out the marginal speed of twisted pair.

    • But now we have 802.11g and next year we should have 802.11n on the store shelves.


      I don't know about you, but I've been enjoying 802.11n for the past few months [apple.com] quite happily.

      The AirPort Extreme BaseStation (and Leopard) even includes the drivers to upgrade earlier MacBook/MacBook Pros that have the hardware and not the drivers.
  • Any chance Bob's patents on Ethernet are expiring? Don't need no steenking standards, mon. Need new ways, we do! My ways! My Money! MY MONEY!

    ps- all you hotshot engineers; rotsa ruck beating the real thing.
  • It's in that same universe where everyone uses "... free GNU on their 200 MIPS, 64M SPARCstation-5".

    (http://en.wikipedia.org/wiki/Tanenbaum-Torvalds_debate#Erroneous_predictions)
