Networking

New Ethernet Standard — Both 40 and 100 Gbps 141

Artemis recommends a blog entry that does a nice job of summarizing the history and current state of the Higher Speed Study Group and the IEEE's next-generation Ethernet standard. "When IEEE 802.3ba was originally proposed [there] were multiple possible speeds that were being discussed, including 40, 80, 100, and 120Gbps. While there options were eventually narrowed down to just two, 40 and 100Gbps, the HSSG had difficulties [deciding] on the one specific speed they wanted to become the new standard... [T]wo different groups formed, one which wanted faster server-to-switch connections at 40Gbps and one which wanted a more robust network backbone at 100Gbps... Unable to come up with a consensus the HSSG decided to standardize both 40Gbps and 100Gbps speeds..."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by superpulpsicle ( 533373 ) on Thursday July 26, 2007 @12:49PM (#19998633)
    Major telcos have increased the upload speed to 800k at a cost of only $70.00 a month.
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      The telcos know full well that once they let the genie out of the bottle, there is no turning back. REAL* broadband service (10+Mb/s at minimum) across the entire US, i.e. dial-up becoming infrastructurably (new word??) unmanageable and non-existent, means Cable TV and Satellite become unstable as a market. Period. The media companies know this, which is why HD mandates keep getting pushed back. It's an all-out fight for who can get their fist in the cookie jar first.

      Better get used to the idea that HIGHS
      • by Anonymous Coward on Thursday July 26, 2007 @01:16PM (#19999069)
        "There is one hope though. And its name is Google......"

        No. There is another.
        • by Poltras ( 680608 )
          Luke Skywalker?

          /out

        • by jd ( 1658 )
          But [itnews.com.au] now [freerepublic.com] his [geek.com] provider [theregister.co.uk] is [bbc.co.uk] complete...

          (Given that last link, expect the RIAA to become part of Homeland Security.)

          The Japanese have gigabit with IPv6 to the home already, but this makes that look like dial-up in comparison.

      • by nuzak ( 959558 )
        > There is one hope though. And its name is Google

        Google's proposed free ad-supported wi-fi for SF is like 300 kilobits. Better than nothing, I'll grant, but the phone companies are pitching a screaming hissy fit over even that. Why on earth do you think Google can implement or is even interested in universal high-speed access?
        • Google's proposed free ad-supported wi-fi for SF is like 300 kilobits. Better than nothing, I'll grant, but the phone companies are pitching a screaming hissy fit over even that. Why on earth do you think Google can implement or is even interested in universal high-speed access?
          Because it will make them more money?
      • by morcego ( 260031 )

        REAL* broadband service (10+Mb/s at minimum)


        Isn't broadband defined as 2Mbps+ ? From what I've heard in my telco days, that was the speed threshold.
        • Re: (Score:3, Informative)

          by Detritus ( 11846 )
          Not really. Broadband usually means FDM, like a cable plant or a microwave relay.

          I like this definition:

          Narrowband, Wideband, and Broadband

          Narrowband is a transmission medium or channel with a single voice channel (with a carrier wave of a certain modulated frequency). Wideband is a transmission medium or channel that has a wider bandwidth than one voice channel (also with a carrier wave of a certain modulated frequency). Broadband refers to telecommunication that provides multiple channels of data over a single communications medium using frequency division multiplexing.

          Through the Wires: Bandwidth [thinkquest.org]
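
          (A minimal illustration of the FDM idea behind that definition; a toy Python sketch with arbitrary carrier frequencies standing in for voice channels, nothing from the quoted source.)

          # Toy frequency-division multiplexing: several channels share one
          # medium by riding on different carrier frequencies.
          import numpy as np

          fs = 100_000                      # sample rate of the shared medium, Hz
          t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal

          channels = {                      # (message freq, carrier freq) in Hz, arbitrary
              "ch1": (440.0, 8_000.0),
              "ch2": (440.0, 16_000.0),
              "ch3": (440.0, 24_000.0),
          }

          def modulate(message_hz, carrier_hz):
              """Simple AM: the message tone is multiplied onto its carrier."""
              return np.sin(2 * np.pi * message_hz * t) * np.sin(2 * np.pi * carrier_hz * t)

          # "Broadband" in the FDM sense: the medium carries the sum of all channels.
          medium = sum(modulate(m, c) for m, c in channels.values())

          # The spectrum shows three separated bands, one per channel.
          spectrum = np.abs(np.fft.rfft(medium))
          freqs = np.fft.rfftfreq(len(medium), 1 / fs)
          for name, (_, carrier) in channels.items():
              band = spectrum[(freqs > carrier - 1000) & (freqs < carrier + 1000)]
              print(f"{name}: energy near {carrier/1000:.0f} kHz = {band.sum():.1f}")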

        • by DrSkwid ( 118965 )
          ntl: got 128k officially recognised as broadband for advertising purposes here in the UK.
        • You are correct, Broadband = 2Mb/s+; all other contrary comments are silly marketeer-spin for politicians and corporatists.
          Also, the USA ranks 20+ in telecommunications (we ain't #1), because of corporatist marketeer-spin to silly politicians.

          AAMOMFF, the USA ranks #1 in international debt only. We're #1, We're #1, We're #1 in debtor nations. THANK GOD and POLITICIANS!

          !HAVEFUN!
  • Cable Length (Score:5, Interesting)

    by fishybell ( 516991 ) <fishybell.hotmail@com> on Thursday July 26, 2007 @12:55PM (#19998729) Homepage Journal
    Interesting to see that the faster 100Gbps also has the longer cable lengths built into the standard. From TFA:


    40Gbps can be 1 meter long on the backplane, 10 meters for copper cable and 100 meters for fiber-optics. The 100Gbps standard includes specifications for 10 kilometer and 40 kilometer connections over single-mode fiber.

    I'm seeing the 100Gbps being used for infrastructure, with its larger bandwidth and longer cable lengths, while the 40Gbps would be used for datacenters, server rooms, etc., with its faster "connect" speeds (clarification on what exactly this would mean?).
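
    (As a side note, the reach figures quoted above drop naturally into a small lookup table; this is a hypothetical Python helper built only from the numbers in TFA, not anything taken from the standard itself.)

    # Quoted reach per (rate, medium) pair, in meters.
    REACH_M = {
        ("40G", "backplane"): 1,
        ("40G", "copper"): 10,
        ("40G", "fiber"): 100,
        ("100G", "single-mode fiber"): 40_000,   # 10 km and 40 km variants; longer one shown
    }

    def options_for(distance_m):
        """Return the (rate, medium) pairs whose quoted reach covers distance_m."""
        return [pair for pair, reach in REACH_M.items() if reach >= distance_m]

    print(options_for(5))      # in-rack run
    print(options_for(80))     # across a machine room
    print(options_for(5_000))  # between sites: single-mode 100G only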

  • one which wanted faster server-to-switch connections at 40Gbps and one which wanted a more robust network backbone at 100Gbps

    Why is the 40 Gbps one considered "faster" and the 100 Gbps one considered "more robust"?
    • by ciroknight ( 601098 ) on Thursday July 26, 2007 @12:59PM (#19998817)
      You misunderstand; one group said "We need to connect our servers to the switches with a faster connection." The other group said "We need to make our network backbone more robust by adding faster connections between buildings and such." The group that needed faster server-switch speeds doesn't need 100Gbps, they just need better than what they've got. The group that needed faster building-to-building/infrastructure links didn't believe 40Gbps was fast enough.

      Adding both takes care of both groups of people.
      • by GeckoX ( 259575 )
        That's where I'm confused on this though...adding ONE would appear to take care of both groups of people...what am I missing? What does the 40Gb standard have that the 100Gb standard doesn't cover?

        If the answer is nothing, then this seems to be a pretty stupid move...
        • Re: (Score:1, Informative)

          by Anonymous Coward
          Looks like the 40 allows for fiber or copper connections, while the 100 is pretty much fiber-only (for now?). Fiber is still far more expensive than copper, especially when you're just interconnecting two switches that are next to one another in the rack.
          • by jabuzz ( 182671 )
            That is just such complete nonsense. Firstly, fibre is not "far" more expensive than copper; it is a bit more expensive. However, look at the cost of a 10Gbps switch, and now tell me that the cost of fibre is prohibitive. If you can afford the switch you can sure as hell afford a few fibre patch leads. Not only that, I bet it will be CX4-type Infiniband cables, which are not cheap and far more trouble in a rack than a fibre patch lead.

            What beats me is why they are bothering with multimode fibre. The cost of stocking both types quickly outweighs the slight increase in cost for single mode.
            • by psmears ( 629712 )

              That is just such complete nonsense. Firstly, fibre is not "far" more expensive than copper; it is a bit more expensive. However, look at the cost of a 10Gbps switch, and now tell me that the cost of fibre is prohibitive. If you can afford the switch you can sure as hell afford a few fibre patch leads. Not only that, I bet it will be CX4-type Infiniband cables, which are not cheap and far more trouble in a rack than a fibre patch lead.

              I'd be surprised if they did that; the biggest advantage of copper over fibre is that everyone still has it! If you're going to move to Infiniband (multi-coax) cables, you might as well go for fibre as you say...

              What beats me is why they are bothering with multimode fibre. The cost of stocking both types quickly outweighs the slight increase in cost for single mode.

              There are plenty of places where it's practical and economical just stocking multimode fibre...

        • by Midnight Thunder ( 17205 ) on Thursday July 26, 2007 @01:16PM (#19999073) Homepage Journal
          What does the 40Gb standard have that the 100Gb standard doesn't cover?

          In one word: cost. The 100Gb connection is limited to fibre optics, whereas the slower connection supports copper. Fibre optics are still more expensive than copper. It should also be noted that backbones deal with more traffic than non-backbone networks. Think of the difference between inter-city highways and local back streets and you should get the picture.
          • Re: (Score:3, Funny)

            by steveo777 ( 183629 )

            Think of the difference between inter-city highways and local back streets and you should get the picture.

            So does that mean that they're either coated in ice or being dug up by MN/Dot [state.mn.us]?

          • Would you mind coming to my office to explain a 'database' to the manager?

            Then you can move onto normalization.

            Thanks in advance.
          • by haruchai ( 17472 )

              Is it still true that fiber costs more than copper? Considering that copper's price has long been at the point where thieves have been stealing copper plating off church roofs, that is a shocking statement of the relative cost of fiber-optics.

            • Is it still true that fiber costs more than copper? Considering that copper's price has long been at the point where thieves have been stealing copper plating off church roofs, that is a shocking statement of the relative cost of fiber-optics

              The cabling is not the only thing that needs to be taken into account. Think of optical network cards, switches and routers, since none of them come cheaply.
        • Re: (Score:1, Informative)

          by Anonymous Coward
          Let me clarify this even more. The 40Gb standard is aimed at LANs. The 100Gb standard is aimed at WANs / the Internet backbone. One is a method well suited to connecting machines in one room or a building to each other; the other is a way to connect cities. This is actually a very remarkable new role for "ethernet" standards, since most backbone trunk lines use special protocols today.

          Make sense?
    • by Doctor Memory ( 6336 ) on Thursday July 26, 2007 @01:13PM (#19999043)
      I wonder if it has something to do with latency. Maybe the 40Gb connections are faster because they have a simpler routing protocol or they use smaller packet sizes with no CRC. I haven't been able to get through to the actual proposed spec yet, so it's hard to say...
      • I wonder if it has something to do with latency. Maybe the 40Gb connections are faster because they have a simpler routing protocol or they use smaller packet sizes with no CRC. I haven't been able to get through to the actual proposed spec yet, so it's hard to say...

        As a general rule, Ethernet does not concern itself with routing protocols. It's to do with that whole "layering" thing you may have heard of. It's really quite popular in the world of networking.

        And I would bet a whole lot of money that they a
  • Ars Technica? (Score:5, Interesting)

    by conigs ( 866121 ) on Thursday July 26, 2007 @12:56PM (#19998747) Homepage
    I'm normally not one to do this, but the article linked is nearly identical to the coverage over at Ars Technica [arstechnica.com]. It seems that only a few words were changed, without even a link to the original Ars article.
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      More likely is that they both cribbed the same press release.

    • Re: (Score:3, Informative)

      by Red Flayer ( 890720 )
      It's a press release. Check out ITwire.au, or do a google news search for HSSG. You'll see that the release went out 7/23, with almost everyone publishing on 7/24 [google.com]. This guy was just a day late (7/25).
    • by evw ( 172810 ) on Thursday July 26, 2007 @01:11PM (#19999011)
      If you want all the gory details rather than a copy of a summary of a summary, here is a link to all the presentations at the meeting.

      http://www.ieee802.org/3/hssg/public/july07/index.html [ieee802.org]

      Read through the minutes [ieee802.org] (warning PDF) to get a summary.

      Motion #4: Move that the HSSG adopt the following objectives in replacement of
      existing HSSG objectives:

      o Support full-duplex operation only
      o Preserve the 802.3 / Ethernet frame format utilizing the 802.3 MAC
      o Preserve minimum and maximum FrameSize of current 802.3 standard
      o Support a BER better than or equal to 10^-12 at the MAC/PLS service interface
      o Provide appropriate support for OTN
      o Support a MAC data rate of 40 Gb/s
      o Provide Physical Layer specifications which support 40 Gb/s operation over:
      - at least 100m on OM3 MMF
      - at least 10m over a copper cable assembly
      - at least 1m over a backplane
      o Support a MAC data rate of 100 Gb/s
      o Provide Physical Layer specifications which support 100 Gb/s operation over:
      - at least 40km on SMF
      - at least 10km on SMF
      - at least 100m on OM3 MMF
      - at least 10m over a copper cable assembly
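
      (For a rough sense of what the 10^-12 BER objective means at these rates, a back-of-the-envelope Python calculation; simple arithmetic on the numbers above, not something taken from the minutes.)

      # Expected errored bits per second at the worst allowed BER of 1e-12.
      for rate_gbps in (40, 100):
          errored_bits_per_sec = rate_gbps * 1e9 * 1e-12
          print(f"{rate_gbps} Gb/s at BER 1e-12: ~{errored_bits_per_sec:.2f} errored bits/s "
                f"(about one every {1 / errored_bits_per_sec:.0f} s)")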
  • Standards (Score:1, Offtopic)

    by edittard ( 805475 )
    The great thing about standards is there's so many to choose from.
    • The great thing about standards is there's so many to choose from.
      Unfortunately, not everyone chooses to follow them.
  • Exactly how far will ethernet efficiently scale? As I understand it there were problems with 1Gbps as first planned, leading to jumbo frames, and ethernet isn't (wasn't) that efficient a protocol.

    Are there any other serious contenders which could/should be examined as a replacement for ethernet?
    • by XSforMe ( 446716 )
      Yep, Token Ring was indeed more efficient. Good luck reviving it.
      • Re: (Score:3, Insightful)

        by DFDumont ( 19326 )
        >Yep, Token Ring was indeed more efficient. Good luck reviving it.

        Token Ring (spitting) was only more efficient compared to the original ethernet specification, with all of its collisions. Once we went to a switched architecture and reduced all conversations to two participants, that advantage evaporated.

        Remember this, being deterministically bad is still bad. Have you ever been on a ring with > 200 nodes? Don't.

        Ethernet won because it was cheap. It beat token ring to switching. It beat everyth
        • by Nynaeve ( 163450 )
          Ethernet ... beat everything else to get to 100Mbps.

          Are you forgetting FDDI/CDDI [cisco.com]? As I recall, it was available before 100 Mbps Ethernet.

          "The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using fiber-optic cable."

          "Copper Distributed Data Interface (CDDI) is the implementation of FDDI protocols over twisted-pair copper wire."
        • by Detritus ( 11846 )
          Collisions are good. That's how you arbitrate access to the media. The important part of CSMA/CD is the CD part, which removes most of the penalty from collisions. Token ring salesmen spread a lot of FUD about how Ethernet behaves under load, which many people still believe.

          See:

          D. Boggs, J. Mogul, and C. Kent, "Measured Capacity of an Ethernet: Myths and Reality," WRL Research Report 88/4, Western Research Laboratory, September 1988. http://citeseer.ist.psu.edu/boggs88measured.html [psu.edu]
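
          (For anyone who wants the mechanics behind the CD argument, here is a toy Python sketch of truncated binary exponential backoff, the part of CSMA/CD that resolves a collision once it has been detected. The constants follow classic 10 Mb/s Ethernet; the collision probability is made up purely for illustration.)

          import random

          SLOT_TIME_US = 51.2   # classic 10 Mb/s slot time (512 bit times)
          MAX_ATTEMPTS = 16
          BACKOFF_CAP = 10      # exponent is capped at 10, hence "truncated"

          def backoff_delay(attempt):
              """Wait a random number of slots in [0, 2^k - 1] after the k-th collision."""
              k = min(attempt, BACKOFF_CAP)
              return random.randint(0, 2**k - 1) * SLOT_TIME_US

          def send(collision_probability=0.3):
              """Toy sender: each attempt collides with some probability, then backs off."""
              for attempt in range(1, MAX_ATTEMPTS + 1):
                  if random.random() > collision_probability:
                      return attempt                    # frame got through
                  print(f"collision #{attempt}, backing off {backoff_delay(attempt):.1f} us")
              raise RuntimeError("excessive collisions, frame dropped")

          print("delivered on attempt", send())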

        • by tbuskey ( 135499 )
          A place I worked at had a FDDI/CDDI backbone at 100Mb/s before 100baseT. They also had 155Mb/s ATM. This was used to develop the 100baseT switch. It turned out to be too late.
      • by Intron ( 870560 )
        Fibre Channel arbitrated loop was pretty much like token ring, and has been largely abandoned for the same reasons. Switched fabrics support multiple connections, and single misbehaving machines can't create havoc for everyone else on the loop.
        • by jabuzz ( 182671 )
          Except most if not all fibre channel devices still support arbitrated loops. I have an arbitrated loop at 4Gbps at work hooking up a tape library to a server. I would have been nuts to buy a fibre channel switch for the job.
          • by Intron ( 870560 )
            If you only have two connections, you can do point-to-point, there is no need to run the arbitrated loop protocol.
    • Re: (Score:2, Informative)

      by hardburn ( 141468 )

      The big problem with ethernet's design was its "spew everything to everyone" mentality. In practice, this was fixed by good switches becoming almost as cheap as hubs.

      The main alternative to ethernet was token ring, which works much like a meeting where you have a big stick that's passed around, and only the person with the stick can talk.
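
      (A toy Python sketch of that "big stick" rule, purely illustrative: stations sit on a ring and only the current token holder may transmit.)

      from collections import deque

      def token_ring(stations, rounds=2):
          """Toy token passing: only the station holding the token may transmit."""
          ring = deque(stations)
          for _ in range(rounds):
              for _ in range(len(ring)):
                  holder = ring[0]
                  if holder["queue"]:
                      print(f'{holder["name"]} holds the token, sends {holder["queue"].pop(0)}')
                  else:
                      print(f'{holder["name"]} holds the token, nothing to send')
                  ring.rotate(-1)          # pass the token to the next station

      token_ring([
          {"name": "A", "queue": ["frame-1"]},
          {"name": "B", "queue": []},
          {"name": "C", "queue": ["frame-2", "frame-3"]},
      ])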

    • Re: (Score:3, Insightful)

      Ethernet is useful because it's cheap: I can attach a 10bt host to a switch and have it transmit the same frame over 100kbt with very little work. I have clients that love Ethernet; it's orders of magnitude cheaper than its main alternative, Packet over Sonet. So pretty much it's good enough for most, and cheap. In the PC server world the marketing guys want to say they have the latest and greatest copper Ethernet built in and supporting every old standard back to 10bt. This means they ask their chip supp
    • by sharkey ( 16670 )

      As I understand it there were problems with 1Gbps as first planned leading to jumbo frames

      So we'll move on to Hyper and Monster frames as the tech speeds up. Going along with those will be Mini-Hyper and Mini-Monster frames, of course.

    • Re: (Score:3, Insightful)

      by gad_zuki! ( 70830 )
      Probably for quite a bit. The biggest hurdle with ethernet is dealing with half-duplex connections and all the collisions/detections. These new standards don't even do half-duplex. Everything is full duplex, thus requiring a switch. You've tossed out your biggest setback right there.

      Ethernet still is pretty lean. I can imagine an alternative to it, but it might not be worth the trouble, like the AnyLAN stuff from a while back. We also still use TCP, but really don't need all the overhead it generates.
    • Exactly how far will ethernet efficiently scale? As I understand it there were problems with 1Gbps as first planned leading to jumbo frames, and ethernet isn't (wasn't) that efficient a protocol.

      Are there any other serious contenders which could/should be examined as a replacement for ethernet?

      Perhaps we should look toward a high speed LocalTalk or PhoneNet implementation?

    • Re: (Score:2, Insightful)

      The scaling issue had to do with CSMA/CD, collision detection. To detect collisions, the network propagation diameter/delay must be at most the slot time.

      These newer versions of Ethernet apparently don't bother supporting CD. All links must be switched through a hub, period. The hub saves up your packet and prevents collisions, and forwards your packet onto the next link. The "Ether" and "Like Talking" aspect of Ethernet has been lost. Ethernet has become just another framing choice other than SONET, f
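
      (To put rough numbers on that constraint: the round-trip propagation time has to fit inside the time it takes to send a minimum frame, so higher bit rates shrink the allowable network diameter. A back-of-the-envelope Python sketch, ignoring repeater and PHY delays.)

      MIN_FRAME_BITS = 512          # classic 64-byte minimum frame
      PROPAGATION_M_PER_S = 2.0e8   # roughly 0.66 c in copper or fibre

      for rate_bps in (10e6, 100e6, 1e9):
          slot_time_s = MIN_FRAME_BITS / rate_bps
          max_diameter_m = slot_time_s * PROPAGATION_M_PER_S / 2   # out and back
          print(f"{rate_bps/1e6:6.0f} Mb/s: slot {slot_time_s*1e6:5.2f} us, "
                f"max diameter ~{max_diameter_m:5.0f} m")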
  • Looks like (Score:1, Funny)

    by iminplaya ( 723125 )
    the network will soon be faster than the computer. Any chance we can siphon off some of this speed to do some computing? Make the network become the computer?
    • Re: (Score:3, Insightful)

      by DaMattster ( 977781 )
      Well, if you think about it, Beowulf and similar Linux clusters take advantage of network speed to distribute processing load. This isn't really a case where the network does the computing, but with 40Gb of bandwidth you can perform some serious parallel processing.
      • Re: (Score:3, Insightful)

        by brsmith4 ( 567390 )
        With the 12x QDR InfiniBand spec, 96Gb (after factoring the protocol's overhead) is already on the table and at much lower latencies. This is more helpful for parallel applications (though it really depends on the properties of your application). I've not even worked with 12x nor any applications that would benefit from it. We currently run a 4x SDR setup (which will soon be upgraded to DDR) and it is ample for most of our needs. A cheap 40Gb ethernet solution would be killer for consolidating node mana
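
        (The 96Gb figure falls out of the lane arithmetic, assuming the 8b/10b line coding used through QDR; a quick Python check of that assumption.)

        lanes = 12
        signal_rate_gbps = 10.0     # QDR signalling rate per lane
        coding_efficiency = 8 / 10  # 8b/10b: 8 data bits carried per 10 line bits

        raw = lanes * signal_rate_gbps
        usable = raw * coding_efficiency
        print(f"raw: {raw:.0f} Gb/s, usable after 8b/10b: {usable:.0f} Gb/s")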
      • ...you can perform some serious parallel processing.

        You know, I wish people would make up their mind on these things. First we are being told that parallel is faster, then it's serial. Using hard drive interfacing here. What's it gonna be? Are we going to be told ten years from now that inline serial processing is faster? This is like these "nutritionists" telling us that eggs are bad for you and margarine is good. Later they come out with just the opposite. I guess I'll just keep what I have until it runs o
      • by CETS ( 573881 )
        Whooooosh.
    • Saw it in Nature magazine; the short answer is yes, we can:

      http://en.wikipedia.org/wiki/Parasitic_computing [wikipedia.org]
    • You can think of it this way:

      If CPUs are so fast that pushing the data to be executed elsewhere over a LAN is a performance hit, then parallel processing will go out of style.

      If networks are so fast that pushing data to be executed elsewhere over a LAN is a net performance gain, then parallel processing is back in style.

      Right now, we're seeing some pretty damn fast CPUs with multiple cores. Once these gains slow down and network gains increase you'll see parallel stuff everywhere again.
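
      (One way to frame that trade-off is a toy model with made-up numbers: shipping work over the LAN only pays off when the time saved by the faster remote CPU exceeds the time spent moving the data. A hypothetical Python sketch.)

      def offload_pays_off(work_bytes, local_gflops, remote_gflops,
                           flops_per_byte, link_gbps, latency_s=50e-6):
          """Toy model: is it faster to ship the work over the LAN than to compute locally?"""
          flops = work_bytes * flops_per_byte
          t_local = flops / (local_gflops * 1e9)
          t_transfer = latency_s + (work_bytes * 8) / (link_gbps * 1e9)
          t_remote = flops / (remote_gflops * 1e9)
          return t_transfer + t_remote < t_local

      # 100 MB of work, remote node 4x faster; only the fat pipes make it worthwhile.
      for link in (1, 40, 100):   # Gb/s
          print(f"{link:3d} Gb/s link: offload pays off ->",
                offload_pays_off(100e6, 10, 40, 50, link))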
    • by DrSkwid ( 118965 )
      IBM's Blue Gene [ibm.com] still uses Ethernet. Eric's [blogger.com] added Jumbo Frame support to Plan 9 From Bell Labs which boots on the cpu and I/O nodes [blogspot.com] now.

      In that case the network has its own dedicated nodes, so yes, the network is the computer!
  • nice increase (Score:1, Interesting)

    by poetmatt ( 793785 )
    Considering that this stuff was doing 10Gbps in 2005, to see 100Gbps in 2007 is a pretty nice upgrade...my question is, given that the speeds are increasing, will we see any of this as consumers in the US? Not a "providers suck" (which we already know), but more of a "will this potentially make connections cheaper"?
    • Re: (Score:3, Interesting)

      Those of us in security are dreading this. IDS/IPS companies are only now dealing efficiently with multi-gigabit solutions for a reasonable price, and no one that I have talked to will do line-speed 10Gbps processing (some boxes can use parallel processing to handle streams from multiple inputs going up to 10Gbps, but not from a single line through a single processor to ensure that attack streams are properly reviewed). I shudder to think of what a 40Gbps stream will be like to monitor.
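
      (To illustrate why line-rate inspection hurts, here is some simple arithmetic on worst-case packet rates for minimum-size frames, counting the preamble and inter-frame gap; a Python sketch, not vendor numbers.)

      # 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 672 bits on the wire.
      BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8

      for rate_gbps in (1, 10, 40, 100):
          pps = rate_gbps * 1e9 / BITS_PER_MIN_FRAME
          print(f"{rate_gbps:3d} Gb/s: ~{pps/1e6:6.1f} Mpps worst case, "
                f"{1e9/pps:5.1f} ns per packet")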
      • 40Gbps is from server or desktop to the switch, chief. Try doing IDS/IPS on a 100Gbps link.

        I'll be able to cook eggs on my Snort box.

        • I suspect that we'll be seeing 40Gbps on the switch interlinks long before I see 10Gbps links to the servers here. There just isn't a major call for quite that much on a regular basis, and the 100Gbps ports are going to be very, very pricey.
      • The practice of using general-purpose processors for 'streamed' data is being phased out in favor of FPGAs and DSPs, since they excel at the task.
    • I think the problem with connections in the U.S. is mostly related to problems with the last mile, or the "last few miles" (the backhaul from the local node to the C.O.).

      I suppose that this might make the node-to-CO link faster/cheaper, which would be good because it would raise the amount of actual capacity that the ISPs have to oversell, meaning that when everyone else in your neighborhood is trying to get online and play WoW, there's still some bandwidth left ... but in terms of actually making your inte
  • Edit much? (Score:3, Funny)

    by sakonofie ( 979872 ) on Thursday July 26, 2007 @01:21PM (#19999137)

    When IEEE 802.3ba was originally proposed [there] were multiple possible speeds that were being discussed, including 40, 80, 100, and 120Gbps. While there options were eventually narrowed down to just two, 40 and 100Gbps, the HSSG had difficulties [deciding] on the one specific speed they wanted to become the new standard...
    Slashdot editors and their homonyms have a wonderful relationship. There may be "there"s in the summary, but they're subject to their edits.
    • I misspeak and write these words all the time, all the while understanding their proper meanings, but nonetheless... say what?
  • Who will be the lucky slashdotter?
  • excellent! (Score:3, Funny)

    by hcdejong ( 561314 ) <hobbes@nOspam.xmsnet.nl> on Thursday July 26, 2007 @02:00PM (#19999777)
    Why have one standard when you can have two instead! This strategy has worked so well in the past...
    • If you'd read the information properly, you'd realize they're directed at different needs and cost. The 40Gbps can work over copper, but has limited range, whereas the 100Gbps is high-distance but fiber-only.
  • Assuming it's adopted, the 40Gb standard may be the first Ethernet standard to see widespread fraud in the capabilities of hardware sold. Lots of hardware will be built that can't even come close to actually delivering the 40 gigabits advertised. Why? Many motherboards still can't utilize the full 10Gbps even if the card can. The bad guys will catch on to this the second time around.

    If you are the type to do the numbers, get a MB with sufficient bus speed. Buyer beware. The lack of speed may not be obv

    • Nothing new (Score:3, Informative)

      When 10Mb Ethernet came out there was widespread debate about its performance, because computers weren't fast enough to saturate it. It was probably the same for 100Mb, and I know the early 1Gb NICs could only handle ~700Mb.
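
      (Rough host-side arithmetic backing that up, using commonly quoted bus rates of the era rather than anything from the article: the NIC can only go as fast as the bus behind it.)

      # Commonly quoted usable bandwidth per host bus, in Gb/s (approximate).
      buses_gbps = {
          "PCI 32-bit/33MHz": 1.06,     # ~133 MB/s, shared
          "PCI-X 64-bit/133MHz": 8.5,   # ~1.06 GB/s, shared
          "PCIe 1.x x8": 16.0,          # ~2 GB/s per direction after 8b/10b
          "PCIe 2.0 x8": 32.0,          # ~4 GB/s per direction after 8b/10b
      }

      for link_gbps in (1, 10, 40):
          capable = [name for name, bw in buses_gbps.items() if bw >= link_gbps]
          print(f"{link_gbps:2d} Gb/s link: buses that could keep up -> {capable or 'none'}")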
    • Assuming it's adopted, the 40gb standard may be the first Ethernet standard to have widespread fraud in the capabilities of hardware sold. Lots of hardware will be built that can't even come close to actually getting 40 gigabits advertised. Why? Many motherboards still can't utilize the full 10gbps even if the card can.

      And who exactly do you think is going to make a motherboard with a 40Gb Ethernet connection in the next 5 years? Are there any motherboard designers who were dropped on their heads as babies?
  • This should take care of the "Enormous amounts of material" the great Ted Stevens warned us about.
  • They couldn't come to a resolution on who to make happy, so they decided to make both groups happy. If only Microsoft offered 2 versions: 1 for those hardcore performance nazis (myself included) that has no extras, just the OS and that's all, and 1 slow, performance-sapping, DRM-loaded, 'feature'-full version! Microsoft should take notes from these guys. So 40 Gbps or 100 Gbps? I'll settle for just the 40Gbps internet connection for now.
  • Hopefully we might soon be able to let copper cabling die.

    Cheap high speed optical chips: http://hardware.slashdot.org/article.pl?sid=07/07/25/2046208 [slashdot.org]

    Flexible, robust optical cables: http://theinquirer.net/default.aspx?article=41171 [theinquirer.net]
  • Why will we need ISPs? For some things, anyway?

    We can string backbones using standard ethernet, at these speeds. We can use radio to bridge gaps. As I understand it, using copper across open outdoor spaces is electrically mad, so optical cabling is necessary, but the cost is dropping. We can run our own naming system. As for file sharing piggies, they can be screened out. We need a simple communication system that isn't under the boot.

    Let's face it, the corporations and the moral police have taken over t
    • Problem one is there's no proven routing algorithm for flat networks. Also, how do you screen out file sharers? Most routers under this system will be run by ordinary people who don't want to sysadmin, so they won't screen them out, and with no central authority getting a new address will have to be trivial.
