
10-Gigabit Ethernet Standard Approved 311

Posted by michael
from the faster-pr0n dept.
A little birdie brings news that the 802.3ae standard for 10 Gigabit/second Ethernet has been approved. Everyone out there with Gigabit Ethernet - you are now officially obsolete. The new standard is fiber only, no more of that nasty copper stuff.
This discussion has been archived. No new comments can be posted.

  • by CodeMonky (10675) on Thursday June 13, 2002 @08:55AM (#3692925) Homepage
    Approved or not, it will still be some time before costs come down enough that companies can justify replacing their gig backbone with 10gig.
    • by ajvtoo (206001)
      It'll be good for education and research, though. JANET is planning to be 10Gbps on the core by late summer, so they'll be pleased the standard has been approved (see http://www.ja.net/superjanet/index.html [ja.net])
    • Gigabit?

      Hell, where I work, we're still patting ourselves on the back for getting rid of that rotten old coax. We'll probably be languishing in 100Base-T land for a while yet.

      The early adopters of 10Gb ethernet are certain to be:
      Universities
      e-Commerce/ISP outfits
      Large corporations' data centers

      There is still plenty of life left in gigabit ethernet. In fact, it is still just gaining momentum.

  • not obsolete (Score:3, Insightful)

    by Mortin (538824) on Thursday June 13, 2002 @08:55AM (#3692928) Homepage
    considering HDDs can hardly transfer at 1Gbps, gigabit is hardly obsolete... yet :)
    • Re:not obsolete (Score:4, Informative)

      by larien (5608) on Thursday June 13, 2002 @09:00AM (#3692956) Homepage Journal
      One word: striping. If you put enough disks in, you can get more than 1gbps out of a disk array. Realistically, though, you're limited to using this in two places:
      1. Large server with many, many disk controllers and even more disks
      2. Network backbones
      It'll creep into the second quickly enough (once Cisco et al support it in hardware), I'd imagine (we already have a 4Gbps backbone using 4 gigabit lines at our site), and the former will start happening at the top-end installations of E15K's and the like.
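To put rough numbers on the striping point, here's a sketch (the ~40 MB/s per-disk sequential rate is an assumption typical of drives of the era, not a figure from the thread; it also assumes perfectly parallel reads with no controller overhead):

```python
import math

# How many striped disks does it take to fill a given link?  Idealized:
# total throughput is just the per-disk rate times the number of disks.
def disks_needed(link_bytes_per_s, per_disk_bytes_per_s):
    return math.ceil(link_bytes_per_s / per_disk_bytes_per_s)

print(disks_needed(1.25e9 / 10, 40e6))  # fill 1Gb/s (125 MB/s): 4 disks
print(disks_needed(1.25e9, 40e6))       # fill 10Gb/s (1.25 GB/s): 32 disks
```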
      • Re:not obsolete (Score:4, Insightful)

        by GigsVT (208848) on Thursday June 13, 2002 @09:27AM (#3693114) Journal
        We're going to need something to replace PCI before we can use 10Gbit ethernet fully though. Even 64bit 66MHz PCI has a theoretical max of 528 Megabytes/sec.

        On a side note, I have successfully pulled 130Mbytes/sec out of 5400 RPM IDE disks on 3ware controllers, at a cost of less than $9000: 3 controllers, 24 disks, 64bit 33MHz PCI, RAID 0 over 5. So the potential is there to exceed current GigE without too many disks or controllers, or getting too expensive.

        It would also help a lot if we could get regular gigabit ethernet working well first. I think there was a story here on Slashdot not long ago showing that most GigE cards had trouble pushing over 400Mbits even with large frames. Only the expensive $500 one came close to its full potential (900Mbits). My experience is that without jumbo frames, there is hardly any advantage to lower-end GigE cards.
        • Re:not obsolete (Score:5, Informative)

          by questionlp (58365) on Thursday June 13, 2002 @09:54AM (#3693285) Homepage
          The highest-speed PCI-X (64-bit @ 133MHz) is capable of reaching ~1GByte/sec, which is just about the speed of 10 Gig Ethernet. There was/is also the promise of Arapahoe, which resembles AMD's HyperTransport but would be used for expansion cards rather than as a chip-to-chip pathway.

          The other bottleneck: even high-end Intel-based servers could easily choke when dealing with not only 10 Gig Ethernet but also Fibre Channel, multiple channels of Ultra 160 or Ultra 320 SCSI RAID, etc., since memory bandwidth (and processor bus speed?) would then become the next bottleneck. RISC servers don't have that much of a problem just yet, but sooner or later they will.
          • Re:not obsolete (Score:2, Informative)

            by questionlp (58365)
            Oops... forgot to mention that the currently available chipsets that support one or more PCI-X busses include the Intel E7500 and the ServerWorks Grand Champion (GC) series (either the HE or the LE, depending on the number of processors required).

            The "northbridge" of the Intel E7500 supports two PCI-X busses (more information about the chipset can be found here [intel.com]).

            The ServerWorks GC series support for PCI-X starts at 2 independent busses (the GC-SL) and goes up to six PCI-X busses (the GC-HE). Specs on the ServerWorks stuff are located here [serverworks.com].

            I'm not completely sure if the AMD Hammer chipsets will include PCI-X support initially, but if one were to give up AGP 8x (which isn't really needed on a server), that could be turned into a PCI-X bus to support a single 10 Gig Ethernet controller.

            Of course, there is still the bottleneck of the memory subsystem which can make or break a high-end system.

        • I can pull around 90MB/s off of my dual 7200 RPM drive RAID array. I could imagine that 4 striped standard ATA drives could do well over 130-150 MB/s, making full use of gigabit ethernet with a fast bus and a fast ethernet card.
        • We're going to need something to replace PCI before we can use 10Gbit ethernet fully though.

          If a network has more than two nodes, as most networks do, then no single node is expected to saturate the network. Think of adding more lanes to a highway, rather than increasing the speed limit.

        • I've seen system controllers that had the memory controller, CPU bus, and 10 gigabit ethernet in the same chip. If you can get 10 gigabit to memory, you can use all the bandwidth. At least in bursts...
        • We're going to need something to replace PCI before we can use 10Gbit ethernet fully though.

          Good point. We need something like 3GIO [intel.com]. Plus something has to be done about the bandwidth between the northbridge and the southbridge. Right now it is at 266 MB/sec, with plans to increase to 533 MB/sec.
      • Re:not obsolete (Score:3, Informative)

        by Ogun (101578)
        Oh, you mean like this:
        Cisco 12000 10Gb line card [cisco.com]
        or like this:
        Catalyst 6500 10Gb line card [cisco.com]

        Cisco did announce these a while ago.
    • Re:not obsolete (Score:3, Insightful)

      by GeckoUK (58633)
      You are correct, but only in the case where you have a network of two computers.

      In the real world a company deploying this is likely to have hundreds if not thousands of machines all connected at once.
    • Re:not obsolete (Score:2, Insightful)

      by Anonymous Coward
      not too many people would hook up a single box to a 10Gb pipe (although many of us would like to). who knows - it might happen eventually!
  • by taliver (174409) on Thursday June 13, 2002 @09:00AM (#3692952)
    Here's [10gigabit-ethernet.com] one that might be a little more informative. I leave the google link to someone else.
  • In meaningful terms (Score:5, Informative)

    by Throatwarbler Mangro (584565) <delisle42&yahoo,com> on Thursday June 13, 2002 @09:02AM (#3692967) Homepage
    10Gigabit/sec = 1.25Gigabytes/sec

    1 LoC (Library of Congress) = 10 Terabytes [jamesshuggins.com] = 10,000 Gigabytes

    That's 0.000125LoC/sec, or roughly 2.22 hours to transfer the entire contents across 10GigE.

    Wow.
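The division above checks out; a two-line sanity check, taking the 10 TB figure at face value:

```python
# 10 Gbit/s expressed in Library-of-Congress units.
LOC_BYTES = 10e12              # 10 terabytes = 10,000 gigabytes
LINK_BYTES_PER_SEC = 10e9 / 8  # 10 Gbit/s = 1.25 GB/s

loc_per_sec = LINK_BYTES_PER_SEC / LOC_BYTES        # 0.000125 LoC/sec
hours = (LOC_BYTES / LINK_BYTES_PER_SEC) / 3600.0   # ~2.22 hours
print(loc_per_sec, round(hours, 2))
```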

    • by glwtta (532858) on Thursday June 13, 2002 @09:45AM (#3693232) Homepage
      I thought the time honoured (and extremely relevant) measure of LoC/s has been officially replaced with HG/s (Human Genomes per second)? Mostly because it allowed for a lot more flexibility in making up figures that don't really tell you much, if I remember correctly.
    • by Dark Nexus (172808) on Thursday June 13, 2002 @10:03AM (#3693336)
      As a slight correction, when it comes to baud ratings, 10 Gigabit/sec = 1 Gigabyte/sec

      It's 8:1 for storage, but generally 10:1 for network ratings (an example [mathworks.com] - more for serial ports, but it still applies), thanks to a header and a footer bit sent with every byte. Sometimes (rarely), throw in a parity bit for good measure.

      Mind you, that's still only 2.78 hours.
      • by psychos (2271) on Thursday June 13, 2002 @11:24AM (#3693907)
        This is incorrect. Low speed serial interfaces do tend to use a start bit and a stop bit, but higher speed interfaces generally do not.

        I'm not very familiar with 10GigE technology yet, but my brief research shows that it uses 64B/66B coding (i.e., 2 overhead bits out of every 66). Running at a line rate of 10.3125GHz, that gives you a full 10Gbps of throughput, or 1.25 GB/sec.

        100baseT uses 4B/5B coding, which does result in 2 overhead bits out of every 10, just like your serial line example. However, 100baseT actually runs at 125MHz, so you do get a real 12.5 MB/sec out of it.

        Of course, if you really want to be picky about "LoC/sec" or whatever pointless measure the popular media has latched onto this week, you need to consider the overhead of TCP headers, whether or not you want to allow jumbo frames in your calculations, and so on.
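For what it's worth, both coding overheads above reduce to one formula; a small sketch:

```python
# Effective data rate under a block line code:
# payload = line_rate * data_bits / total_bits.
def effective_rate(line_rate, data_bits, total_bits):
    """Payload bit rate left after line-coding overhead."""
    return line_rate * data_bits / total_bits

# 10GigE serial PHY: 64B/66B at 10.3125 Gbaud -> a full 10 Gbit/s of payload
ten_gige = effective_rate(10.3125e9, 64, 66)
# 100baseT: 4B/5B at 125 Mbaud -> 100 Mbit/s, i.e. a real 12.5 MB/sec
fast_eth = effective_rate(125e6, 4, 5)
print(ten_gige / 1e9, fast_eth / 8 / 1e6)  # 10.0 Gbit/s, 12.5 MB/s
```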
  • by GnomeKing (564248) on Thursday June 13, 2002 @09:03AM (#3692972)
    *looks at his 14.4k modem*

    *looks at the article*

    *looks at his modem*

    *cries*
  • by SkyLeach (188871) on Thursday June 13, 2002 @09:03AM (#3692977) Homepage
    It should be obvious that burying copper is completely obsolete. Per yard, fiber should be cheaper to manufacture and bury.

    10Gb speeds should be enough for anybody, so start building the infrastructure now and leave the telcos in the dust.

    Will they do it? No. Why not? Because they think that they should bury the copper/fiber hybrid cable that they have been burying and come back and do it again later.

    Burying cable is the most expensive part of telecomm.... retards.
    • 10Gb speeds should be enough for anybody

      Just like 640K is enough for anybody.

    • by stevelinton (4044) <sal@dcs.st-and.ac.uk> on Thursday June 13, 2002 @09:33AM (#3693158) Homepage
      Actually, burying pipe is the most expensive part... I worked briefly for a university computing service a few years ago, and they spent an absolute fortune to bury a network of yellow plastic pipe connecting all their buildings. A relatively trivial incidental expenditure was to pull some cable through it. When that sort of cable is obsolete, a further trivial expenditure will replace it, etc.
    • by cnladd (97597)
      10Gb speeds should be enough for anybody

      Just like 640KB of RAM should be enough for anybody? :)
    • 10Gb speeds should be enough for anybody, so start building the infrastructure now and leave the telcos in the dust.

      It doesn't matter how much bandwidth you give me, I will always want more. And so will the people who design software to run on higher-bandwidth networks.

    • The price of cable (copper or fibre) by the foot is not the real factor. Most of the real cost lies in labour, equipment, permits, etc.

      Here's a real example from SoCal: by the spool, telephone line itself costs about $5/foot. But the total cost to lay underground lines is about $40/foot. (Compare to stringing now-mostly-prohibited overhead lines at $16/foot.)

    • It should be obvious that to burry copper is completely obsolete.

      Wrong! Copper is already strung around every city and home in America (probably a hefty portion of the world). And, there's a standard for gigabit over copper:

      Deployment Guide [intel.com]
      [PDF] [10gea.org]

      It's limited to 100 meters, but for communities, home networks and any switched network, I don't see a point in passing up what is already laid in the building. For future digs, they could go either way, and I'll agree fiber is the way to go. But let's not ditch copper just yet...it seems to have some usefulness left in it.
  • by Anonymous Coward
    And I'm just waiting for the new 10/100/1000/10000 NICs to appear

  • by Anonymous Coward
    Hi,
    anyway, why use fiber when you can have copper and squeeze it between doors, windows and everything that closes away the server's hum from a peaceful, quiet home? As far as I know, fiber would just *snap* if you squeezed it in a door.
    OK, having said this, 802.11 should rule. But too expensive. snif.

    ineiti
  • Copper vs. Fiber (Score:5, Insightful)

    by jandrese (485) <kensama@vt.edu> on Thursday June 13, 2002 @09:05AM (#3692986) Homepage Journal
    IIRC the original Gig-E hardware (if not the original spec) was Fiber only as well. Eventually people started coming out with copper hardware to save on costs. In most cases, the only real advantages to fiber are the long cable runs and the immunity to interference in noisy EM environments (like your typical computer room). The downside is the cost.
    • IIRC, some researchers had succeeded in making cables suitable for data transmission out of some plastic. Dirt cheap, and quite fast too (of course not as good as single-mode fiber, but better than copper).
  • by Yoda2 (522522) on Thursday June 13, 2002 @09:05AM (#3692989)
    Lynx [browser.org] will rock!!!
  • whither Cat6? (Score:3, Interesting)

    by green pizza (159161) on Thursday June 13, 2002 @09:12AM (#3693028) Homepage
    My building recently had new copper installed. It previously had Cat5 (great for 100BaseT) but was upgraded to cable meeting the latest Cat6 draft spec (rather than just Cat5e).

    Is 1000BaseTX the end of the line for copper? Or will there eventually be a 10000BaseT that will run on Cat6?
    • Re:whither Cat6? (Score:3, Informative)

      by Barche (233137)
      This thing runs at 10 Gbps. Ethernet's classic signalling uses Manchester encoding (+-=1, -+=0), which means you have to double the bps to obtain the required bandwidth. So you'd need (about) 20GHz of bandwidth on the cable. At that frequency, losses in a copper cable are just too high. You'd need to use either waveguides (big metal pipes, not an option) or optical fiber.
    • Re:whither Cat6? (Score:3, Informative)

      by ivan256 (17499)
      Gigabit ethernet took all its electrical specifications from Fibre Channel. When gigabit came out, there was already copper available for Fibre Channel, and there was nothing stopping you from using those GBICs. The recent development was that they figured out how to get the signal over regular CAT5.

      I'm sure there will be a copper spec for 10 gigabit too; it's probably just not ready yet. Consider that people will want to use this on the backplanes of embedded network hardware, and in blade servers.
  • by Jugalator (259273) on Thursday June 13, 2002 @09:12AM (#3693029) Journal
    Time to...

    Download a typical 100K pr0n JPG: 0.00001 s
    Umm...
    Download a 650Mb ISO: 0.52 s
    Hm...
    Download 2 650Mb ISO's: 1.04 s
    Eeh?!
    Download 100 650Mb ISO's: 52 s
    Wow!
    Download 1000 650Mb ISO's: 8.7 min
    Jeez!
    Download an image of CowboyNeal: 12.31 hours
    Bah... Tech still needs to catch up.
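The figures above follow from simple division; a sketch assuming a "650Mb ISO" means a 650-megabyte CD image and the full 1.25 GB/s is achieved:

```python
# Transfer times over an ideal 10 Gbit/s link (1.25 GB/s of payload).
LINK = 1.25e9  # bytes per second

def transfer_seconds(size_bytes):
    return size_bytes / LINK

iso = 650e6  # one 650 MB ISO
print(transfer_seconds(iso))              # 0.52 s for one ISO
print(transfer_seconds(1000 * iso) / 60)  # ~8.7 minutes for a thousand
```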
  • This really stinks. It took me 3 years to advocate 100Mbps. And now it's 2 revs behind.
    • Seriously. Unless you're pushing quad-digit node-counts or are sharing streaming video all over the place (or just have lots of 0-day servers), 10Gb isn't going to really provide you with any appreciable performance gain over 100Mb.

      In most cases, small files are sucked down well before your bandwidth usage ramps up that far. And even larger files would probably only be sucked down a few seconds faster (mainly because of the speed of the storage medium on your system).
  • Maybe now those damn geeks will stop tearing the copper pipes out of old buildings to reuse as network cabling. Now it's time to toughen security on our fiber!

  • by Andy_R (114137) on Thursday June 13, 2002 @09:16AM (#3693058) Homepage Journal
    Can someone bring me up to speed?

    1) The link shows it has been approved by "Revcom" - who are Revcom, and why should I be interested in their approval?

    2) Seeing as ethernet seems to speed up by an order of magnitude each time, why does the standard not allow for many more x10 jumps?

    3) How far is 10Gb Ethernet from getting to the consumer/business market?
    • by GigsVT (208848) on Thursday June 13, 2002 @09:35AM (#3693172) Journal
      revcom [ieee.org]

      IEEE

      Consider yourself hit with clue-stick.
    • 3) How far is 10Gb Ethernet from getting to the consumer/business market?

      I know of companies that have had 10 gigabit ethernet chips working internally for over 3 years now. They were just waiting for the standard to come out. Now they'll tweak their chips to meet the standard and release them. You should be seeing these in stores Real Soon Now(TM). Expect them to cost between $1k and $3k per HBA at first though. They probably won't reach an affordable level for 5 years or so. We still haven't completed the transition to gigabit.
  • by green pizza (159161) on Thursday June 13, 2002 @09:18AM (#3693068) Homepage
    I'm guessing 10GbitE will be used for inter-switch and inter-router connections long before it gets to the desktop. Ever looked at performance comparisons between 100BT and 1000BT between just two PCs? A couple of years ago the difference wasn't much... NICs weren't efficient enough and the host PCs didn't have enough CPU power to handle that many tiny packets per second. Jumbo frames and faster CPUs have helped a lot since then, but we're still a long way from even 90% utilization between two PCs with 1000BT. And here we are with 10GigE, with 10x as many packets per second.

    Am I the only one who thinks the only efficient 10GigE NICs are going to be PCI-X cards with an onboard 2.6 GHz P4 co-processor and 512 MB of buffer?
    • by Enry (630) <`enry' `at' `wayga.net'> on Thursday June 13, 2002 @09:34AM (#3693167) Journal
      There was an article in the Linux Journal a few months ago (February issue I think) that talked about intelligent network cards. They had an onboard XScale CPU and its own OS and TCP/IP stack.

      What would happen is that Linux would intercept the call at the socket layer and pass the data to the network card. The card would then handle building the packet and all the remaining layers of communication.

      This allowed for a high amount of main CPU time left over for actually doing processing while the network card CPU was focused on handling the TCP/IP packet work. IIRC, you could saturate a 1Gb line with data at only 5% main CPU usage.
      • Am I the only one who thinks the only efficient 10GigE NICs are going to be PCI-X cards with an onboard 2.6 GHz P4 co-processor and 512 MB of buffer?

      Sure, today. I'm still glad to see that networking standards are being pushed far in advance of computing equipment. 10Mb/sec Ethernet was hard for computers to keep up with when it first came out, but I'm glad they didn't wait for the computers to catch up before establishing the standard.

      New low-end desktops today can comfortably handle 100Mb/sec, no problem. High end handles 1Gb/sec. In about 5 years, by Moore's law, the new high-end machines will be able to use up all of that 10Gb/sec Ethernet point to point. And Ethernet is supposed to support all the machines on a LAN.

      The fact that we can barely support 10Gb/sec Ethernet now seems pretty irrelevant to me.
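That five-year estimate is consistent with the usual reading of Moore's law as a doubling every 18 months:

```python
# Doubling every 18 months (the common Moore's-law rule of thumb):
# how much headroom do 5 years buy?
months = 5 * 12
growth = 2 ** (months / 18)
print(round(growth, 1))  # ~10x -- enough to go from 1Gb/s to 10Gb/s
```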

      • Ethernet is supposed to support all the machines on a LAN.

        That is what switches are for. You don't need to share bandwidth with other computers on the LAN. If you have too much traffic between two computers, and are saturating the link to either of them, then you have a network design issue.

      • Trivial point but. . .
        TSMC is currently in production at 130nm, starting a 90nm process by the end of this summer and jumping to a 65nm process in 2005. IBM says 40nm is the final limit for CMOS transistor gates. Given that it is now the middle of 2002, I don't think Moore's Law is going to be holding in five years.
        Of course there's always multiple processor configurations, advances in circuit designs and better nanotech (since processor designs already are properly classified as nanotech at the 0.1 micron level) and all sorts of things to push the limits one more time, but Moore's Law and CMOS are more or less at the end of the road once you're dealing with resolutions of a few dozen atoms which is what you've got at 40nm. And that information is according to the people who have the most to gain by denying it --IBM, Intel, TSMC, UMC etc.
  • but why would you need networking to the desktop that's so much faster than the data transfer rates of the internal components?
  • by WolfDeusEx (310788) <(mark) (at) (no33.co.uk)> on Thursday June 13, 2002 @09:33AM (#3693161)
    10 Gigabit still won't stand up to the slashdot effect
    • Don't be too proud of this technical monstrosity you've constructed. The ability to transfer the Library of Congress in less than three hours is insignificant compared to the power of the slashdot effect.

      Rats. It would have been funny if you hadn't gotten to it first. :)

  • that's the great thing about working at a national lab [bnl.gov] - in my office i have a gigabit network connection straight to the backbone (the advantages to being tech-savvy in a generally retarded department..."oh come on, the 100/1000 card is like $25 more than the 10/100...and it's not your money anyway"). wonder how long before they upgrade the network, those iso's take *forever* at 700KB/s...
    yes, i know i'm not pushing my connection at all @ 700K, and i know 10-gig ethernet wouldn't make a rat's ass of a difference, but i like to gloat (/. on mozilla 1.0 takes, oh, 0.981 seconds to load and render)
  • Now it seems feasible to actually share RAM over the ethernet - That would be nice :D
  • Switch prices (Score:4, Insightful)

    by stevelinton (4044) <sal@dcs.st-and.ac.uk> on Thursday June 13, 2002 @09:41AM (#3693213) Homepage
    Switches for these speeds are still kind of large, awkward and pricey. We had a visiting lecturer from one of the major players in this level of kit talking here about 6 months ago, and their top-end product (he showed a photo) was a 48-way full-bandwidth 10Gb switch. It filled two full-height 19" racks, consumed 20kW and cost upwards of $2M.

    Of course they've probably come down a bit in the last few months...
  • 10Gb Ethernet is pushing the limits of hardware. Everyone working on this is only running the optical link at 10Gb, but then splitting the signal into 4 lanes (called XAUI [electronic...eering.com]) so the signal can be processed at sane speeds. Both the 10Gb Ethernet spec and the 10Gb Fibre Channel spec take this into account, so all the data is 128-bit aligned.
    Companies won't have hardware in their labs until early next year, so don't expect to see any 10Gb NICs at Best Buy any time soon.
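The lane arithmetic, for the curious (the 8b/10b per-lane coding is XAUI's published scheme):

```python
# XAUI splits the 10 Gbit/s stream across 4 lanes; each lane carries
# 8b/10b-coded data, so the per-lane line rate works out to 3.125 Gbaud.
PAYLOAD = 10e9  # bit/s of Ethernet payload
LANES = 4

per_lane_payload = PAYLOAD / LANES         # 2.5 Gbit/s of data per lane
per_lane_baud = per_lane_payload * 10 / 8  # 3.125 Gbaud on the wire
print(per_lane_baud / 1e9)  # 3.125
```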
  • by xt (225814) on Thursday June 13, 2002 @10:05AM (#3693347)
    The reason is material properties.

    Six months ago, I had the chance to talk with the 3Com technical manager who was on the board drafting the spec.

    What he said was very simple; all tests indicated that the only way to have 10Gb over copper is to limit the connection distance to centimeters!

    1Gb already pushed the envelope for copper, using all pairs, multiplexing, and error correction; 10Gb is just not possible.

  • Do the math - even on a "high end" server:

    Sun SBus - 25MHz x 64bit = ~800mbps
    PCI 33MHz x 32bit = ~1000mbps
    PCI 66MHz x 64bit = ~4000mbps

    And that of course is the raw speed for the whole bus. It's shared between multiple devices - and even then you usually can't get the real theoretical maximum throughput.

    Until busses at least 3x faster than 64/66 pci become common on server hardware, this will only be realistically deployable as network infrastructure (eg Inter-Switch Links between high end Cisco Catalysts). Even at 3x 64/66 pci, one 802.3ae card will saturate the bus.

    Of course 10Gbit Fibre Channel is also coming down the pipe soon - hopefully between the two there will be a real drive for newer bus architectures to actually go mainstream in the server market.
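The raw-bus figures can be rechecked mechanically (a sketch; nominal PCI clocks are really 33.33/66.66 MHz, so the round numbers are approximations):

```python
# Raw bus bandwidth = clock (MHz) x width (bits), giving Mbit/s.
def bus_mbps(clock_mhz, width_bits):
    return clock_mhz * width_bits

print(bus_mbps(25, 64))  # SBus: 1600 Mbit/s
print(bus_mbps(33, 32))  # 32-bit/33MHz PCI: ~1056 Mbit/s
print(bus_mbps(66, 64))  # 64-bit/66MHz PCI: ~4224 Mbit/s
```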

    • oops, my quick mental math led me astray - SBus would be ~1600mbps, not ~800mbps. In any case, doesn't change the point :)
    • If you can connect two computers together across the office and run them as if you had the two processors in one box, this is a big leap for distributed computing. The main problem today with distributed computing is that the network is the bottleneck, so you can only run tasks that can be easily broken into small chunks. You therefore can't use software designed for the big 64-way IBM Big Iron, because all their processors are on the same bus so you don't have to split stuff up; the processors talk to each other in realtime. 10G ethernet allows you to string 64 cheap boxes together and run them as if all the processors were on the same bus, so you can run all that nuclear explosion simulation or weather simulation software that you've always wanted to but couldn't find the spare $10M for a supercomputer. I'd wager that Apple's Xserve will be one of the first widely available computers to run 10G (you could get 1G from them like 2 years ago). Imagine a beowulf cluster of those.... ;-)

      • It will make it better, but 10G ethernet doesn't match the speed of an SMP interconnect. If a processor has a 133MHz DDR bus (like an AthlonMP), that's ~17Gbps. You might assume that by the time 10G ethernet is widely deployed, processors might be utilizing 200MHz DDR busses for ~25Gbps. It's also considerably lower latency across those little copper traces on the board compared to going through 10G ethernet.

        Technology will improve for all sorts of networks and busses, but it will almost always be universally true that a tight interconnect inside a single machine will perform better than an externally cabled network between machines.
  • by Ashurbanipal (578639) on Thursday June 13, 2002 @10:15AM (#3693402)
    Every time they come out with a new standard for ethernet it's the same old spiel - "you need this special expensive coax/shielded-pair/fiber-optic etherhose to make it work; you canna change the laws o' physics, Cap'n!"

    Then eight months later somebody figures out how to run it on old lamp cords and string.

    Don't rush out to buy fiber unless you need the noise isolation (glass is great for that!) and don't care about the cost.
  • by buss_error (142273) on Thursday June 13, 2002 @10:21AM (#3693448) Homepage Journal
    Sure, you can't use all 10G on ONE machine. Even a server can't use all that speed, even using many NICs. (Bus congestion, ya know.) That isn't the point here. The point is that instead of having to segment a lot of traffic off to a vlan or other workaround, that traffic can be supported on one lan. This reduces equipment, interconnections, configuration, and a lot of other headaches. In short, you can reduce the total points of failure.

    And remember, Intel isn't the only hardware platform out there. While I don't know of a hardware platform that can fully support the speeds needed, there are some that can support better than 4000 Mbps now.

  • Arg. (Score:4, Insightful)

    by be-fan (61476) on Thursday June 13, 2002 @10:24AM (#3693463)
    Now my PC133 RAM is *really* obsolete. It can't even handle an ethernet connection!
  • by sacremon (244448) on Thursday June 13, 2002 @10:27AM (#3693486)
    I work in a data center for a major ISP/backbone provider. While we've got OC48's coming in from the backbone, it's 1Gb ethernet to the LAN distribution routers. With 10Gb ethernet, we can finally fully utilize that incoming bandwidth without having to use a lot of ports.

    Another good use is the emerging use of iSCSI, or SCSI over IP. 1Gbps ~ 100MBps in theory, but more likely around 60-80MBps in practice. With 10Gbps, a SAN based on iSCSI will actually be able to use the throughput of those SCSI drive arrays.

    Eventually this will trickle down to the desktop, but not right now. So it doesn't really matter what PCI can handle - this isn't presently meant for it. BTW, 133MHz PCI-X gives close to 10Gbps, so if you have a dedicated PCI-X bus for that adapter, you can handle it with today's technology.
  • Obsolete? (Score:4, Insightful)

    by mindstrm (20013) on Thursday June 13, 2002 @10:28AM (#3693488)
    I know you say this jokingly.. or do you?

    This is not THE new standard, it is A new standard.

    It is THE standard for 10Gbps ethernet. Nothing more.

    Gigabit is hardly obsolete when a) very few corporate networks are using Gigabit outside the server room, and b) your average workstation can probably not even push 10Gbps, or anywhere near it, in the first place. (Of course, that's not as big a deal, because it's ethernet, right? A single host can't max it out anyway... the higher capacity means more hosts with lower latency.)
  • by Anonymous Coward on Thursday June 13, 2002 @10:45AM (#3693629)
    Okay, now PCI is a bottleneck. Even 64bit PCI, quad-pumped, would still only support around 8Gb/sec... So I suggest that we repurpose the AGP port. We can go back to boring old PCI for the graphics card, so lets implement AGP network cards! Of course, it won't be the "Accelerated Graphics Port" anymore, it will be the... "Always Generous Pornography" "Accelerated Game Piracy" "Automatic Grits-to-Pants"
  • by Oestergaard (3005) on Thursday June 13, 2002 @10:50AM (#3693662) Homepage
    Does anyone know how big a packet one can send through such a pipe?

    100MBit maintained the same MTU as 10MBit, and 1GBit maintained the same MTU too - leading to severe performance problems. It's bad enough on 100Mbit, and it's horrible on 1Gbit; the thought that they maintained the 1500 byte limit on 10Gbit gives me the shakes...

    Yes, I know about "jumbo frames", and I challenge you to find an affordable 1Gbit switch that actually supports it.

    Anything below 64KByte packets would be insane as I see it.

    Anyone know?
  • by Rorschach1 (174480) on Thursday June 13, 2002 @11:32AM (#3693967) Homepage
    I can strip copper wire with my teeth, and terminate it with a Leatherman tool. Until I can do that with fiber, my network's sticking to good old-fashioned electrons.
