
Ethernet The Occasional Outsider

Posted by Zonk
from the popular-kid-gets-snubbed dept.
coondoggie writes to mention an article at NetworkWorld about the outsider status of Ethernet in some high-speed data centers. From the article: "The latency of store-and-forward Ethernet technology is imperceptible for most LAN users -- in the low 100-millisec range. But in data centers, where CPUs may be sharing data in memory across different connected machines, the smallest hiccups can fail a process or botch data results. 'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,' Garrison says. This forced many data center network designers to look beyond Ethernet for connectivity options."
  • The NSA's network sniffer, recently discovered at an AT&T broadband center, can only sniff up to 622Mbps [slashdot.org]. Sounds to me like using an InfiniBand switch would effectively make the output of the NSA's network sniffers complete gibberish.
  • Software design (Score:3, Interesting)

    by nuggz (69912) on Thursday May 25, 2006 @03:26PM (#15404213) Homepage
    The original post makes the comment that
    "sharing memory ... the smallest hiccups can fail a process or botch data results."
    Sounds like bad design, or a known design trade-off.
    Quite reasonable: when on a slow link, assume the data I have is correct until I know better; if it isn't, throw it out and start over (a minimal sketch of that optimistic approach follows this comment). Not wildly different from branch prediction or other speculative techniques.

    'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,'
    Faster is faster, not really a shocking concept.
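    A minimal Python sketch of that "assume it's good, validate, retry" idea; read_remote and current_version are hypothetical stand-ins for whatever slow link and cheap validity check a real cluster would have:

        import random, time

        def read_remote(key):
            # Hypothetical slow read over the cluster link; returns (value, version).
            time.sleep(0.001)
            return random.randint(0, 9), random.choice([1, 1, 1, 2])

        def current_version(key):
            # Hypothetical cheap check of what version the value should be at.
            return 1

        def optimistic_get(key, retries=3):
            # Assume the data we fetched is correct; only if the later check
            # fails do we throw it out and start over.
            for _ in range(retries):
                value, version = read_remote(key)
                if version == current_version(key):
                    return value               # common case: the assumption held
            raise RuntimeError("no consistent read after %d tries" % retries)

        print(optimistic_get("block-42"))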
  • by pla (258480) on Thursday May 25, 2006 @03:26PM (#15404218) Journal
    "The latency of store-and-forward Ethernet technology is imperceptible for most LAN users -- in the low 100-millisec range."

    I don't know what sort of switches you use, but on my home LAN, with two hops (including one over a wireless bridge) through only slightly-above-lowest-end D-Link hardware, I consistently get under 1ms.

    "When you get into application-layer clustering, milliseconds of latency can have an impact on performance"

    Again, I get less than 1ms, singular.

    Now, I can appreciate that any latency slows down clustering, but the ranges given just don't make sense. Change that to "microseconds" and it would make more sense; Ethernet can handle single-digit-millisecond latencies without breaking a sweat. (A rough way to check your own LAN numbers is sketched below.)
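    A rough way to sanity-check sub-millisecond LAN latency from Python, as referenced above. It just times TCP connect handshakes against something on the local network; the host and port are placeholders, and this is a crude upper bound rather than a real benchmark:

        import socket, time

        def rough_rtt_ms(host="192.168.0.1", port=80, samples=10):
            # Time a full TCP connect; the best sample is a loose upper bound
            # on a couple of round trips across the LAN.
            best = float("inf")
            for _ in range(samples):
                start = time.perf_counter()
                try:
                    with socket.create_connection((host, port), timeout=1):
                        pass
                except OSError:
                    continue          # host down or port closed; skip this sample
                best = min(best, (time.perf_counter() - start) * 1000.0)
            return best

        if __name__ == "__main__":
            print("best connect time: %.2f ms" % rough_rtt_ms())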
  • Re:Long Live! (Score:3, Interesting)

    by EnderWiggnz (39214) on Thursday May 25, 2006 @03:35PM (#15404298)
    Yes, people (mostly the government) do still have Token Ring setups.

    The funniest part is that I've done work for naval ships that required 10Base2. You know... CheaperNet!
  • by hguorbray (967940) on Thursday May 25, 2006 @03:38PM (#15404322)
    Actually, even with Gigabit Ethernet available, HPTC and other network-intensive data center operations have moved to Fibre Channel and things like:

    Infiniband http://en.wikipedia.org/wiki/Infiniband [wikipedia.org]

    and Myrinet http://en.wikipedia.org/wiki/Myrinet [wikipedia.org]

    http://h20311.www2.hp.com/HPC/cache/276360-0-0-0-121.html [hp.com]
    HP HPTC site

    -What's the speed of dark?
  • No kidding (Score:5, Interesting)

    by ShakaUVM (157947) on Thursday May 25, 2006 @03:46PM (#15404389) Homepage Journal
    Er, yeah. No kidding.

    When I was writing applications at the San Diego Supercomputer Center, latency between nodes was the single greatest obstacle to getting your CPUs to run at their full capacity. A CPU waiting for its data is a useless CPU.

    Generally speaking, clusters that want high performance use something like Myrinet instead of Ethernet. It's like the difference between the consumer, prosumer, and professional products you see in, oh, every industry across the board.

    As a side note, the way many parallel apps mitigate the latency issue is by overlapping their communication and computation phases instead of keeping them discrete; this can greatly reduce the time a CPU sits idle (see the sketch after this comment).

    The KeLP kernel does overlapping automatically for you if you want: http://www-cse.ucsd.edu/groups/hpcl/scg/kelp.html [ucsd.edu]
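    A minimal sketch of that overlap trick using non-blocking MPI calls, assuming mpi4py is installed and this is launched with something like mpirun -n 2 python overlap.py; the payload and the compute_interior work are placeholders:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        def compute_interior():
            # Stand-in for work that does not depend on the neighbour's data.
            return sum(i * i for i in range(200_000))

        if rank == 0:
            req = comm.isend({"halo": [1, 2, 3]}, dest=1, tag=7)   # start the send...
            local = compute_interior()                             # ...and keep computing
            req.wait()                                             # finish the send
        elif rank == 1:
            req = comm.irecv(source=0, tag=7)   # post the receive early
            local = compute_interior()          # useful work overlaps the transfer
            halo = req.wait()                   # block only when the data is needed
            print("rank 1 overlapped compute", local, "with receiving", halo)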
  • by khasim (1285) <brandioch.conner@gmail.com> on Thursday May 25, 2006 @04:34PM (#15404805)
    People run different speeds on the same switch all the time, and for not necessarily poor reasons: If you have a SMB (in this case, that's small or medium business) with maybe one big fileserver, you don't need to run gigabit to everyone...
    What's with the "need to"?

    I'm talking about performance. Store-and-forward hammers it. In my experience, you get better results running the server at 100Mb full duplex (along with all the workstations) with cut-through switching than putting the server on a Gb port and running store-and-forward to your 100Mb workstations. (Rough per-hop numbers behind that trade-off are sketched below.)
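    The rough per-hop arithmetic behind that trade-off, under the usual textbook assumptions (a store-and-forward switch buffers the whole frame before forwarding; a cut-through switch only needs the header to pick an output port):

        FRAME_BYTES = 1518    # maximum Ethernet frame: header + payload + FCS
        HEADER_BYTES = 14     # enough for a cut-through switch to choose a port

        def serialization_us(nbytes, link_bps):
            # Time to clock nbytes onto a link of the given speed, in microseconds.
            return nbytes * 8 / link_bps * 1e6

        for name, bps in (("100 Mb/s", 100e6), ("1 Gb/s", 1e9)):
            store_fwd = serialization_us(FRAME_BYTES, bps)   # added per hop
            cut_thru = serialization_us(HEADER_BYTES, bps)   # decision point
            print("%8s: store-and-forward ~%.1f us/hop, cut-through ~%.2f us/hop"
                  % (name, store_fwd, cut_thru))

    At 100 Mb/s that works out to roughly 121 microseconds per store-and-forward hop versus about 1 microsecond for cut-through, which is one reason mixing port speeds (forcing store-and-forward on those paths) hurts.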
  • by m.dillon (147925) on Thursday May 25, 2006 @04:45PM (#15404920) Homepage
    The Slashdot summary is wrong. If you read the actual article, the author has it mostly correct except for one comment near the end.

    Ethernet latency is about 100 microseconds through a GigE switch, round trip. A full-sized packet takes about 200 microseconds round trip. Single-ended latency is about half of that. (A quick ping-pong sketch for checking numbers like these is below.)

    There are proprietary technologies with much faster interconnects, such as the InfiniBand technology described in the article. But the article also mentions the roadblock that a proprietary technology represents compared with a widely-vendored standard. The plain fact of the matter is that Ethernet is so ridiculously cheap these days that it makes more sense to solve the latency issue in software -- for example, by designing a better cache-coherency management model and better clustered applications -- than with expensive proprietary hardware.

    -Matt
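    A tiny UDP ping-pong along the lines of the numbers quoted above: start it with --serve on one LAN host, then point the client at that host's IP. Port 9999 and the payload sizes are arbitrary choices for this sketch, and best-of-N timing is only a rough estimate:

        import socket, sys, time

        PORT = 9999

        def serve():
            # Echo side: send every datagram straight back to its sender.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", PORT))
            while True:
                data, addr = s.recvfrom(2048)
                s.sendto(data, addr)

        def ping(host, payload_bytes, samples=200):
            # Best-of-N round trip for a given payload size, in microseconds.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.settimeout(1.0)
            msg = b"x" * payload_bytes
            best = float("inf")
            for _ in range(samples):
                t0 = time.perf_counter()
                s.sendto(msg, (host, PORT))
                s.recvfrom(2048)
                best = min(best, (time.perf_counter() - t0) * 1e6)
            return best

        if __name__ == "__main__":
            if "--serve" in sys.argv:
                serve()
            elif len(sys.argv) < 2:
                sys.exit("usage: pingpong.py <server-ip> | --serve")
            else:
                host = sys.argv[1]            # IP of the host running --serve
                for size in (64, 1400):       # tiny packet vs near-full frame
                    print("%5d byte payload: ~%.0f us round trip" % (size, ping(host, size)))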
  • by mrjimorg (557309) on Thursday May 25, 2006 @05:20PM (#15405245) Homepage
    Note: I do have a dog in this fight.
    One thing that isn't mentioned in the article is the amount of CPU power required to send out Ethernet packets. The typical rule of thumb is that 1 GHz of processing power is required to push 1 Gb/s onto the wire. So if you want to send 10 Gb/s of data, you'd need 10 GHz of processor -- a pretty steep price. Some companies have managed to get this down to 1 GHz per 3 Gb/s, and one startup (NetEffect) is now claiming roughly 0.1 GHz for ~8 Gb/s on the wire using iWARP (the arithmetic is sketched below). With that, your system can spend its cycles processing information rather than creating packets.
    The problem with InfiniBand, Myrinet, etc. is that they require another card in your system (with the associated heat, size, and other issues), special switches and equipment, and new training for your staff to get it all up and going. However, iWARP, which is based on TCP/IP, works with your standard DHCP, ping, tracert, ipconfig, etc., and allows a single card to be used for networking to the outside world (TCP/IP), clustering in the data center (iWARP), and storage (iSCSI). One card, no special new software widgets, 10 Gb speeds.
    However, you can't go and buy an iWARP card from Fry's today -- although you can't buy an InfiniBand or Myrinet card there either.
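    The arithmetic behind those ratios, taking the figures in the comment at face value (they are vendor claims, not measurements):

        rules_ghz_per_gbps = {
            "classic rule (1 GHz : 1 Gb/s)":            1.0,
            "tuned stacks (1 GHz : 3 Gb/s)":            1.0 / 3,
            "iWARP offload claim (~0.1 GHz : ~8 Gb/s)": 0.1 / 8,
        }
        wire_rate_gbps = 10
        for name, ratio in rules_ghz_per_gbps.items():
            # CPU needed to keep a 10 Gb/s link full under each assumption.
            print("%-45s -> ~%.2f GHz to fill %d Gb/s"
                  % (name, wire_rate_gbps * ratio, wire_rate_gbps))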
  • Bad Parent Article (Score:0, Interesting)

    by Anonymous Coward on Thursday May 25, 2006 @05:53PM (#15405509)
    The article is biased/misinformed/didn't even bother to do actual research; my 48-node gigabit switch has about 1ms latency and it's quite busy bandwidth-wise the way we abuse it... has this genius ever heard of full duplex and of not mixing speeds in a freaking DATA CENTER? Mine sits in a closet, not even on a rack.

    Hell, I think most of my gaming servers are all sub-50ms, and that's over the internet via cableco_that_massively_oversells_bandwidth...
  • by mrchaotica (681592) * on Friday May 26, 2006 @12:37AM (#15407552)
    Maybe I should RTFA...
    Either that, or you should take the class [gatech.edu] that I took this past semester. There's a bunch of links to research papers and lecture slides about distributed shared memory (and other kinds of parallel/shared computing issues), if you care to read them.
