Ethernet: The Occasional Outsider
coondoggie writes to mention an article at NetworkWorld about the outsider status of Ethernet in some high-speed data centers. From the article: "The latency of store-and-forward Ethernet technology is imperceptible for most LAN users -- in the low 100-millisec range. But in data centers, where CPUs may be sharing data in memory across different connected machines, the smallest hiccups can fail a process or botch data results. 'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,' Garrison says. This forced many data center network designers to look beyond Ethernet for connectivity options."
Long Live! (Score:5, Funny)
One Ring to rule them all
Re:Long Live! (Score:2)
(I was actually going to post something similar, but this one beat me to the punch).
Do they even make Token Ring anymore? I know the MAUs were hella-expensive.
Re:Long Live! (Score:2)
Re:Long Live! (Score:3, Interesting)
The funniest part is that I've done work for naval ships that required 10BASE2. You know... CheaperNet!
Didn't RTFA? -Infiniband, FC and Myrinet beat Eth0 (Score:4, Interesting)
Infiniband http://en.wikipedia.org/wiki/Infiniband [wikipedia.org]
and Myrinet http://en.wikipedia.org/wiki/Myrinet [wikipedia.org]
HP HPTC site: http://h20311.www2.hp.com/HPC/cache/276360-0-0-0- [hp.com]
-What's the speed of dark?
Re:Didn't RTFA? -Infiniband, FC and Myrinet beat E (Score:2)
http://en.wikipedia.org/wiki/Fibre_Channel [wikipedia.org]
Re:Didn't RTFA? -Infiniband, FC and Myrinet beat E (Score:2)
However, if FC doesn't embrace higher speeds such as 10Gbit, I can see 10GigE displacing it, provided that companies can produce switching hardware with the same latency as FC.
For what I use FC for it's great (storage), but I'd be much happier if everything was just Ethernet.
Re:Didn't RTFA? -Infiniband, FC and Myrinet beat E (Score:2)
Re:Long Live! (Score:3, Funny)
Re:Long Live! (Score:2)
Apparently, my college got a great deal on Token Ring from IBM in the early 90s, and at the time it was plenty fast. But by the mid 90s it was showing its age, with no upgrade path. Back when my college still had no clue how to manage its network (read: 1997, pre-Napster), it consisted of a "turbo" (16Mbit) Token Ring backbone with various 4Mbit and 16Mb
Re:Long Live! (Score:2)
Nothing stops ARCNET!
Re:Long Live! (Score:2)
Re:Long Live! (Score:2)
Re:Long Live! (Score:2)
(From an old Dilbert comic IIRC)
More Dilbert (Score:2)
Re:Long Live! (Score:2)
Looking at it on my wall right now....
Dilbert (to PHB): Here's your problem. The connection to the network is broken. Uh-oh. It's a "token ring" LAN. That means the token fell out and it's in this room someplace.
*PHB searches for token under desk*
Dilbert (to Wally): I'll wait a week then tell him the token must be in the "ethernet"
Wally: You are the wind beneath my wings.
My idea: a vat of salt water & CAT5 (Score:5, Funny)
(Seriously, haven't people heard of cut-through switches, which just look at the first part of the header and switch based on that? Store-and-forward switches are so "1990s".)
TDz.
Re:My idea: a vat of salt water & CAT5 (Score:2)
We have a small office with ~20 computers and 3 servers, and I refuse to buy switches that can't do cut-through. Store-and-forward is slow, and very memory-intensive for switches on high-speed networks.
Store & Forward ONLY for 10 to 100 to 1,000. (Score:4, Informative)
#1. You're running different speeds on the same switch (why?).
#2. You really want to cut down on broadcast storms (just fix the real problem, okay?)
Other than that, go for the speed! Full duplex!
Re:Store & Forward ONLY for 10 to 100 to 1,000 (Score:2)
Re:Store & Forward ONLY for 10 to 100 to 1,000 (Score:2)
Re:Store & Forward ONLY for 10 to 100 to 1,000 (Score:2)
For performance, run the same speed. (Score:5, Interesting)
I'm talking performance. Store & Forward hammers your performance. In my experience, you get better performance when you run the server at 100Mb full duplex (along with all the workstations) and use Cut Through than if you have the server on a Gb port, but run Store & Forward to your 100Mb workstations.
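To put rough numbers on that (my arithmetic, not the parent's): a store-and-forward hop has to receive the entire frame before it can forward it, so each hop adds at least frame-size divided by line-rate of delay, while a cut-through hop only needs the header. A quick Python sketch:

FRAME_BITS = 1518 * 8   # max-size Ethernet frame, headers included
HEADER_BITS = 14 * 8    # dst MAC + src MAC + EtherType

for rate_mbps in (10, 100, 1000):
    rate = rate_mbps * 1e6              # bits per second
    sf_us = FRAME_BITS / rate * 1e6     # store-and-forward: buffer it all
    ct_us = HEADER_BITS / rate * 1e6    # cut-through: forward after header
    print("%5d Mb/s: store-and-forward %8.2f us/hop, cut-through %5.2f us/hop"
          % (rate_mbps, sf_us, ct_us))

Even the worst case here is about 1.2 milliseconds per hop at 10Mb/s and ~121 microseconds at 100Mb/s, which is why the "milliseconds" in the article raises eyebrows further down this page.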
Re:Store & Forward ONLY for 10 to 100 to 1,000 (Score:2)
AFAIK, wireless doesn't consistently match 100Mbps wired Ethernet. Usually I get around 54Mbps, or possibly 10Mbps on a weak signal.
That's why you still see store-and-forward - Wireless and wired networks are different speeds.
Re:Store & Forward ONLY for 10 to 100 to 1,000 (Score:2)
Let's see:
You have an older but still functional and economical-to-run printer with a 10base2/T combo card in it, for which a replacement card would be either expensive or unobtainable.
You have 100Mbit to most of the desktops because your wiring wasn't done well enough for gig-E to cope.
You have gigabit to your servers.
You have a 10 gigabit backbone link.
Also, even if a switch is cutting through a lot of packets, it's still going to have to queue t
Re:My idea: a vat of salt water & CAT5 (Score:2)
Even still - low 100 ms for store-and-forward ethernet switches? That seems really, really high. I would've said more like single milliseconds, which is still high, but it isn't 100 ms.
I know from experience that I've used store-and-forward ethernet switches with much, much better latency than 100 ms.
Re:My idea: a vat of salt water & CAT5 (Score:2)
Re:My idea: a vat of salt water & CAT5 (Score:2)
I don't doubt that it's wrong (like I said, I know from experience that it's of order 1 ms, not 100 ms) but the article is the one that's wrong, not the story summary.
Re:My idea: a vat of salt water & CAT5 (Score:2)
30 GB? Take that NSA and your outdated 622MB! (Score:2, Interesting)
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
Yes, and you're right, I should have said Mbps and Gbps (30 Gigabit networking is going to create a packet flow that far outstrips the NSA's 622 Megabit packet-sniffing capability).
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
My journal has more info on the latest Narus product, the Insight, as well as info gleaned from their website and links from their website on the connections between Narus and intelligence agencies and contractors. The 6400 was installed 6 years ago at AT&T by the NS
Re:30 GB? Take that NSA and your outdated 622MB! (Score:2)
100ms ethernet latency? (Score:5, Informative)
This author does not understand the subject material.
(I suppose you could deliberately overload a switch enough to get this number, maybe, but that would be silly, and your switch would need 1.25 Mbytes of packet cache.)
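For anyone checking that buffer figure, the arithmetic (mine) is just rate times delay:

DELAY_S = 0.100          # the article's alleged 100 ms
for rate_bps, label in ((100e6, "100 Mb/s"), (1e9, "1 Gb/s")):
    print("%s: %.2f Mbytes of buffer" % (label, rate_bps * DELAY_S / 8 / 1e6))

That's 1.25 Mbytes at 100 Mb/s, matching the parent, and a silly 12.5 Mbytes at gigabit.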
Re:100ms ethernet latency? (Score:5, Informative)
"By comparison, latency in standard Ethernet gear is measured in milliseconds, or one-millionth of a second, rather than nanoseconds, which are one-billionth of a second"
http://www.google.com/search?hl=en&q=define%3Amillisecond [google.com]
"One thousandth of a second"
Seriously. How the fuck does this idiot get published?
Re:100ms ethernet latency? (Score:2)
Subject: FW: Feedback http://www.networkworld.com/news/2006/052506-data-center-ethernet.html [networkworld.com]
Date: Thu, 25 May 2006 17:03:03 -0400
From: "Phil Hochmuth"
To: merreborn@*****.com
Thanks for your correction. We're making that change now. (Journalist math for you).
As for 1 millisecond of delay, are you talking about on the wire (node-to-node), or inside-the-box
(port-to-p
Re:100ms ethernet latency? (Score:2, Informative)
This author does not understand the subject material.
I disagree. The author has simply misplaced his metric units. He used the word "milliseconds", where he should have used the word "microseconds". You can see an example of this where he refers to milliseconds as one millionth of a second, rather than the one thousandth that they actually are.
Re:100ms ethernet latency? (Score:2)
Re:100ms ethernet latency? (Score:2)
So what you're saying is that the author may understand the source material, but he's an idiot too stupid to proofread?
Yeah, pretty much.
Re:100ms ethernet latency? (Score:2)
Re:100ms ethernet latency? (Score:2)
I see NetworkWorld fixed the article.
Re:100ms ethernet latency? (Score:2)
Re:100ms ethernet latency? (Score:2)
Re:100ms ethernet latency? (Score:2)
If store-and-forward is indeed used in two very different contexts, it might be helpful for someone to update the Wikipedia article on store and forward [wikipedia.org] with current and accurate meanings.
Low-cost options? (Score:2)
Re:Low-cost options? (Score:3, Informative)
For some people, that's cheap. If not, sorry.
Re:Low-cost options? (Score:2)
Oh well. Back to knocking around $300 servers.
Not an Auspicious Start (Score:5, Informative)
"(By comparison, latency in standard Ethernet gear is measured in milliseconds, or one-millionth of a second, rather than nanoseconds, which are one-billionth of a second)"
That would be one-thousandth, not one-millionth (one-millionth of a second is a microsecond). Not a good start...
Re:Not an Auspicious Start (Score:2)
Seems to me you'd need to measure ethernet gear in mic
When you get to many hops (Score:5, Funny)
Re:When you get to many hops (Score:2)
Re:When you get to many hops (Score:2)
I needed a laugh after the work day I've had.
Software design (Score:3, Interesting)
sharing memory
Sounds like bad design, or a known design trade off.
Quite reasonable: when on a slow link, assume the data I have is correct until I know better; if it isn't, throw it out and start over. Not wildly different from branch prediction or other speculative approaches to this kind of problem.
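That optimistic pattern in sketch form (all the names here are hypothetical, not from any particular system): proceed on possibly-stale data, validate afterwards, and redo the work only when the guess turns out wrong.

def speculate(read_version, read_data, work):
    while True:
        v0 = read_version()           # cheap consistency token
        result = work(read_data())    # compute without waiting on the link
        if read_version() == v0:      # nobody invalidated our input
            return result
        # stale: throw it out and start over, as described above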
'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,'
Faster is faster, not really a shocking concept.
Re:Software design (Score:3, Funny)
Did you mean "microseconds"? (Score:4, Interesting)
I don't know what sort of switches you use, but on my home LAN, with two hops (including one over a wireless bridge) through only slightly-above-lowest-end D-Link hardware, I consistently get under 1ms.
When you get into application-layer clustering, milliseconds of latency can have an impact on performance
Again, I get less than 1ms, singular.
Now, I can appreciate that any latency slows down clustering, but the ranges given just don't make sense. Change that to "microseconds", and it would make more sense. But Ethernet can handle single-digit-ms latencies without breaking a sweat.
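If you want to sanity-check your own LAN the same way without ping, here's a minimal sketch; it assumes you run a trivial UDP echo server yourself, and ECHO_HOST/ECHO_PORT are placeholders, not anything from this thread:

import socket, time

ECHO_HOST, ECHO_PORT = "192.168.1.10", 9999   # placeholder echo server

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(1.0)
samples = []
for _ in range(100):
    t0 = time.perf_counter()
    s.sendto(b"x", (ECHO_HOST, ECHO_PORT))
    s.recvfrom(64)                             # wait for the echo
    samples.append((time.perf_counter() - t0) * 1e6)
samples.sort()
print("min %.0f us, median %.0f us" % (samples[0], samples[len(samples) // 2]))

Note this times the whole stack (app, kernel, NIC, and switch), so it's an upper bound on what the switch itself adds.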
Re:Did you mean "microseconds"? (Score:2)
Re:Did you mean "microseconds"? (Score:3, Informative)
On a Force10 switch, with 2 nodes on the same blade:
tg-c844:~ # ping tg-c845
PING tg-c845.ncsa.teragrid.org (141.142.57.161) from 141.142.57.160 : 56(84) bytes of data.
64 bytes from tg-c845.ncsa.teragrid.org (141.142.57.161):
Re:Did you mean "microseconds"? (Score:2)
2) A fair number of ethernet switches exist for ~500 nodes @ 1Gbps that will have predictable latency, like the force10 you are describing. 900 nodes would be tough, admittedly, at the moment. Also, I don't think you meant to say "router" -- you almost certainly are switching if it's all configured right.
3) Myrinet is very specialized and uses cut-through switching. Ethernet is a generalized protocol that can be used on a WAN, and is almost always store-and-forward. Store-a
Re:Did you mean "microseconds"? (Score:2)
Wow, you must use some really old hardware. My packets arrive before I send them:
Re:Did you mean "microseconds"? (Score:2)
Milliseconds? (Score:2, Funny)
Oh, well. People tell me I'm just slow.
sharing memory over ethernet? (Score:1, Flamebait)
Maybe I should RTFA...
Re:sharing memory over ethernet? (Score:2)
Re:sharing memory over ethernet? (Score:2)
However, there are applications that "share memory" over networks; Oracle RAC springs to mind, where the nodes in the cluster share database blocks as required. However, Oracle recommend gigabit point-to-point connections between nodes, rather than a general-purpose network. The latter tends to make the cluster unusable.
Re:sharing memory over ethernet? (Score:3, Interesting)
Lesson: Use appropriate Tech. (Score:2)
But it's never been a really high speed protocol. It's easy to beat, speed-wise, as long as you know what the network use looks like ahead of time.
Which of course is a killer for m
Re:Lesson: Use appropriate Tech. (Score:2)
Basically, at a given level of tech, you should be able to build a network that is faster than an ethernet network at that level of tech. But it will be more complicated to set up and maintain.
No kidding (Score:5, Interesting)
When I was writing applications at the San Diego Supercomputer Center, latency between nodes was the single greatest obstacle to getting your CPUs running at full capacity. A CPU waiting for its data is a useless CPU.
Generally speaking, clusters that want high performance used something like Myrinet instead of Ethernet. It's like the difference between consumer, prosumer, and professional products you see in, oh, every industry across the board.
As a side note, the way many parallel apps solve the latency issue is by overlapping their communication and computation phases instead of keeping them discrete; this can greatly reduce the time a CPU sits idle.
The KeLP kernel does overlapping automatically for you if you want: http://www-cse.ucsd.edu/groups/hpcl/scg/kelp.html [ucsd.edu]
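The overlap pattern is simple with any non-blocking message API. A sketch using mpi4py (my example, not KeLP's actual interface): start the transfer, compute on the data you already have, and only block when you truly need the remote piece.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

halo = np.empty(1024)
data = np.random.rand(1 << 20)

# start the exchange, but don't wait for it yet
reqs = [comm.Isend(data[:1024], dest=left),
        comm.Irecv(halo, source=right)]

interior = data[1024:].sum()   # useful work while the bytes are in flight

MPI.Request.Waitall(reqs)      # block only once the halo is actually needed
total = interior + halo.sum()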
Maybe useless info: TOP500 interconnect statistics (Score:3, Informative)
That reminded me of the TOP500's statistics generator [top500.org], so I just had to look up the current list's (November 2005) statistics for "interconnect family" [top500.org]. For those that are curious:
Re:Maybe useless info: TOP500 interconnect statist (Score:2)
It depends on what you're doing. If your job is highly parallel, Ethernet is fine. But what happens when every CPU needs access to every other CPU's results in "real time"? Well, low latency is then a must. 1 ms of latency is potentially millions of wasted cycles.
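That "millions of wasted cycles" figure checks out (3 GHz is my example clock, not a number from the thread):

CLOCK_HZ = 3.0e9   # assumed 3 GHz CPU
for latency_s, label in ((1e-3, "1 ms"), (100e-6, "100 us"), (1e-6, "1 us")):
    print("%6s stall = %.0f cycles" % (label, latency_s * CLOCK_HZ))

A 1 ms stall burns three million cycles; even a single microsecond costs 3,000.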
Re:Maybe useless info: TOP500 interconnect statist (Score:2)
In practice, cheap is the reality. Just like how consumer goods dominate the market, with much less prosumer and professional equipment sold.
Fast interconnects are way more expensive than ethernet. People that want the extra performance, though, pay for it.
OK article, bad title (Score:2)
I guess "Some Tiny Percentage of Data Centers use Something Faster than Ethernet in addition to Ethernet" didn't fit on the page.
The real title, "Most Data Centers aren't stupid.. (Score:2)
SGI had some kind of shared-memory-over-Ethernet protocol back in the day. Worked about as well as a steam-powered ornithopter. It was designed for customers too cheap, or too unconcerned about performance, to buy anything better.
And I dabbled in OpenMP or whateveritwas back at a contract with just one such cheap customer, and they got what they paid for. Here's a nickel, kid.
Ethernet is Ethernet, and InfiniBand et al. is InfiniBand et al., dad-gummit.
Milliseconds? 100's of them? (Score:2)
Someone needs to look at their network... (Score:2)
Average latency is around 20ms.
Now, I know this isn't as plain as straight Ethernet, but I'd have guessed that the latency on ATM, plus the changes from 802.11g to Ethernet to ATM to Ethernet to whatever, would if anything have been worse.
So either someone is usin
The worst post! (Score:3, Informative)
$ ping google.com
PING google.com (64.233.167.99) 56(84) bytes of data.
64 bytes from 64.233.167.99: icmp_seq=1 ttl=241 time=20.1 ms
64 bytes from 64.233.167.99: icmp_seq=2 ttl=241 time=19.6 ms
64 bytes from 64.233.167.99: icmp_seq=3 ttl=241 time=19.5 ms
What a shame that such a post is on the front page of slashdot! Someone please s/milli/micro.
Slashdot summary wrong, actual article is better (Score:4, Interesting)
Ethernet latency is about 100us through a GigE switch, round-trip. A full-sized packet takes about 200us (microseconds), round-trip. Single-ended latency is about half of that.
There are proprietary technologies that have much faster interconnects, such as the InfiniBand technology described in the article. But the article also mentions the roadblock that a proprietary technology represents compared to a widely-vendored standard. The plain fact of the matter is that Ethernet is so ridiculously cheap these days that it makes more sense to solve the latency issue in software, for example by designing a better cache coherency management model and better clustered applications, than it does with expensive proprietary hardware.
-Matt
Re:Slashdot summary wrong, actual article is bette (Score:2, Insightful)
Re:Slashdot summary wrong, actual article is bette (Score:2)
My comments stand. Start posting prices and let's see how your idea of an open standard stacks up against reality. Oh yah, and remember: for every $1000 you spend on your interconnect, that's $1000 less you have to spend on CPUs and programmers with a clue.
The reality is that there is only one *correct* way to do a fast interconnect, and that is to build it into the CPU itself. Oh wait, AMD intends to do just that! That's what I'm waiting fo
Re:Slashdot summary wrong, actual article is bette (Score:2)
I'd just like to quietly point out that costs are often switching costs, not HBA. Even 10GigE is getting reasonable for the adapters, but the switching costs are still completely astronomical.
It will be very interesting to see what the future of interconnects & cache coherency holds. We're definitely rapidly approaching the cusp of something new. AMD today announced they're making a new socket for external peripherals, a
Re:Slashdot summary wrong, actual article is bette (Score:2)
Open Standard says nothing about price.
IB HBAs might be cheap, but the switching fabric sure as fuck ain't.
As for cache coherency, you were addressing the man trying to change the cache coherency game. Watch out, skinny.
Lastly, there are some proprietary gigabit technologies (non IP based) that, while not 2.7 usec, are very close. Numerous MPI implementations are written with these technologies, although many also require hardware.
I don't think anyone is wr
Ethernet Problems, IB problems, etc (Score:2, Interesting)
One thing that isn't mentioned in the article is the amount of CPU power required to send out Ethernet packets. The typical rule is that 1 GHz of processing power is required to put 1 Gb/s of data on the wire. So, if you want to send 10 Gb/s of data, you'd need 10 GHz of processor - a pretty steep price. Some companies have managed to get this down to 1 GHz per 3 Gb/s, and one startup (NetEffect) is now claiming roughly ~0.1 GHz for ~8 Gb/s on the wire, using iWarp. With t
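Plugging the parent's own numbers into that rule of thumb (the ratios are theirs, the code is mine):

def cpu_ghz_needed(gbps, ghz_per_gbps):
    # rule of thumb: CPU needed scales linearly with wire rate
    return gbps * ghz_per_gbps

for gbps, ratio, who in ((10, 1.0,     "classic 1 GHz per 1 Gb/s"),
                         (10, 1.0 / 3, "improved stacks"),
                         (8,  0.1 / 8, "NetEffect's iWarp claim")):
    print("%s: ~%.1f GHz for %d Gb/s" % (who, cpu_ghz_needed(gbps, ratio), gbps))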
Tolkien ring (Score:4, Funny)
Well yeah... (Score:2)
For those who don't understand... (Score:4, Informative)
If you go to a high-speed network, what you get is a packet being forwarded as it's being read. By the time the first few bits are through the switch, it should be able to figure out the next hop and have the packet moving in that direction. Phone companies have huge problems with the delays in Ethernet. This is why the ATM protocol was invented; it's hard to use, awkward, and not too graceful, but it can fly through a switching network like nobody's business.
Ethernet is also extremely sloppy -- any switch along the way is allowed to throw a packet away and wait for the originator to resend, causing a HUGE hiccup in the communication stream (most if not all routers do this whenever an address is not in their forwarding table yet).
IIRC the faster protocols see a "Routing" packet in the stream and set up forwarding hardware before getting the actual packet/stream, then wait until the end of the packet (or entire stream) to tear the route down again.
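In toy form (my model, not ATM's actual signalling), the circuit idea is: pay the routing cost once at setup, then every cell is just a table lookup.

vc_table = {}   # label -> next hop, installed by signalling

def on_setup(label, next_hop):
    vc_table[label] = next_hop        # one routing decision, up front

def on_cell(label, payload):
    return vc_table[label], payload   # per-cell work is a dict lookup

def on_teardown(label):
    vc_table.pop(label, None)         # stream over; reclaim the entry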
Ethernet, however, due to its simplicity, is bridging the gaps. It's a pretty crappy protocol in general, but we keep throwing better, smarter hardware at it in an effort to brute-force it into the parameters we require. (I work for a company that makes Ethernet-over-fiber hardware, and have worked for companies based around ATM, SONET, and other interesting solutions.)
I guess the point of the article was to remind a world that is coming to believe that ethernet is the end-all be-all of networking that it was always just the simplest hack available and therefore the easiest to deal with.
Just like SNMP.
Re:For those who don't understand... (Score:2)
First, Ethernet doesn't forward packets. It forwards frames.
Most (all?) Ethernet switches read just the destination MAC and start forwarding the frame, just as you've described in the next paragraph. If a switch can't, because there's no bridge-table entry for the destination, it floods the frame.
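That learn-on-source, flood-on-miss behavior fits in a few lines; a toy model (mine, obviously nothing like real switch silicon):

bridge_table = {}   # MAC address -> port it was last seen on

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    bridge_table[src_mac] = in_port                  # learn where src lives
    out = bridge_table.get(dst_mac)
    if out is not None and out != in_port:
        return [out]                                 # known: one output port
    return [p for p in all_ports if p != in_port]    # unknown: flood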
Re:For those who don't understand... (Score:2)
Not so sure (Score:2)
But here's where I notice some performance. We've got all the servers on a gigabit VLAN. I can shift a 300MB file between servers in under 20 seconds. Transferring it over a 5Mbit link takes five minutes.
So we did what we could to eliminate latency and we see it in the performanc
The didn't do enough research on the topic (Score:2)
Re:Store & Forward Unnecessary? (Score:2)
Re:Store & Forward Unnecessary? (Score:2)
Hard to credit any article with so little context, though.
Seriously folks, we engine-hearing types had better learn to write, because it's a fair call that the journalists don't understand engineering.
Re:Channel Bonding (Score:2)
Re:Channel Bonding (Score:5, Funny)
Re:Channel Bonding (Score:2)
Re:Ugh, they must have read an old paper (Score:2)
http://www.foundrynet.com/services/documentation/edgeiron_install/7_intro_8X10G.html [foundrynet.com]
It employs store-and-forward, as do most new Cisco switches (if I remember correctly).