Ethernet at 10 Gbps 462
An anonymous reader writes "This article talks about 10 Gigabit Ethernet and asks, 'But just how much data can a person consume?' Currently at work, we're working on a major project to re-architect our core application platform so that the different systems can be de-coupled and hosted separately. The legacy design implicitly relies on systems being in the same LAN due to bandwidth-expensive operations (e.g., database replication). Having this much bandwidth would change the way we design. What would you do with this much bandwidth?"
What would I do? (Score:5, Interesting)
What would Joe Sixpack do with it? I'm not sure at the moment. Thing is, since we're working within our limitations today, it's hard to conceive of what use it'd be. However, what happens when it becomes commonplace? It does open doors. Imagine if cable companies traded in coax for Ethernet. They could easily send uncompressed HDTV. That'd be pretty slick.
Re:What would I do? (Score:1, Interesting)
note: i'm pulling most of these numbers out of my ass, as i am too lazy to look up the real values. but they're close enough for our purposes here.
say hdtv is 1280x1024 (yes, i know it's not 4x3), at 24 bit color. that's 3,932,160 bytes per frame, uncompressed. at 60 frames per second, that's 235,929,600 bytes per second, or 1,887,436,800 bits per second, uncompressed. note that this is without sound. with ethernet overhead, that's about 2 gigabits per second PER CHANNEL. assuming that you could even approach a real throughput of 10 gbps on 10gbps ethernet, you'd have 5 channels (with no sound).
care to rethink your statement?
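For what it's worth, the arithmetic above checks out. A quick sanity check in Python, using the same rough assumptions (1280x1024, 24-bit color, 60 fps, no sound, and ignoring Ethernet overhead):

```python
# Uncompressed "HDTV" stream, per the post's rough assumptions:
# 1280x1024 pixels, 24-bit color, 60 frames per second, no audio.
width, height = 1280, 1024
bytes_per_pixel = 3                     # 24-bit color

frame_bytes = width * height * bytes_per_pixel
stream_bps = frame_bytes * 60 * 8       # 60 frames/s, 8 bits/byte

print(frame_bytes)                      # 3,932,160 bytes per frame
print(stream_bps)                       # 1,887,436,800 bits per second
print(int(10e9 // stream_bps))          # 5 such channels fit in 10 Gbps
```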
Lots (Score:4, Interesting)
We migrated to 100meg, and it was like night and day, but we still need more. We finally got 1gig to IT's network, and it's still too slow to push files with lots of users.
We have a burstable OC192 to our 2nd remote datacenter, OC48/12's to the smaller datacenters. But this is for production networks that need bandwidth, not desktop usage.
Also, my buddy in Japan just told me he got 100Meg DSL. The stuff you can do when bandwidth isn't a concern! Internet TV stations are already popping up there, which is amazing. Can't wait for this to catch on in the US. I just upgraded to 6M DSL from Speakeasy, and it's too fast for FilePlanet.
Speed kills
oooh (Score:5, Interesting)
Interactive porn? No? (Score:2, Interesting)
on 30-year-old buried POTS wiring that's 5+ miles away. Fiber to the terminal point will not happen here before hell freezes over, since the Baby Bells are not spending that kind of money.
However, with that kind of bandwidth to the internet, I could set up some homebrew web sites, telecommute to work, and go back to (online) school all at the same time.
I hate to be repetitious, but that kind of infrastructure would allow some really great collaborative (Beowulf?) computing.
Re:What would I do? (Score:3, Interesting)
While I agree it's basically two paragraphs of the same standard, keeping the media separate certainly makes sense if you want to talk about market penetration, mostly because the market penetration of the two media is, so far, radically different.
Coax is OK, TP is a nice hack; both do what they were designed to do. Fiber is better, but it's not yet marketed in a way that will encourage people to switch (when a 1GE link is firmware-upgradable to 10GE, we can talk).
The note at the bottom about desktop use did confuse me, though: how would 1GE reduce latency on desktops? Maybe it's just that I'm used to a different market, but I get the impression that this benefit of a bigger pipe only shows up in large, non-streaming server-to-desktop transfers, like database select queries or spreadsheets/word-processing documents. Is that enough to call it "lower latency," unqualified? Smaller, regular transfers like AIM or mail checking can take considerably longer, simply because much of what a bigger link buys you is really an optimization for larger packet sizes.
Videophones, duh (Score:1, Interesting)
Re:What would I do? (Score:3, Interesting)
An animated movie! That is a hog! Even with today's codecs, I can't play a video over the network, never mind at high resolution (well, it's still 720x576, but still, that's not that high).
Fast Networking and NAS at home (Score:3, Interesting)
I beg to differ. Sit tight.
Here's an idea for you geeks that for some reason nobody is busy doing yet.
Quite a few IT people I know run some form of Linux or BSD server at home, doing a variety of stuff from fileserver to firewall to mail/DNS server etc., though on their desktops they run 2K or XP for reasons such as gaming, simplicity, wife, and so forth.
Here's the idea. Pool all your hard drives at home in the Linux/BSD box, configure a software RAID-5, share it using Samba, and network-boot all the 2K/XP machines at home from this network-attached storage. Using gigabit Ethernet, of course.
What do you get? Every box gets a system drive ("Drive C") that can go at 100 MBytes/sec, plus RAID-5 redundancy for all your machines at home. Hard drives, which generate heat and noise, are no longer in your computers.
The benefits are enormous.
There's a small con though - you won't be able to drag your computer to a LAN party (unless you drag the server too).
Currently there is a shortage of one element though: software that can boot Win2K/XP over PXE from a fileserver. Such software exists in the commercial world. One product is made by a French company called Qualystem, which won't sell less than a 1-server-plus-25-client license, at a whopping 2750 Euro; they show zero interest in smaller customers. A second product, Venturcom BXP, does the same but falls short because it requires a dedicated server that only runs on 2K/XP/2K3 - no BSD/Linux with Samba for you.
If someone in the open-source community were to pick this glove up and write a small driver that emulates a hard disk for 2K/XP on one side (the kind you throw in for a RAID controller by pressing F6 when installing Windows) and speaks SMB to the fileserver on the other, we'd be set.
We'd also realize that gigabit Ethernet is not enough: a cheap 4-modern-ATA-drive RAID-5 setup (which effectively streams data onto 3 of them, one drive's worth of capacity being used for parity at any given moment) writes at 40 MByte/sec x 3 = 120 MByte/sec, and reads at 60 MByte/sec x 3 = 180 MByte/sec.
The Gigabit Ethernet _will_ pose a bottleneck.
If we add more drives, the bandwidth requirement broadens.
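To put numbers on it, here's a sketch of that throughput estimate (the per-drive 40 MB/s write and 60 MB/s read figures are the post's rough assumptions for modern ATA drives):

```python
# RAID-5 across 4 drives: data is striped over 3, one drive's worth of
# capacity holds parity. Per-drive speeds are the post's assumptions.
drives, parity = 4, 1
data_drives = drives - parity

write_mb_s = 40 * data_drives           # 120 MB/s aggregate write
read_mb_s = 60 * data_drives            # 180 MB/s aggregate read
gbe_mb_s = 1000 / 8                     # GbE line rate, ~125 MB/s

print(read_mb_s > gbe_mb_s)             # True: GbE chokes on reads
```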
There's also the small issue of the PCI bus: your server must have its Ethernet off the PCI bus, as in Intel's 875 chipset or nVidia's nForce 250, or on a PCI-Express card. Otherwise the IDE and GbE traffic will choke each other on the too-narrow PCI bus.
Anyway, once people start doing this, 1000BaseT is back to where 100BaseTX has been for 5 years - choking. I say: bring on 10GbE!
Re:That's exactly the quote I remembered (Score:3, Interesting)
He programmed on a Mac, and a compile typically took 5 to 10 minutes - enough to get a cup of coffee, check the newspaper, and have a quick chat with a cow-orker. Then he got a new Mac, and it compiled the program in a minute or so. No time for coffee, no time for news, no time for small talk.
So the new, faster computer was too fast... he ended up stuck at his desk more with the new computer.
First of all... (Score:3, Interesting)
Try to find a host OS with a TCP/IP stack that can properly utilize 1 gigabit ethernet, let alone 10 gigabits. Hint: It ain't Linux...
Try to find a storage solution that can read or write that fast. I'm thinking something like EMC with about 6-8 2 gigabit HBAs using Veritas DMP (dynamic multi-pathing).
Try to get all of the above, along with a 133 MHz, 64-bit PCI-X bus that still can't actually keep up with 10 gigabits of data (133 MHz, 64-bit PCI-X is only about 1024 megabytes per second, not counting overhead).
The problem is, right now, the rest of the system just can't keep up with 10 gigabit Ethernet. The only box I would use that can handle that many I/O paths to storage (we're talking six to eight 64-bit, 66 MHz, 2-gigabit FC host adapters) is a Sun Fire 6800 or something larger. The problem is, Sun doesn't yet support PCI-X, so your 10-gig Ethernet card is going to be limited to a 66 MHz, 64-bit PCI version, which will transfer a maximum of 512 MB per second, not counting overhead. That is less than half of the available bandwidth of 10 Gig Ethernet.
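The bus arithmetic is easy to reproduce: parallel-PCI peak bandwidth is just clock rate times bus width. (The post rounds 1064 down to about 1024 and 528 down to about 512.)

```python
def pci_mb_per_s(mhz, bits):
    """Peak parallel-PCI bandwidth in MB/s, ignoring protocol overhead."""
    return mhz * bits // 8

pcix_133 = pci_mb_per_s(133, 64)        # 1064 MB/s (post: "about 1024")
pci_66 = pci_mb_per_s(66, 64)           # 528 MB/s (post: "about 512")
ten_gbe = 10_000 // 8                   # 1250 MB/s line rate for 10GbE

print(pcix_133 < ten_gbe)               # True: even 133 MHz PCI-X falls short
print(pci_66 * 2 < ten_gbe)             # True: 66 MHz PCI is under half
```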
You can forget about putting it in any Intel based system. There are not enough I/O busses and I/O controllers in even the beefiest Xeons or Opterons that can handle this much bandwidth (to disk).
Also, if your application doesn't need to write all of that data to disk, then how large is this dataset in memory that needs to be transferred at 10-gigabit speeds? A server with 64 GB of memory could transfer its entire memory set over 10 gigabit Ethernet in less than 60 seconds.
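That last figure is easy to verify, assuming the full 10 Gbit/s line rate is actually achievable:

```python
# Time to stream a full 64 GB memory image over 10 GbE at line rate.
mem_bytes = 64 * 2**30                  # 64 GB of RAM
line_rate_bps = 10e9                    # 10 Gbit/s, best case

seconds = mem_bytes * 8 / line_rate_bps
print(round(seconds, 1))                # ~55 s: under a minute, as claimed
```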
A far better, and more economical, solution, if you really need 10 gigabits of throughput to the network, would be to use the same Sun server with a product called Sun Trunking, which lets you bond multiple gigabit Ethernet interfaces together. You get all of the throughput you want, plus more fault tolerance. I've set it up before: you can have a continuous ping going across 4 bonded connections, pull 3 of the 4, and the ping keeps going without even a dropped packet. It's genuinely fault tolerant, and it uses your existing switches, NICs, and hardware, without forcing you to upgrade your entire core switch architecture.
10 Gb/s, that's it? (Score:4, Interesting)
And that's just the tip of the iceberg. Back when the 300 bps modem came out, they figured that speed was as fast as anyone needed, because it was near impossible for anyone to type more than 30 characters per second. Then the 1200 and 2400 bps modems came out, and those were thought to be as fast as anyone needed, because almost no one can read at that rate. Then 9600 and 14.4k, because it takes almost no time to load the next 80x25 page of colored text. Then the 33.6k and 56k modems (still the fastest modems for one normal telephone line): now you can download a 300x200x256-color picture in no time. As bandwidth increases we find new ways to max it out, and with increased bandwidth we come up with new ways of using the computer, because now it can do them.
someone's trying to sound important (Score:3, Interesting)
The term "cardinality" is wrong for several reasons. First, image data isn't represented as sets, it's represented as ordered sequences, and when talking about ordered sequences, both computer scientists and mathematicians talk about their "length", not their "cardinality".
Furthermore, what matters is not the size of what you want to transmit, but the rate at which you need to transmit it. We call that the "data rate" or (somewhat sloppily) the "required bandwidth".
So, the overall point of the article, that there is no single media stream that requires 10 Gbit bandwidth, is correct. However, that's pretty much irrelevant: file servers, video servers, and aggregate usage still require that kind of bandwidth. A family of four might require that bandwidth. You might want that bandwidth to have your backup happen in 1 minute instead of 10 minutes. So, there are lots of reasons to want 10 Gbit Ethernet, provided the price is right.
As for his use of the term "cardinality", the author apparently doesn't quite know the terminology of the field.
Re:Play original quake obviously (Score:3, Interesting)
1. A stream of data pumped via UDP over a WAN with a satellite link bang in the middle of it. Very high latency - the bits take >600 ms to get to the other end - but data can be sent at "wire speed," since no acknowledgement of each packet is required == potentially massive bandwidth.
2. A large file being FTPed over the same WAN link. FTP typically runs over TCP/IP, and TCP requires acknowledgement of the packets it sends. TCP (wrongly) interprets the long round-trip time (>1200 ms) as link congestion and lowers the transmission rate. Oops!
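The effect in case 2 follows from TCP's window mechanics: in steady state, throughput is capped at window size divided by RTT, no matter how fat the pipe. A sketch with illustrative numbers (the 64 KB window is the classic un-scaled TCP maximum; the RTT matches the satellite example above):

```python
# Steady-state TCP throughput cap = window / round-trip time.
window_bytes = 64 * 1024                # max TCP window without window scaling
rtt_s = 1.2                             # >1200 ms satellite round trip

cap_bps = window_bytes * 8 / rtt_s
print(round(cap_bps / 1000))            # ~437 kbit/s, regardless of link speed
```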
10GbE for SuperComputing (Score:2, Interesting)
For those posters complaining about not getting near-GbE performance: you are not properly tuning your system and network. You need to think big - large frame sizes (network: 9k to 64k), large TCP windows (system buffers: think MB for GbE and GB for 10GbE), large I/O reads/writes (system disk), and account for latency (calculate your bandwidth*delay product). I've gotten a constant ~980 Mbps throughput on a GbE network that was tuned.
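The bandwidth*delay product is the key calculation: it tells you the TCP window needed to keep the pipe full. A quick sketch (the 50 ms RTT is an illustrative assumption; long-haul paths need proportionally bigger windows):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """TCP window needed to fill a path: bandwidth * round-trip time."""
    return bandwidth_bps * rtt_s / 8

print(bdp_bytes(1e9, 0.05) / 2**20)     # GbE at 50 ms RTT: ~6 MB window
print(bdp_bytes(10e9, 0.05) / 2**20)    # 10GbE at 50 ms RTT: ~60 MB window
```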