Ethernet at 10 Gbps

An anonymous reader writes "This article looks at 10 Gigabit Ethernet and asks, 'But just how much data can a person consume?' At work, we're in the middle of a major project to re-architect our core application platform so that the different systems can be decoupled and hosted separately. The legacy design implicitly relies on the systems sharing a LAN, because of bandwidth-expensive operations (e.g., database replication). Having this much bandwidth would change the way we design. What would you do with it?"
This discussion has been archived. No new comments can be posted.

  • What would I do? (Score:5, Interesting)

    by Anonvmous Coward ( 589068 ) on Sunday July 25, 2004 @01:05AM (#9792681)
The company I used to work for was sending very high resolution images from multiple cameras, uncompressed, from one unit to another to perform analytical operations on them. I think they managed to make it work at a gigabit, but 10 would be much nicer for them.

What would Joe Sixpack do with it? I'm not sure at the moment. The thing is, since we're working within our limitations today, it's hard to conceive of what use it'd be. However, what happens when it becomes commonplace? It does open doors. Imagine if cable companies traded in coax for Ethernet. They could easily send uncompressed HDTV. That'd be pretty slick.
  • Re:What would I do? (Score:1, Interesting)

    by Anonymous Coward on Sunday July 25, 2004 @01:20AM (#9792772)
Imagine if cable companies traded in coax for Ethernet. They could easily send uncompressed HDTV. That'd be pretty slick.

Note: I'm pulling most of these numbers out of my ass, as I am too lazy to look up the real values, but they're close enough for our purposes here.

Say HDTV is 1280x1024 (yes, I know that's not the right aspect ratio) at 24-bit color. That's 3,932,160 bytes per frame, uncompressed. At 60 frames per second, that's 235,929,600 bytes per second, or 1,887,436,800 bits per second, uncompressed. Note that this is without sound. With Ethernet overhead, that's about 2 gigabits per second PER CHANNEL. Assuming you could even approach a real throughput of 10 Gbps on 10 Gbps Ethernet, you'd have 5 channels (with no sound).
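The arithmetic holds up; here it is as a quick Python sketch. The resolution, frame rate, and the ~5% overhead factor are the assumptions above, not real HDTV specs:

```python
# Back-of-the-envelope check of the numbers above, using the assumed
# figures (1280x1024, 24-bit color, 60 fps, ~5% Ethernet overhead).
WIDTH, HEIGHT = 1280, 1024   # assumed resolution (not a real HDTV format)
BYTES_PER_PIXEL = 3          # 24-bit color
FPS = 60                     # assumed frame rate
OVERHEAD = 1.05              # assumed ~5% Ethernet framing overhead

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
bits_per_second = bytes_per_frame * FPS * 8

print(f"{bytes_per_frame:,} bytes/frame")           # 3,932,160
print(f"{bits_per_second:,} bits/s uncompressed")   # 1,887,436,800 (~1.9 Gbps)
print(f"channels on 10 GbE: {int(10e9 / (bits_per_second * OVERHEAD))}")  # 5
```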

    care to rethink your statement?
  • Lots (Score:4, Interesting)

    by BrookHarty ( 9119 ) on Sunday July 25, 2004 @01:23AM (#9792796) Journal
We use lots of shared drives, remote desktop applications, X traffic, moving core files, database dumps, and email with very large attachments (Exchange, to boot).

We migrated to 100 meg and it was like night and day, and we still need more. We finally got 1 gig to IT's network, and it's still too slow to push files when lots of users are on it.

We have a burstable OC192 to our 2nd remote datacenter, and OC48s/OC12s to the smaller datacenters. But this is for production networks that need bandwidth, not desktop usage.

Also, my buddy in Japan just told me he got 100-meg DSL; the stuff you can do when bandwidth isn't a concern. Internet TV stations are already popping up there, amazing. Can't wait for this to catch on in the US. I just upgraded to 6M DSL from Speakeasy, and it's too fast for FilePlanet.

    Speed kills :)
  • oooh (Score:5, Interesting)

    by iamdrscience ( 541136 ) on Sunday July 25, 2004 @01:39AM (#9792869) Homepage
    IDE over IP. Yes, it does exist.
  • by quarkscat ( 697644 ) on Sunday July 25, 2004 @02:36AM (#9793088)
I'm still waiting for decent DSL, since I'm on 30-year-old buried POTS wiring that's 5+ miles out. Fiber to the terminal point will not happen here before hell freezes over, since the Baby Bells are not spending that kind of money.

However, with that kind of bandwidth to the internet, I could set up some homebrew web sites, telecommute to work, and go back to (online) school, all at the same time.

I hate to be repetitious, but that kind of infrastructure would allow some really great collaborative (Beowulf?) computing.
  • Re:What would I do? (Score:3, Interesting)

    by perlchild ( 582235 ) on Sunday July 25, 2004 @02:40AM (#9793103)
Oddly enough, the article barely mentions 10G over fibre, which would be good (if a bit expensive to put in someone's home). It focuses on 10GBase-CX4, rehashing once more the idea that existing equipment is reusable. And it gets even more confusing when it talks up the advantages of 10GBase-CX4 in one paragraph and quotes sales of fiber equipment (FTTH, to be specific) in the next.

While I agree it's basically two paragraphs on the same standard, keeping the media separate certainly makes sense if you want to talk about market penetration, mostly because the market penetration of the two media is, so far, radically different.

Coax is OK, TP is a nice hack; both do what they were designed to do. Fiber is better, but it's not yet marketed in a way that will encourage people to switch (when a 1GE link is firmware-upgradable to 10GE, we can talk).

The note at the bottom about desktop use did confuse me, though: how would 1GE reduce latency on desktops? Maybe it's just that I'm used to a different market, but I get the impression that this benefit of a bigger pipe only shows up in large, non-streaming server-to-desktop transfers, like database select queries or spreadsheet/word-processing documents. Is that enough to call it "lower latency", unqualified, when smaller transfers (AIM, mail checking, other regular small traffic) can take considerably longer, simply because much of the larger link is really an optimisation for larger packet sizes?
  • Videophones, duh (Score:1, Interesting)

    by Anonymous Coward on Sunday July 25, 2004 @04:23AM (#9793353)
    We're still stuck with voice-only phones. One day we'll look back at these times and wonder how in the world we survived without video-phones.

  • Re:What would I do? (Score:3, Interesting)

    by Pieroxy ( 222434 ) on Sunday July 25, 2004 @04:37AM (#9793383) Homepage
    Forget about HDTV dude! It's already taking so much freaking time when I try to save my 600 dpi US letter image that I just scanned.

Animated video, that is a hog! Even with today's DivX codecs, I can't play a video over the network, let alone at high resolution (well, it's still 720x576, which isn't that high).
  • by MikShapi ( 681808 ) * on Sunday July 25, 2004 @05:06AM (#9793431) Journal
I saw some comments here saying Gigabit Ethernet is enough, and that there's nothing we could do with 10GbE.

    I beg to differ. Sit tight.

Here's an idea for you geeks that, for some reason, nobody is working on yet.

    Quite a few IT people I know run some form of Linux or BSD server at home, doing a variety of stuff from fileserver to firewall to mail/DNS server etc., though on their desktops they run 2K or XP for reasons such as gaming, simplicity, wife, and so forth.

Here's the idea. Pool all the hard drives in your home on the Linux/BSD box, configure software RAID-5, share it using Samba, and network-boot all the 2K/XP machines in the house from this network-attached storage. Using Gigabit Ethernet, of course.

What do you get? Every box gets a system drive, "Drive C", that can go at 100 MBytes/sec. RAID-5 redundancy for all your machines at home. Hard drives, which generate heat and noise, are no longer in your computers.

    The benefits are enormous.

There's a small con, though: you won't be able to drag your computer to a LAN party (unless you drag the server too ;-)

Currently there is a shortage of one element, though: software that can boot Win2K/XP using PXE from a fileserver. Such software exists in the commercial world, made by a French company called Qualystem, which won't sell it in anything less than a 1-server+25-client license costing a whopping 2750 euros; they show zero interest in smaller customers. A second product, Venturcom BXP, does the same but falls short because it requires a dedicated server that only runs on 2K/XP/2K3; no BSD/Linux with Samba for you.

If someone in the open-source community were to take up this gauntlet and write a small driver that emulates a hard disk for 2K/XP on one side (the kind you throw in for a RAID controller by pressing F6 when installing Windows) and uses SMB or whatever to access a UNIX fileserver on the other, we'd all be able to rig up a very nifty setup and use the combined speed of all the hard drives in the house.

We'd also realize that Gigabit Ethernet is not enough: a cheap RAID-5 of four modern ATA drives (which effectively streams three drives' worth of data, one drive's worth being used for parity at any given moment) writes at 40 MByte/sec x 3 = 120 MByte/sec and reads at 60 MByte/sec x 3 = 180 MByte/sec.

Gigabit Ethernet _will_ be a bottleneck, and if we add more drives, the bandwidth requirement only grows.
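A minimal sanity check of those figures in Python; the per-drive rates are the assumptions above for circa-2004 ATA drives, not benchmarks:

```python
# Does a 4-drive software RAID-5 saturate Gigabit Ethernet?
DATA_DRIVES = 3                  # 4 drives; one drive's worth holds parity
WRITE_MB_S, READ_MB_S = 40, 60   # assumed per-drive streaming rates

raid_write = WRITE_MB_S * DATA_DRIVES   # 120 MB/s aggregate write
raid_read = READ_MB_S * DATA_DRIVES     # 180 MB/s aggregate read

gige_mb_s = 1e9 / 8 / 1e6               # ~125 MB/s raw, less after overhead

print(f"RAID-5: write {raid_write} MB/s, read {raid_read} MB/s")
print(f"GigE ceiling: ~{gige_mb_s:.0f} MB/s -> the network is the bottleneck")
```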

There's also the small issue of the PCI bus: your server must have its Ethernet off the PCI bus, as with Intel's 875 chipset, nVidia's nForce 250, or a PCI Express card. Otherwise the IDE controllers and the GbE NIC will choke each other on the too-narrow PCI bus.

Anyway, once people start doing this, 1000BaseT is back where 100BaseTX has been for the last five years: choking. I say: bring on 10GbE!
  • by TheToon ( 210229 ) on Sunday July 25, 2004 @06:16AM (#9793554) Journal
    I haven't said that, but a columnist in Byte Magazine in the mid-80s had a rant about this.

He programmed on a Mac, and compilation typically took 5 to 10 minutes. Enough to get a cup of coffee, check the newspaper, and have a quick chat with a cow-orker. Then he got a new Mac, and it compiled the program in a minute or so. No time for coffee, no time for news, no time for smalltalk.

So the new, faster computer was too fast... he ended up spending more time waiting at his desk.

  • First of all... (Score:3, Interesting)

    by illumin8 ( 148082 ) on Sunday July 25, 2004 @07:43AM (#9793717) Journal
    Let's see, what would I do with all that bandwidth:

    Try to find a host OS with a TCP/IP stack that can properly utilize 1 gigabit ethernet, let alone 10 gigabits. Hint: It ain't Linux...

    Try to find a storage solution that can read or write that fast. I'm thinking something like EMC with about 6-8 2 gigabit HBAs using Veritas DMP (dynamic multi-pathing).

Try to get all of the above along with a 133 MHz, 64-bit PCI-X bus, which still can't actually keep up with 10 gigabits of data. (133 MHz x 64-bit PCI-X is only about 1 GB per second, not counting overhead.)

The problem is that, right now, the rest of the parts of a system just can't keep up with 10 Gigabit Ethernet. The only box I would use that can handle that many I/O paths to storage (we're talking six to eight 64-bit, 66 MHz, 2-gigabit FC host adapters) is a Sun Fire 6800 or something larger. The problem is, Sun doesn't yet support PCI-X, so your 10 gig Ethernet card will be limited to a 66 MHz, 64-bit PCI version, which can only transfer a maximum of roughly 512 MB per second, not counting overhead. That is less than half of the available bandwidth of 10 Gigabit Ethernet.

You can forget about putting it in any Intel-based system. There are not enough I/O buses and I/O controllers in even the beefiest Xeon or Opteron boxes to handle this much bandwidth (to disk).

Also, if your application doesn't need to write all of that data to disk, then how large is this dataset in memory that needs to be transferred at 10-gigabit speeds? Even a server with 64 GB of memory could transfer its entire memory set over 10 Gigabit Ethernet in less than 60 seconds.

A far better and more economical solution, if you really need 10 gigabits of throughput to the network, would be to use the same Sun server with a product called Sun Trunking, which allows you to bond multiple Gigabit Ethernet interfaces together. You get all the throughput you want, plus more fault tolerance. I've set it up before: you can have a continuous ping going across 4 bonded connections, pull 3 of the 4, and the ping keeps going without even a dropped packet. It's really fault tolerant, and it uses your existing switches, NICs, and hardware, without forcing you to upgrade your entire core switch architecture.
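For what it's worth, the bus and memory numbers above are easy to check; a rough sketch of raw rates, ignoring bus protocol overhead:

```python
# Peak-bandwidth check for the buses mentioned above (raw clock x width).
def bus_mb_per_s(mhz: float, width_bits: int) -> float:
    """Peak transfer rate of a parallel bus in MB/s."""
    return mhz * 1e6 * width_bits / 8 / 1e6

print(f"PCI-X, 133 MHz x 64-bit: {bus_mb_per_s(133, 64):.0f} MB/s")  # ~1064
print(f"PCI,    66 MHz x 64-bit: {bus_mb_per_s(66, 64):.0f} MB/s")   # ~528
print(f"10 GbE at line rate:     {10e9 / 8 / 1e6:.0f} MB/s")         # 1250

# And the memory-transfer claim: 64 GB pushed through a 10 GbE pipe.
print(f"64 GB over 10 GbE: {64 * 1024 / 1250:.1f} s")                # ~52 s
```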

• 10 Gb/s, that's it. (Score:4, Interesting)

    by jellomizer ( 103300 ) * on Sunday July 25, 2004 @08:53AM (#9793894)
Remember, this is 10 gigabits per second, which is only 1.25 gigabytes per second (assuming 100% of rated speed, which I have never seen happen). Right now that's faster than most computers can handle data internally, but it has its uses.
1. System-to-system backups. The $/GB price of memory and hard drive space has been dropping fast, while magnetic tape has stayed near constant. Soon the price per GB of hard drives will be lower than tape, and it will be cheaper to back your data up to other systems and removable hard drives. Even then, 3 terabytes of data can still take around 40 minutes to move (see the sketch after this list).
2. Imagine a Beowulf cluster connected at 10 Gb/s. Right now the main bottleneck in a Beowulf cluster is network bandwidth; at 10 Gb/s you're getting closer to the speed of a supercomputer's bus.
3. Uncompressed video and sound. No more lossy compression, no more crowds fighting over compression standards, and we'd always get high-quality audio and video off the network in realtime.
4. 3D. With 3D displays starting to become available, there will be more data to send over the network for 3D information.
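The sketch promised in item 1: the 40-minute figure falls straight out of the link rate, assuming line-rate transfer (which, as noted, you never quite get):

```python
# Backup time at 10 Gb/s line rate (item 1 above; decimal units).
LINK_BPS = 10e9
bytes_per_sec = LINK_BPS / 8            # 1.25 GB/s at 100% utilization

dataset_bytes = 3e12                    # the 3 TB example from item 1
seconds = dataset_bytes / bytes_per_sec

print(f"3 TB at 10 Gb/s: {seconds / 60:.0f} minutes")  # 40
```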

And that's just the tip of the iceberg. Back when the 300 bps modem came out, they figured that speed was as fast as anyone needed, because it was near impossible for anyone to type more than 30 characters per second. Then the 1200 and 2400 bps modems came out, and they thought those were as fast as anyone needed, because almost no one can read at that rate. Then came 9600 and 14.4k, because it takes almost no time to draw the next 80x25 page of colored text. Then the 33.6k and 56k modems (still the fastest for a single normal telephone line), and you could download a 300x200x256-color picture in no time. As bandwidth increases we find new ways to max it out, and with increased bandwidth we also come up with new ways of using the computer, because it can now do them.
  • by dekeji ( 784080 ) on Sunday July 25, 2004 @08:59AM (#9793908)
    "Cardinality" is the number of elements in a given mathematical set. When modems ran at 300 baud, you could forget about sending large data sets, such as images, because text and voice data took up all the available bandwidth. As connection rates increased, so did the cardinality of data that users could send. [...] Video currently represents the highest cardinality data

    The term "cardinality" is wrong for several reasons. First, image data isn't represented as sets, it's represented as ordered sequences, and when talking about ordered sequences, both computer scientists and mathematicians talk about their "length", not their "cardinality".

    Furthermore, what matters is not the size of what you want to transmit, but the rate at which you need to transmit it. We call that the "data rate" or (somewhat sloppily) the "required bandwidth".

    So, the overall point of the article, that there is no single media stream that requires 10 Gbit bandwidth, is correct. However, that's pretty much irrelevant: file servers, video servers, and aggregate usage still require that kind of bandwidth. A family of four might require that bandwidth. You might want that bandwidth to have your backup happen in 1 minute instead of 10 minutes. So, there are lots of reasons to want 10 Gbit Ethernet, provided the price is right.

    As for his use of the term "cardinality", the author apparently doesn't quite know the terminology of the field.
  • by maharg ( 182366 ) on Sunday July 25, 2004 @04:01PM (#9795920) Homepage Journal
Umm, you are kind of correct. Two scenarios:

1. A stream of data being pumped via UDP over a WAN that has a satellite link bang in the middle of it. Very high latency (the bits take >600 ms to get to the other end), but data can be sent at "wire speed" since no acknowledgement of each packet is required == potentially massive throughput.

2. A large file being FTPed over the same WAN link. FTP typically runs over TCP/IP, and TCP requires acknowledgement of the packets it sends. TCP (wrongly) interprets the long round-trip time (>1200 ms) as link congestion and lowers the transmission rate. Oops!
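The root cause in scenario 2 is the bandwidth-delay product: with a fixed window, TCP can only keep window/RTT worth of data in flight. A rough sketch; the 64 KB figure is the classic TCP maximum window without window scaling, and the RTT is the number above:

```python
# TCP throughput ceiling over a high-latency link: window / RTT.
WINDOW_BYTES = 64 * 1024   # classic TCP max window, no window scaling
RTT_S = 1.2                # the >1200 ms round trip from scenario 2

ceiling_bps = WINDOW_BYTES * 8 / RTT_S
print(f"TCP ceiling: ~{ceiling_bps / 1e3:.0f} kb/s")   # ~437 kb/s

# UDP (scenario 1) has no such ceiling: with no per-window
# acknowledgements it can run at wire speed.
```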
  • by nboscia ( 91058 ) on Sunday July 25, 2004 @06:30PM (#9796641)
Our plan for 10GbE is to support researchers with huge datasets (in the terabytes) who use our supercomputing facility. We currently use GbE, which is not sufficient for transferring such large amounts of data. So we are upgrading to 10GbE and also getting WAN connectivity at that rate (not sure yet whether this will be 10-GigE WAN PHY or not) so that researchers across the country can transfer their data in a matter of minutes or hours, as opposed to days or weeks.

For those posters complaining about not getting anywhere near GbE performance: you are not properly tuning your system and network. You need to think big: large frame sizes (9k to 64k on the network), large TCP windows (system buffers; think MB for GbE and GB for 10GbE), large I/O reads and writes (system disk), and accounting for latency (calculate your bandwidth*delay product). I've gotten a constant ~980 Mbps of throughput on a properly tuned GbE network.
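To make the bandwidth*delay arithmetic concrete, a small sketch; the 50 ms RTT is an assumed illustrative figure, not a number from the comment:

```python
# Bandwidth*delay product: the TCP window needed to keep a pipe full.
def window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to fill the link for one round trip."""
    return bandwidth_bps * rtt_s / 8

RTT_S = 0.050  # assumed 50 ms round trip (illustrative only)

for name, bps in [("GbE", 1e9), ("10 GbE", 10e9)]:
    mib = window_bytes(bps, RTT_S) / 2**20
    print(f"{name}: ~{mib:.1f} MiB of TCP window at 50 ms RTT")
# GbE -> ~6 MiB, 10 GbE -> ~60 MiB; push the RTT toward seconds and
# 10 GbE needs windows in the GB range, as the parent says.
```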
