The Internet

Ethernet at 10 Gbps

An anonymous reader writes "This article talks about 10 Gigabit Ethernet and asks, 'But just how much data can a person consume?' At work, we're in the middle of a major project to re-architect our core application platform so that the different systems can be de-coupled and hosted separately. The legacy design implicitly relies on systems being on the same LAN due to bandwidth-expensive operations (e.g., database replication). Having this much bandwidth would change the way we design. What would you do with this much bandwidth?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • HDTV baby! (Score:1, Insightful)

    by Rectal Prolapse ( 32159 ) on Sunday July 25, 2004 @01:02AM (#9792663)
    1920x1080p, minimal compression, streamed...

    HDTV recording...

    Porn. Lots of porn.

    Obvious, isn't it?
  • by weston ( 16146 ) <westonsd@@@canncentral...org> on Sunday July 25, 2004 @01:11AM (#9792709) Homepage
    But just how much data can a person consume?

    If I was going under the knife remotely [wustl.edu], I'd want the surgeon to have as much bandwidth as possible (and very, very, very low latency).

  • by gusnz ( 455113 ) on Sunday July 25, 2004 @01:16AM (#9792747) Homepage
    OK, so for stuff like streaming MP3s and so forth, this is a little overkill for the current style of usage. However, where I think this will come in useful is for stuff like remote disk and memory access over IP.

    With a 10G LAN, you'd be able to come up with a great distributed computer system (e.g. for compiling software). IIRC protocols are in the works now for native-ish memory access over networks, turning a network into one huge computer, and you can already access remote disks with the right software. Imagine the simultaneous distributed encoding of several HDTV streams to redundant archives on several different computers, and you'll probably find that more bandwidth = better.

    So yeah, there'll definitely be possibilities for this sort of stuff, even if it is only as a base requirement for the post-Longhorn Windows version :).
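    A rough back-of-the-envelope on why remote disk and memory access starts to look attractive at these speeds (the disk seek time and LAN round-trip figures below are assumed ballpark values, not from the article):

        # Compare fetching a 4 KB page over a fast LAN vs. reading it from a
        # local disk. All latency figures are rough, assumed orders of magnitude.
        PAGE = 4096 * 8                      # one 4 KB page, in bits

        for name, link_bps in [("1 GbE", 1e9), ("10 GbE", 10e9)]:
            wire_time = PAGE / link_bps      # serialization time on the wire
            lan_rtt = 100e-6                 # ~100 us round trip (assumed)
            print(f"{name}: remote page fetch ~{(lan_rtt + wire_time)*1e6:.0f} us")

        print(f"local disk seek + read: ~{8e-3*1e3:.0f} ms")   # ~8 ms seek (assumed)
        # Even with a conservative LAN round trip, pulling a page out of another
        # machine's RAM beats a local disk seek by orders of magnitude, which is
        # the whole appeal of remote disk/memory access over a 10G LAN.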
  • by 3) profit!!! ( 773340 ) on Sunday July 25, 2004 @01:16AM (#9792755) Homepage
    Seems like this would be useful for people trying to build clusters with commodity hardware.
  • by erice ( 13380 ) on Sunday July 25, 2004 @01:19AM (#9792765) Homepage
    For distributing intermediate results, I don't imagine there is such a thing as too fast.

    While there are certainly applications that don't need to communicate that fast, more bandwidth means more algorithms can become practical.

    It's not like you can use it to download porn, unless the action is happening in the next room. This is not a WAN technology.
  • by Jarnis ( 266190 ) on Sunday July 25, 2004 @01:19AM (#9792766)
    Diskless workstations aren't.

    They are just (ab)using the disks of the servers. How Uber Are Your Servers(tm)? Show me a server that can sustain a gigabyte a second of disk access to support those workstations... :p
  • NC-PC-NC (Score:5, Insightful)

    by basking2 ( 233941 ) on Sunday July 25, 2004 @01:20AM (#9792773) Homepage

    So, we used to have little dumb terminals that talked to the big smart backend. Then computers became cheaper and we had Personal Computers, but we have to manage and distribute all these updates, and it's a real pain, and it sometimes destroys your computer during the upgrade/install process. Now we can swing the pendulum back towards the Network Computer a little more.

    This isn't a new idea. Software companies like MS would love to sell you a subscription to MS Office which you renew, and they in turn patch and maintain the software on your company's server or on the MS servers. It's a neat idea for sure. Companies like Novell have made some interesting claims about Network Computers.

    There is also the whole Plan9 [bell-labs.com] type of mentality too.

  • by nine-times ( 778537 ) <nine.times@gmail.com> on Sunday July 25, 2004 @01:21AM (#9792781) Homepage
    Thing is, since we're working within our limitations today, it's hard to conceive of what use it'd be.

    Isn't that always the way? I remember having a 20 MHz IBM PS/2 and wondering "How am I going to use all this power?" And the 30 MB hard drive: how would I ever use all that space?

    It seems like when we have the capabilities, we find something to do with the extra. HDTV sounds probable, and more bandwidth can only help working over networks on a mass scale (remote home folders and roaming profiles, VNC/Citrix), but you never know. When processors were getting to the 1 GHz point, a bunch of industry analysts were predicting "Now that we have enough power to make working speech-recognition software, we can finally ditch those keyboards!" Yeah, right.

    The big concern is, with the extra bandwidth, will Microsoft see this as an opportunity to release new, extra-inefficient network protocols?

  • by overshoot ( 39700 ) on Sunday July 25, 2004 @01:21AM (#9792788)
    It's the latency. No matter what your bandwidth may be, some tasks (e.g. file servers) need to be "close" to keep latency from being nasty.

    "Close" applies both in physical distance (I have to count picoseconds for the kind of stuff I do) and in network distance, since every router adds considerably.

    For some jobs (backup is a classic) latency is relatively tolerable. However, even for those you have to watch out because one error can cause the whole process to back up for retries. Ten to the minus fifteen BER sounds good until you look at what it can do to your throughput in a long-latency environment.
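    To put rough numbers on the latency point: the data "in flight" grows with bandwidth times round-trip time, which is why a single error on a long-latency path is so costly (the RTTs below are illustrative assumptions):

        # Bandwidth-delay product: bytes in flight (and at risk on an error)
        # for a 10 Gb/s link at a few illustrative round-trip times.
        link_bps = 10e9
        for rtt_ms in (0.1, 1, 50):          # LAN, campus, long-haul (assumed)
            in_flight = link_bps * (rtt_ms / 1000) / 8
            print(f"RTT {rtt_ms:>5} ms -> {in_flight/1e6:8.3f} MB in flight")
        # 0.1 ms -> 0.125 MB, 1 ms -> 1.25 MB, 50 ms -> 62.5 MB: one error on a
        # long-latency path can force a lot of data to be resent.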

  • by empaler ( 130732 ) on Sunday July 25, 2004 @01:22AM (#9792791) Journal
    when I saw the news item.

    I mean... there's no such thing as too much of anything in computers. When's the last time you said "Accursed be this high transfer rate" or "I wish the computer had less RAM so it would swap more!"?

    Come on.
  • by empaler ( 130732 ) on Sunday July 25, 2004 @01:45AM (#9792893) Journal
    your harddrive is too slow ;p
  • by Chmcginn ( 201645 ) on Sunday July 25, 2004 @02:04AM (#9792968) Journal
    Didja ever notice how long-distance transfer rate is a few years behind short-distance transfer rate... and it is pretty consistent?

    (In other words... true, 10Gb per second isn't available from New York to Hong Kong today... but in 2014, that'll be standard... if not so-three-years-ago.)

  • by Entropius ( 188861 ) on Sunday July 25, 2004 @02:19AM (#9793026)
    Let's see. There are about a million pixels on my screen (1280 x 800). Assume 24 bit color, so that's 24 megabits per frame.

    This at 60 fps will be 1.44 Gbps.

    So 10 Gbps Ethernet is enough to stream the output of a monitor, *uncompressed*, at full framerate, to either a dumb terminal or another computer (a quick check of the arithmetic follows below). Even the most elementary compression (only reporting changed pixels, or PNG/JPEG techniques) could cut this to a fraction of 1.44 Gbps.

    More generally, it could allow more of the things that are currently on the PCI/USB bus to become external, and could become a more flexible replacement for USB. Scanners, cd writers, audio devices, you name it ... lots of things could be externalized and generalized. This would also allow more devices to be shared across networks more easily, since they're *on* the network in any case. With the Internet, nobody cares about the physical location of the machines they access; likewise, with this system peripherals aren't associated as strongly with one specific computer.

    This sort of thing might also have applications for cluster computing, allowing more sorts of things to be done with clusters since you have higher inter-node bandwidth.
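    A quick check of the arithmetic above, using the exact pixel count rather than the rounded million:

        # Uncompressed display stream: does it fit in 10 Gb/s?
        width, height = 1280, 800
        bits_per_pixel = 24
        fps = 60

        per_frame = width * height * bits_per_pixel     # ~24.6 Mbit per frame
        stream = per_frame * fps                        # bits per second
        print(f"{stream/1e9:.2f} Gbps uncompressed")    # ~1.47 Gbps (vs. the rounded 1.44)
        print(f"fits in 10 GbE ~{10e9/stream:.1f} times over")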
  • by Gabrill ( 556503 ) on Sunday July 25, 2004 @02:29AM (#9793062)
    Maybe so, but you don't work with all your data on the hard drive. Working with data on another computer can really speed up with faster Ethernet, especially for databases that stay partially in RAM.
  • Re:Way overkill (Score:5, Insightful)

    by dbarclay10 ( 70443 ) on Sunday July 25, 2004 @02:42AM (#9793111)

    Most of your argument rests on people not being able to read/write data from hard drives fast enough to use the network bandwidth. Some examples:

    The only time I've ever seen near gigabit traffic at a steady pace was at network servers, where traffic can reach a steady 600mbps on a single gig link - which is maxing out the speed at which the server can read/write data to its hard drive. Think of it this way: a 1 gigaBIT link can transfer a 1 gigaBYTE file in about 10 seconds - that's FAST! Conversely, it takes nearly 20-30 seconds just to write that large a file to the hard drive.

    More:

    Even at these tremendous speeds, they are only used at traffic aggregation points, again because any network device, even a turbocharged SAN couldn't handle reading/writing at those speeds for anything longer than a quick burst.

    And lastly, your conclusion:

    I say this: If you think that 10gig/sec is your answer, you're looking at the wrong problem. You can get the performance you need at gigabit rates.

    Given your premise, you argue for your conclusion quite well. I don't, however, think your premise is accurate. Or perhaps better, I don't think it's relevant. First and foremost, there's all sorts of storage mechanisms which can transfer data as fast or faster than 10Gbps. Think solid-state drives and some decent-sized drive arrays (they don't need to be *that* large, we're talking roughly 1 gigabyte per second; that can be done with 5-10 consumer-grade drives, let alone the arrays of hundreds of high-end 15kRPM SCSI drives and the like). So on the basis of storage speed alone, your argument fails.

    Second, what does storage speed have anything to do with it? You mention servers not needing this - a *huge* number of servers never touch their drives to read the data they're serving. Drive access == death in most Internet services, and people invest thousands of dollars in huge RAM pools to cache all the data (they used to invest tens of thousands, but now RAM is cheap :). So for a huge number of servers, drive speed is simply irrelevant; it's all served from RAM and generated by the CPU, so unless you're trying to say that CPUs can't deal with 10Gbps (which you aren't, and quite rightly), the conclusion falls down again.

    Do desktops need this? No, of course not. If that's what you're really trying to say, then all fine and dandy, just say it. Acceptable reasons would be "people don't need to be able to transfer their 640MB files in less than 10 seconds" and "their Internet connections aren't even at 10Mbps yet, they certainly don't need 10Gbps!" However, you'll find that this technology quickly percolates downwards, so at some point in the future people will be able to transfer their 4GB (not 640MB at this point) files in a few seconds, and their "little" 640MB files will transfer near-instantaneously.
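    Rough numbers behind the storage argument (the per-drive sustained rates below are assumptions, in the same spirit as the comment's estimates):

        # How many drives does it take to source/sink a 10 Gb/s stream?
        link_Bps = 10e9 / 8                       # 10 Gb/s expressed in bytes/sec
        for label, drive_MBps in [("consumer ATA (assumed ~60 MB/s)", 60),
                                  ("15k RPM SCSI (assumed ~90 MB/s)", 90)]:
            drives = link_Bps / (drive_MBps * 1e6)
            print(f"{label}: ~{drives:.0f} drives striped")

        # And the quoted transfer-time figure:
        gig_file = 1e9                            # a 1 GB file
        print(f"1 GB over 1 Gb/s: ~{gig_file / (1e9/8):.0f} s (plus protocol overhead)")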

  • Re:HDTV baby! (Score:3, Insightful)

    by WiKKeSH ( 543962 ) <slashspam@downmix.com> on Sunday July 25, 2004 @03:58AM (#9793303) Homepage
    HDTV is broadcast at about 20 Mbit/s, give or take
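    Taking that ~20 Mbit/s figure at face value, the headroom on a 10 Gb/s link works out to:

        # How many broadcast-quality HDTV streams fit in a 10 Gb/s pipe?
        hdtv_bps = 20e6            # ~20 Mbit/s per stream, per the comment above
        link_bps = 10e9
        print(f"~{link_bps / hdtv_bps:.0f} simultaneous streams")   # ~500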
  • by Anonymous Coward on Sunday July 25, 2004 @04:29AM (#9793370)
    I upgraded my network to gigabit ethernet about a year ago (from 100 mbit), and much to my surprise, the speed increase was only about 3 times when copying files from one machine to another.

    Latency and "jumbo packet" support are the problem.

    With cheap hardware, I can push around 16-20MB/s across the network between two machines. At least with a gigabit switch, that means I'm not going to saturate the network and cause problems for anyone else.

    SCSI Ultra320 is 320 MB/s (roughly 2.5 Gbps)... about a quarter of the 10 Gbps network. SATA/300 is supposed to come out in a year or two and is competitive.

    (There's something faster than SCSI's Ultra320... but I forget what it's called. I do know that SATA is planned to go to 600 MB/s at some point, which is 6.0 Gbps on the wire.)
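    One way to see why jumbo frames matter: the packet rate needed to fill the link (and the per-packet CPU/interrupt cost that comes with it) drops sharply with a larger MTU. The overhead figures below are the standard Ethernet/IP/TCP header sizes, not measurements:

        # Packets per second needed to saturate a link at two MTU sizes.
        # Per-frame overhead outside the IP packet: 18 B Ethernet header + FCS,
        # plus 20 B preamble + inter-frame gap; the 40 B of IP + TCP headers
        # live inside the MTU.
        ETH_OVERHEAD = 18 + 20
        for link_bps in (1e9, 10e9):
            for mtu in (1500, 9000):                   # standard vs. jumbo frames
                wire_bits = (mtu + ETH_OVERHEAD) * 8
                pps = link_bps / wire_bits
                efficiency = (mtu - 40) / (mtu + ETH_OVERHEAD)
                print(f"{link_bps/1e9:>4.0f} Gb/s, MTU {mtu}: "
                      f"{pps/1e3:7.1f} kpps, ~{efficiency:.0%} payload")
        # At 10 Gb/s with standard 1500-byte frames that's ~800k packets/s to
        # process; jumbo frames cut it to ~140k, which is why they matter here.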

  • Re:Way overkill (Score:3, Insightful)

    by JRHelgeson ( 576325 ) on Sunday July 25, 2004 @04:46AM (#9793400) Homepage Journal
    Quite right, thanks for the reply.

    I think it's fairly obvious by now that my experience lies primarily in the corporate environment with database servers and the like.

    I do have experience in Internet convergence points, but not as much with ISPs serving up video files, or rather the same video again and again. When I think of data transfers, I think of hauling bits from servers to workstations, or servers to servers, where sustained transfer rates would kill a server - much as you stated; drive access == death.

    On servers that can handle 1 gig throughput, as you stated, the CPU is at or near 100%; add a second CPU, plug in another gigabit uplink and team them. Even still, disk access == death. It's kinda like the Indy 500: at 200+ MPH, touch wall == race over.

    I'm just saying that even at today's CPU speeds, with huge chunks of RAM, trying to handle speeds like this at the server level creates serious scalability problems.
  • Huh? (Score:2, Insightful)

    by Anonymous Coward on Sunday July 25, 2004 @06:07AM (#9793536)
    I don't understand why everyone is so impressed with that post. Insightful? I'd say that's just common knowledge.

    How fast can your processor communicate with your RAM?

    How fast can that RAM communicate with the HDD?

    How fast can your computer connect to the LAN?

    How fast can your LAN communicate with the internet?

    See what I mean? It's pretty common understanding that the farther you wanna go, the slower things get.

    Well, I guess it's not that common since you all modded that +5. I am disappointed in all of you. You are not geeks. Go away.
  • by tijsvd ( 548670 ) on Sunday July 25, 2004 @06:17AM (#9793558) Homepage
    10G Ethernet is currently being sold (a lot), but not to connect computers. There are currently three major drivers for 10G Ethernet:
    • Connect backbone LAN switches, e.g. two Cisco 6500 machines, each one full of Gigabit links to access/distribution switches.
    • High-speed links for ISPs or research networks. 10GE can reach about 70 km without repeaters and is significantly cheaper than OC-192.
    • Links between access and core switches. Since more and more offices are (for some reason) switching to Gigabit to the desktop, the access switches need a lot of uplink bandwidth to deal with the (possible) traffic. A good example is the new Cisco 3750 stackable with 16 GE ports and one 10GE link.
    10GE to the desktop is at this time ridiculous, but don't think that means that 10GE technology is not used.
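    The aggregation case in the last bullet is essentially an oversubscription ratio; a quick sketch using the port counts above (the per-desktop traffic figure is an assumption):

        # Oversubscription on an access switch with 16 x 1 GbE edge ports and a
        # single 10 GbE uplink (the Cisco 3750-style box mentioned above).
        edge_capacity = 16 * 1e9
        uplink = 10e9
        print(f"oversubscription: {edge_capacity / uplink:.1f}:1")   # 1.6:1
        # If each desktop averages, say, 50 Mb/s (assumed), the uplink only sees
        # 16 * 50 Mb/s = 0.8 Gb/s, so 1.6:1 is comfortable in practice; the
        # 10 GbE uplink exists to absorb bursts, not sustained load.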
  • Did you RTFS? (Score:1, Insightful)

    by Anonymous Coward on Sunday July 25, 2004 @06:17AM (#9793559)
    I think you are totally missing the point.

    OK, transferring between two computers won't be able to consume all that bandwidth.

    HOWEVER, when TENS or HUNDREDS of machines are using a network, then you will really see a difference. And I believe that the SUBJECT OF THIS STORY IS AT WORK, WHERE THERE IS PROBABLY MORE THAN TWO MACHINES.

    In WORK networks, not your pansy home setup, the LAN or whatever can often be running at capacity. When many machines are busy transferring large loads, you will easily get a bottleneck at the 'network' level. If you increase this 10-fold, you will likely get much more than just a 3x increase in performance, as a result of widening that bottleneck. Especially with MANY nodes using a shared network, you will find these gains to be substantial.
  • by Some Dumbass... ( 192298 ) on Sunday July 25, 2004 @09:55AM (#9794109)
    I have to count picoseconds for the kind of stuff I do

    Unless you are working with individual gates inside a chip, I doubt picoseconds really matter.


    I think you're missing something. If the cabling adds a constant delay to any times this guy's measuring, then he can still measure times in picoseconds (assuming his timer is accurate enough, of course). The fact that network cabling would add nanoseconds to a recorded time is irrelevant. Just as long as it doesn't add a variable delay (I wouldn't recommend doing this timing through any sort of switch or router, for example).

    Not that this guy is necessarily using ethernet for what he's doing. Note that he didn't actually say that -- he just said that you had to be close for the kind of stuff he does.

    One possibility is that the guy's a physicist working with a particle detector. He could be talking about detecting the exact timing of the decay of various particles. If these decays occur on the order of picoseconds, and his equipment can accurately keep time in picoseconds, then the fact that the cabling adds, say, 5ns to all of the measured times is no big deal. Just subtract 5ns from everything. That's good enough to get the relative times of all the measured events, e.g. the amount of time between the detection of emissions created by the initial collision (and thus presumably particle creation) and the decay of the various particles.
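    For scale on the "cabling adds nanoseconds" point, assuming the usual ~2/3 of c propagation speed in copper or fiber:

        # Fixed delay added by a cable run, assuming signals propagate at ~2/3 c.
        c = 3e8                         # speed of light, m/s
        v = (2/3) * c                   # ~2e8 m/s in typical cable (assumption)
        for meters in (1, 10, 100):
            delay_ns = meters / v * 1e9
            print(f"{meters:>4} m of cable ~= {delay_ns:6.1f} ns")
        # ~5 ns per metre: a constant offset you can subtract out, exactly as the
        # comment says; it only hurts if the delay varies (switches, queueing).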
  • by Namarrgon ( 105036 ) on Sunday July 25, 2004 @12:34PM (#9794860) Homepage
    Try to find a host OS with a TCP/IP stack that can properly utilize 1 gigabit ethernet, let alone 10 gigabits. Hint: It ain't Linux...

    Define "properly". If you mean efficiency, that's desirable but not critical. If an Intel/Linux server is 75% the efficiency of a Sun server, yet costs 30% the price, you can install two or three for the same bucks. That's efficiency of a sort too, yes?

    Try to find a storage solution that can read or write that fast.

    Well, in terms of raw sustained bandwidth, this doesn't seem all that difficult. A single Ultra320 SCSI HBA manages about 2.5 Gb/s, and 4-6 of those should meet requirements. Modern drives can sustain 400-650 Mb/s easily enough; 32, or even just 24, of them would give plenty of headroom. 4 per HBA would be ideal. Even consumer 4-way SATA RAIDs would likely do the trick - being point-to-point they have more headroom than a shared SCSI bus (though less transfer efficiency).

    Try to get all of the above, along with a 133 MHz, 64-bit PCI-X bus

    Thanks, I'd rather use PCI Express. A 4x PCIe slot easily matches PCI-X, but it's point-to-point rather than shared, so I get that much bandwidth for each HBA. Motherboards are in production now with 4x, 8x and 16x slots, chipsets with 32 available PCIe lanes - that's around 80 Gb/s total bandwidth. A dual Opteron system today also has around 80 Gb/s memory bandwidth, and quad- and 8-way systems have much more.

    Sun systems have traditionally been right up there with SGI for high-bandwidth servers, while humble x86 consumer systems haven't held a candle to them. But that ole' world, it just keeps on changing...
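    Plugging in the figures from the comment above (they are the comment's own estimates plus the standard PCIe 8b/10b encoding overhead, not measurements):

        # Can commodity parts of the era feed a 10 Gb/s NIC? Using the comment's numbers.
        target = 10e9                              # 10 Gb/s to fill

        hba_bps = 2.5e9                            # one Ultra320 HBA, ~2.5 Gb/s
        print(f"HBAs needed: ~{target / hba_bps:.0f}")                          # ~4

        drive_low, drive_high = 400e6, 650e6       # per-drive sustained range (bits/s)
        print(f"drives needed: {target/drive_high:.0f}-{target/drive_low:.0f}") # ~15-25

        pcie_lane = 2.5e9                          # raw PCIe 1.x lane rate
        x4_effective = 4 * pcie_lane * 0.8         # ~8 Gb/s after 8b/10b encoding
        print(f"PCIe x4 effective: ~{x4_effective/1e9:.0f} Gb/s per slot")
        # So roughly 4 HBAs, a couple dozen drives, and a few x4 slots get you to
        # 10 Gb/s of raw storage bandwidth, which is the comment's point.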

  • by Anonymous Coward on Sunday July 25, 2004 @04:26PM (#9796040)
    Ya know, so far everyone seems to think of this as a long-distance pipe. It's not, it's Ethernet. RTFA: useful distance is in meters, *NOT* kilometers. This is an intra-office connection, not a WAN pipe.

    Single mode fiber gives you upwards of 20 kilometers.
