Ethernet at 10 Gbps
An anonymous reader writes "This article talks about 10 Gigabit Ethernet and asks, 'But just how much data can a person consume?' At work, we're in the middle of a major project to re-architect our core application platform so that the different systems can be decoupled and hosted separately. The legacy design implicitly relies on systems being on the same LAN because of bandwidth-expensive operations (e.g., database replication). Having this much bandwidth would change the way we design. What would you do with this much bandwidth?"
HDTV baby! (Score:1, Insightful)
HDTV recording...
Porn. Lots of porn.
Obvious, isn't it?
Remote Virtual Immersion (Score:5, Insightful)
If I was going under the knife remotely [wustl.edu], I'd want the surgeon to have as much bandwidth as possible (and very, very, very low latency).
The Network Is The Computer (tm) (Score:5, Insightful)
With a 10G LAN, you'd be able to come up with a great distributed computer system (e.g. for compiling software). IIRC protocols are in the works now for native-ish memory access over networks, turning a network into one huge computer, and you can already access remote disks with the right software. Imagine the simultaneous distributed encoding of several HDTV streams to redundant archives on several different computers, and you'll probably find that more bandwidth = better.
So yeah, there'll definitely be possibilities for this sort of stuff, even if it's only as a base requirement for the post-Longhorn Windows version.
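As a quick sanity check on the "several HDTV streams" point, here's a back-of-envelope sketch in Python. The numbers are my assumptions, not the parent's: 1920x1080 at 30 fps, 8-bit 4:2:2 sampling, and roughly 70% of the link usable once protocol overhead is subtracted.

    # How many uncompressed HD streams fit in a 10 GbE pipe?
    # Assumptions (mine, not the parent poster's): 1920x1080 @ 30 fps,
    # 8-bit 4:2:2 sampling (16 bits/pixel on average), and roughly 70%
    # of the link usable once Ethernet/IP/TCP overhead is subtracted.

    LINK_GBPS = 10.0
    USABLE_FRACTION = 0.7

    width, height, fps = 1920, 1080, 30
    bits_per_pixel = 16

    stream_gbps = width * height * bits_per_pixel * fps / 1e9
    usable_gbps = LINK_GBPS * USABLE_FRACTION

    print(f"one uncompressed stream: {stream_gbps:.2f} Gbps")
    print(f"streams per 10 GbE link: {int(usable_gbps // stream_gbps)}")

Even before any compression, that works out to roughly seven simultaneous streams per link, so the "redundant archives on several computers" scenario is plausible on bandwidth grounds alone.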
Imagine a beowulf cluster... (Score:2, Insightful)
Distributed computing? (Score:3, Insightful)
While there are certainly applications that don't need to communicate that fast, more bandwidth means more algorithms can become practical.
It's not like you can use it to download porn, unless the action is happening in the next room. This is not a WAN technology.
Re:true remote storage transparency (Score:2, Insightful)
They are just (ab)using the disks of the servers. How Uber Are Your Servers(tm)? Show me a server that can sustain 1 gigabyte per second of disk access to support those workstations...
NC-PC-NC (Score:5, Insightful)
So, we used to have little dumb terminals that talked to the big smart backend. Then computers became cheaper and we got Personal Computers, but now we have to manage and distribute all these updates, which is a real pain and sometimes destroys your computer during the upgrade/install process. Now we can swing the pendulum back towards the Network Computer a little more.
This isn't a new idea. Software companies like MS would love to sell you a subscription to MS Office which you renew while they patch and maintain the software on your company's servers or on the MS servers. It's a neat idea for sure. Companies like Novell have made some interesting claims about Network Computers.
There is also the whole Plan9 [bell-labs.com] type of mentality.
Re:What would I do? (Score:5, Insightful)
Isn't that always the way? I remember having a 20 MHz IBM PS/2 and wondering, "How am I going to use all this power?" And the 30 MB hard drive: how would I ever use all that space?
It seems like whenever we have the capability, we find something to do with the extra. HDTV sounds probable, and more bandwidth can only help working over networks on a mass scale (remote home folders and roaming profiles, VNC/Citrix), but you never know. When processors were getting to the 1 GHz point, a bunch of industry analysts were predicting, "Now that we have enough power to make working speech-recognition software, we can finally ditch those keyboards!" Yeah, right.
The big concern is, with the extra bandwidth, will Microsoft see this as an opportunity to release new, extra-inefficient network protocols?
It's not the bandwidth (Score:3, Insightful)
"Close" applies both in physical distance (I have to count picoseconds for the kind of stuff I do) and in network distance, since every router adds considerably.
For some jobs (backup is a classic) latency is relatively tolerable. However, even for those you have to watch out because one error can cause the whole process to back up for retries. Ten to the minus fifteen BER sounds good until you look at what it can do to your throughput in a long-latency environment.
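To put some assumed numbers on that (mine, not the poster's): at 10 Gbps a 10^-15 BER means a bit error only every day or so, but on a long-haul path a naive go-back-N recovery re-sends an entire bandwidth-delay product of data each time.

    # Why one bit error hurts more on a long-latency link.
    # Assumed figures (not from the comment): 10 Gbps line rate,
    # 1e-15 bit error rate, 80 ms round-trip time, and go-back-N
    # style recovery that re-sends everything in flight on an error.

    rate_bps = 10e9
    ber = 1e-15
    rtt_s = 0.080

    errors_per_second = rate_bps * ber
    hours_between_errors = 1 / errors_per_second / 3600
    bdp_megabytes = rate_bps * rtt_s / 8 / 1e6   # data "in flight" on the wire

    print(f"mean time between bit errors: {hours_between_errors:.0f} hours")
    print(f"re-sent per error (go-back-N): {bdp_megabytes:.0f} MB")

Selective retransmission and large receive windows soften this, but the basic point stands: the longer the pipe, the more each error costs.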
That's exactly the quote I remembered (Score:1, Insightful)
I mean... there's no such thing as too much of anything in computers. When's the last time you said "Accursed be this high transfer rate" or "I wish the computer had less RAM so it would swap more"?
Come on.
Ah, but that is because (Score:2, Insightful)
Re:What would I do with this much bandwidth?-Music (Score:2, Insightful)
(In other words... true, 10Gb per second isn't available from New York to Hong Kong today... but in 2014, that'll be standard... if not so-three-years-ago.)
Dumb terminals? Cluster computing? (Score:5, Insightful)
Roughly a megapixel of 24-bit pixels at 60 fps works out to about 1.44 Gbps.
So 10-Gbps ethernet is enough to stream the output of a monitor, *uncompressed*, at full framerate, to either a dumb terminal or another computer. Even the most elementary compression (only reporting changed pixels, or PNG/jpeg techniques) could cut this to a fraction of 1.44Gbps.
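For anyone checking the arithmetic, here is the same back-of-envelope for a few common resolutions (24-bit color and 60 fps assumed):

    # Uncompressed framebuffer bandwidth at 60 fps, 24 bits per pixel.
    # (Assumed parameters; the 1.44 Gbps figure above corresponds to
    # roughly a one-megapixel display.)

    fps = 60
    bits_per_pixel = 24

    for width, height in [(1024, 768), (1280, 1024), (1600, 1200), (1920, 1080)]:
        gbps = width * height * bits_per_pixel * fps / 1e9
        print(f"{width}x{height}: {gbps:.2f} Gbps uncompressed")

Even the worst case here is well under 10 Gbps, which is the point: the raw pixels fit, and any compression at all leaves most of the link free.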
More generally, it could allow more of the things that are currently on the PCI/USB bus to become external, and could become a more flexible replacement for USB. Scanners, CD writers, audio devices, you name it.
This sort of thing might also have applications for cluster computing, allowing more sorts of things to be done with clusters since you have higher inter-node bandwidth.
Re:What would I do with this much bandwidth? (Score:3, Insightful)
Re:Way overkill (Score:5, Insightful)
Most of your argument rests on people not being able to read/write data from hard drives fast enough to use the network bandwidth (your examples and conclusion snipped).
Given your premise, you argue for your conclusion quite well. I don't, however, think your premise is accurate. Or perhaps better, I don't think it's relevant. First and foremost, there are all sorts of storage mechanisms which can transfer data as fast as or faster than 10Gbps. Think solid-state drives and decent-sized drive arrays (they don't need to be *that* large; we're talking roughly 1.25 gigabytes per second, which a couple dozen striped consumer-grade drives can manage, let alone the arrays of hundreds of high-end 15kRPM SCSI drives and the like). So on the basis of storage speed alone, your argument fails.
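A rough sketch of the drive-count math; the per-drive sustained rates are my assumptions, not the parent's:

    # How many striped drives does it take to keep a 10 GbE link busy?
    # Per-drive sustained rates below are assumptions, not measurements.

    import math

    link_bytes_per_sec = 10e9 / 8          # 10 Gbps is 1.25 GB/s

    for name, mb_per_sec in [("consumer ATA, ~55 MB/s", 55),
                             ("15k RPM SCSI, ~75 MB/s", 75)]:
        drives = math.ceil(link_bytes_per_sec / (mb_per_sec * 1e6))
        print(f"{name}: about {drives} drives striped")

So a couple dozen commodity spindles, or somewhat fewer high-end ones, is the right neighborhood, and solid-state storage changes the picture entirely.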
Second, what does storage speed have to do with it? You mention servers not needing this - a *huge* number of servers never touch their drives to read the data they're serving. Drive access == death in most Internet services, and people invest thousands of dollars in huge RAM pools to cache all the data (they used to invest tens of thousands, but now RAM is cheap :). So for a huge number of servers, drive speed is simply irrelevant; it's all served from RAM and generated by the CPU, so unless you're trying to say that CPUs can't deal with 10Gbps (which you aren't, and quite rightly), the conclusion falls down again.
Do desktops need this? No, of course not. If that's what you're really trying to say, then all fine and dandy, just say it. Acceptable reasons would be "people don't need to be able to transfer their 640MB files in less than 10 seconds" and "their Internet connections aren't even at 10Mbps yet, they certainly don't need 10Gbps!" However, you'll find that this technology quickly percolates downwards, so at some point in the future people will be able to transfer their 4GB (not 640MB at this point) files in a few seconds, and their "little" 640MB files will transfer near-instantaneously.
Re:HDTV baby! (Score:3, Insightful)
Re:What would I do with this much bandwidth? (Score:1, Insightful)
Latency and the "jumbo packet" are the problem.
With cheap hardware, I can push around 16-20MB/s across the network between two machines. At least with a gigabit switch, that means I'm not going to saturate the network and cause problems for anyone else.
SCSI Ultra320 is 320 MB/s (about 2.56 Gbps), which is roughly a quarter of the 10 Gbps network. SATA/300 is supposed to come out in a year or two and is competitive.
(There's something faster than SCSI's Ultra320... but I forget what it's called. I do know that SATA is planned to go to 600 MB/s at some point, which works out to about 4.8 Gbps of payload - the 6.0 Gbps figure you see quoted is the encoded line rate.)
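For reference, here are those bus payload rates against a 10 Gbps link. These are nominal peak figures I've assumed; sustained throughput on a shared SCSI bus will be lower, and the "3.0/6.0 Gbps" SATA marketing numbers are encoded line rates.

    # Peak bus payload rates as a fraction of a 10 Gbps Ethernet link.
    # Nominal figures only; real sustained throughput will be lower.

    buses_mb_per_sec = {
        "Ultra320 SCSI": 320,
        "SATA/150":      150,
        "SATA/300":      300,
        "SATA/600":      600,   # still on the roadmap at the time of this thread
    }

    for name, mbps in buses_mb_per_sec.items():
        gbps = mbps * 8 / 1000
        print(f"{name}: {gbps:.2f} Gbps ({gbps / 10:.0%} of 10 GbE)")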
Re:Way overkill (Score:3, Insightful)
I think it's fairly obvious by now that my experience lies primarily in the corporate environment with database servers and the like.
I do have experience in internet convergence points, but not as much with ISPs serving up video files, or rather the same video again and again. When I think of data transfers, I think of hauling bits from servers to workstations, or servers to servers, where sustained transfer rates would kill a server - much as you stated; drive access == death.
On servers that can handle 1 Gbps of throughput, as you stated, the CPU is at or near 100%; add a second CPU, plug in another gigabit uplink, and team them. Even still, disk access == death. It's kind of like the Indy 500: at 200+ MPH, touch wall == race over.
I'm just saying that even at today's CPU speeds, with huge chunks of ram, trying to handle speeds like this at the server level creates serious scalability problems.
Huh? (Score:2, Insightful)
How fast can your processor communicate with your RAM?
How fast can that ram communicate with the HDD?
How fast can your computer connect to the LAN?
How fast can your LAN communicate with the internet?
See what I mean? It's pretty common understanding that the farther you wanna go, the slower things get.
Well, I guess it's not that common, since you all modded that +5. I am disappointed in all of you. You are not geeks. Go away.
10G not for desktop but for core network (Score:2, Insightful)
Did you RTFS? (Score:1, Insightful)
OK, transferring between two computers won't be able to consume all that bandwidth.
HOWEVER, when TENS or HUNDREDS of machines are using a network, then you will really see a difference. And I believe that the SUBJECT OF THIS STORY IS AT WORK, WHERE THERE ARE PROBABLY MORE THAN TWO MACHINES.
In WORK networks, not your pansy home setup, the LAN can often be running at capacity. When many machines are busy transferring large loads, you will easily get a bottleneck at the 'network' level. Increase this 10-fold and you will likely see a disproportionately large improvement, because you're widening exactly that bottleneck. Especially with MANY nodes sharing the network, the gains will be substantial.
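A toy illustration of that aggregate effect, with made-up per-machine numbers:

    # Made-up numbers: N machines each bursting a large copy at 300 Mbps
    # through the same core link. The question is when the core saturates.

    per_machine_mbps = 300

    for machines in (3, 10, 30, 100):
        aggregate_gbps = machines * per_machine_mbps / 1000
        for core_gbps in (1, 10):
            utilisation = aggregate_gbps / core_gbps
            status = "saturated" if utilisation >= 1 else f"{utilisation:.0%} used"
            print(f"{machines:3d} machines, {core_gbps:2d} GbE core: {status}")

On a gigabit core a handful of busy machines is already the wall; on a 10 GbE core the same office barely registers.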
Re:It's not the bandwidth (Score:4, Insightful)
Unless you are working with individual gates inside a chip, I doubt picoseconds really matter.
I think you're missing something. If the cabling adds a constant delay to any times this guy's measuring, then he can still measure times in picoseconds (assuming his timer is accurate enough, of course). The fact that network cabling would add nanoseconds to a recorded time is irrelevant. Just as long as it doesn't add a variable delay (I wouldn't recommend doing this timing through any sort of switch or router, for example).
Not that this guy is necessarily using ethernet for what he's doing. Note that he didn't actually say that -- he just said that you had to be close for the kind of stuff he does.
One possibility is that the guy's a physicist working with a particle detector. He could be talking about detecting the exact timing of the decay of various particles. If these decays occur on the order of picoseconds, and his equipment can accurately keep time in picoseconds, then the fact that the cabling adds, say, 5ns to all of the measured times is no big deal. Just subtract 5ns from everything. That's good enough to get the relative times of all the measured events, e.g. the amount of time between the detection of emissions created by the initial collision (and thus presumably particle creation) and the decay of the various particles.
Not just Sun anymore (Score:3, Insightful)
Define "properly". If you mean efficiency, that's desirable but not critical. If an Intel/Linux server is 75% the efficiency of a Sun server, yet costs 30% the price, you can install two or three for the same bucks. That's efficiency of a sort too, yes?
Try to find a storage solution that can read or write that fast.
Well, in terms of raw sustained bandwidth, this doesn't seem all that difficult. A single Ultra320 SCSI HBA manages about 2.5 Gb/s, and 4-6 of those should meet requirements. Modern drives can sustain 400-650 Mb/s easily enough; two dozen or more of them would give plenty of headroom, spread 4 or so per HBA. Even consumer 4-way SATA RAIDs would likely do the trick - being point-to-point they have more headroom than a shared SCSI bus (though less transfer efficiency).
Try to get all of the above, along with a 133 MHz, 64-bit PCI-X bus
Thanks, I'd rather use PCI Express. A 4x PCIe slot easily matches PCI-X, but it's point-to-point rather than shared, so I get that much bandwidth for each HBA. Motherboards are in production now with 4x, 8x and 16x slots, and chipsets with 32 available PCIe lanes - that's around 80 Gb/s of raw line rate. A dual Opteron system today also has around 80 Gb/s of memory bandwidth, and quad- and 8-way systems have much more.
Sun systems have traditionally been right up there with SGI for high-bandwidth servers, while humble x86 consumer systems haven't held a candle to them. But that ole' world, it just keeps on changing...
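A sanity check on those bus figures, under my own assumptions about the encoding: first-generation PCI Express runs 2.5 GT/s per lane with 8b/10b coding, i.e. 2 Gbps of payload per lane per direction, so the ~80 Gb/s quoted above is the raw line rate of 32 lanes.

    # Sanity check on the PCIe vs. PCI-X comparison above.
    # Assumed: first-generation PCIe, 2.5 GT/s per lane, 8b/10b encoding,
    # so 2 Gbps of payload per lane in each direction.

    payload_gbps_per_lane = 2.5 * 8 / 10

    for lanes in (1, 4, 8, 16, 32):
        print(f"x{lanes:<2}: {lanes * payload_gbps_per_lane:.0f} Gbps per direction")

    # PCI-X for comparison: a 64-bit bus at 133 MHz, shared by every device on it.
    pci_x_gbps = 64 * 133e6 / 1e9
    print(f"PCI-X 133/64: {pci_x_gbps:.1f} Gbps, shared")

A 4x slot's 8 Gbps per direction is indeed on par with the whole shared PCI-X bus, which is the comparison being made.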
Re:What would I do with this much bandwidth?-Music (Score:1, Insightful)
Single mode fiber gives you upwards of 20 kilometers.