Researchers Transmit Optical Data at 16.4 Tbps Over 2,550 km
Stony Stevenson writes "The goal of 100 Gbps Ethernet transmission is closer to reality with the announcement Wednesday that Alcatel-Lucent researchers have set an optical transmission record, along with three photonic integrated circuits. Carried out by researchers at Bell Labs in Villarceaux, France, the successful transmission of 16.4 Tbps of optical data over 2,550 km was assisted by the Alcatel-Thales III-V Lab and Kylia, an optical solutions company. The researchers utilized 164 wavelength-division multiplexed channels modulated at 100 Gbps in the effort."
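For a sense of where the headline figure comes from, here is a minimal sketch using only the numbers in the summary: the aggregate rate is just channel count times per-channel rate, and optical records are often compared by the capacity-distance product as well.

```python
# Sketch of the headline math, using only the figures from the summary.
channels = 164            # WDM channels
per_channel_gbps = 100    # modulation rate per channel

aggregate_tbps = channels * per_channel_gbps / 1000
print(f"Aggregate rate: {aggregate_tbps} Tbps")

# Optical transmission records are often compared by capacity x distance.
distance_km = 2550
print(f"Capacity-distance product: {aggregate_tbps * distance_km / 1000:.1f} Pbps*km")
```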
CPU speeds? (Score:1, Interesting)
error checks? (Score:1, Interesting)
On another note... what did they do with all that Pr0n once it got to the other end?
Re:Make it Short and Fast and Snappy (Score:3, Interesting)
So the bottleneck is 10G-e. There are already supercomputer clusters using multiple parallel Cells, so I'm disappointed that they're not already widening the pipe.
Re:Make it Short and Fast and Snappy (Score:5, Interesting)
It doesn't really matter (yet, at least as far as Ethernet technology goes) if the bandwidth of the fiber is a zillion petabits/sec.
The problem right now is at 1 Gbps and 10 Gbps in Ethernet technology, and it's because the processor gets overloaded with hardware interrupts. General-purpose processors waste too many clock cycles servicing that many interrupts. Today's processors are superscalar [ http://en.wikipedia.org/wiki/Superscalar [wikipedia.org] ], and every time the processor has to switch context (to service an interrupt) it has to do a lot of work: saving the registers and the old context, loading the registers of the new process, and flushing work out of the pipeline [ http://en.wikipedia.org/wiki/Pipeline_(computing) [wikipedia.org] ], losing performance.
Ethernet tech also has a lot of latency [ http://en.wikipedia.org/wiki/Latency_(engineering) [wikipedia.org] ] and a stack that isn't easy to process (if you look at the code of a Linux network device driver, it handles pretty much everything, including writing the MAC address, which is only copied when the driver initializes).
That is why there are some relatively new things (NAPI in Linux) that try to lessen the overhead, and there are new network devices that handle layer 2 and 3 (or at least parts of them; checksum offload, for example) to avoid doing that work on the processor. There are some white papers (one from Intel, another from NetXen; sorry, I don't have the links right now) that explain the problem and some approaches to a solution.
Yes, I know there's something I haven't said: the big switches and routers already have to deal with this, and they have hardware specially designed for heavy network packet processing. That's the point: network cards will have to do the same (and are already starting to). It isn't an easy job for hardware designers or for the market; it's easier and cheaper to have a machine whose behaviour you can change just by updating the firmware or changing settings from a program (routers have an operating system, and a lot of them are a general-purpose microprocessor running a Linux kernel with a web server for configuration, like home routers).
There is much to say yet in this field.
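The interrupt-load argument above can be put in rough numbers. A back-of-the-envelope sketch, assuming full-size 1500-byte Ethernet frames and a made-up but plausible 64-frame batch per interrupt (roughly what NAPI-style polling or interrupt coalescing buys you):

```python
# Rough frame and interrupt rates for Ethernet links, assuming 1500-byte
# frames (ignoring preamble and inter-frame gap, so slightly optimistic).
FRAME_BITS = 1500 * 8

def frames_per_second(link_bps: float) -> float:
    return link_bps / FRAME_BITS

for gbps in (1, 10, 100):
    fps = frames_per_second(gbps * 1e9)
    # One interrupt per frame drowns the CPU; batching ~64 frames per
    # interrupt (an assumed coalescing value) cuts the rate dramatically.
    print(f"{gbps:>3} Gbps: {fps:>12,.0f} frames/s, "
          f"{fps / 64:>10,.0f} interrupts/s with 64-frame batches")
```

At 10 Gbps that is over 800,000 full-size frames per second, which is exactly why per-packet interrupts stop scaling and polling/offload show up.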
Re:maybe its just me (Score:2, Interesting)
I worked at the victim-of-the-telecoms-bubble that was Marconi in 2000-02, and there was a bit of kit, the snappily titled UPLx, that could deal with 160 10-Gbps channels down a pair of fibres, unregenerated over about 1000 km, using soliton wave shaping and some sodding great Raman pump lasers to get there (nothing to do with noodles, before you ask). It was demoed reliably in the labs, and I believe sold to Telstra in Australia.
In 5 years, they've added 4 Gbps
Re:Translation please? (Score:3, Interesting)
Redoing for the 747-400ERF:
Total payload capacity:
DVD(DL) = 51 PB
HD-DVD = 184.8 PB
BD = 308.3 PB
Effective bandwidth over one flight:
DVD(DL) = 1.38 TBps
HD-DVD = 5 TBps
BD = 8.35 TBps
Now, according to Wikipedia the Airbus A380F has a maximum range of about 3,800 km less, but a maximum payload of about 37,240 kg more, and would thus be better for bandwidth over normal distances as opposed to extreme long-haul transmission. The calculations for this are left as an exercise to the reader.
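The parent's 747 figures can be reproduced roughly. A sketch assuming a ~112,760 kg max payload for the 747-400ERF, ~18.5 g per disc, nominal per-disc capacities, and a ~10.3-hour max-range flight (all round figures, so the results only approximate the numbers above):

```python
# Sneakernet math: how much data fits in a 747-400ERF full of optical
# discs, and what bandwidth that works out to over one long-haul flight.
PAYLOAD_KG = 112_760          # approx. 747-400ERF max payload (assumption)
DISC_KG = 0.0185              # approx. mass of one disc (assumption)
FLIGHT_S = 10.3 * 3600        # approx. one max-range flight (assumption)

CAPACITY_GB = {"DVD(DL)": 8.5, "HD-DVD": 30, "BD": 50}

discs = PAYLOAD_KG / DISC_KG
for fmt, gb in CAPACITY_GB.items():
    total_pb = discs * gb / 1e6                   # decimal petabytes
    tbps_effective = total_pb * 1e3 / FLIGHT_S    # PB -> TB, per second
    print(f"{fmt:8s} {total_pb:7.1f} PB carried, {tbps_effective:5.2f} TBps")
```

With these assumptions the DVD case lands around 52 PB and 1.4 TBps, close to the parent's 51 PB / 1.38 TBps; never underestimate the bandwidth of a freighter full of discs.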