Networking

Researchers Transmit Optical Data at 16.4 Tbps Over 2,550 km

Stony Stevenson writes "The goal of 100 Gbps Ethernet transmission is closer to reality with the announcement Wednesday that Alcatel-Lucent researchers have set an optical transmission record and announced three photonic integrated circuits. Carried out by researchers at Bell Labs in Villarceaux, France, the successful transmission of 16.4 Tbps of optical data over 2,550 km was assisted by the Alcatel-Thales III-V Lab and Kylia, an optical solutions company. The researchers utilized 164 wavelength-division multiplexed channels modulated at 100 Gbps in the effort."
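
The headline figure is just the channel count times the per-channel rate; a one-line Python sanity check using only the numbers in the summary:

    channels = 164          # wavelength-division multiplexed channels
    per_channel_gbps = 100  # modulation rate per channel
    print(f"{channels * per_channel_gbps / 1000:.1f} Tbps aggregate")  # -> 16.4 Tbps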

Comments Filter:
  • CPU speeds? (Score:1, Interesting)

    by Anonymous Coward on Thursday February 28, 2008 @09:50AM (#22587312)
    Surely there must be some incredible processing power behind transmission speeds like this? Anyone have any idea?
  • error checks? (Score:1, Interesting)

    by sjs132 ( 631745 ) on Thursday February 28, 2008 @09:59AM (#22587410) Homepage Journal
    Slim article... How long would it take to error check that much data?

    On another note... What did they do with all that Pr0n once it got to the other end?
  • by Doc Ruby ( 173196 ) on Thursday February 28, 2008 @11:15AM (#22588282) Homepage Journal
    I'm running Linux on a PlayStation 3 with SPU video drivers in its Cell CPU, which can run at over 150 GFLOPS. Since the PS3 has only 512 MB of RAM, it needs to be fed by the LAN and just buffer the LAN traffic in its RAM. Even if SATA drives deliver only 1.2 Gbps each, there's no reason I can't have multiple parallel drives on independent servers (if a single server's I/O isn't fast enough for multiple SATA drives at full bore) on my SAN, delivering multiple streams through my switch all to my PlayStation. Now, the PS3's 1 Gb Ethernet is as hardwired to it as its 512 MB of RAM, but the point is that there are already machines that can use that bandwidth. The total bandwidth doesn't have to reach 100 Gbps; it only has to exceed 10 Gbps to require something faster than 10 GbE, and just 8-10 SATA drives in parallel could do that today.

    So the bottleneck is 10 GbE. There are already supercomputer clusters using multiple Cells in parallel, so I'm disappointed that they're not already widening the pipe.
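
    A quick back-of-the-envelope check of the parent's drive count in Python, taking the quoted ~1.2 Gbps of sustained throughput per SATA drive as the working assumption:

        import math

        PER_DRIVE_GBPS = 1.2  # sustained throughput per drive, as quoted above
        for link_gbps in (1, 10, 100):
            drives = math.ceil(link_gbps / PER_DRIVE_GBPS)
            print(f"{link_gbps:3d} Gb/s link: about {drives} drives streaming in parallel to saturate it")

    That puts the crossover at roughly 9 drives for 10 GbE, which matches the 8-10 figure above.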
  • by leomaro ( 1221010 ) on Thursday February 28, 2008 @11:17AM (#22588310)
    There is another problem, and it is actually the bottleneck when transmitting packets at high rates.

    It doesn't really matter (yet, at least with Ethernet technology) if the bandwidth of the fiber is a zillion petabits/sec.
    The problem already exists at 1 Gbps and 10 Gbps with Ethernet, and it is because the processor gets overloaded by the sheer number of hardware interrupts. General-purpose processors waste too many clock cycles servicing that many interrupts: today's processors are superscalar [ http://en.wikipedia.org/wiki/Superscalar [wikipedia.org] ], and every time the processor has to switch context (to service an interrupt) it has to do a lot of work, such as saving the current registers and context, loading the registers of the new process, and dropping work out of the pipeline [ http://en.wikipedia.org/wiki/Pipeline_(computing) [wikipedia.org] ], losing performance.

    Ethernet also has high latency [ http://en.wikipedia.org/wiki/Latency_(engineering) [wikipedia.org] ] and a stack that does not make processing easy (if you look at the code of a Linux network device driver, it handles pretty much everything, including writing the MAC address, which is only copied when the driver initializes).

    That is why there are some relatively new mechanisms (NAPI in Linux) that try to lessen the overhead, and newer network devices that handle layer 2 and 3 processing (or at least parts of it, checksum offload for example) so the processor does not have to. There are some white papers (one from Intel, another from NetXen; sorry, I don't have the links right now) that explain the problem and some approaches to a possible solution.

    Yes, I know, there is something I have not mentioned: the big switches and routers already have to deal with this, and they have hardware specially designed for heavy network packet processing. That is the point: network cards will have to do the same (and are already starting to). It is not an easy job for hardware designers, nor for the market; it is easier and cheaper to sell a machine whose behaviour you can change just by updating the firmware or changing settings from a program (routers have an operating system, and many of them are a general-purpose microprocessor running a Linux kernel with a web server to configure it, home routers for example).

    There is still much to be said in this field.
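
    To put rough numbers on the interrupt problem, here is a small Python sketch. The on-wire sizes (1538 bytes for a full 1500-byte-payload frame, 84 bytes for a minimum 64-byte frame, both including preamble and inter-frame gap) are standard Ethernet figures; the batch size of 64 frames per poll is only an illustrative stand-in for NAPI-style coalescing:

        # Packets per second at line rate, and the effect of handling several
        # frames per poll instead of taking one interrupt per packet.
        LINKS_GBPS = (1, 10, 100)
        ON_WIRE_BYTES = {"1500-byte frames": 1538, "64-byte frames": 84}
        FRAMES_PER_POLL = 64  # illustrative NAPI-style batching factor

        for gbps in LINKS_GBPS:
            for label, size in ON_WIRE_BYTES.items():
                pps = gbps * 1e9 / (size * 8)
                print(f"{gbps:3d} Gb/s, {label}: {pps:13,.0f} pkt/s, "
                      f"~{pps / FRAMES_PER_POLL:11,.0f} polls/s with batching")

    At 10 Gb/s with small frames that is nearly 15 million interrupts per second if every packet raises one, which is exactly the load NAPI-style polling and hardware offload are meant to avoid.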
  • Re:maybe its just me (Score:2, Interesting)

    by Teiresias_UK ( 413251 ) on Thursday February 28, 2008 @01:39PM (#22590102)
    To be honest I don't find this *that* amazing.

    I worked at the victim-of-the-telecoms-bubble that was Marconi in 2000-02, and there was a bit of kit, the snappily titled UPLx, that could deal with 160 10 Gbps channels down a pair of fibres, unregenerated over about 1,000 km, using soliton wave shaping and some sodding great Raman pump laser to get there (nothing to do with noodles before you ask). It was demoed reliably in the labs, and I believe it was sold to Telstra in Australia.

    In 5 years, they've added 4 channels ... wow.
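
    For reference, the aggregate capacities of the two systems mentioned in this thread, using only the channel counts and per-channel rates quoted here (a rough Python comparison that ignores reach and spectral efficiency):

        systems = {
            "Marconi UPLx (~2002)": (160, 10),   # channels, Gbps per channel
            "Bell Labs (2008)": (164, 100),
        }
        for name, (channels, gbps) in systems.items():
            print(f"{name}: {channels * gbps / 1000:.1f} Tbps aggregate")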
  • by fbjon ( 692006 ) on Thursday February 28, 2008 @04:59PM (#22592816) Homepage Journal
    Your calculations are a bit off. The LCF, like the Beluga and similar aircraft, is meant to transport aircraft parts that are large rather than heavy. Additionally, the bulky airframe means it can actually lift less weight than a regular cargo carrier, and maximum takeoff weight, not volume, is what limits bandwidth here. Besides, the LCF is not for sale to customers.


    Redoing for the 747-400ERF:

    • Assume each disc weighs 16g, like a CD.
    • This gives us a box with a volume of 1.38 m^3 that contains 80,000 discs weighing 1,280 kg; let's say 1,300 kg including the box.
    • With a maximum payload of 112,760 kg, that means 86.7 boxes, giving a fuel range of 9,782 km. Note that if you want to go the full 14,212 km, you'll have to throw off some weight to load more fuel.
    • This means about 6,936,000 discs, giving the following jumbo packet sizes:
    CD = 4.5 petabytes
    DVD(DL) = 51 PB
    HD-DVD = 184.8 PB
    BD = 308.3 PB
    • Not including landing and takeoff times, the aircraft will travel 9782 km in about 10 hours, but let's add in 30 minutes for loading and unloading, giving the following practical bandwidths:
    CD = 124.8 GBps
    DVD(DL) = 1.38 TBps
    HD-DVD = 5 TBps
    BD = 8.35 TBps


    Now, according to Wikipedia the Airbus A380F has a maximum range about 3,800 km shorter, but a maximum payload about 37,240 kg greater, and would thus be better for bandwidth over normal distances as opposed to extreme long-haul transmission. The calculations for this are left as an exercise to the reader.
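
    Here is a small Python sketch that redoes the 747-400ERF arithmetic above; the A380F case can be checked by raising the payload to about 150,000 kg (112,760 + 37,240, per the note above). The per-disc capacities are my own assumptions (650 MB CD, 8.5 GB DVD-DL, 30 GB HD-DVD, 50 GB BD), so the output differs slightly from the figures quoted:

        DISCS_PER_BOX = 80_000
        BOX_WEIGHT_KG = 1_300          # 80,000 discs at 16 g each, plus packaging
        MAX_PAYLOAD_KG = 112_760       # 747-400ERF; ~150,000 for the A380F
        FLIGHT_HOURS = 10.0            # the ~9,782 km leg
        LOAD_UNLOAD_HOURS = 0.5

        CAPACITY_BYTES = {             # assumed per-disc capacities
            "CD": 650e6,
            "DVD (DL)": 8.5e9,
            "HD-DVD": 30e9,
            "BD": 50e9,
        }

        boxes = MAX_PAYLOAD_KG / BOX_WEIGHT_KG
        discs = boxes * DISCS_PER_BOX
        transfer_seconds = (FLIGHT_HOURS + LOAD_UNLOAD_HOURS) * 3600

        print(f"{boxes:.1f} boxes, about {discs:,.0f} discs per flight")
        for name, capacity in CAPACITY_BYTES.items():
            payload = discs * capacity
            print(f"{name:8s}: {payload / 1e15:6.1f} PB per flight, "
                  f"{payload / transfer_seconds / 1e9:6.1f} GB/s effective")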
