Blistering Data Transmission Record Clocks Over 1 Petabit Per Second (newatlas.com)

An anonymous reader quotes a report from New Atlas: Researchers in Japan have clocked a new speed record for data transmission -- a blistering 1.02 petabits per second (Pb/s). Better yet, the breakthrough was achieved using optical fiber cables that should be compatible with existing infrastructure. For reference, 1 petabit is equivalent to a million gigabits, meaning this new record is about 100,000 times faster than the absolute fastest home internet speeds available to consumers. Even NASA will "only" get 400 Gb/s when ESnet6 rolls out in 2023. At speeds of 1 Pb/s, you could theoretically broadcast 10 million channels per second of video at 8K resolution, according to the team.
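Those comparisons check out with simple arithmetic. Here is a minimal sketch in Python; the ~10 Gb/s figure for the fastest consumer plans is an assumption inferred from the "about 100,000 times faster" claim:

    # Sanity-checking the summary's comparisons (all figures approximate).
    record_gbps = 1.02e6        # 1.02 Pb/s = 1,020,000 Gb/s
    home_gbps = 10              # assumed top-tier consumer plan, ~10 Gb/s
    esnet6_gbps = 400           # NASA's planned ESnet6 rate

    print(record_gbps / home_gbps)    # 102,000 -> "about 100,000 times faster"
    print(record_gbps / esnet6_gbps)  # 2,550x NASA's ESnet6 rate

    # 10 million simultaneous 8K streams would imply roughly 100 Mb/s per
    # stream, a plausible bitrate for compressed 8K video:
    print(record_gbps * 1000 / 10_000_000)  # ~102 Mb/s per stream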

The new record was set by researchers at Japan's National Institute of Information and Communications Technology (NICT), using several emerging technologies. First, the optical fiber contains four cores -- the light-guiding channels of glass that carry the signals -- instead of the usual one. The transmission bandwidth is extended to a record-breaking 20 THz, thanks to a technology known as wavelength division multiplexing (WDM). That bandwidth is made up of a total of 801 wavelength channels spread across three bands -- the commonly used C- and L-bands, as well as the experimental S-band. With the help of some other new optical amplification and signal modulation technologies, the team achieved the record-breaking speed of 1.02 Pb/s, sending data through 51.7 km (32.1 miles) of optical fiber cable.
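Dividing the headline figure by the published channel counts gives a sense of the per-lane rates involved. A minimal sketch, assuming the capacity splits evenly across cores and wavelengths (real per-channel rates differ between bands) and that the full 20 THz is available in every core:

    # Rough per-lane throughput implied by the NICT figures.
    total_gbps = 1.02e6       # 1.02 Pb/s expressed in Gb/s
    cores = 4                 # cores per fiber
    wdm_channels = 801        # wavelength channels across the S-, C-, and L-bands

    per_lane = total_gbps / (cores * wdm_channels)
    print(f"{per_lane:.0f} Gb/s per wavelength per core")  # ~318 Gb/s

    # Implied spectral efficiency over the 20 THz of optical bandwidth:
    bandwidth_ghz = 20_000
    print(f"{total_gbps / cores / bandwidth_ghz:.2f} b/s/Hz per core")  # ~12.75 b/s/Hz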

Comments:
  • "10 million channels per second" What kind of unit of measure is that?
  • The summary fails to explain how any of this can use existing infrastructure.
    • The fact that the optical fibers used have 4 cores immediately renders all existing infrastructure such as undersea cables unsuitable. Still, a path forward for applications where such bandwidth would make economic sense.
      • https://www.datacenterknowledg... [datacenterknowledge.com]

        • That appears to be a cable of 20 fiber pairs, with each fiber in each pair still having a single core.

          • 20 fiber pairs means 5 sets of 4 fibers each. If one can use 4 fibers for 1 Pb/s then a cable of 20 fibers could handle 5 Pb/s. The math is easy. The solution uses existing fibers and frequencies.

            • by subk ( 551165 )
              You're missing the point altogether. The new NICT record utilizes multi-CORE fiber, with multiple cores in each 125-micron cladding.
              • There are two things happening in the article: 4 fibers wrapped in the same size cladding as a single fiber (though the fibers are of a common type), and SDM, or space-division multiplexing. The experiment uses both, but there is nothing to indicate that SDM would not work over traditional fibers that are not clad together; cladding them together just makes a cable of the same size as existing cables with more fiber. The article does not say anything about the effects of any coupling between the 4 clad cores.

                • They're not two things, they are one and the same. Space-division multiplexing is the use of multi-core fiber. It's a nothing burger; multi-core fiber really doesn't buy you anything but the more efficient use of physical space, as you can carry multiple cores in the space of a single strand. Yet the space a single strand occupies is not a constraint we face in metro or long-haul fiber networks. The only benefit would be if fiber manufacturers were able to manufacture multi-core fiber for the same price as single-core fiber.
                  • What prevents SDM from working on 4 independent fibers vs 4 fibers clad together? Why is 4 fibers clad together part of the solution to higher bandwidth? Like you said, the 4-core cladding is about the physical size and says nothing about coupling/interference issues. It is the SDM signaling that allows for higher bandwidth, and it happens to require 4 fibers; the cladding bit is a red herring.

  • I have a hard time imagining the termination equipment that you would have to have to actually apply this technology. At some point you have to break out the channels into small enough parts that you can have practical electronics to source/sink digital signals from/to.

    • I imagine you'd usually, outside a massive backbone, terminate these into passive optical mux/demux equipment before it ever got near electronic routers or switches.

      https://www.precisionot.com/mu... [precisionot.com]
      https://www.sciencedirect.com/... [sciencedirect.com]
      https://www.lumentum.com/en/op... [lumentum.com]

      • But lots and lots of them. A petabit would feed as many as 10,000 breakout mainframes, each handling 100 gigabits. That's a lot of pretty high powered hardware. (The fan-out arithmetic is sketched after this thread.)

        All coming from the same mux/demux unit? Can you have more than one on the same fiber media, or can you cascade them? Either way you have an incredible many-to-one concentration.

        • This is nothing new... it's how DWDM networks are built today. I've got muxponders which run at 250G, 500G, and 600G per wavelength today. Companies like Ciena, Cisco, Infinera, and Nokia make transponders and muxponders where the client circuits are 10G, 40G, 100G, and now 400G Ethernet and multiplex them onto a single trunk wavelength which can vary in speed from 100G to 800G with today's technology. The modulation and baud rate used will vary based upon the distances you are trying to reach and the spectral efficiency you need.
          • .. and it's not uncommon for service providers or content providers to have multi-terabit port channels between inter-city pairs on their backbone.

            Jeez. It feels like not that long ago that Fast Ethernet was considered a monumental achievement and where all the VC dollars were going. I feel so old now.

            Somebody please up-vote the quoted post.
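To put numbers on the many-to-one concentration this thread describes, here is a minimal sketch of the fan-out arithmetic; the 100G client and 400G per-wavelength rates are illustrative assumptions taken from the comments above, not from the article:

    # Illustrative fan-out math for a 1 Pb/s trunk.
    trunk_gbps = 1.02e6        # the NICT record, in Gb/s
    client_gbps = 100          # 100G Ethernet client circuits
    wavelength_gbps = 400      # assumed per-wavelength muxponder trunk rate

    print(int(trunk_gbps // client_gbps))       # 10,200 x 100G client ports
    print(int(trunk_gbps // wavelength_gbps))   # 2,550 x 400G trunk wavelengths
    print(wavelength_gbps // client_gbps)       # 4 clients muxed per wavelength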

  • Library of Congress per Second?

  • To say the least, there are a few (cough) additional technical hurdles to overcome first. However, at these speeds, I wonder if this could be one of the technologies needed to make transmitting the exact state of a lump of matter, with reconstruction on the other end, feasible. It is difficult to imagine other applications where this kind of insane speed would be necessary (apart from denial of service attacks).

  • Multi-core fiber is the new cheat and it's somewhat pointless. The space which a strand of fiber consumes is rarely a constraint, so the only real benefit of multi-core fiber will be when they can be constructed for nearly the same price as single core fiber and the operational overhead of aligning cores is minimized. Furthermore all of the fiber in the ground is single core, so advertising transmission rate records for DWDM networks based upon fiber that doesn't exist is dubious.
  • 10 million channels in 8K resolution and the content still stinks. Even 10 million turds polished to a shine are still turds.

  • This will be welcomed by connoisseurs of horse porn everywhere. But more seriously, we need this before we can advance to holographic communications, which will require huge data rates.
  • How many of us would be happy with just 0.02 Pb/s from our provider? (I'm looking at you, Spectrum.)
