(And it's Paolo, not Paulo; he's Italian, not Portuguese...)
What is the advantage of multi-mode then?
Basically what you said: the fiber's core is larger, allowing one to inject power more easily (lower insertion loss, wider tolerances on connectors, so you can have them installed by a less-qualified technician). It doesn't change the loss per kilometer, though, AFAIK. The nonlinearity should be lower, as a given optical power would be spread over a larger area; however, that matters only for high-power applications, or long-distance transmission where you basically can't use multi-mode anyway.
There is research on how to use multiple modes for separate channels, much like separate wavelengths. However, separating the modes at the output of the fiber is much more difficult.
We even do "multi-mode" optical (different color/wavelength lasers) all day long but only for short cable lengths.
I'm afraid you're mixing up frequency/wavelength modes with propagation modes. Most long-distance systems use several different wavelengths, that's what WDM is. But they use single-mode fibers, meaning that light at a given optical frequency (and polarization) can only propagate in a single way, thus at a given speed. Multi-mode fibers, with a wider core, let light propagate over different modes (different possible paths in the core for light rays, kind of), which plays havoc with the signal (pulses get echoes and whatnot), which is why they are used only for short distances.
The experiment described here uses OFDM, which is in principle akin to WDM, but squeezes many wavelengths as close together as theoretically possible, too close to be separated by classical optical filters. Instead, you can separate them mathematically using an FFT, but that takes a lot of computing power. What the authors did is implement the FFT optically, which is very neat. It enables the use of OFDM at ultrahigh bit rates; and the details of OFDM are such that, used in the right way, it can be extremely resistant to signal degradation (look e.g. at Figure 4(c) in the Nature Photonics article, and think about how tightly a conventional system at that bit rate would have to manage dispersion).
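The FFT trick is easy to sketch numerically. Here's a minimal toy model (the subcarrier count and the QPSK mapping are my own choices, nothing from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64                                  # number of OFDM subcarriers (my choice)
# One QPSK symbol per subcarrier
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_sub)))

# Transmitter: the inverse FFT puts every symbol on its own subcarrier;
# the subcarriers overlap in spectrum but stay mutually orthogonal.
time_signal = np.fft.ifft(symbols)

# Receiver: one forward FFT separates them again -- the step the authors
# perform optically instead of in high-speed electronics.
recovered = np.fft.fft(time_signal)
```

The subcarriers overlap in the spectrum yet remain orthogonal, which is exactly why no optical filter can pull them apart but a single FFT can.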
What bugs me is that they describe their setup as performing better than plain coherent detection (Figure 5), which I have a hard time believing. Exactly how did they do the comparison, I wonder.
the professor seems to be contemplating the use of many optical modulators (each at 500GHz), each operating on a different fundamental wavelength to multiply the link bandwidth. Hence the prospect of petabit and exabit data rates from 500GHz modulation.
And that's the key problem: you can't just replace the 40-GHz modulators in a 50-channel x 40-Gbit/s fiber system, because the optical frequencies of the channels must be spaced widely enough that the channels won't overlap. These ultra-high-speed modulators might help do Tbit/s single channels better than all-optical solutions such as OTDM (which has worked in the lab for a decade, but is fiendishly fragile and unstable), but they won't change the total bandwidth: whether it's 50x40 Gbit/s or 4x500 Gbit/s, that's what you'd fit in the couple of THz of the C band (wavelengths in the 1530-1560 nm range, corresponding to the bandwidth of conventional Erbium-doped fiber amplifiers).
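The back-of-the-envelope version of that argument (channel counts from the post above; the C-band figure is my rough assumption):

```python
c_band_hz = 4e12                 # rough usable C-band amplifier bandwidth (assumed)

# 50 channels x 40 Gbit/s on a legacy grid...
legacy_total = 50 * 40e9
# ...vs 4 channels x 500 Gbit/s: faster channels need proportionally wider
# spacing, so the fiber's total throughput barely moves.
fast_total = 4 * 500e9

print(legacy_total / 1e12, fast_total / 1e12)   # 2.0 2.0 (Tbit/s either way)
```

Faster modulators just repartition the same amplifier bandwidth into fewer, wider channels.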
To increase fibers' total capacity, you can go two ways: a larger bandwidth, or a higher spectral efficiency. To enlarge the bandwidth, you need new amplifiers; I mentioned EDFAs, but other types have been developed with much larger bandwidths, and you could theoretically cover the entire wavelength range where fibers transmit well, about 1200-1600 nm (60 THz wide). Of course, you'd have to replace currently-deployed long-distance fiber links, which usually have EDFAs every 100 km or so.
A cheaper way, also promising, is increasing the spectral efficiency: instead of modulating the light by switching it on and off (on-off keying, OOK), which basically yields a bandwidth about twice the modulation rate (so 40-Gbit/s OOK channels must be separated by 100 GHz), you can adjust the intensity and/or do tricks with the phase and polarization of the light (modulation formats such as PSK and QAM). Currently favored is PolMux-QPSK, which can fit 100 Gbit/s in the same bandwidth as a 10 Gbit/s OOK channel, and thus lets carriers upgrade their WDM systems progressively, wavelength by wavelength.
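The bits-per-symbol bookkeeping behind that claim can be written out (the 100 Gbit/s line rate is from the post; equating spectral footprint with symbol rate is a simplification):

```python
# On-off keying: 1 bit per symbol, one polarization.
ook_bits_per_symbol = 1
# QPSK carries 2 bits per symbol; polarization multiplexing doubles that.
polmux_qpsk_bits_per_symbol = 2 * 2

# So a 100 Gbit/s PolMux-QPSK channel only needs a symbol rate of
baud = 100e9 / polmux_qpsk_bits_per_symbol
print(baud / 1e9)   # 25.0 (Gbaud)
```

25 Gbaud fits in the same 50-GHz grid slot as a 10 Gbit/s OOK channel, which is what makes the wavelength-by-wavelength upgrade possible.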
The price for high spectral efficiencies is that you need a much more complex receiver (keywords: "coherent optical systems"). But the payoff is potentially huge: that receiver has to include DSP anyway, which enables much more advanced digital communications algorithms, making the links far more robust to signal degradation and increasing the reachable transmission length.
a 1 watt laser is not going to damage your eyes even if you point it straight into your retina. [...] A 100 watt laser, on the other hand
I believe you forgot a couple of "milli"s. Laser pointers and optical communications sources are usually in the milliwatt range and, while not harmless, can easily be handled safely. A 1-watt laser will cut plastic and can blind you even through indirect reflections.
not to be a douche on this, but what is my incentive?
The same as for not downloading music that is not being offered legitimately. (Not even for free; you'd actually be paying the pirate, and not the artist.)
Photonics to the rescue indeed; but I thought wave-synchronised light sources at this distance would be considered part of the lab-experiment grade equipment this was said to be doable without.
Right, and this was the big problem with coherent when it was first proposed for optical systems back in the 1980s.
Now, you just ensure that the local oscillator is within a few tens or hundreds of MHz of the signal carrier, which is not too difficult. A residual phase drift of several hundred Mrad/s sounds high, but compared to a symbol rate of a few tens of gigabaud it is not that much, and it can be compensated in software.
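One standard software trick for this is Viterbi&Viterbi carrier recovery: raise the QPSK signal to the fourth power to strip the data, average, and divide the angle back out. A toy version (constellation, offset value, and the noiseless channel are my simplifications):

```python
import numpy as np

rng = np.random.default_rng(1)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 1000)))
phase_err = 0.3                     # residual carrier phase offset (rad), assumed fixed
received = symbols * np.exp(1j * phase_err)

# Raise to the 4th power: QPSK data (multiples of pi/2) cancels out, leaving
# 4x the carrier phase. This pi/4-offset constellation contributes an extra
# factor of -1, hence the sign flip before taking the angle.
est = np.angle(-np.mean(received ** 4)) / 4
corrected = received * np.exp(-1j * est)    # back on the ideal constellation
```

In practice the estimate is only unambiguous modulo pi/2 and has to track a drifting phase block by block, but the principle is the same.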
Quick question: my understanding is that in wireless, we're at 4-5 bits/s/Hz. Why is that figure so much lower with fiber?
Because it's more complicated to reach for a high spectral efficiency. Until now, on fiber, it was possible to just increase the spectral bandwidth (increase the number of wavelengths in a single fiber, in fact). In wireless, on the contrary, the spectrum is much more regulated--if only because it is shared among everybody, whereas what happens in a fiber doesn't affect anything outside it. Thus the drive for a high spectral efficiency in radio.
Polarization in multimode fiber is out because the polarization tends to become random after it is transmitted through a long enough multimode fiber.
Oh, singlemode fiber isn't better in that regard, but yes, that's certainly SMF they're talking about, if only because that's what's installed in current long-distance links. Also, you can indeed have polarization-maintaining SMF, but not over hundreds of kilometers. For what is actually done to multiplex over polarization, see my earlier post.
Come to think of it, could you then encode data in linear, elliptical, as well as circular polarization directions?
I don't see why not, though the encoding might be slightly more complicated. To answer your question about how to generate a PolMux signal, you take two lasers, which you modulate independently, then inject into a polarizing beam splitter. You can also change any polarization into another using quarter- or half-wave plates, or fiber-loop polarization controllers. The former use the properties of certain crystals to rotate polarization axes; the latter are simply loops of optical fiber which you orient and warp (google "polarization controller").
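Jones calculus makes the wave-plate behavior easy to check numerically. A small sketch (angle conventions are my own, and I ignore the common phase factors real plates add):

```python
import numpy as np

def waveplate(delta, theta):
    """Jones matrix of a retarder with phase delay delta and fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([1, np.exp(1j * delta)]) @ R.T

H_pol = np.array([1, 0], dtype=complex)       # horizontally polarized light

# Quarter-wave plate (delta = pi/2) at 45 degrees: linear -> circular
circ = waveplate(np.pi / 2, np.pi / 4) @ H_pol

# Half-wave plate (delta = pi) at 22.5 degrees: rotates the linear state by 45 degrees
rot = waveplate(np.pi, np.pi / 8) @ H_pol
```

The quarter-wave output has equal power in both components with a 90-degree phase difference (circular light); the half-wave output is the input linear state rotated by twice the plate angle.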
Just some random thoughts that come up because the article isn't very technically detailed.
Indeed. I haven't even seen which principle they use for the announced system. I assume it's PolMux+DQPSK at 12.5Gbaud, like everybody else at this point. But do they actually use DSP, or did they remain analog for now?
Different wavelengths follow different paths down the fibre and will arrive with different latency and distortion; so multiple wavelengths carry concurrent frames, rather than concurrent bits;
Well, yes. There are "wavelength-striped" systems in laboratories, but only for short-distance links AFAICT.
Also, no production DSP will pull phase information out of optical frequencies; to do so reliably requires a sample rate of at least 4x the frequency, so your 1530nm signal would need to be sampled and processed at around 800,000 GHz (yes, the best part of 1 PHz. Per-channel). Good luck with that.
Electronics won't do for this, photonics to the rescue!
The article implies that it's easy to do, there was simply never a need before. I seriously doubt that it's a trivial thing to accomplish a four-fold increase in bandwidth on existing infrastructure.
It's not, as you have pointed out. My interpretation is that, on the contrary, phase and polarization diversity (which I'll lump into "coherent" optical transmissions) are hard enough to do that you'll try all the other possibilities first: DWDM, high symbol rates, differential-phase modulation... All these avenues have been exploited, now, so we have to bite the bullet and go coherent. However, on coherent systems, some problems actually become simpler.
Polarization has a habit of wandering around in fiber.
Quite so. Therefore, on a classical system, you use only polarization-independent devices. (Yes, erbium-doped amplifiers are essentially polarization-independent because you have many erbium ions in different configurations in the glass; Raman amplifiers are something else, but sending two pump beams along orthogonal polarizations should take care of it.)
For a coherent system, you want to separate polarizations whose axes have turned any which way. Have a look at Wikipedia's article on optical hybrids, especially figure 1. You need four photoreceivers (two for each balanced detector), and you reconstruct the actual signal by digital signal processing. And that's just for a single polarization; double this for polarization diversity and use a 2x2 MIMO technique.
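The polarization-demultiplexing step amounts to inverting a 2x2 matrix. A toy model (the fiber's Jones matrix is a plain rotation here, and I invert it exactly rather than estimating it adaptively as a real equalizer would):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two independent QPSK streams, one per transmitted polarization
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, (2, 200))))

# The fiber scrambles the polarization axes by some unknown Jones matrix
theta = 0.7                                  # arbitrary rotation angle
J = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rx = J @ tx                                  # what the coherent receiver sees

# The 2x2 MIMO equalizer inverts the channel; a real system estimates J
# adaptively (e.g. with the constant-modulus algorithm) instead of knowing it.
recovered = np.linalg.inv(J) @ rx
```

Because the channel is unitary, the inverse always exists and the two streams come out cleanly; the hard part in practice is tracking J as it drifts.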
That's why it's so expensive compared to a classical system: the coherent receiver is much more complex. Additionally, you need DSP and especially ADCs working at tens of gigasamples per second. This is only just now becoming possible.
Phase encoding has similar problems. Dispersion, the fact that different frequencies travel at different velocities (the same effect that makes a prism separate white light into a rainbow), will distort the pulse shape and shift the modulation envelope with respect to the phase. You either need very-low-dispersion fibers, and they already use the best available, or some fancy processing at the receiver or a repeater.
Indeed. We are at the limit of the "best available" fibers (which are not zero-dispersion, actually, to alleviate nonlinear effects, but that's another story). Now we need the "fancy processing". And lo, when we use it, the dispersion problem becomes much more tractable! Currently, you need all these dispersion-compensating fibers every 100km, and they're not precise enough beyond 40Gbaud (thus 40Gbit/s for conventional systems). With coherent, dispersion is a purely linear channel characteristic, which you can correct straightforwardly in the spectral domain using FFTs. Then the limit becomes how much processing power you have at the receiver.
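In sketch form, electronic dispersion compensation really is just one multiplication in the frequency domain (sample rate, fiber length, and the exact beta2 value are my assumptions, though -21.7 ps^2/km is typical for SMF at 1550 nm):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
signal = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))  # crude QPSK samples

fs = 50e9                    # receiver sample rate (assumed)
beta2 = -21.7e-27            # group-velocity dispersion, s^2/m (typical SMF at 1550 nm)
length = 500e3               # 500 km link

w = 2 * np.pi * np.fft.fftfreq(n, 1 / fs)     # angular frequency of each FFT bin
H = np.exp(0.5j * beta2 * w ** 2 * length)    # all-pass dispersion transfer function

dispersed = np.fft.ifft(np.fft.fft(signal) * H)             # what the fiber does
restored = np.fft.ifft(np.fft.fft(dispersed) * np.conj(H))  # what the DSP undoes
```

Since the dispersion filter is all-pass, applying the conjugate undoes it exactly in this noiseless toy; the real cost is the FFT throughput at tens of gigasamples per second.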
The article downplays how hard these problems are. It implies that the engineers simply didn't think it through the first time around, but that's far from the case. A huge amount of money and effort goes into encoding information in fiber more efficiently. There probably is no drop-in solution, but very clever design in new repeaters and amplifiers might squeeze some bonus bandwidth into existing cable.
Well, yes, much effort has been devoted to the problem. After all, how many laboratories are competing to break transmission-speed records and be rewarded with the prestige of a postdeadline paper at conferences such as OFC and ECOC?
As for how much bandwidth can be squeezed into fibers, keep in mind that current systems have an efficiency around 0.2 bit/s/Hz. There's at least an order of magnitude left for improvement; I don't have Essiambre's paper handy, but according to his simulations, I think the capacity limit comes out around 7-8 bit/s/Hz.
Somebody ought to cross ball point pens with coat hangers so that the pens will multiply instead of disappear.