Technology

Bell Labs Achieves 3.28Tbps Over Fiber

Dave-V writes: "Scientists at Bell Laboratories said they have set a world record by transmitting 3.28 trillion bits of data per second over 300 kilometers of fiber optic cable. The research arm of Lucent Technologies said it was the industry's first demonstration of long-distance, triple-terabit data transmission. Researchers achieved those speeds using Lucent's experimental optical fiber, called TrueWave. Bell Labs scientists said they used three 100-kilometer fiber spans to transmit 40 gigabits per second over each of the 40 wavelengths of light (colors) in the conventional C-band frequency range and 40 Gbits/s over each of the 42 channels in the long-wave L-band range."

The FoxNews article contains more details. With Iridium about to heat up in the worst way, and landlines jumping in capacity, maybe the future really does hold a fiber-optic link straight into every permanent structure on Earth.
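For the curious, the headline number checks out; a quick sketch (channel counts taken from the summary above):

```python
# Sanity check on the claimed aggregate: 40 C-band wavelengths plus
# 42 L-band wavelengths, each carrying 40 Gbit/s.
c_band_channels = 40
l_band_channels = 42
gbps_per_channel = 40

total_tbps = (c_band_channels + l_band_channels) * gbps_per_channel / 1000
print(total_tbps)  # 3.28
```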

  • No, I don't think it's adequate. There aren't enough negative points to give this guy. I wasn't born with enough middle fingers....
  • Anyone up for the DDOS attack to end all DDOS attacks?
  • Hard drives nuttin, my memory doesn't even go that fast. Neither does my L1 processor cache for that matter.
  • Not supprising. I'm dislexic, half the time you can spell the same word three of four diffrent ways and I won't even notice.

  • This may seem like a dumb question for some, and I know I've heard it, but I can't quite work out the reason.

    Are the distances that much greater, or is there something else that slows it down. Does air slow down the signal?
  • Actually no, as DSL is copper pair technology and this is fiber-optic. I'd settle for a fiber-optic line to the house instead of DSL, though :) except for the price :(
  • Cable is from the cable company, DSL is from the phone company. Not the same people. Both afraid the other will start competing in the business of the one, therefore both holding back in markets where they don't want to compete with each other (but the phone company is using underground shielded twisted pair messengered with coax for all their new installs up to the subscriber network interface, just in case).
  • This would ease bottlenecks on all networks, but the true bottlenecks will be the CPU and the hard drive (mostly the hard drive). They should start looking into this technology for the parts inside the computer.



    http://theotherside.com/dvd/ [theotherside.com]
  • If you go look at the Bell Labs site [bell-labs.com] you can see that the papers on the subject reference erbium doping for the fibers, which IIRC is fairly common for extremely high speed applications.
    Sam TH
  • ISDN was a late-'70s, early-'80s technology. It wasn't aggressively marketed, well, ever (by the telcos, that is). It wasn't even lightly marketed until late in the '90s. It was very hard to buy in the early '90s (like it was hard for ISPs to buy it, and they were used to talking to telcos then).

    That is true for a number of reasons though. Many ISPs didn't have trouble getting ISDN, the problem was the lack of standards. Unlike Europe and Canada, the US has no single ISDN standard, which makes uniformity and support more difficult than it needs to be.

    Much like with digital wireless connectivity, our capitalism, although making for great advances in technology, screws the consumer a little bit by creating a lack of standards.


    -Jer
  • Well, CMU does have Forum2000 [forum2000.org], which uses at least 32 terabytes.
  • Most of the fibre which is currently underground is laid using tubes. Replacing them is a matter of pulling the fibre out of the tube and blowing new fibre through the pipes. Besides, many new telcos use 'dark fibre' when expanding their networks these days.
  • The networking of that speed isn't really required for the average local area network, where simultaneous HD access would be commonplace. This technology IS needed on the internet backbone, but since the backbone consists of nothing more than wires and routers, HD speed isn't really an issue.

    Not that I'm saying HD's shouldn't be faster anyways. :)

    -Restil
  • by virid ( 34014 )
    bah, i'd like to see the numbers on the bit error rate
  • On the other hand fiber should be able to transmit a signal in 0.2 seconds to any place in the world.

    Actually, the earth's circumference being 4e4km, and light travelling at 3e5km/s, that makes circumnavigation take 0.13 seconds. The other end of the earth is half that distance, requiring 0.067 seconds.

    Of course, there will be additional delays from routers and switches, and the fact that not all traffic will travel in great circles, so 0.2 seconds is probably more realistic.
    --
    Patrick Doyle
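For what it's worth, those figures assume light at vacuum speed; in glass the signal travels at roughly c divided by the refractive index (about 1.47 for silica fiber -- a typical assumed value, not from the post):

```python
# Great-circle propagation delay at the speed of light, vacuum vs. glass.
# The fiber index of 1.47 is a typical figure for silica, assumed here.
C_VACUUM_KM_S = 3e5
FIBER_INDEX = 1.47
EARTH_CIRCUMFERENCE_KM = 4e4

def one_way_delay_s(distance_km, refractive_index=1.0):
    return distance_km * refractive_index / C_VACUUM_KM_S

print(round(one_way_delay_s(EARTH_CIRCUMFERENCE_KM), 3))                   # 0.133 (vacuum, full circle)
print(round(one_way_delay_s(EARTH_CIRCUMFERENCE_KM / 2, FIBER_INDEX), 3))  # 0.098 (fiber, to the antipode)
```

So even before routers and switches, the antipode is closer to 0.1 seconds away in fiber than 0.067.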

  • This is fantastic! If I read and understand this correctly, they have 100km fiber runs _without_ a repeater! That's truly excellent. Most of the cost for long distance runs [after the right-of-way] is in the repeaters and powering them, not in the media. And they can run it at 40 GHz. That's 4 THz/km. Normally, fiber is limited by "smearing" over long lengths--the light pulses get spread out over the length of the media.

    (you mean THz * km: longer distances are more impressive). 40 Gbit/s per channel with 100 km repeater spans is not bad, but it is definitely not why they got a postdeadline paper in OFC 2000 (Optical Fiber Communications Conference, paper PD23)! There is another OFC postdeadline paper about "320 Gbit/s single channel pseudo-linear transmission over 200 km of non-zero dispersion fiber" (also from Bell Labs/Lucent Technologies) -- there the amplifier spacing is 100 km as well. So that's 64 Tbit/s*km. Probably, the reason that that paper had two amplifier spans rather than three was that they could not get it working for three amplifier spans :-).

    Once you allow for amplifiers (which are more expensive than just fibre, but not too expensive), you can basically go unlimited distances, e.g. 10,000 km at 10 Gbit/s is almost run of the mill. And as long as it is just amplifiers, the telecomms companies don't mind putting in a few more, with a slightly smaller amplifier spacing, if that will make the system more robust. What they really hate are regenerators, where you demultiplex the signal, put it through a receiver, (probably do error correction), and then retransmit it again: that is horribly expensive. But amplifiers are relatively cheap.

    The smearing you are talking about is not a major problem anymore, with properly designed systems (and fibres): with dispersion management and dispersion compensation the net linear dispersion can be brought down to zero.

    Jeroen Nijhof

  • At 10 gig per lambda we can put 160 (or so) wavelengths through on dispersion shifted fiber.
    At 40 gig per lambda we can put 40 (or so) wavelengths through on dispersion shifted fiber.
    I'm assuming using the 1550 window here.
    I'm sorry, I can't remember more about the relation between modulation speed and # of wavelengths. At higher speeds you have shorter pulses, so I am guessing you need more space to pick out these individual pulses. Also, you can only launch so much energy down a fiber.
    Remember, the article says three spans, which basically means they do an optical/electrical/optical every 100 km. Which is quite reasonable, but it would be more impressive if they went farther on a span.
    The whole key to this article is the fiber they used. I am guessing they engineered every meter of it to make sure they have as perfect a fiber as you can make today (minimize dispersion, etc.).
    In conclusion, just because they have done this doesn't really mean a lot in the real world. It will be a few years until you can buy fiber that will consistently do these speeds. Unless you're Qwest or someone, you now have OC-192 speeds going over 15-year-old fiber, and that is the majority of the fiber in the world. We need solutions for old fiber at the moment. It's expensive to dig up people's back yards.

  • So with that technology, can I FINALLY get some DSL service now? I'm waiting......
  • actually, screw quake. how about participating in the battle for hoth or storming the beaches of wwii, neh?
    --
  • They've all been posted in the last half an hour or so. They will be moderated down to -1 soon.
    Cheers,

    Rick Kirkland
  • And if you don't know what WDM means, do some research before making another irrelevant post.

    Sure, you could use fiber for the interconnects within a computer. But why? The signal starts as electrical, and it needs to be processed by more electronics which are only a few inches away. The expense, heat, and latency of converting it to photons is absurd when compared to simply sending it over copper.

    Now when you start getting into external peripherals, fiber becomes viable. Once you add up the expense of making a cable and connectors that can withstand life outside your case, transmit lots of signal very quickly, and perhaps get a signal several meters down the room, fiber suddenly becomes a really good idea.

    However, even when we're talking about Fibre Channel SCSI, we're still referring to multimode, which is way different from the fiber that telcos use. I'm not going to go into the details, but it has to do with the way light pulses stretch out as they pass through the fiber.

    These researchers are pushing limits that have nothing to do with your garden-variety optical semaphore. The speed at which your SCSI controller might modulate light will result in pulses several meters long (do the math -- speed of light, frequency of signal) so it doesn't much matter what color they are, and you can send several colors down a fiber with no problem. Now try sending pulses so fast that each pulse is only a couple waves long. Suddenly it becomes more difficult to cram closely-spaced colors onto the same piece of glass.

    Are you beginning to get a sense of why multiterabit optical links aren't practical inside your computer?
  • Sure, most telco transport equipment lets you "drop-and-continue" a signal. The logistics would be difficult; satellites are really better for broadcasting, where latency isn't an issue.
  • You could take just one of the 82 color wavelengths (40 Gbps), and use it to broadcast a thousand 20 Mbps HDTV channels, plus four thousand 5 Mbps video channels.

    I wonder if a particular color wavelength(s) designated for broadcast could just branch-out over a whole country without having to be switched?
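A quick check of the channel budget in the post above (all figures from that post):

```python
# One 40 Gbit/s wavelength split into broadcast channels.
hdtv_mbps = 1000 * 20   # a thousand HDTV channels at 20 Mbit/s
video_mbps = 4000 * 5   # four thousand video channels at 5 Mbit/s

total_gbps = (hdtv_mbps + video_mbps) / 1000
print(total_gbps)  # 40.0 -- exactly one wavelength's worth
```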
  • I am pretty sure that the only people on the planet who would be able to actually afford this connection would have to be millionaires. And make that per year, cause it'll prolly be that much PER year.
    No money for food, rent, or a car, yet I can transfer the contents of the M$ terra server in a second.

    Seriously though? How the flying f*** were they able to test that speed? Does someone have a couple terra servers on each side of the country?
    Okay, they prolly used a smaller file, but in order to be completely accurate for speed you'd think they would have to use something a hell of a lot bigger than a text file, or the contents of my 30 GIG drives for that matter. That would happen instantaneously.
    Nevertheless, like people have been saying above, anyone for a worldwide game of Quake?
  • It's my understanding that there ARE repeaters in the trans-oceanic runs. I believe they have ships that regularly pull up the cable and replace the batteries in the repeaters. They also have problems with sharks attacking the repeaters because they have a slight electro-magnetic field that sharks can detect.
  • If the anti-trolling system does not work, are you insisting on trolling like I've never seen trolling before?

    BTW Kiddy rape is just not on. Ever.

    I guess my point here is: what is your point? Any dumbass can figure out that multiple logins means you get to post at +1 with anything you do. If you artificially raise your Karma up enough by posting like crazy in "older" stories, it goes to +2. What have you proven by exploiting that?

    What a waste of what appears to be a fairly bright mind.

    MRo
  • why do you reply? do you enjoy abuse?

    I'm starting to side with the troll on this one.

    let it go.
  • hahaha

    the home page is a classic...

    We call that one at work

    " THE RECEIVER "

    it's usually greeted with a scramble for the kill button.

    as for my "ignorance" as to the number of AC trolls. -I'm not- just got my/your/our wires crossed.

    Thanks for the amusing morning.

    MRo
  • You don't have a clue do you ?

    Individually you can't use the entire bandwidth, but across a MAN or WAN with multidrop ADM muxes you could easily fill this. The network I'm currently monitoring has 4 OC-192 (that's STM-64 for SDH purists) rings and at least 50 STM-16 rings. Those are 10GIG systems BTW. You can't fill that in a second from your hard drive either, but it CAN support 120,000 simultaneous phone conversations in a 1+1 ring-protected topology.

    that's what it's for... now.

    In the future, using technologies like POS (Packet Over SONET), ATM/FR, or straight GIG-IP, we could *easily* fill that sucker up.

    I guess to cut what would be a long and boring story short, don't think of yourself as an individual using this bandwidth, this thing is designed for carriers to interconnect cities ( or countries ) with one fibre. you don't have the processing power or IO bandwidth on a PC to ever hope to decode the number of channels associated inside one of these babies.

    I'm sorry to burst your bubble, Grant, but using the correct gear (Newbridge 36170 - 36190, e.g.) we can easily do this. We approach similar speeds (but down separate fibres) today.

    MRo

  • Please pour corrosive hot grits down your pants.
  • sure a dollar buys twice as much microprocessor every 18 months, but bandwidth triples yearly through 2020.. quick math done: Microprocessing gets 64 times more affordable by 2010, but the Macrocosm will be 20,000 times more affordable.. hmmmm..
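The arithmetic behind those factors, assuming a nine-year horizon (roughly 2001 through 2010):

```python
# Compounding the two quoted growth rates over nine years.
years = 9
processor_factor = 2 ** (years / 1.5)  # a dollar buys 2x every 18 months
bandwidth_factor = 3 ** years          # bandwidth triples yearly

print(int(processor_factor))  # 64
print(bandwidth_factor)       # 19683, i.e. roughly 20,000
```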
  • At the rate that Napster has been cutting people off short on downloads, their servers locking up, and people using amazingly lagged cable modems, I'd say your download will stay the same.
  • I'd make a slight modification to your chart, but I like it otherwise:
    1988: ISDN 56k
    1997: 56k modem
    1998: 512-1024 kbps xDSL
    1998: 1024 kbps Cable
    As Stripes mentioned, portions of fat pipes (Tx, OC-x, etc.) became popular for business around 1988, and beginning around 1998 some providers such as UUnet, and even locally Ameritech, began providing these pipes residentially. How else would all those hacker shell-account ISPs for IRC users have popped up? They are just families with a few Linux boxes and a T1 or 2 running into their house.
  • ummmm think for a second! In fact, I will put this in language even you can understand.

    -many people on earth

    -people communicate

    -many people on earth communicate

    -need bandwidth for many people to communicate

    -not many people have a Backbone connected to their house

    -Max speed any one person would really get out of it is 10Mbps (if used as a backbone)

    -hard drives can keep up with that

    -Internet backbone needed for Millions of people communicating will be a lot

    -that is why they invented a 3.28 Tbps Fiber Network

    Hey, if it wasn't for dumb people I wouldn't be smart.

  • No question about it, dedicated fiber is what is gonna be holding up most of the 'net within a decade or so.

    I bet that we'll get a multi-terabaud line going to every major city (and maybe 100-1000 gigabaud to minor cities and towns). From there, you run slower lines (still somewhere in the 1-10 gbps range) to every home, pretty much replacing the phone lines. Within the home things get tricky.

    Only rich people and computer techies will have fiber optic cables running through their house: it's much cheaper to have a wireless LAN, unless you don't mind cables all over the place (as opposed to building them into the walls). You just hook up your home cable to a master computer in your basement (or closet or whatever) and everything in your house hooks up to that. Current trends suggest we'll be able to get 20 mbps or so over the airwaves, maybe even more at short range like in a single home.

    You could get all the communications you usually get over that one cable: TV, radio, an internet connection, phone, email, maybe even things like delivery (don't want to wait for your copy of Quake IV to arrive? Just download it right from the site you bought it from, it can't take more than a second). Everything would perhaps be done by your CSP (communications service provider) or maybe ISP (informations service provider).

    Now, the reason I don't see the world going fully wireless (IE, only have that big fiber optic backbone between cities, and for everything else use airwaves) is that you don't get the speed (about a factor of 1000), reliability (In a big city especially, I bet all the metal frames holding up skyscrapers could make for nasty static), and privacy (it's pretty hard to tap fiber without someone noticing, especially if they're on the lookout, which might be an automatic feature of the CSP) of cable. Not that people will be handing in their wireless phones, of course. More likely, they'll scrap their normal phones completely. Like I said, most of the stuff on the personal level (everything that you actually come into contact with) will be wireless, unless you spring for a home fiber optic network. You'd use your digital phone (slash PDA slash mini-computer slash anything) in your house, on the way to work, and when you got there it'd tap into the company wireless network. It'd be nice if we could get a wireless standard (instead of more than one wireless standard) that would work anywhere, so that you could take your phone across the world and still check your mail.

    It looks like computers will have no trouble using that bandwidth (with an IBM 75 gig HDD, it'd actually take several seconds to fill it up with mp3s), although it's anyone's guess if they can actually take advantage of the data (will we still have programs that take up 100-200 mb? In that case, I'd have room for about 250-300 of them, with space to spare for files). Then again, what with my 38" 200 PPI 16:9 aspect ratio LCD, I might need all that space for the new graphics in the latest game (or the latest version of windows). I sure hope my computer comes with a chip to run all this stuff though...
  • As a great man once said:

    "HOOK IT TO MY VEINS!"
  • A: Most communications satellites are in geostationary orbit, so that they have a 24-hour orbital period (meaning a satellite stays over the same point on Earth in its orbit). That's 35,786 km up.

    Speed of light = 3e5 km/s.
    35,786 km x 2 ≈ 71,600 km; 71,600 / 300,000 ≈ 0.24 seconds to go up and back. Not counting switching.
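The same round trip, sketched out with the geostationary altitude quoted above:

```python
# Round trip to a geostationary satellite at the speed of light,
# ignoring switching and ground-segment delays.
C_KM_S = 3e5
GEO_ALTITUDE_KM = 35786

round_trip_s = 2 * GEO_ALTITUDE_KM / C_KM_S
print(round(round_trip_s, 2))  # 0.24
```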

  • The economists often talk about the second mover's advantage: you don't have to discover each and every part of a technological invention again and again, and you can concentrate on the best of them... I hope this answers your question.
  • This is interesting. I've seen quite a few posts about how this will affect gaming, but not much else.

    Not that this is a bad thing.

    Gaming is an *excellent* benchmark for network bandwidth capabilities, and should be treated as such. If you can download stuff okay, but you get your ass fragged every 4 seconds in Q3A, maybe you should evaluate your connection.

    Gaming isn't for everyone. But it's a great way to stress-test your line.

  • Raman amplification should not be confused with EDFAs (Erbium Doped Fiber Amplifiers). EDFAs use relatively short lengths of fiber (< 100 m) doped with the rare earth element erbium. These VERY expensive specialty fibers are contained entirely within the head-end equipment driving the fiber and in the repeaters along the way.
    Raman amplification uses non-linearities and energy soakage effects of the long lengths of ordinary "outside plant" fibers to create optical gain. This works at least to some degree on all common single-mode fibers (the type used for long-distance transmission). The effect has been known for some time, but there were difficulties (apparently resolved by Bell Labs) in using this method in high-density systems.
    TrueWave is not, by the way, an experimental fiber. There is a fairly large installed base of this class of fibers (Non Zero Dispersion Shifted).
  • I think Nortel got 9 terabits a second a few months ago, but it was over a 10km-long line and not a 100km one.

  • > That's idiotic. Vulgarity and obscenity are not so much in the words themselves, but how you use them. For instance, a "ram" is an animal. An "ass" is an animal. But if I say I want to ram your ass, that means something entirely different. That means either I want to buttfuck you, or hit your donkey with my Ford truck.

    Actually, to be more descriptively accurate, it would be "or hit your donkey with my Dodge truck." Dodge builds the Ram.

    Carry on.
  • All I want for Christmas is Bell Labs' triple-terabit data transmission. Just imagine what I could do with it... *rubs his chin and looks up* Download every Linux distro in under a minute, download lots and lots of mp3s, acquiring supreme dominance of the world!

    Billy Transue
    bill-transue@NOcoolmailSPAM.net
  • if only hard drives went that fast, now....

    Physical movement is too slow for most things; let's work on that and not terabit networking :)
  • The repeaters are quite cheap when compared to the expense of laying the fiber. Upgrading the max speed of fiber like this is quite awesome to see - if they can keep the rate of bandwidth increase up, they might never have to lay more fiber on their backbones again!
  • Maybe if we get some of these hooked up, at least someone might survive the slashdot effect.
  • I'm just as excited about this as everyone else is, but when you stop to think about it, what's the point? Let's do the math:

    3.28 Tbps = .41 TBps = 410 GBps

    410 Gigabytes per second. My present hard drive only holds 12.1 Gigabytes. So this thing could transmit the contents of my computer 33.88 times each second. And that's only if the hard drive could be read that fast (which it obviously can't).

    So my question is what on Earth are we going to be sending at speeds like 3.28 Terabits per second? Even if we were to split up this little bundle and give each computer in a building one of those strands (for one wavelength), 40 Gbps is still huge. I don't think there's a practical medium for data storage of vast amounts of data that can be read at speeds even close to that.

    So, we probably can't read the data that fast, but let's suppose we could. In theory, we could send this data at 3.28 Tbps. Now what are we going to do with it at the receiving end? Can anyone's processor even deal with data at speeds like that? (Especially while running an OS and who knows what else.) And how are we going to store it? Again, we need incredibly fast data storage to make this work.

    In short, with present technology, this system could not truly run at 3.28 Tbps anyway. Processing and data storage speeds would slow it down. The 3.28 Tbps seems more symbolic than anything else...

    Wow, I hate being the realistic geek...
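The parent's conversion, as a sketch:

```python
# Converting the link rate to bytes and comparing it to a 12.1 GB drive
# (the parent poster's figure).
link_bps = 3.28e12   # 3.28 Tbit/s
disk_gb = 12.1

link_gb_per_second = link_bps / 8 / 1e9
print(link_gb_per_second)                      # 410.0
print(round(link_gb_per_second / disk_gb, 2))  # 33.88 drive-fulls per second
```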
  • I agree. I'm not even going to try to argue with you. However, I don't think even connecting cities with something like this is going to come into widespread use anytime soon. There's too much other technology that has to come into existence before this can become practical for even the largest network. For now, I think we have to stick to what we've been using.

    Of course something like this will be great in the future. I don't think anyone is disagreeing with that. But right now, I think this technology requires too much more before it can become useful. The technology is in its infancy and a little ahead of its time. Nothing came of DaVinci's flying machine.

    And I wasn't trying to suggest that I would be using something like this on my PC. It was simply an example of how incredibly large 3.28 Terabits is.

    In short, my point was not that this is a useless technology. My point was that this technology is a bit impractical with today's resources.
  • That's one small step for man, one giant step for the porn industry.

    But seriously, I like these "toy shows". Just because this isn't something you can get right now, give it a bit. Chances are, in a few years, you'll be buying either this technology or something that is based off this technology.

    Also, there is another good part to this news. It's the technology breakthrough that is cool here. It's a case of "if we can do this in a test environment, how would we better be able to use this technology in a more practical, useful manner?" It's like cars. Much of the technology you see standard in our cars today was designed to be used in race cars and aircraft. Basically, if we can make things faster, why can't we keep the speed the same but make them cheaper and more efficient with the same technology?

    kwsNI

  • Well okay, I look at the huge amount of bandwidth you can get from the new lines that are coming out, but what processor can handle that much speed? I know that the pIII "was made around the internet" but I think this is too much bandwidth for even that.

    So what good would this serve for the rest of the world? Okay, so backbones could be instantly connected throughout the world. We'd see less seek time. Maybe even be able to sync DNS entries faster. But the cost of this alone would be huge. And what happens when a squirrel decides to gnaw a lil hole in the fiber? You just lost a heap of bandwidth right there. Fiber is great, but it's also fragile. You need to insulate and pad it quite a bit to make it worthwhile. And considering we still rely on satellite for most international networking, I don't see this helping any of us for at least another 5 - 10 years.

  • at least then you could cut down on negative moderation, and spend more time picking out the gems to help point out what has higher priority over the average post (score 2 and above)
    Just read at level 2 and higher, then. That's what I have saved as my default, and if you like score 2 and above, you should too. (See the "Save" checkbox up there next to the menus and the Reply and Change buttons? Click that before you hit Change.)

    That's about the best you can do. You occasionally *will* miss out on good stuff that hasn't gotten moderated up yet, and obvious parody that's really funny that got moderated down as a troll or flamebait, but the SNR is oh-so-much-better.

    --

  • I was just pointing out that both use fiber, and as long as they keep adding more and more fiber to their systems, it's going to add up to more residential broadband.
  • I wasn't born with enough middle fingers... I used to use that as a sig all the time. :) But that was back when MM was more popular...
  • Well, this will bring DSL/Cable closer to most people actually. Time Warner is constantly expanding their HFC networks (in the fiber area) to make cable available to more neighborhoods. The closer they bring fiber to your neighborhood, the higher speeds you'll be able to achieve from DSL/Cable (or maybe you'll get it at all!). Do they have switching equipment that's able to keep up with this?
  • haha..bravo. couldn't have said it better myself
  • True. I guess I've started yet another flame war. Sorry 'bout that. Thanks for the advice; I've come to more or less the same conclusion that you have. I'm sure the trollers out there have a different opinion, but hey, everyone's got their opinion, eh Jesus?
  • good point, i hadn't considered that.
  • Sigh, born and raised in Seattle, had you bothered to read any of my previous posts (listed on same page as profile).
  • Apparently in San Francisco, they've run out of OC3 lines. Too many DSL and cable customers in the area, times general over-population = need for high-speed fibre optics. How long till we see these puppies in the mainstream? Or are we going to see something piggy-backing this technology in the next few weeks? ...there always is something just around the corner.

    Hadlock

  • somebody take care of this atrocity...put the moderation to good use?
  • Although what ends up happening is that somebody owns the real estate that the fibre lies under (namely next to railroad tracks), and they control that part of the "backbone" for that region. Once the fibre is laid, they can charge rent on it to whomever wants to go direct from point A to point B forever... namely the big names in supporting the 'net.
  • Before I get flamed, I'd like to point out that yes, when you run out of bandwidth, you will want to expand your bandwidth. No doubt they are working constantly to upgrade the current situation. The above post's info comes straight from the mouth of the head of the SW Bell DSL testing labs, for whatever that's worth. Also, it looks like the ISPs have more than enough DSL capability; they're just now starting to use the 3/4 unused capability of their 12,000-user-capable routers. It just looks like they need the customers and OC3s (and those new terabit fibre lines! : )

    Hadlock

  • somebody with moderation points...go for it

    sigh..

    lamers

  • sorry if i'm whining here, but i was online and saw only 5 posts for a recent article, and posted a comment, and have been following it through the first 26 or so posts... about 3 of them so far have been gay porn related. Do these just end up getting moderated down, or do they just get "edited out"? By the time I end up reading most of a thread, it's got a good 200 posts. Maybe somebody could fill me in; the faq doesn't really explain what happens with the spam.
  • it's adequate, but (as all things), there's still some room for improvement. i haven't tested it out much, but i'm pretty sure that you can filter posts under a certain score... why not have an option to filter out the posts with say... 3-5 or more obscenities? at least then you could cut down on negative moderation, and spend more time picking out the gems to help point out what has higher priority over the average post (score 2 and above)
  • ...it won't make much difference to a lot of people, such as most British netheads - in Britain, an ISDN line costs about the equivalent of $150 per month, plus calls, and if you're lucky then cable modems might be available in your area soon! ADSL? Whats that??? :o(

    Not much fun living in a low-bandwidth country with a piss-poor telecoms monopoly...
  • I have been preaching this same gospel all over the web. But no one listens to me. FIBER IS GOING TO EVERY HOME! If you want to talk about this topic email me at atrox@mad.scientist.com
  • I think Cisco has terabit optical routers in the works. Nortel has an all optical switch that could handle this and 5 trunks just like it. I'm sure since it's Lucent's baby they have something for it too.


    Consider this: These signals from a switch's standpoint are not multiplexed. They enter the switch as 40-gig trunks. Not a problem for a modern switch.


    The hard part is a repeater every 100km. You can't sink this cable under water; you need a repeater and power source every 100km. Your data travels halfway around the world? It gets received, buffered, and retransmitted 200 times during the trip. Great throughput, high latency. The beauty of fibre has been low latency ("it sounds like you are right next door") and no noise/interference/path loss ("pin drop").

  • You are right. The first undersea fiber, called TAT8, was placed in 1988. It has regenerators every 79 km. Compared to TAT1, the first transatlantic communications cable (copper), TAT8 is a real workhorse: TAT8 can handle in 2 days the same traffic TAT1 carried in 22 years of operation. TAT8 contains 6 fiber strands; two pairs are lit and one pair is dormant. The distance required between repeaters is a function of the light source used. For a LAN-type system, modems and repeaters using conventional LEDs can achieve distances up to 2 km. For commercial bandwidth and range requirements, lasers are used. There are apparently fiber-doping processes that, used in conjunction with certain-frequency lasers, can entirely eliminate repeaters on transcontinental cable runs. However, since the existing fiber infrastructure was incredibly expensive to install, most current technologies are geared toward getting more out of what is already in place.
  • With that kind of bandwidth you could get streaming media with the quality of DVDs. You could also trade other copyrighted material as fast as your hard drive can pump it out. I think the advantages (speed) outweigh the disadvantages (piracy). The speed could even be an advantage: imagine online movie-rental stores that streamed the movie to you.

  • You keep fighting for your right to whatever the hell you feel they've violated, or sexually abused when you were young; I personally don't give a damn. I'm just sick of reading tasteless posts from sick people like you. Did you ever see a sex-related banner on /.? Why don't you try posting your stuff on your so sane-and-beautiful or whatever gay sites? Cos quite frankly, I don't want to know, and I'm sure I'm speaking for everybody else here as well: we don't want to hear it...
  • I think it would be a lot slower than that. You are assuming that light travels at the same speed it would in a vacuum, but how fast does it travel through this new fiber?
  • I've got one thing to say... DAMN!
  • the point is that they can do it - should we stop looking into the 'unknown' because we've found enough already?
  • by Anonymous Coward
    And just how are you supposed to switch this much data? They need to make major inroads on switching before this is practical (affordable) for most telcos. Not that this isn't cool in the strictest geek kind of way!
  • The cables break after a while due to continental drift. I think trans-Atlantic cables generally last around 20 years and then they are duds.

    The very first few trans-atlantic cables were dredged up for repairs, but I don't believe that this is done any more.

    I too have heard of shark problems, esp. goblin sharks, but I imagine modern cables are proof against this.
  • I believe they have ships that regularly pull up the cable and replace the batteries in the repeaters

    Incorrect. The cable that gets put into the ocean is a very complex cable. At its heart is the fiber, but there is also high voltage running down the wire, which the repeaters use to power themselves. Throw on a bunch more cladding, some reinforcing steel, another couple thousand volts, more cladding, more steel, armour and a rubber outer shell, and I think you're done.

    And you thought that Gobstoppers were layered! :-)

    I'm also pretty sure that the cable rests at the bottom of the ocean, which is quite some way for a ship to be pulling it up.

    They also have problems with sharks attacking the repeaters because they have a slight electro-magnetic field that sharks can detect.

    I haven't heard of this but wouldn't really worry too much about it.

  • That is true for a number of reasons though. Many ISPs didn't have trouble getting ISDN, the problem was the lack of standards. Unlike Europe and Canada, the US has no single ISDN standard, which makes uniformity and support more difficult than it needs to be.

    Having worked at a national ISP at the time, I would have to say lack of standards was not the big problem. Most telcos would let you pick which ISDN options to have on a line (there are hundreds, maybe thousands). If you ordered the line from the telco (as opposed to the customer showing up with a line already), ISDN's "lack of standards" (really more a lack of ability to throw anything out of the standard, instead just enumerating all possible choices) was no big deal.

    The two problems (as I remember them) were getting orders filled (being quoted multi-month lead times, and then having them slip, was not uncommon), and totally different price plans across the country (it is hard for a nationwide ISP to have a nationwide price if the service it is based on is flat-rate in Ameritech land and per-minute in NYNEX land). The ISDN PRIs (T1s) were even worse than the BRIs (2B+1D-channel, 128 Kbps at the home end).

    Much like with digital wireless connectivity, our capitalism, although making for great advances in technology, screws the consumer a little bit, by creating a lack of standard.

    I'm wholly unconvinced that digital wireless in the USA has been screwed by lack of a standard so much as by the licensing method used by the FCC. Find a socialist or communist country that has a Metricom-like service. While GSM is very nice, I like my SCH-3500 for datacomm far better than my Nokia 9000i. I don't feel screwed by CDMA. I do feel screwed by Sprint Spectrum, but that's a different issue.

  • Actually, it will make it /less/ difficult. Remember how expensive plain old 10BT ethernet used to be? Remember how much cheaper it got when 100BT became the norm? Terabit technology will only serve to make gigabit, or whatever, the cheaper norm. Sure, the links will be slower, but hey, it's better than nothing.

    --
  • Pardon my ignorance, and not too many flames please.

    My gripe comes from the fact that TERAbit technology like this only makes it more difficult for the developing nations to catch up with the rest of the developed world.

    I fully accept that technological advancement is wonderful, and I am glad that I can reap its benefits, but the problem in my eyes is that underdeveloped nations will get stuck on (relatively) slower links while rich nations and people get richer, becoming more and more advanced and compounding the situation.

    Now this is a problem with all things, but with Internet technologies I feel it matters far more, because the nature of the Internet is to give everyone a "fair go".

    I don't have an answer for all this, but is there any way to make sure this issue is addressed early, and before (if it is not already) too late?

    Cheers,
    denmaster
  • The speed of light in fiber is about 69% of the speed of light in a vacuum, roughly 100 ms to travel 20,000 km.
  • Just replace your CATV service with 1,000 channels of uncompressed HDTV at 1.5 gigabit/sec per channel. That would use about 50% of the bandwidth.

    I've read papers that argue that compression will no longer make economic sense as bandwidth becomes really cheap.
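    For what it's worth, the arithmetic behind that "about 50%" roughly checks out; a quick Python sketch using only the figures quoted above:

```python
# 1,000 uncompressed HDTV channels at 1.5 Gbps each vs. the 3.28 Tbps demo.
channels = 1_000
per_channel_gbps = 1.5
link_tbps = 3.28

used_tbps = channels * per_channel_gbps / 1_000
print(f"{used_tbps} Tbps used, {used_tbps / link_tbps:.0%} of the link")
# -> 1.5 Tbps used, 46% of the link
```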

  • Something that many people overlook when investigating their download speeds and ping times is that the Internet backbones their packets travel across affect them greatly. If your provider's connections link to a particular backbone that is saturated in areas, or dropping packets to certain destinations, your overall access times to those places will be much slower. Hopefully this invention can be implemented in projects underway such as Internet2 and IPv6, and the gaming protocol (the name fails me at the moment) will help alleviate the horrible ping times that gamers get to far-away places. The world will be a better place when the only barrier to playing games is the language! :)
  • Simply put: Newer technology is often cheaper.

    Laying a lot of fiber lines is trivial in cost compared to laying any amount of fat copper lines. Those things were about as wide as a Palm Pilot!

    There is a trickle down effect and there will always be someone "at the top of the heap". But remember that the old G4 machine that will be donated to a developing country a few years from now would have been a supercomputer to anyone a couple of decades ago.

    The tech gap is not the problem. Tech's cheap. Education is the problem. Intellectual haves and have nots is the growing gap. And problems with such gaps exist in developed countries too.
    --
  • ...and yet, Quake is still jumpy...
  • This is fantastic! If I read and understand this correctly, they have 100km fiber runs _without_ a repeater! That's truly excellent. Most of the cost for long distance runs [after the right-of-way] is in the repeaters and powering them, not in the media.

    And they can run it at 40 Gbps per wavelength. That's a 4 Tbps·km bandwidth-distance product. Normally, fiber is limited by "smearing" over long lengths--the light pulses get spread out over the length of the media. Common fiber running around campuses and biz-sites is good for something like 1 Gbps·km--a one-km run can be gigabit, but a 10 km run has to drop to 100 Mbps.

    The 40 channels per fiber is also impressive, but nothing like 100 km between repeaters or 4 Tbps·km.
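    The dispersion limit described above is usually quoted as a rate-distance product; a small sketch (the campus-fiber figure of roughly 1 Gbps over 1 km is the commenter's, not the article's):

```python
# Dispersion-limited fiber: bit rate times span length is roughly constant.
def max_rate_gbps(product_gbps_km, span_km):
    """Achievable rate over a given span for a fixed rate-distance product."""
    return product_gbps_km / span_km

demo_product = 40 * 100   # 40 Gbps over 100 km -> 4000 Gbps*km (4 Tbps*km)
campus_product = 1.0      # ~1 Gbps over 1 km for common campus fiber

print(max_rate_gbps(campus_product, 10))  # 0.1 (i.e. 100 Mbps over 10 km)
```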
  • Your university only has 24 terabytes of disk space? Wow! I know we're in the triple digits of terabytes of drive space, and that's excluding students. Informedia alone has a few terabytes of disk space.

  • A quote: [tpub.com]
    The speed of light depends on the medium through which the light travels. In empty space, the speed is 186,000 (1.86 x 10^5) miles per second. It is almost the same in air. In water, it slows down to approximately 140,000 (1.4 x 10^5) miles per second. In glass, the speed of light is 124,000 (1.24 x 10^5) miles per second. In other words, the speed of light decreases as the density of the substance through which the light passes increases.

    124/186 = 0.667; 0.667 * 300,000 km/s = 200,000 km/s. So it would take 0.10 seconds for light to get to any point on Earth. That still leaves about 0.1 seconds for processing of various sorts.
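    The same arithmetic as a sketch, using the rounded figures from the quote (real fiber cores differ a bit from bulk glass, and the ~20,000 km distance is a rough half-circumference):

```python
# Speed of light in glass from the quoted ratio, then one-way delay
# to the far side of the Earth.
C_VACUUM_KM_S = 300_000
c_glass = C_VACUUM_KM_S * 124 / 186   # ratio from the quoted mile figures

delay_s = 20_000 / c_glass
print(f"{c_glass:.0f} km/s, {delay_s:.2f} s one way")
# -> 200000 km/s, 0.10 s one way
```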
  • Think of the pr0n! The pr0n! I could download all of alt.binaries.naughty.bits in a matter of seconds!

  • What a coincidence! wow!

    This morning i wanted to know how cool and big your University was! Man, i'm impressed, you answered me the same day!

    There must be a god!

    phobos% cat .sig
  • High bandwidth for fixed-location machines is all very well, but when will we see something similar for the increasing number of wireless devices proliferating in our lives? Text-mode WAP isn't really the killer application that we want/need it to be.

    I can see future satellites using banks of lasers to communicate across the regions of space between planets, and down to base stations, in a similar manner to this but without the fibre-optic cable in the way. Combine a GPS receiver with your device, allow it to see the sky (or an intermediate relay) and bingo! The station aims a beam at your device, and you get instant connectivity at a reasonable transfer rate.

    Of course the idea takes some thinking about, and working around some of the more obvious problems (such as line of sight), but on the whole it would make for a much faster wireless system than is in place at the moment...

    --
  • by sjames ( 1099 ) on Saturday March 18, 2000 @09:10AM (#1194297) Homepage Journal

    I'm just as excited about this as everyone else is, but when you stop to think about it, what's the point? Let's do the math:

    This is a backbone technology, not feeder. Think in terms of a great many nodes feeding into switches. The local switches are connected by 40 Gbps connections. Multiple local domains are aggregated into the 3 Tbps backbone for long haul to the next major city.

    If the system is used for a SAN, the inter-city connection would be used to have an offsite mirror for disaster recovery. It would probably serve many customers rather than just one (1 customer needing 3Tbps would be a HUGE customer). Certainly, no single disk drive could move that fast, but consider one 40Gbps channel into a switch serving 60 file servers each with a large RAID.

  • by stripes ( 3681 ) on Saturday March 18, 2000 @04:53AM (#1194298) Homepage Journal

    ISDN was a late-'70s, early-'80s technology. It wasn't aggressively marketed, well, ever (by the telcos, that is). It wasn't even lightly marketed until late in the '90s. It was very hard to buy in the early '90s (hard even for ISPs, and they were used to talking to telcos then).

    The more interesting table would show when T1s, T3s, fractional T3s, OC1, OC3, OC12... were actually available from a telco--not when they were "designed", but when they could be bought. Unfortunately, the closest I can come to putting a date on any of those is "frac T3s in the late '80s", and I'm not even positive about that one.

    Bandwidth has been growing a lot lately, but that's unsurprising; research into it has been better funded lately. An interesting issue is what you need to route (or even switch!) data moving that fast. Juniper has nice products, but this is a hell of a lot of bandwidth. Fortunately (and unfortunately) it is spread across a lot of different colors, and you could optically split them and send them to different boxes to route/switch... but that only buys you so much, and it costs a lot too.

  • by Money__ ( 87045 ) on Saturday March 18, 2000 @04:34AM (#1194299)
    I would contrast your assessment with the following data.

    1982: 1200 bps
    1986: 2400 bps
    1991: 9600 bps
    1992: 14.4 Kbps
    1996: 28.8 Kbps
    1998: 50.0 Kbps
    2000: 128.0 Kbps (DSL)
    _________________________
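    As a rough gloss on that table, the implied growth rate (a Python sketch; treating the 1982 and 2000 endpoints as representative of a smooth trend is my simplification):

```python
import math

# Compound annual growth of last-mile bit rates, 1982-2000.
start_bps, end_bps, years = 1_200, 128_000, 2000 - 1982
growth = (end_bps / start_bps) ** (1 / years)
doubling = math.log(2) / math.log(growth)
print(f"~{growth - 1:.0%}/year, doubling every {doubling:.1f} years")
# -> ~30%/year, doubling every 2.7 years
```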

  • by Count Spatula ( 103735 ) <f_springerNO@SPAMhotmail.com> on Saturday March 18, 2000 @03:11AM (#1194300)
    how many mp3s per second on Napster is this?
  • by Raindeer ( 104129 ) on Saturday March 18, 2000 @12:44AM (#1194301) Homepage Journal
    According to researchers, the experiment used both DWDM -- a technology that combines multiple wavelengths onto a single fiber -- and distributed Raman amplification -- a technique that allows optical fiber to amplify the signals traveling through it.

    This is absolutely not my field, but isn't distributed Raman amplification a technique by which the fiber has been 'doped' with some molecule so that a beam of light gets amplified as it travels, meaning you need fewer repeaters over a given distance to carry the same amount of data?

    The repeaters are quite cheap compared to the expense of laying the fiber. Upgrading the max speed of fiber like this is quite awesome to see--if they can keep the rate of bandwidth increase up, they might never have to lay more fiber on their backbones again!

    The problem is that most of the fibre in the ground now is not capable of amplifying the light, so you need more repeaters built into the network to reach these amounts of bandwidth.

  • by Raindeer ( 104129 ) on Saturday March 18, 2000 @10:19AM (#1194302) Homepage Journal
    Sorry, but you're incorrect. What one can see happening in Third World countries is that their telecommunications systems are becoming more modern than in Western countries. For years they hardly had a telecom infrastructure, and now that they are finally implementing one, they choose fiber instead of copper, because it is actually a lot cheaper to install. Vietnam is a good example. Moore's law has also had its effect on telecommunications.

  • by Naze ( 123717 ) on Saturday March 18, 2000 @12:36AM (#1194303)
    On the other hand, you also have the earlier Bell Labs article [slashdot.org]; according to that, they managed 160 billion bits per second on a single wavelength. In comparison, this doesn't size up: at 160 billion bits per wavelength, it would have taken only about 20 wavelengths to manage this throughput, and I read the article as them having four times that many. Perhaps, beyond a certain point, they simply cannot distinguish that much data on cluttered wavelengths yet? Seems like a disappointment, after the hopes of 160 billion bits across 1000 wavelengths. Then again, maybe we'll be seeing Bell Labs breaking yet another record sometime soon.
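    The channel arithmetic from the story summary, for reference (a quick sketch; the 82-wavelength total is just C-band plus L-band as described there):

```python
# 40 C-band + 42 L-band wavelengths, each carrying 40 Gbps.
c_band, l_band, per_channel_gbps = 40, 42, 40
total_gbps = (c_band + l_band) * per_channel_gbps
print(total_gbps / 1_000)   # 3.28 (Tbps)

# At the earlier 160 Gbps-per-wavelength record, the same total needs:
print(total_gbps / 160)     # 20.5 wavelengths
```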
  • by Raindeer ( 104129 ) on Saturday March 18, 2000 @12:25AM (#1194304) Homepage Journal
    It is great that they have shown it is possible to send this amount of data over a network. It would basically mean sending the entire contents of all hard disks on my university's campus network (2000 of them) in about 1 minute. But when are we going to see this technology in service? It seems that we need not only new repeaters but also souped-up glass fiber. Those are large investments, and it may take some time to get the prototype to become a real-world model.

    With Iridum about to heat up in the worst way, and landlines jumping in capacity, maybe the future really does hold a fiber-optic link straight into every permanent structure on Earth

    My personal opinion is that fiber is definitely the way we are going to go, especially for long-distance data transfer. The problem with satellite technology is the lag in the signal, and the problem with wireless is that it has too low a bandwidth. Fiber, on the other hand, should be able to transmit a signal to any place in the world in about 0.2 seconds. So a system where the last mile is covered by wireless over a backbone of fiber seems the most plausible. An interesting little tidbit: 0.2 seconds is also the maximum lag in a telephone conversation before people judge it as unnatural.
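    The "about 1 minute" claim above is plausible; a sketch using the 24 TB campus disk figure mentioned elsewhere in the thread (that total is another commenter's number, not mine, and protocol overhead is ignored):

```python
# Time to push 24 TB through a 3.28 Tbps link.
disk_tb = 24
link_tbps = 3.28

seconds = disk_tb * 8 / link_tbps   # terabytes -> terabits, then divide
print(f"{seconds:.0f} s")           # -> 59 s, about a minute
```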
