Technology

Nortel gets 6.4 Terabits on a Single Fibre 107

GFD writes "Nortel claims to be able to do 80 Gbit/s on a single wavelength. Using their current top-of-the-line DWDM equipment, which handles 80 wavelengths on a single fibre, they get 6.4 terabits per second. What's scary about this is that future DWDM products are claimed to aim for 400 wavelengths per fibre. That fibre would be able to carry over 21 million T1 channels!"
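For what it's worth, the submitter's arithmetic can be checked directly (assuming 80 Gbit/s per wavelength and the standard 1.544 Mbit/s T1 line rate); it comes out at roughly 20.7 million T1s, a touch under the quoted 21 million:

```python
# Rough check of the headline numbers (assuming 80 Gbit/s per wavelength
# and the standard T1 line rate of 1.544 Mbit/s).
GBIT = 1e9
T1_RATE = 1.544e6                 # bits per second

per_wavelength = 80 * GBIT
current = 80 * per_wavelength     # today's 80-wavelength DWDM system
future = 400 * per_wavelength     # the claimed 400-wavelength system

print(current / 1e12)             # 6.4 terabits per second
print(future / T1_RATE / 1e6)     # ~20.7 million T1 channels
```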
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Maybe I can convince the University to move our campus backbone to this. Sure would improve the pr0n^h^h^hq3test^h^h^hmail server.
  • 21 million TV channels! That's a whole lifetime of channel surfin'! :-)
  • by Szoup ( 61508 )
    I think the key sentence in the piece is the one that 'claims':

    Nortel showed 80-Gbit transmission over 480 km of dispersion-managed fiber manufactured by Corning, in a single span without regeneration.

    I'm starting to feel lightheaded...
  • Okay, I know this is off topic, and I know I'm going to sound like a moron, but could someone explain to me why, when people refer to quantities of storage, they use bytes (kilobytes, megabytes), but when talking about transmission rates they talk in bits (10 megabits, 80 gigabits)? Can anyone enlighten me?
  • ... that a backhoe can cause even more damage to the network than ever before.

  • by Anonymous Coward
    When you're talking about the technical characteristics of a channel, its bit rate is usually more intimately related to (and calculable from) the carrier or center frequency and any relevant modulation properties. For instance, 10BaseT Ethernet is a 10-megabit (theoretical) channel that uses Manchester coding spectrally centered at 10 MHz. More advanced forms of modulation introduce the concept of "symbol rate," where groups of more than one bit (but usually less than a byte) can be transmitted per baseband cycle.

    Using "byte rate" would introduce an unnecessary multiplier into the equation, one which doesn't have anything to do with the quantities being calculated. The physical transport layer is seldom concerned with abstractions like 'bytes.'
  • At this rate, most of our copper wires (even inside the machine) would be replaced with fibre optic cables and everything would be done with light. Crystal CPUs come to mind :) Then what? Would we have reached the final/ultimate speed limit? I think not.

    Before that happens, we need to concentrate on our algorithms and develop better compression. Sure, people are getting rid of compression just because there is more bandwidth. Nowadays you can't even watch a web broadcast without at least a 56k modem; soon it will take a 256K DSL line. Just because we have faster lines and better computers doesn't mean we should stop developing better compression and tighter programs. I for one would welcome a totally open-source fractal compression algorithm and encoding/decoding routines for audio/video and other media. That would be a nice day.

    Anyway, at this speed we don't need to install OSes :) We can just run everything from one centralized server (yes Linus, can you host it for us?) :>
    --
  • We're talking hundreds of gigs a second over a line like that. Kinda puts DSL to shame...
  • I remember reading a Scientific American article years ago which pointed out that the intrinsic bandwidth of a piece of glass is truly monumental. The limitations are all in the lasers and the electronics that drive them.

    The article said that it would theoretically be possible to hook large numbers of people (whole cities?) up to effectively one piece of fiber and give everyone their own wavelength.

    The key pieces of technology missing were, IIRC, lasers which can retune themselves very quickly, and purely optical amplifiers (got to eliminate the electronics totally).

    Interesting idea, I thought. They called it the "fibersphere", I believe.

    Bruce

  • It won't be long now before we'll be able to describe all the matter in the Universe over a single communication channel in the blink of an eye. Of course, this doesn't help with the problem of taking the measurement... I want a few of these; they would be great for a Beowulf cluster.
  • Even if it can't quite get up to 80 Gbits, that's still a big improvement over what we have right now. Now if only they can develop a way to hook it directly to my vein.
  • What have we all failed to consider?

    Think of all the PORN you could download!

    Sorry... I just thought it needed to be said ;)
    Hemos seems to have forgotten to mention it when he posted...

    Jeff

  • SCSI and parallel ports aren't, but USB is, I think FireWire is, modems are, etc. Since comms are a bit at a time, bit speeds make sense.

    Storage is mostly byte oriented. Actually, disk drives are sector/block oriented, but that differs from drive to drive, making the byte the common denominator.

    --
  • by hpa ( 7948 ) on Tuesday October 12, 1999 @09:43PM (#1618186) Homepage
    Actually, if you talk to a memory designer, they will talk about bits, not bytes. The sizes of RAM parts are all in bits; same when packaged onto a DIMM (which may be 8Mx64 for a 64-megabyte DIMM). However, the computer it is put into will be sold as having 64 megabytes of RAM.

    It is really only computer (as opposed to electronics/optronics) people who use bytes. At the hardware level, everything is bits, after all.

    On transmission systems it is even more confusing, since low-level framing protocols (below the software-visible level) are often bit- rather than byte-oriented; on RS-232, for example, it typically takes 10 bit intervals to transmit an 8-bit byte. Therefore, an RS-232 serial port talking at 57600 bps can transmit 5760 bytes per second, not 7200 as you might think.

    It gets worse. Certain modulation schemes transmit multiple bits per state transition. If you have a 32-point QAM constellation, for example, you will transmit 5 bits per state transition (5 bits per baud). This gets awkward really quickly if you also have to worry about 8 bits per byte!
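    Those two calculations are easy to verify (assuming the common 8N1 framing: one start bit, eight data bits, one stop bit):

```python
import math

# RS-232 with 8N1 framing: 1 start bit + 8 data bits + 1 stop bit
# means 10 bit intervals on the wire per byte of payload.
line_rate = 57600                          # bits per second
bits_per_byte_on_wire = 10
print(line_rate / bits_per_byte_on_wire)   # 5760.0 bytes per second

# A 32-point QAM constellation encodes log2(32) = 5 bits per symbol,
# so bit rate = 5 x baud rate.
points = 32
bits_per_symbol = math.log2(points)
print(bits_per_symbol)                     # 5.0
```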
  • *attempts to quantify the bandwidth in an understandable measurement..*

    *keeps trying..*

    Brain Panic!
    Overflow while using 30486203968203962bit integer!

  • by urgleburgle ( 91062 ) on Tuesday October 12, 1999 @09:46PM (#1618188)
    ph43drus drooled:
    >Think of all the PORN you could download!

    The bandwidth requirements of one person surfing for porn are not a problem even with current technology.

    The truly mind-blowing possibility is HOSTING a porn site. You could have every man, woman, child and dog in the world wanking off to pictures from YOUR site, and still have bandwidth to spare.

  • "That fibre would be able to carry over 21 million T1 channels!"

    So 15 fibers would be enough to carry one T1 channel per person in the USA. Distribution is left as an exercise for the reader.

  • Anyway, at this speed we don't need to install OSes :) We can just run everything from one centralized server (yes Linus, can you host it for us?) :>

    Yark. I don't want my P0rn to be stored on a centralized server. ;)

    I want my own, local harddrive. I don't want a naughty sysadmin to be able to peer into my naughtiness. ;>


    --
  • The key pieces of technology missing were, IIRC, lasers which can retune themselves very quickly, and purely optical amplifiers (got to eliminate the electronics totally).

    All-optical amplifiers have been a reality for almost a decade, and are probably a given in newly installed long-haul fibers. In fact, the Nortel article talks about optical switches (most likely electro-optical), which is a much more complex technology. For a long time it has been necessary to convert the signals to electric form in order to route them, which gets annoying really fast when you're somewhere in the terabit region. Optical switches will actually allow packets to be routed without converting the signals to electric form first.

  • If this potential bandwidth increase means that telephone companies can carry even more telephone lines for a lower price and still charge us the same rates, then I totally disagree with this kind of technological advance.
    Telephone companies should realize it's time to offer higher-bandwidth services at lower prices. They would still make money while bringing benefits to their clients (us), instead of just getting ever richer on the 3rd-world services they currently offer at 1st-world rates.
  • by kooma ( 92065 ) on Tuesday October 12, 1999 @10:06PM (#1618194)
    Little white lies in benchmarking are not only found in NT vs. Linux setups but also in optical networking. Usually these high-speed records in modern DWDM models are achieved by sending _the same data_ on all (in this case) 80 channels. By using the same data on all channels, the most problematic error sources (optical crosstalk between channels and such) are minimized, and the signal-to-noise ratio is therefore way better.

    In other words, they do transmit 80 x 80 Gbit/s of data, but the actual information they transmit is (merely?) 80 Gbit/s. So, unfortunately, these setups are far from the real thing, but maybe someday...

    Oh, and the high-bandwidth transmitters and receivers work better in stable conditions such as a research laboratory. Try that in an actual environment and be surprised. Lasers and PIN diodes are temperature-dependent, and chirp becomes a problem in high-bandwidth lasing. (Chirp = a change in laser wavelength when the laser diode's current changes.) The bright side is that at least these things are under intense research.

    No apologies for any misspellings, this ain't my best language. :)

  • The idea that simply describing a thing imparts it with reality is likely a form of egotism peculiar to species that spend more time describing than doing.
  • by mroeder ( 71228 ) on Tuesday October 12, 1999 @10:20PM (#1618197) Homepage
    At this year's Hannover show, I saw Fujitsu's latest speed demon, the 320-Gig WDM, and I remember being very impressed, being a Telco engineer. However, as I was looking (drooling) over this new sparkling toy, the following occurred to me.

    a) These things do get faster every year or so. No biggy there. So what. CPUs speed up, hard drives get bigger/faster/cheaper. Is anyone really surprised by this? The numbers are big and very fast - but it *was* a lab.
    b) These things are horrendously expensive to put into the ground. The only people that can actually afford to use them are Telcos (think Baby Bells, AT&T and MCI Worldcom/Sprint).

    So what does this gain us? Nothing. We won't see any increase in internet backbone because of this. The Telcos have too much capital investment to pull all those gloriously expensive CBR services out and replace them with something innovative like POS (Packet over SONET) or ATM.

    The ITU has only really ratified OC-192 (STM-64), 9953.28 Mbit/s (10 Gig), fairly recently. These new multipliers are not yet part of the standard. Traditionally we have seen steps of 4x over the last signal speed:

    STM-1 = 155 M
    STM-4 = 622 M
    STM-16 = 2488 M
    STM-64 = 9953 M
    etc.

    I'm not sure that these actually fit cleanly into the current SONET/SDH infrastructure. Which is fine for point-to-point communications within one company or country. But if you needed to interconnect one of these to another vendor's bit of kit, unless they follow the same technology tree with WDM 80-Gig steps, you would have to step this bad boy down to a level in the standards to actually get something out of it. That could mean a lot (a real lot) of 10-Gig boxes stepping down across lines of demarcation (or POIs).

    And just on the side: Nortel's track record of just keeping 1X (their name for an STM-1 box) running is really not so good. How are they going to support 6 Tera if they *really* can't get the basics right??

    MRo
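    The 4x progression quoted above is exact if you start from the full STM-1 rate of 155.52 Mbit/s rather than the rounded 155 M:

```python
# SDH rates follow exact 4x steps from STM-1 at 155.52 Mbit/s.
STM1 = 155.52  # Mbit/s
for level in (1, 4, 16, 64):
    print(f"STM-{level} = {STM1 * level:.2f} Mbit/s")
# The last line works out to 9953.28 Mbit/s, matching OC-192.
```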




  • and we thought a guy with a backhoe could do some damage before...
    ---------------------------------------
    The art of flying is throwing yourself at the ground...
    ... and missing.
  • Make a mistake and you're talking 80 kBIT/s instead of 10.
  • Would we have then reached the final/ultimate speed limit?

    Before that happens, we need to concentrate on our algorithms and develop better compression. Sure people are getting rid of compression just because there is more bandwidth.

    You're doing the same thing that a lot of the general populace does, and getting latency mixed up with bandwidth.

    Latency is how long it takes for an individual packet of data to get from one place to another.

    Bandwidth is the total amount of data you can get from one place to another.

    A little comparison: suppose you had a large plane with a top flight speed of about 300 mph (Mach 0.4) that could carry 1000 passengers, and you also had a jet fighter that could travel at just over Mach 4 (3000 mph) and transport a single passenger. Most people would agree that the jet fighter is "faster" in a very real sense than the large plane (by a factor of 10). However, with two cities 1000 miles apart (ignoring time spent loading, unloading, refueling, etc.), the large plane could transport 2000 passengers in 10 hours (3.3 hours per one-way trip) while the jet fighter could transport 15 passengers. With vehicles, carrying capacity (bandwidth) and speed (low latency) don't get confused. Yet, somehow, when you replace planes with modems, the average consumer gets confused and thinks that speed means something completely different than it means in any other context. Speed is how long it takes to get from here to there (miles per hour, for instance).

    Very luckily, however, for big expensive products that aren't aimed at the average consumer, latency is considered very important.

    When you compress data that is being sent live, you actually have to slow things down in order to do it. (Look above at the explanation of what speed means, if you're still unclear.) This is because you can't effectively compress a single bit or a single byte, so in order to compress you'd hold onto the data for a little while before sending it off.

    With your average consumer modem, compression slows things down by 15ms or however long it takes to receive a large enough block to send from the user (whichever happens first). With a normal home modem, though, you've already got something like 100ms that's wasted going across that link, so in most instances another 15ms isn't much, and is a good tradeoff for the slight boost in bandwidth.

    When you've got a DSL line, however, you've got much lower latency than a normal modem would get, so something like 15ms tacked onto it would be doubling your latency. Double (or worse) latency in exchange for a small increase in bandwidth simply isn't worth it. It would just slow down your overall experience. (The only thing where you might want high bandwidth more than low latency is, basically, if you're downloading a lot of large files (like porn or software), and those are usually already compressed (JPEG, GNU zip, ZIP, etc.))

    Improving switching and routing speed is much more important and useful. Adding compression to high-speed lines is a bad idea.

    Also, electrical impulses already travel at about 2/3 the speed of light -- outside of your CPU, the difference between the speed of light and the speed of electrical impulses isn't too much of an issue...
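    The plane analogy can be checked with a little arithmetic (ignoring turnaround time, as above; the helper function is just for illustration):

```python
import math

def delivered(speed_mph, capacity, distance=1000, window_hours=10):
    """Passengers delivered to the far city within the time window.

    Alternate legs carry passengers out; the legs in between are empty
    return flights. Turnaround time is ignored, as in the comment.
    """
    legs = math.floor(window_hours * speed_mph / distance)  # one-way legs flown
    outbound = math.ceil(legs / 2)                          # loaded legs
    return outbound * capacity

# Large plane: 300 mph, 1000 passengers -> high bandwidth, high latency.
print(delivered(300, 1000))   # 2000
# Jet fighter: 3000 mph, 1 passenger -> low latency, low bandwidth.
print(delivered(3000, 1))     # 15
```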
  • It won't be long now before we'll be able to describe all the matter in the Universe over a single communication channel in the blink of an eye.

    I think Cantor might disagree with that assessment [:>).

    ... but, if you could do it, the size of 'compressed' kernels would definitely drop a lot! Not that it would matter, given the transmission rate...
  • I don't know of any country in Europe whose telcos offer prices that are half as attractive as what you guys have in the States. Cheap long-distance calling, free local calls, quick new-line installation - these are things most European companies and private individuals dream about. Did you know that the price of an inter-city connection in Poland is over $0.50/minute even though salaries here are 10x lower than in the US? Or that BT, the (ex?) British state telco, doesn't differentiate between local and long-distance calls, so in effect all calls are long distance? Or that most German internet users I know pay between 3-4 DM per hour of online time, and that's just the connection fee!

    One of the main reasons that the internet has exploded in America like it has is that your telcos are offering 1st-world services at 3rd-world rates. Imagine how much harder it would be to justify an extra $150 phone bill (like a lot of Poles do) just to be online regularly. Or to have a telco that just last month started offering differentiated rates for nighttime (22-6) calls (50% less). Not a pretty picture, is it?

    jay
    -- .sig free since '88
  • by the_tsi ( 19767 ) on Tuesday October 12, 1999 @11:19PM (#1618203)
    The scary thing is that the thing you typed is exactly what I saw... I've become all Matrix-like in my viewing of control characters. Ugh. Time to step away from the computer and get some sleep.

    -Chris
  • Telephone lines here are known to be the most expensive in all the world, but how about getting one or more E1s for a whole building and sharing the rate and bandwidth between all the people living there? It seems that if high-bandwidth prices start getting lower, every building (or group of buildings) could have its own PBX (or router, or both) and leave the telephone companies in charge of only high-bandwidth services, which would have to be cheap.
  • Yeah, but my Caterpillar can get 6.4 Tb on a single backhoe. Maybe more, if they put 'em all in the same conduit. Better'n ping floods ;)
  • Lab tests may be one thing, but I know for a fact that the multimode fiber strung all over our campus has recently been discovered to be almost worthless as we do 100 Mb and gigabit testing.

    So what if their lab equipment can get 80 different wavelengths over a fiber? If we can only run one beam over a piece of fiber 1000 yards long, how are they going to jam 80 of them into a strand 1000 miles long? Hmpf.

    SHOW ME THE MONEY.
  • I'm wondering... can they trigger an optical switch with light? Think... an optical transistor (including an optical base).



  • Sorry, couldn't help it.

    Someone said:

    "What's scary about this is that future DWDM products are claimed to aim for 400 wavelengths per fibre. That fibre would be able to carry over 21 million T1 channels!"

    and I would like to know what part of the above can be classified as "SCARY" ?!

    I know it may be a figure of speech, but new methods that push up the bandwidth of a pipe aren't something to be scared about.

    We are not luddites, or are we?


  • There's so much of this in the ground already it's not even funny. If the telcos were to light it all tomorrow, with currently utilized gear, the US would have nearly THREE TIMES the bandwidth that's available now.

    Yeah. The mega-bandwidth that this would provide makes me drool. But the cost of implementing it would be horrendous.

    And we all KNOW Ma Bell ain't gonna just eat the bill for it. Well.... We don't know. But we could make some guesses and stand a really good chance of being spot on. =)

    $5 for a 2 minute call to my next door neighbor!!!!!!!


    Chas - The one, the only.
    THANK GOD!!!

  • American rates may look wonderful to residents of other countries but that doesn't mean that the telcos aren't ripping off their customers in the USA.

    The costs of providing T1 and T3 service have plummeted but the rates have stayed the same or even increased. Rather than price their products at cost plus a reasonable profit margin, the telcos price their products based on their perceived "value". This means that many services are grossly overpriced. It also tends to inhibit the introduction of services and technology that would cannibalize cash cows like T1 lines.

  • I found those "Telecosm" articles in Forbes: www.forbes.com/asap/gilder - however, the page appears to be gone. They allude to the publishing of the series, so it may have been taken down to boost sales of the book, published by Simon & Schuster. It's a very interesting series on the future of telecommunications, dark fiber, and bandwidth. It might even be worth buying the book if you're interested in this subject.
  • A byte is usually, but not always, eight bits.

    An octet is defined as eight bits.

    Many data transmissions use variable length words, bit stuffing or transmission unit lengths that are not divisible by 8. Take a look at HDLC for an example.

    It is much simpler to use bits or symbols when measuring the rate of a channel.

    Or that BT, the (ex?) British state telco, doesn't differentiate between local and long-distance calls, so in effect all calls are long distance?

    What the hell are you talking about?! We've always had local calls. Every time I dial into my ISP it's a local call. You should check the facts before making sweeping statements. Not that I'm defending BT. They still charge too much for piss poor service.
  • You missed my point :) I was speaking in a lighter tone of tongue, btw. And I do understand latency...

    I was referring to the fact that we are reaching a limit in speed. This limit is something like the current transistor miniaturization limit we have. Such being the case, the only way we would send data out would be with good compression (and I'm sure by then there would be good realtime compression software and the like). Did I not mention fractal?

    I also said that in this ideal world everything should be composed of light. Yes, mixing those slow transistors into a light-based medium would slow things a lot. That is one reason why we should look into alternative storage devices and the like.

    Now that we are reaching a limit as to what speed copper/silicon can perform at... I think it is high time people went into these other fields and found something faster... something based on light... or else we'd be stuck with processors that would never speed up beyond a certain point.

    Later..
    --
  • Isn't the speed of light constant - all electric and light impulses/photons etc. travel at the same speed?
    - NeuralAbyss

    ~^~~~^~~~^~~~^~~~^~~~~^^^~~~~~~~~~~~~~~~
    Real programmers don't comment their code.

  • So maybe that Mindcraft 400 Mb/s Linux vs NT webserver test will have some validity after all... 10 years from now, that is!


    LINUX stands for: Linux Inux Nux Ux X
    By using the same data on all channels, the most problematic error sources (optical crosstalk between channels and such) are minimized, and the signal-to-noise ratio is therefore way better. In other words, they do transmit 80 x 80 Gbit/s of data, but the actual information they transmit is (merely?) 80 Gbit/s. So, unfortunately, these setups are far from the real thing, but maybe someday...

    What optical crosstalk? If that's a problem just paint the damn fibres black.


    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Assuming a wavelength window of 1500 nm to 1580 nm, there's an optical frequency range of some 10 THz (assuming my quick calculation is correct).

    Accordingly, there will need to be fancy coding methods before 6.4 Tb/s and above will be reached (e.g. methods that trade bandwidth for signal/noise, such as those used in modems).

    Actually, that's all pretty academic anyway, since the limits for non-linear interactions between the wavelengths will be reached much sooner. Causing at best loss of signal, at worst destruction of the fibre...

    It's all very well to say 80 channels in one breath, and 80 Gb/s per channel in the next. It's an invalid assumption, however, to assume they can be combined, since the effect of modulating one channel results in wavelength "sidebands" that affect the adjacent channels.

    This is from my experiences years ago, but I dont think the laws of physics have changed since then...

    Cheers
    Dave P
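    That quick calculation holds up (taking c = 3x10^8 m/s):

```python
C = 3e8  # speed of light, m/s (vacuum value; close enough for a sanity check)

f_high = C / 1500e-9          # ~200 THz at 1500 nm
f_low = C / 1580e-9           # ~190 THz at 1580 nm

window_thz = (f_high - f_low) / 1e12
print(round(window_thz, 1))   # 10.1 THz of optical bandwidth in the window
```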
  • With this much bandwidth, you would never have to leave your home. Forget going to the movies; you could watch the latest release on your home PC.

    Everyone in the world could watch the final episode of The X-Files at the same time, and no damn American would spoil the ending for us poor behind-the-times Australians!

    Also, with this much bandwidth, some issues need to be raised! Could the /. effect take one of these lines down?

  • Uhhhh...
    They use the same fiber for all 40 or 80 channels. The light used is basically 80 different wavelengths... and crosstalk can occur between the wavelengths.

  • Indeed, as far back as 2400 bps modems, bits per second differed from baud. If I recall correctly, the QPSK scheme 2400 bps uses sends two bits per transition, and so is 1200 baud, 2400 bps.

    One interesting thing to note is the fact that most modulation schemes for modern modem hardware mostly eliminate the stop-bit/start-bit/parity overhead that RS232 has. This started with the 2400bps error correcting modems Way Back When. On these modems, ZModem download rates as high as ~275 bytes/second were not uncommon, since the modems disassociated the RS232 signal from the audio signalling. In contrast, the old-fashioned Bell 110 "0 - 600 baud" modems were little more than a voltage-controlled oscillator hooked to the Tx line and PLLs hooked to the Rx line and Carrier Detect line.

    Back to the original topic: transports (such as Ethernet, T1s, etc.) are specified in bits per second largely because they are a bit-oriented medium, and they generally have little protocol overhead associated with them. (Although T1s do implement "bit robbing" for their control channel, thus reducing their usable capacity to 1.536 Mbps rather than 1.544 Mbps.) Higher-order operations such as file transfers are reported in bytes per second, since (1) files are usually collections of bytes, and (2) the protocol stack provides a byte-oriented interface to the file transfer program. Somewhere in the stack (usually very, very near the bottom, at the Physical layer), the bytes become bits (and vice versa), but by that time you've inherited all of the protocol overhead of whatever protocols you're running (anything from ZModem to TCP/IP to SNA to whatever) and so bits vs. bytes starts looking a lot like apples vs. oranges.

    --Joe
    --
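    The two T1 figures follow from the framing (24 channels of 8 bits sampled 8000 times per second, plus one framing bit per 193-bit frame):

```python
# T1 framing: 24 channels x 8 bits per frame, plus 1 framing bit,
# with frames sent 8000 times per second.
channels = 24
bits_per_frame = channels * 8 + 1     # 193 bits
frames_per_second = 8000

line_rate = bits_per_frame * frames_per_second
payload_rate = channels * 8 * frames_per_second

print(line_rate)       # 1544000 -> the quoted 1.544 Mbps
print(payload_rate)    # 1536000 -> the quoted 1.536 Mbps
```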
  • Ack... HDLC bitstuffing... it's a nightmare. Also, it makes your channel's bandwidth vary by up to 20%. Bleh...

    (For the uninitiated, HDLC framing requires a 0 be inserted in the bit-stream after five consecutive 1s, to allow for out-of-band messages and timing recovery. (Six consecutive 1s have their own special, out-of-band meaning.) In the worst case, a bit stream of all 1s expands by a ratio of 6/5.)

    --Joe
    --
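    The stuffing rule described above is easy to sketch in a few lines (a toy bit-level model, not a real HDLC framer):

```python
def stuff(bits):
    """Insert a 0 after every run of five consecutive 1s (HDLC bit stuffing)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed zero keeps payload runs of 1s below six
            run = 0
    return out

# Worst case: an all-ones payload expands by a ratio of 6/5.
payload = [1] * 20
stuffed = stuff(payload)
print(len(stuffed) / len(payload))   # 1.2
```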
  • The speed of light in a vacuum is constant. Electric current running down a wire is in anything but a vacuum, however. The rate at which an electrical signal propagates in a given wire has to do with the series inductance of the wire and the parallel capacitance of the wire, per unit of length. (Parallel to the shielding, that is. Unshielded wires are often treated as having shielding at "infinity" that is tied to ground.) The ratio of these quantities forms the "impedance" of the wire, which is usually specified in ohms, and is invariant with respect to length. That's why, regardless of its length, RG-6 coax (cable TV wire) is 75 ohm.

    (Note that I'm talking about ideal wires here. The series resistance of real wires also mucks up the propagation rate somewhat. Not only that, it also delays signals at different frequencies differently, leading to something known as "skew", if I recall correctly. Whee. Transmission line theory was never my strong point, but I'll never forget the basic lessons it taught.)

    --Joe
    --
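    For an ideal lossless line, the quantities mentioned above combine as Z0 = sqrt(L/C) and propagation velocity v = 1/sqrt(LC). A sketch with made-up per-metre values (chosen to give a 75-ohm line with a coax-like velocity factor of about 2/3, not measured RG-6 data):

```python
import math

# Ideal lossless transmission line: characteristic impedance and
# propagation velocity from per-unit-length inductance L and capacitance C.
# These values are illustrative assumptions, not real cable data.
C_LIGHT = 3e8                  # m/s
L = 375e-9                     # henries per metre (assumed)
C = L / 75**2                  # farads per metre, picked so Z0 = 75 ohm

z0 = math.sqrt(L / C)          # characteristic impedance, ohms
v = 1 / math.sqrt(L * C)       # propagation velocity, m/s

print(round(z0))               # 75
print(round(v / C_LIGHT, 2))   # 0.67 -- about 2/3 the speed of light
```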
  • The scary part is that it would take just one construction worker with a backhoe to bring down 21 million T1 lines. I seriously hope they're doing work on protecting these cables they plan on building.
  • Hrm... I saw it too. Maybe Rob and co. are sneaking fnords in on us.
  • Thanks to your local Baby Bell, you still won't be able to do more than 54,000 bps from your house.
  • Well it turns out that T1's are very profitable to the telcos and they're in no hurry to deploy broadband to the general populace. If I were a small to mid sized company, I'd be questioning the necessity of having a T1 with local loop charges of $600 or so a month when DSL costs around $20 a month in the areas where it's available and can run just as fast as the T1. I don't think the telcos want people thinking that way.

    Of course, the thing to do is to form a new company that is willing to get in there and sell. The demand is there, SOMEONE is going to supply it. That's what companies like Covad are doing now (If it weren't for Covad, I probably wouldn't have seen broadband in my area for another 2 years, if ever. US West can bite me.)

    Have you seen that Qwest commercial with the hotel? "All rooms have every movie ever made in every language any time, day or night." I dig the concept, but we aren't quite there yet.

  • Put them inside a gas line. If backhoe monkey boy still cuts the cable, at least he gets 'sploded in the process. After a while, Darwin should take over.
  • A little math here... As we all know, wavelength times frequency equals c (the speed of light in the medium): c = lambda x f. If they use wavelengths around 1.5 (or 1.3) micrometres, then the frequency is about 2x10^14 Hz (200 terahertz).

    We transport 6.4 terabits/s on a roughly 200-terahertz carrier... That is the magic of WDM... but the power of each individual wavelength carried must be low, to avoid non-linear effects (crosstalk).
  • Not correct.

    My Baby Bell has been multiplexing my line with others, so where I used to get 48 kb and 50 kb connections on a V.90 modem I now never get more than 33 kb. I'm guessing they don't want to pay for more copper.

    Did my phone bill drop to 33/48 of the old price? I DON'T THINK SO.

    The nice phone rep, trying to be helpful, told me that "there seems to be no problem with your line" and that, well, they were only obligated to provide 9600 bps service under their agreement with our regulator$.

    Call me spoiled rotten...

  • There is a minimum latency, and that will be the time it takes light to go from one location to the other along a path. In networking, we can view this as the time it takes light to go the distence of the networking cable connecting point A to point B. (This does not count for subspace negative matter wormhole theories and what not--by today's technology).

    However the time (latency) is usually much longer than this time, due to many things, such as media conversion (ex: copper to fiber) time, and routing. One thing that is usually a big bottleneck for most of the world is that ISP's make money by overselling their connections. If an ISP has a T3 uplink, then they will make profit by selling 15-20 T1's, or the bandwith equivellent in a modem pool. If all 20 T1's were using their full bandwith load, then the T3 would be swamped, and the router should line up the packets and send them out as it can, putting the extras it can't send ASAP in it's memory (if it's memory becomes full then the packet is usually dropped). This is reflected in your ping time, or latency since the ping packet had to wait in a line to get past the ISP's router.

    Since most common dialup ISP's connect to a larger domain-wide ISP which gets its bandwith from a backbone ISP or bandwidth reseller, that's 3 routers that it may have to filter thorugh, causing high latency.

    With a large-pipe OC (optical carrier), the cost of supplying bandwidth goes down because of supply and demand (there is now more supply), and ISPs can afford more bandwidth for their existing customers without cutting profits (so now you would have, say, an OC-3 for the 20 T1s, which even leaves some breathing room on the cable, I think). Now with the big cable (and hopefully a better router to handle the full OC-3) there will not be a waiting line of packets coming to the router, and latency will get closer to the theoretical minimum (the speed of light between A and B from above).


    Of course the ISP will probably expand and oversell the OC-3, but that's another story :)

    (FYI, this is what makes some ISPs better than others: they don't oversell their lines as much, have fewer "middleman ISPs", and/or have better hardware (ex: the router in the case above, with extra memory, etc.).)
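    To put some rough numbers on the two effects above (propagation delay and oversold uplinks), here's a quick back-of-the-envelope sketch. The refractive index and the T3/T1 rates are my own round figures, not anything from Nortel or the article:

```python
# Light in glass travels at roughly c / 1.5 because of the fiber's
# refractive index (~1.5) -- a rough figure, not a vendor spec.
C = 299_792_458           # speed of light in vacuum, m/s
FIBER_SPEED = C / 1.5     # approximate propagation speed in fiber, m/s

def min_latency_ms(distance_km):
    """One-way propagation delay over a fiber run, in milliseconds.

    This is the floor: routing, media conversion, and queueing only add to it.
    """
    return distance_km * 1000 / FIBER_SPEED * 1000

def oversubscription(uplink_mbps, customers, per_customer_mbps):
    """Ratio of total sold bandwidth to the uplink's capacity."""
    return customers * per_customer_mbps / uplink_mbps

# The 480 km Corning span from the article: about 2.4 ms one way, best case.
print(round(min_latency_ms(480), 2))

# The ISP example: 20 T1s (1.544 Mbit/s each) hanging off a ~45 Mbit/s T3.
print(round(oversubscription(45, 20, 1.544), 2))
```

    (Interestingly, 20 T1s only add up to about 69% of a T3, so the example uplink isn't actually oversold until you add the modem pool on top.)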

  • I think that's what the optical molecule article on /. awhile back was about. Don't know the link, you'll probably have to search.

    Basically, a photon hits a special molecule, giving it energy. The energy causes either reorientation or structural change (like denaturing a protein with light instead of hydronium ions from an acid). The new form/position is optically translucent, whereas before it was opaque. Not sure what switched it back to 'off' (opaque); it may have just re-emitted a photon after a while or whatever.
  • People are flexing their technical knowledge at you and not answering your question. Data carriers always talk about bits because bits are the most fundamental unit of information. No matter how that information is grouped into larger structures, you can always reduce it to bits, the lowest unit of information.

    Sometimes a stream of data is NOT bytes. Consider sending a series of octal digits. Each octal digit is three bits. If you send that data out in a stream of 6 octal digits, you have 18 bits of data. Now, if, at the other end, you happen to be reading in bytes, you get two bytes and two bits left over. The transport medium still carried 18 bits.

    Modern computers definitely have a bias towards 8-bit bytes. That's because Latin alphabets and symbols can be well represented by 7 or 8 bits. Prior to the adoption of EBCDIC and ASCII coding, a five-bit protocol was used for radio teletype equipment. They used an encoding scheme called Baudot. So, what are you sending? 3-bit octal numbers? 5-bit Baudot RTTY characters? 7-bit ASCII, or 8-bit extended ASCII?

    Doesn't matter to the data carrier. It's all bits.
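    The octal example is easy to demonstrate. This little sketch (my own illustration, not from any standard) packs six 3-bit octal digits into a bit stream and then shows what a byte-oriented reader sees at the other end:

```python
def octal_digits_to_bits(digits):
    """Pack a sequence of octal digits (0-7) into a string of bits, 3 per digit."""
    return ''.join(format(d, '03b') for d in digits)

bits = octal_digits_to_bits([7, 0, 5, 2, 6, 1])
print(len(bits))              # 6 digits x 3 bits = 18 bits on the wire

whole_bytes, leftover = divmod(len(bits), 8)
print(whole_bytes, leftover)  # a byte reader gets 2 bytes, with 2 bits left over
```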

    There's another source for the prejudice. There are two major styles of data interface: serial and parallel. In parallel, you have one data line per bit in the "word" (a word being the "chunk" size of the data -- again, typically 8 bits, either 7-bit ASCII with parity, or 8-bit extended ASCII) and then you have control lines (STROBE) that signal when a "word" is ready to read on the lines. Parallel interfaces tend to be local, because running many wires over long distances is obviously more difficult and expensive than running one (or a pair).

    This leads to "serial" communications. Serial communications uses one wire to carry data, one bit at a time. The first such protocols were the radio teletypes I mentioned earlier (RTTY).

    An antenna sticking in the air carries one signal, and that signal has three states. One state is idle, not doing anything. The others are called MARK and SPACE, or 0 and 1.

    The teletypes had a rotating cam that would move past five levers. A lever could be set (pressed) or clear (not pressed). The settings of the levers would determine which hammer would be pressed forward when the carriage moved at the end of the cam's rotation, thus determining which letter (or symbol) would be printed on the paper.

    So, a sending teletype operator would press a local key, this would set the levers and make the matching hammer hit the local paper. The local cam would rotate, reading the lever settings, and would send a MARK (for a set lever) or a SPACE (for a cleared lever). The receiving teletype's cam would be likewise rotating, and it would set and clear levers as the MARKs and SPACEs were received.

    You should be seeing an obvious problem here. What if the cams were not in the same position? The wrong levers would be set and the wrong characters would be printed. They first tried to solve this problem with "synchronous" protocols. A sender would send a specific pattern of marks and spaces as fast as it could. The other end would speed up or slow down its cam until it came to "top" at the beginning of the synch pattern. Then they would start with the data. Trouble is, this system tended to drift, and the text would become gibberish, requiring a re-synch.

    The next invention helped solve that. Called "asynchronous" serial communication, it added the concept of a "start bit" and a stop bit. The idea was that each character would begin with a start bit (a SPACE, breaking the idle MARK condition) and end with one or more stop bits (MARKs). The receiver's cam would be locked at the top, and when a start bit was received, it would release, go around once, and lock until it saw the next start bit. This isn't really "asynchronous"; in fact, it is re-synching on every letter!

    This basic protocol is still in use today right in your serial port and modem. You send one start bit, an 8-bit character, and a stop bit. That's why a 2400 baud modem can send 240 cps instead of the 300 cps it ought to be able to send if it were 8 bits per character.

    I chose 2400 baud, because as other posts point out, protocols get a great deal more complicated at higher speeds.
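    The framing overhead is simple arithmetic; here's the 2400 baud case worked out (the 10-bits-per-character frame is my assumption of the common 8N1 layout: one start bit, eight data bits, one stop bit):

```python
def chars_per_second(baud, data_bits=8, start_bits=1, stop_bits=1):
    """Effective character rate of an async serial link.

    Each character costs start_bits + data_bits + stop_bits signal bits.
    """
    frame_bits = start_bits + data_bits + stop_bits
    return baud / frame_bits

print(chars_per_second(2400))               # 240.0 -- not the 300 you'd get without framing
print(chars_per_second(2400, stop_bits=2))  # old teletypes often used 2 stop bits: slower still
```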

    Still, the RTTY and the rotating cam idea are the reason serial communications exist, and they are the basic origins for their mechanics.

    The high speed technologies discussed here are not simple serial asynchronous systems, but complex frame-based systems. Nonetheless, those frame elements are there to do the same sort of jobs that start-bits and stop-bits were meant to do: Allow a bunch of hardware to watch a "wiggling" electrical signal and figure out how to draw the intended information from it.

    All of this adds up to why they talk about bits per second. The data carrier does not care how the bits are organized into units of meaning. No matter how they are grouped, they are just streams of bits.

    Phew! Sorry for such a long post...
  • They use the same fiber for all 40 or 80 channels. The light used is basically 80 different wavelengths...and crosstalk can occur between the wavelengths.

    In conventional copper datacomm, interference between cables is crosstalk, while interference within cables is jabber. I suspect the original poster was not used to fibre, where a single cable can carry multiple channels. I've made the same mistake myself.
  • If you begin to approach the bandwidth limits of a fiberoptic cable, why do you think compression would help? No processor could handle the data stream faster than the carrier medium itself, so compression would always take longer than transmission (because compression takes some measurable amount of time). The only reason compression improves "speed" now is because the internal bus of computers is considerably faster than the external communications medium. Make them equal, or make the communications medium faster, and compression will necessarily result in lower throughput.

    TANSTAAFL.
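    The argument can be made concrete with a toy two-stage model (my own sketch, with made-up rates): a compressor reads raw data at some rate and shrinks it before it hits the link, and whichever stage is slower sets the pace.

```python
def delivered_raw_bps(link_bps, compressor_bps, ratio):
    """Raw bits delivered per second through compressor + link.

    The compressor consumes raw data at compressor_bps and shrinks it
    by `ratio`; the link carries the compressed stream at link_bps.
    The slower stage is the bottleneck.
    """
    return min(compressor_bps, link_bps * ratio)

# Modem-era case: slow link, fast CPU -> compression is a clear win.
print(delivered_raw_bps(56e3, 100e6, 2.0) > 56e3)       # True

# Terabit fiber, compressor stuck at a few Gbit/s -> compression loses badly.
print(delivered_raw_bps(6.4e12, 5e9, 2.0) < 6.4e12)     # True
```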
  • SCSI -- not sure now -- it's parallel, but are transfer rates specified in bits or bytes? Now that I think of it, I can't remember! :-)

    Parallel ports also do bytes not bits, but again -- I can't recall ever seeing a transfer rate spec.

    Reel to reel tape densities were in bits per inch, but they usually wrote a character at a time (7 track, 6 + parity).

    I seem to remember some of the holographic storage speculations discussing bits not bytes, but then media densities usually are reported in bits per square hoojiwich.

    Bottom line is, I don't know.

    --
  • You know RealVideo will still look like crap.
  • The main problem we face in getting high-bandwidth connections to everyone is not the lack of very-high-bandwidth backbones; it's population distribution. Sure, 15 of these fibers would give the backbone needed for every person in the US, but we'd still have to run a T1 from every person to the backbone. Think about how many power lines there are. That's about what we're going to have to replicate to get internet access to everyone.
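    The arithmetic behind figures like these is straightforward; here's a sketch using the rates from the article (the US population figure is my own rough late-'90s assumption):

```python
T1_BPS = 1.544e6        # one T1 channel, bits/s
WAVELENGTH_BPS = 80e9   # Nortel's claimed 80 Gbit/s per wavelength

def t1_channels(wavelengths):
    """How many T1s one fiber carries at a given wavelength count."""
    return wavelengths * WAVELENGTH_BPS / T1_BPS

print(round(t1_channels(80) / 1e6, 1))    # today's 80-wavelength DWDM: ~4.1 million T1s
print(round(t1_channels(400) / 1e6, 1))   # future 400 wavelengths: ~20.7 million (close to the 21M quoted)

US_POPULATION = 270e6   # rough assumption, not from the article
print(round(US_POPULATION / t1_channels(400)))  # fibers for a T1 per person: ~13
```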
  • hrmmm.... I dunno how that article slipped by me. Must have been during my move to (and waiting for my internet connection at) UC Davis.

    My major is EE, but every once in a while, a new technology or discovery such as this one comes around, and I have to pick from a larger list of attractive fields. Electronics... nanotech... quantum computing... optical computing... I want to learn it all. :D
  • Parallelism.

    Of course, the latency issue remains.

    The bandwidth can be taken care of, however. All that is needed is the ability to optically multiplex and demultiplex the bitstream before interfacing to electronics. In most cases, the signal has originated in the electrical domain anyway (how 'bout inside a PC somewhere), so that's the place to compress/decompress, before it's mixed with the other electrical signals.
  • I must say that the fact that a Canadian company is such a leading edge networking power is a great source of pride. But there is an interesting article in the Globe and Mail today and apparently Nortel now drives one-fifth of our stock market. [theglobeandmail.com] That's big.
  • Only problem is that at that speed, my hard drive wouldn't be able to keep up with my connection!
  • I just want to be sure I understand you. You see compression as useful any time you have multiple threads of data that must be passed through a single interface? In other words, where even if everything is operating at nearly maximum signaling rate, we still have more bandwidth in the "processor box" than we have in the data communications interface because there is more than one processor moving data at the maximum signaling rate?

    I guess I'd have to admit that does offer an exception to my purely theoretical objection. Holy dog chow, though! I hope never to see a world where we all need that kind of bandwidth.

    I can imagine myself needing several Gigs, but I can't imagine personally needing that kind of bandwidth (although, obviously when we have Gigs in our homes, someone is going to need Terabits). While I'm conceeding points, I also remember my first 10M hard drive (in my CP/M 1.4 days) and thinking "I'll never fill this thing!"

    How much bandwidth would a transporter use, anyways? ;-)
  • Ok, where can you get 1.5Mbps DSL for $20 a month? (url please) Here 386kbps SDSL is >$70 a month (www.fsr.net/services/adsl.asp), or for T1 downstream >$300! Yes ~1/5 to ~3/4 the bandwidth for 1/9 to maybe 1/2 the price (if ISP charges are $50)... but hardly the scenario you're proposing.
  • Well, 1 meg xDSL is $40 a month, here in Ottawa, using the Nortel Networks 1-Meg-Modem. Very cool service, indeed. Also goes by the name of Sympatico HSE (high speed edition) and is sold by Bell.

    We also have another competing company, @home, whose cable modem service is technically faster, $40 also a month, however, it is about 5~6K/s on an average day because they won't spend new money on servers even though the service is expanding at an incredible rate. Money grubbing !%#@#ers... :P
    You might as well just get a 56k modem and save the extra $20 a month.

    As expected, I live on the outskirts, and neither is available to me. *sob.* Still pretty good for the 'back-woods' of Canada.

    Hrm.... also kinda weird is the people who act amazed when I get connect speeds of 49333 on my 56k modems! New Yorkers and Torontonians feel lucky to get 28800 some days. ;) I thought that was kind of cute. :)


  • Just like anyone can start their own
    Webcam Porn Show
  • I read somewhere that to perfectly duplicate full real-world vision, your file size would be something like 50 MBytes for every second of video.

    That's not counting Sound - touch - smell - taste - kinesthetics.

    So in order to live up to most of those Sci Fi novels most Slashdotters read, we need as much bandwidth/speed as we can get.

  • And just on the side. Nortel's track record of just keeping 1X ( Their name for an STM1 box) running is really not so good. How are they going to support 6 Tera if they *really* can't get the basics right ??

    Nortel's low-speed stuff wasn't so good before, because back 10 years ago when the SONET boxes race was on, Nortel decided to concentrate on high-speed first (OC48), and develop downwards. Everyone else was developing low-speed first (OC3), then upwards.

    Thus, 5 years ago for Nortel, in terms of quality, OC48 was the best, followed by OC12, then by OC3.

    I've heard that with the newer OC3 express, things are much better. Not to mention Nortel's OC192 box being the current market leader.

1 + 1 = 3, for large values of 1.
