The Illusion of Spectrum Scarcity

Codeine writes "Presentations to the Technical Advisory Council (TAC) of the FCC by Vanu Bose, 'Software Radio: Enabling Dynamic Spectrum Management,' and by David Reed, 'How Wireless Networks Scale: The Illusion of Spectrum Scarcity.' Counterintuitive results from multiuser information theory, network architectures, and physics: multipath increases capacity, repeating increases capacity, motion increases capacity, repeating reduces energy (safety), distributed computation increases battery life, and channel sharing decreases latency and jitter. Highly recommended presentations suggesting that the cost of spectrum management by 'exclusive property rights' mandated by the State outweighs the advantages we could obtain from a new model that acknowledges physics and the 70 years of receiver development since the regulatory model was adopted at the time of the sinking of the Titanic."
  • Not more capacity! (Score:3, Interesting)

    by peterdaly ( 123554 ) <petedaly@NoSpaM.ix.netcom.com> on Sunday June 02, 2002 @09:13AM (#3626077)
    Like we need to encourage people to use more capacity! I have more waves buzzing around me already than I know what to do with! I can feel my nuts being sterilized as we speak... err, maybe I should take my Dell Latitude with 802.11b off my lap.

    Yeah, that's better.

    On a serious note, we really need this. I want technologies that can let my 802.11b network at home work without interfering with my cordless phone and 2.4 GHz audio/video transmitter and receiver. Right now they all fight for the same spectrum and all lose in some way or another.

    -Pete
    • Indeed (Score:3, Insightful)

      by Subcarrier ( 262294 )
      I want technologies that can let my 802.11b network at home work without interfering with my cordless phone and 2.4 GHz audio/video transmitter and receiver.

      Strangely enough, these are all on unlicensed bands. Sounds like we still need the regulatory bodies to keep the spectrum in some semblance of order.

      This is not to say that we shouldn't look into the technologies (quite the opposite). We're simply not there yet. It would be good to set aside some spectrum for this, though, as a playground for developing new transmission techniques and receiver designs.
      • Oh no, not a regulatory body. Paperwork being pushed around by a bunch of bureaucrats. Wonderful.
        It might solve the problem, but the solution might be worse.
      • It's interesting in this country (USA) that people are both envious of the Japanese and European systems yet totally against regulation. That's the "edge" they have: there's not as much bickering among industry because the gov't took the initiative for the good of the public. Maybe it's not always bad?
    • Actually, the article refers to ways in which networks of devices that communicate not only with a base station (i.e., a traditional cell phone) but with each other, even repeating traffic for each other (i.e., a mobile version of the Internet!), could increase total capacity while lowering power output.

      Better routing (rather, using routing at all) will make all the difference.
  • Good Story, but.... (Score:4, Informative)

    by Grumpman ( 64344 ) on Sunday June 02, 2002 @09:14AM (#3626081)
    it was better the first time [slashdot.org].
  • underestimating the seriousness of the article...
    I think.
  • political illusions (Score:5, Informative)

    by Alien54 ( 180860 ) on Sunday June 02, 2002 @09:17AM (#3626087) Journal
    There is this article from 1997 indicating about the same thing [networkcomputing.com]: that spectrum scarcity is more political than anything else. But of course, at that time people were not as focused on wireless as they are now.
    FCC Report and Order 96-102 - Dubbing it the Unlicensed National Information Infrastructure (U-NII) band, a recently issued FCC Report and Order opened up a hefty 300 MHz of bandwidth to all comers, with an unusually small number of strings attached (see www.fcc.gov). To put things in perspective, this is 2.5 times the total bandwidth allocated to Personal Communication Services (PCS), which brought in over 20 billion dollars at auction. That this much spectrum could be doled out for nothing is a fairly strong indication that spectrum scarcity is largely a political illusion--a fact likely to come back to haunt those deep-pocket real estate speculators who thought they were buying the last vacant lots in town. This seemingly inconsistent approach to spectrum management has kindled an interesting debate among advocates of spectrum privatization, not to mention continued wailing by die-hard statists who still believe the airwaves belong to "the people."
    Mind you, this was in 1997.
    • by GigsVT ( 208848 )
      This is a "well, duh" sort of thing. PCS is lower frequency.

      The range from 0-300 MHz contains nearly all widely used ham radio bands; most fire, TV, radio, government, railroad, and shortwave allocations; and just about everything most people associate with "radio".

      That 300 MHz is a lot more important than 4.5 GHz-4.8 GHz, just because it is lower frequency. It's apples and oranges to compare PCS to a high microwave allocation.
      • That 300 MHz is a lot more important than 4.5 GHz-4.8 GHz, just because it is lower frequency. It's apples and oranges to compare PCS to a high microwave allocation.

        I don't know that this is true, in that 300 MHz of bandwidth is still 300 MHz of bandwidth.

        That said, the original article I cited has this info:

        The rules and regulations handed down by the FCC are surprisingly simple. Three 100-MHz wide bands were each designated with a different maximum-allowable transmit power. These are 5.15 to 5.25 GHz with a maximum power of 200-milliwatt EIRP, 5.25 to 5.35 GHz with a maximum power of 1-watt EIRP, and 5.725 to 5.825 GHz with a maximum power of a quite respectable 4-watt EIRP. (EIRP stands for Effective Isotropic Radiated Power, which means that antenna gain is included.)

        Please note that channels are not defined as a percentage of the total frequency, but are defined as the bandwidth needed for a specific application. A TV video channel is much wider than an audio channel because of the much wider bandwidth needed to handle video data. A channel is simply a range of frequencies over which a data signal is communicated, centered at a specific frequency.

        You could very easily have AM radio in the gigahertz band: 44 kHz of bandwidth (CD audio, etc.) on a frequency of 4 gigahertz. But it would be rather line-of-sight, among other technical issues.

        Take a look at FM radio. Frequency modulation only varies the frequency enough to carry the audio, as well as specialty signals like stereo information, etc. This makes an FM channel wider than AM (200 kHz wide) but still very small compared to a gigahertz of range.

        So there are a lot of channels there. This is why you see FM radio stations at 100.1, 100.3, 100.5, 100.7, 100.9, etc. Each of these is a single FM channel.
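The channel-counting arithmetic above is easy to sketch. A minimal example; the band edges and spacings below are the standard US broadcast allocations, stated here as assumptions rather than figures taken from this thread:

```python
def channel_count(band_start_hz, band_end_hz, channel_width_hz):
    """Number of non-overlapping fixed-width channels in a band."""
    return int((band_end_hz - band_start_hz) // channel_width_hz)

# US FM broadcast band: 88-108 MHz with 200 kHz channel spacing,
# which is why stations sit at 100.1, 100.3, 100.5 MHz and so on.
print(channel_count(88e6, 108e6, 200e3))   # 100 channels

# US AM broadcast band: 530-1700 kHz with 10 kHz channel spacing.
print(channel_count(530e3, 1700e3, 10e3))  # 117 channels
```

The point the comment makes falls out directly: channel count depends on the width each application needs, not on the center frequency.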

        • I really don't know why you explained all that stuff, I'm a ham radio operator and well familiar with it.

          Radio from 0-300 MHz is very much more important than any other range of bands. Long-distance propagation almost exclusively happens below 30 MHz (with exceptions). Those 30 MHz are hundreds of times more important than the 30 MHz between 5.00 GHz and 5.03 GHz.

          That is why I say it's a fallacy to compare previous bandwidth allocations with the current microwave allocations, it's apples and oranges.
          • I really don't know why you explained all that stuff, I'm a ham radio operator and well familiar with it.

            Sort of presumes that I could tell you were a ham operator from your comment.

            Unless you included the info in your sig line, this was not immediately apparent. Also, sometimes explanations are given for the benefit of spectators.

            Yes, transparency of various things varies depending on frequency. But this does not negate the info on bandwidth. The technical difficulty of maintaining a high-precision signal in the gigahertz and higher ranges is important as well.

            The microwave and infrared spectra have a wide area of overlap. While the FCC has regulations covering up to about 100 gigahertz [fcc.gov], it is always good to keep perspective by noting that visible light has a frequency of 300,000 gigahertz.

          >I don't know that this is true, in that 300 MHz of bandwidth is still 300 MHz of bandwidth.

          But not all parts of the spectrum behave equally... The propagation of each frequency is quite different (see my anonymous post listing the max. distances and the different ways the waves propagate), and you absolutely prefer the shortwave range for some things. The antennae are different.

          >You could very easily have AM radio in the gigahertz band: 44 kHz of bandwidth (CD audio, etc.) on a frequency of 4 gigahertz. But it would be rather line-of-sight, among other technical issues.

          Plus the receptor would be much more expensive. You'd have to make a superheterodyne receptor with a number of intermediate frequencies, much more than the low-freq converter and the diode you need to make a simple AM radio...
          Anyway, you would not be able to put the AM signals as close one to another in the 4GHz as in the MF band...
          Plus the attenuation by fog and rain must be considered. This doesn't mix well with amplitude modulation...
          • Plus the receptor would be much more expensive. You'd have to make a superheterodyne receptor with a number of intermediate frequencies, much more than the low-freq converter and the diode you need to make a simple AM radio...
            Dude, you are so full of shit. Superhet is nothing special; without it even an HF receiver (receptor?) would be very expensive. The diode in an AM receiver is the "low-frequency converter": it rectifies the envelope of the carrier. Plus, I don't know if you checked your local Radio Shack catalogue, but the extra inductors and transistors for an FM receiver won't set you back more than $1.
  • by Anonymous Coward
    There will always be centralized overall spectrum management. Communications may be given blocks which are decentralized, but they are not the only users of the spectrum. For example, radio astronomers are currently allocated particular bands for operation. Their observations won't be possible if J. Random Cellphone is pouring energy into their band. Also, radar systems of various types don't benefit from having increased noise floors in their operating bands. GPS signals also don't benefit from increased noise floors; you would lose lock on the satellites more frequently.
  • Titanic (Score:2, Informative)

    by oldmacdonald ( 80995 )
    a new model that acknowledges physics and the 70 years of receiver development since the regulatory model was adopted at the time of the sinking of the Titanic.

    The Titanic sank in 1912; that's 90 years.
  • ....Captain Scarlet would never abandon us to the Mysterons!
  • Slightly off-topic, perhaps, but the current limits of the radio spectrum are transient and purely technical. By definition, so is the need for government regulation.

    I am no specialist in the area, but for all practical purposes signal transmission "on the air" is limited only by the technology we use for transmission and reception. The need for regulation is strictly derived from the practically available technology at any given time.

    Currently, transception(?) capacity at any given frequency range is dictated by the bell-curve spectral nature of any radio signal (i.e., "channels" per range) and data density over time (i.e., bits per second per channel).

    In theory we could cram an almost infinite number of bits into an almost infinitely small timeframe into an almost infinitely small frequency-range.

    But not today... hence all this clueless babble.

    The limits have changed in the past, and they will change again in the future. A lot! Take heed of this, Powers That Be.
    • But the question is, "are our regulations and laws out of date?"

      I say they aren't. Compared to something like copyright law and the Internet, radio is coming along nicely. There are a few lagging areas, like freely available microwave spectrum for fixed point-to-point Internet, but that is sorting itself out too with the availability of the various 802.11x bands.

      There are bigger fish to fry. When radio regs become a problem, we should fight to change them, until then, concentrate on much more important things.
    • Woah. I suggest you check out Claude Shannon's information theory.

      Basically, the minimal theoretical bandwidth of a signal, in hertz, is the number of bits of information per second the signal carries.

      All ways of modulating a carrier cause other spectral characteristics to appear; call them "sidebands" or whatever. Filtering them out results in a pure sine wave (and thus no information) on the receiver side.

      These limitations, being physical in nature, are unlikely to be broken anytime soon. That being said, there are plenty of ways to extract additional "bandwidth" with directional transmissions, minimal output power, etc, thus allowing services to share spectrum.
      • Yes, but different dynamics come into play when you're sending information not just from point A to distant point B but between a large number of devices, all of which can talk to each other (or to nearby units and to a base station) and pass messages around. You can transmit the message in a bunch of low-power hops instead of one high power beam for example, or utilize a wired backbone at some point - the idea is just to design the networks smarter than just 1 Cell tower / Many Cell Phones, which has clear limitations.
    • > Slightly off-topic, perhaps, but the current limits of the radio spectrum are transient and purely technical.

      You're just plain wrong, sorry.

      >In theory we could cram an almost infinite number of bits into an almost infinitely small timeframe into an almost infinitely small frequency-range.

      No.
      Read Shannon's papers.
      The capacity (bits per second) is given by the bandwidth you use and the signal-to-noise ratio you have. What you can do is SPLIT the medium (i.e., "add more links"), and this way the net total capacity of the network increases. But the capacity of each link is given by Shannon's law, and this is a physical limit.

      C (bits/s) = W (Hz) * log2(1 + SNR)

      You can't change that. This is NOT a transient or technological limit. We're already very close to it in some digital modulations.
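Shannon's formula is simple enough to evaluate directly. A small sketch; the 1 MHz bandwidth and 30 dB SNR figures are illustrative assumptions, not numbers from this thread:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = W * log2(1 + SNR), in bits/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz channel at 30 dB SNR (linear SNR = 10**(30/10) = 1000)
# tops out just under 10 Mbit/s, no matter how clever the modulation.
c = shannon_capacity(1e6, 1000)
print(f"{c / 1e6:.2f} Mbit/s")  # 9.97 Mbit/s
```

Note how the capacity scales linearly with bandwidth but only logarithmically with SNR, which is why splitting the medium into more links beats shouting louder.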

    • First of all, it's Zeno, not "Zenon".
      Second of all, read Claude Shannon.
      You might be more correct if you're considering the range of a given set of transmissions (physical locality).
  • by mwillems ( 266506 ) on Sunday June 02, 2002 @09:51AM (#3626157) Homepage
    This is a philosophical discussion, but let's also look at the technology.

    There are reasons to control. As a licensed radio ham (VA3MVW) I can assure you that if everyone were allowed to broadcast on shortwave (below 30 MHz) we'd have chaos. A kid in Brazil who uses $15 in parts to create a 10W shortwave transmitter can make an entire band unusable in all of Europe. Shortwave covers the world and there is very little bandwidth; all of shortwave is only 30 MHz.

    The reason things are getting easier now is twofold: technology and physics. Technology, because we can now transmit on GHz frequencies, unheard of just a few years ago. And physics: if you go up in frequency, bandwidth becomes almost infinitely available, antennas become shorter, and range becomes shorter (so less interference).

    In other words, good reasons to control low frequencies and good reasons to allow much on wide bands of high frequencies. Which it seems to me is exactly the way it is happening.

    Michael
    • then it relays signals a short range to its neighbors... and doesn't broadcast all over the world. Spectrum at HF _is_ a scarce resource because it bounces all over. But at line-of-sight frequencies, if radios have relaying and forwarding capability, then the total capacity grows with the density of radios.

      Imagine every cellphone as a repeater and network router able to forward several connections, with software able to manage such a dynamic network. Then each connection only produces RF signals that spread out along the path between the routers. This means fewer radio signals falling on places that don't want to receive them.
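The energy side of the relaying argument can be made concrete with a toy free-space model: required transmit power grows roughly as distance squared (faster indoors), so splitting one long link into n short hops divides the total radiated power by about n. A hypothetical sketch under that assumption, ignoring relay overhead entirely:

```python
def total_tx_power(distance_m, hops, path_loss_exp=2.0):
    """Total transmit power (arbitrary units) to cover distance_m
    in `hops` equal-length hops, assuming required power scales as
    hop_length ** path_loss_exp. Toy model; ignores relay overhead."""
    hop_len = distance_m / hops
    return hops * hop_len ** path_loss_exp

direct = total_tx_power(1000, 1)    # one 1 km shot
relayed = total_tx_power(1000, 10)  # ten 100 m hops
print(direct / relayed)  # 10.0 -- a tenth of the total radiated power
```

With a higher path-loss exponent (typical in cities) the savings are even larger, which is the intuition behind "repeating reduces energy" in the story summary.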
    • by Anonymous Coward
      There are reasons to control. As a licensed radio ham (VA3MVW) I can assure you that if everyone were allowed to broadcast on shortwave (below 30 MHz) we'd have chaos. A kid in Brazil who uses $15 in parts to create a 10W shortwave transmitter can make an entire band unusable in all of Europe. Shortwave covers the world and there is very little bandwidth; all of shortwave is only 30 MHz.
      That's an argument for regulation of the endpoints, not for regulation of who can participate. Why shouldn't the kid in Brazil be allowed to access the shortwave band as long as he uses the right equipment?
      The reason things are getting easier now is twofold: technology and physics. Technology, because we can now transmit on GHz frequencies, unheard of just a few years ago. And physics: if you go up in frequency, bandwidth becomes almost infinitely available, antennas become shorter, and range becomes shorter (so less interference).
      That's only part of the story. Information capacity is not the same as physical bandwidth. Sure, information capacity increases as physical bandwidth extends to higher frequencies. But, information capacity also increases without extending physical bandwidth to higher frequencies. You can achieve higher capacity just by further subdividing existing frequency bands. How far you can go in subdividing bandwidth is limited only by the ability of endpoints to distinguish frequency ranges. That's why it isn't good public policy to license fixed-width frequency bands to individual owners. Fixed-width frequency bands improve in value as the endpoints become better at distinguishing smaller bands. That increase in capacity should go into the public domain rather than into the pockets of a few media companies.
      • The kid in Brazil is allowed to access the shortwave band as long as he uses the right equipment. He gets his license and he follows the rules established by everyone.
        • The kid in Brazil is allowed to access the shortwave band as long as he uses the right equipment. He gets his license and he follows the rules established by everyone.

          s/everyone/entertainment industry lobbyists/

          • The entertainment industry doesn't care much about the shortwave spectrum.

            Oh, and the rules on shortwave are mainly set by the ITU - An international organization. Corporate influences rarely reach this far.
  • by Andy Dodd ( 701 ) <atd7@@@cornell...edu> on Sunday June 02, 2002 @10:33AM (#3626237) Homepage
    There was a similar article posted on Slashdot a week or so ago.

    Yes, advances in technology have greatly increased spectrum efficiency, to the point where we are nearly at Shannon's theoretical limit. But so far, there is nothing at all that indicates we have any way whatsoever of passing those theoretical limits.

    Yes, cellular techniques can greatly increase capacity. But the question is: is the complexity worth the added cost? For some systems, such as the cellular telephone system, the answer is yes. But for others (such as broadcasting), the answer is most definitely no. (This may change soon: if we ever get flat-rate 3G services, there's a good chance that could replace broadcasting. But that is a LONG way away.)

    And let's not forget the huge installed base invested in the old technology. Throwing all of that into the junkyard is not worth the gain from newer and more efficient (but much more expensive) technologies.

    One of the earlier posters (a ham, like myself) made a number of very good points too. Even with "infinite" spectrum, the FCC has to exist to regulate the airwaves somewhat to prevent interference between stations, especially malicious interference. Someone said it would be nice if their cordless phone didn't kill their WLAN equipment - How would you like it if your neighbor's WLAN equipment was wiping out your cellular calls, and you had no legal recourse whatsoever against him? That's what the FCC is here for.

    Anyone who argues that the spectrum is infinite is talking BS. The spectrum itself is infinite, but the USABLE part is not. There are physical limits to which frequencies we can and cannot use. Those limits are expanding rapidly, but resources are still finite.

    A final point - The increased complexity of cellular systems means reduced reliability. Their reliability is extremely high, but still, it is more likely to fail than other technologies, such as point-to-point radio, which will always have its place even though cellular phones are beginning to replace two-ways in many areas. 9/11 is an example - Despite being a theoretically higher-capacity system than "low-tech" NBFM two-way radio, the cellular system in NYC was quickly rendered useless by a combination of infrastructure damage and overloading. For at least a month and a half (I don't remember the exact time period), amateur radio (ham) operators provided a significant portion of the emergency communications capacity near the former Twin Towers.
  • by Henry V .009 ( 518000 ) on Sunday June 02, 2002 @10:44AM (#3626265) Journal
    I think that bandwidth could be used a lot more efficiently. Right now we are treating the spectrum like the analog medium it is. But a digital treatment is more justified. If we were to break everything up into packets, use repeaters and whatnot, we could achieve a far more efficient utilization of the airwaves. Nearly all bandwidth is allocated to something, but at the same time, most of it is unused at any one instant. Using packets like the Internet does could do a far better job of utilization.

    HOWEVER, it would require more control, not less. The government would need to mandate that all radio equipment manufacturers meet new standards (much more rigorous than they do now). All legacy equipment would need to be replaced. New laws would need to be drafted to regulate the medium better.

    But so much more is possible. We're using an abundant natural resource like cavemen, and we could do better.
    • The manufacturers are slowly headed in the right direction with digital standards like APCO Project 25 [apcointl.org]. Almost all major radio players now have, or will soon have, a P-25 offering.
  • Everyone uses the new technology model to handle radio waves.

    If one person follows the old pattern, he can seriously degrade, if not destroy, an entire band's capacity by throwing what the new model considers garbage into the stream.

    Error-correction works only so far.
  • ... because they couldn't pass the 5 WPM! ^_^

    (He-he-he! ^_^) 73!
    • 5 WPM???

      Bah, not even that hard. I have a Tech-class license due to laziness. :)

      Right now, that's what -- a 55-question multiple-choice exam that normal 7-year-olds can pass (and have done so numerous times)?

      I'll get my Extra one of these days. (General? Bah, why settle for that when the diff. is one more multiple-guess exam these days?)
  • Sounds like you could have the ultimate finger-pointing nobody-is-responsible multivendor nightmare. Everything works fine for the first couple of years when there isn't much of the stuff around... and then a few more years down the road nothing quite works because the spectrum has been polluted...

    and the "cause" is twenty thousand different devices in your vicinity, two thousand of which aren't quite up to standard?

  • Two (or more) radio transmitters on the same frequency within range of the same receiver will interfere with each other to the extent that usually one of them will not be heard well (or at all). The idea of "software radio" changes nothing unless every transmitter conforms to the same sets of rules and knows exactly where all the other transmitters are and what they are doing.

    Even at microwave frequencies, someone with a baby monitor on all the time at 2.4 GHz will likely cause you problems with your WiFi network if it's close enough, or between you and the main antenna. One unmanaged device would be enough to create problems for everyone in its vicinity, even using the software radio methods.

    Government regulation of radio frequency spectrum was designed to minimize interference and create "bands" where users could reasonably expect the service they want to be located. Otherwise you would have to search through 10 GHz of spectrum to find NBC News. Their concept of "software radio" only works if these radios know every source of possible interference in a geographical area and move in the right way to avoid it. Who determines which way is the right way seems to me to be important, and I'd much rather have a government entity do it.

    In addition, the implementation of this system would pretty much require that all the other transmitters be confiscated and destroyed to keep them from mucking up the works.
    • Government regulation of radio frequency spectrum was designed to minimize interference and create "bands" where users could reasonably expect the service they want to be located. Otherwise you would have to search through 10 GHz of spectrum to find NBC News.

      Likewise, government regulation of Internet addresses was designed to minimize interference and create "bands" where users could reasonably expect the service they want to be located. Otherwise you would have to search through 4 billion IP addresses to find MSNBC.

      Their concept of "software radio" only works if these radios know every source of possible interference in a geographical area and moves in the right way to avoid it. Who determines which way is the right way seems to me to be important and I'd much rather have a government entity do it.

      Sounds logical. After further research into packet radio protocols is completed, I propose a government-regulated location service on a dedicated location band and then a band for simply broadcasting packets.

  • Sirens (Score:3, Insightful)

    by Baldrson ( 78598 ) on Sunday June 02, 2002 @11:58AM (#3626495) Homepage Journal
    With the line-of-sight high-frequency technologies people are discussing, I don't see any reason they shouldn't be handled similarly to the way sound is regulated.

    If I set up a 138 dB WWII-vintage air raid siren [geocities.com] in my back yard for fun and start testing it out -- in all likelihood I'll be dealt with by the local authorities, who will be called in by just about everyone in a 1 km radius.

    On the other hand, if I'm talking to my neighbor over the back fence and some Feds show up to stop our "noise", the local authorities (presuming this is a jurisdiction that doesn't receive a lot of Federal subsidies) would likely arrest them.

  • Even without increased capacity, there are ways to share the airwaves without having anyone own them.

    Sorry, but the idea of the government -- or a company -- controlling or having the rights to a certain frequency is about as obnoxious as the government saying they own all the air in the US.

    The very same technology that regulates printing on LANs at universities can regulate the airwaves. Two people send a request to a printer to print a document at the same time; the printer doesn't know which to process first, so each terminal waits a random number of milliseconds (a different number for each) and then sends a repeat request; whichever one gets back first is printed first. Another way to do it would be to have the printer just randomly pick one. An alternate, and superior, way would be for the printer to print the shorter document first.

    Similar algorithms could govern who is using any particular frequency at any particular time.

    Furthermore, let us not forget that we don't have to deregulate the entire spectrum in one swoop. We could deregulate half of it first and let the technologies for controlling access to that half mature.

    The point is, everyone should have access to the airwaves. It should not be based on how much money you have. No one has any right to claim they own the air or the airwaves, just as no one has the right to claim they own the air itself: that's bullshit.
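The random-wait arbitration described above can be sketched in a few lines. This is a hypothetical toy, not a real printing or radio protocol; the contender names and the 0-10 ms wait range are made up for illustration:

```python
import random

def arbitrate(contenders, seed=None):
    """Each contender draws a random backoff (milliseconds);
    the shortest wait wins the shared channel this round."""
    rng = random.Random(seed)
    waits = {name: rng.uniform(0, 10) for name in contenders}
    return min(waits, key=waits.get)

winner = arbitrate(["phone", "wlan", "baby_monitor"], seed=1)
print(winner, "transmits first")
```

Random backoff is fair on average but gives no latency guarantee, which is why the replies below argue about shortest-job-first and starvation limits.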
    • Actually, printing the work that needs to be complete in 20 minutes' time before the latest chain-mail email would be more efficient.

      If it's a long piece, it should redirect to the nearest printer.

      Our uni stores the documents on the local print server, and the first person physically at the printer prints first. You can go to another printer and print your copy if you don't want to wait, too.
    • (* An alternate, and superior way, would be for the printer to print the shorter document first. *)

      This may make it so longer printouts have to wait several hours. I agree that if two people request printing at the same time, then print the shorter one first. However, the longer one should not wait forever if there are a lot of shorter ones. There should be an upper limit on the wait time.

      BTW, this sounds kind of like elevator and hard-drive optimization algorithm philosophies. "Shorter first" does not always work well.

    • The AX.25 amateur packet radio protocol works in a very similar fashion when multiple people attempt to transmit at the same time. Each waits a random number of milliseconds before retransmitting, and whoever goes first gets through. But I don't think this is adequate for all types of data (voice, for example).

      Also, regulating air is not such a daft idea. Even here on Earth with our bountiful air, we need pollution controls and such. Consider a possible Moon colony. Regulating air would become a life-or-death matter.



      • Firstly, I'm not suggesting we do it all at once. We can deregulate part of it first, then the rest later once things are worked out.

        Secondly, there's a difference between the government regulating air pollution and the government saying "you can't breathe the air". In effect, what the government is doing here is saying "you can't breathe the air unless you can pay a lot of money for it". That's wrong.

        At the very least, the privilege to use the airwaves should not be decided by who can pay the most.

        Everyone should have an opportunity to use the airwaves. The scheme proposed by Lawrence Lessig is what I'm thinking of here.
      • You're overestimating AX.25. It should have used random waits, but its retransmission is basically a fixed backoff, derived from landline X.25 (LAP-B), which is entirely wrong. Randomness was proposed later and may have been implemented now and then, but the classical TNCs were notoriously prone to congestion collapse.

        Indeed the "digipeating" model of AX.25 is perhaps the best example of how easy it is to get exactly the opposite results from what Reed posits. AX.25 digipeating is truly awful. Been there, done that, gave it up in the '80s.

        Reed's proposal is a whole lot smarter, but the devil's in the details.

      >The very same technology that regulates printing on LANs at universities can regulate the airwaves.

      No it cannot. There's a reason for a single Ethernet segment having a maximum size... It's the way the MAC (medium access control) works.

      Say you've got two hosts, A and B, on opposite ends of your LAN (geographically speaking), which both want to send a frame to a third host in the middle, C. The MAC works this way: the sending host first listens to the ether (the medium); if it hears something, it waits until the medium is free (plus a random amount of time); if not, it sends its packet.

      This seems to work fairly well, but consider this: A sends a packet. It travels through the medium at c0/sqrt(epsilon), which on twisted-pair cable is about 1.5*10^8 m/s. Before the signal reaches B, B decides to send something to C; it hears nothing on the ether, so it sends its packet. C then receives the sum of A's and B's signals, which is garbage. There's a collision, no transmission succeeded, and you've lost bandwidth and time. After a while, both A and B realize their packets didn't get through, and the whole thing must start over, each host waiting a random period of time before retrying.
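      The vulnerability window described above can be sketched numerically. This is a deliberately simplified model (it ignores frame duration, detection thresholds, and collision detection), using the c0/sqrt(epsilon) propagation speed from the text:

```python
# Propagation speed on twisted pair: c0/sqrt(epsilon), ~1.5e8 m/s.
PROP_SPEED_M_PER_S = 1.5e8

def collision_possible(distance_m, start_gap_s):
    """True if a second host starting `start_gap_s` after the first
    would still sense an idle medium, i.e. the first host's signal
    has not yet propagated `distance_m` to reach it."""
    propagation_delay_s = distance_m / PROP_SPEED_M_PER_S
    return start_gap_s < propagation_delay_s

# A and B 1500 m apart: A's signal needs 10 microseconds to reach B.
print(collision_possible(1500, 5e-6))   # B starts 5 us later: collision
print(collision_possible(1500, 20e-6))  # B starts 20 us later: B defers
```

      This is why a single Ethernet segment has a maximum size: the larger the diameter, the longer the window during which carrier sense lies.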

      >Two people send a request to a printer to print a document at the same time; the printer doesn't know which to process first

      There's a wrong assumption there: if both people send the request at the same time, using the same frequencies, the printer won't understand either request, unless you use CDMA or another kind of multiple access (and you're not using FDMA, since you don't want this regulated, nor TDMA, since they transmit at the same time). The sum of two valid signals is garbage, unless specifically designed not to be.

      The collision rate grows with the maximum propagation delay between two hosts and with the number of hosts (and the traffic they generate).
      It also grows with the time it takes to transmit a single packet.

      Imagine how badly collisions would hinder the performance of a wireless network used at the same time by everybody in, say, a city. Plus, why on earth would you want a full 1 GHz of spectrum (that is, several gigabits per second, depending on the digital modulation you use) to transmit the data of *a single user*?

      Plus there's the issue of how each frequency propagates. As I said in another post, VLF gives worldwide coverage; LF and MF propagate via ground wave (although MF can also use ionospheric refraction); HF uses the ionosphere. VHF and UHF travel as ordinary (spherical) waves, and microwaves are line of sight. Each requires a different antenna, transmission power, etc...
    • The point is, everyone should have access to the airwaves. It should not be based on how much money you have. No one has any right to claim they own the airwaves, just as no one has the right to claim they own the air: that's bullshit.

      This is sort of like saying "Forget traffic laws, let anyone who wants to get a vehicle and drive it any way they want." Sounds great until someone drives a tank across your front lawn. There might be laws against trespass, but the damage has already been done by the time the tank tread prints are in the grass.

      Spectrum regulation isn't some cheesy artifact the government dreamed up to make your life miserable. Among other things, it means you can make radios that tune between 530 and 1700 kHz instead of having to guess where the broadcast band might be. It keeps people from plopping down TV operations right in the middle of a band used for medical telemetry.

      I'm not saying the current system is perfect or anything, but there are valid reasons why some of it (especially the lower bands, where broadcasters can be heard across the country or around the globe) still needs to be regulated.
  • It seems clear many posts are off the mark.

    There were two main subjects. Software radio and how networking affects spectrum capacity. Note that this has little or nothing to do with UWB (ultrawideband).

    (1) Software radio: This technology is still expensive, but costs are dropping rapidly. Normal radios are hardware designed for specific tasks: they work in a specific frequency band and use fixed modulation schemes and fixed energy levels. A software radio does all the work with a CPU. Just load up a new program and all aspects of the device are upgradeable. One device can work as a digital or analog cellphone using US or European protocols, or any future protocol. It can be reprogrammed as a CB, TV, walkie-talkie, ham radio, beeper, intercom, 802.11, or Bluetooth device. Heck, you could leave it on your dashboard as a police-radar detector. New protocols can be downloaded on the fly. You can then upgrade the system without replacing billions of dollars of obsolete hardware. Bandwidth can also be dynamically allocated where it is needed. Much radio capacity currently goes to waste - it's like reserving 15% of your bandwidth for browsing, 10% for streaming audio, 20% for video, 20% for games, 5% for email, 15% for FTP, etc. Current regulations are an obstacle to software radio.

    (2) Second was an analysis of the obsolete paradigm of treating radio spectrum as "property". This was based on the fundamental result that data capacity is proportional to bandwidth, and that bandwidth is limited. The more devices in the system, the less data capacity each device can get. Try to use 1000 cellphones (or wireless laptops) in one place and the system dies. This is the result of analyzing a simple point-to-point or broadcast system. New systems working as a network throw the old rules out the window. With the proper protocols, each device added to the system can increase total capacity enough that, even with more devices in the system, each device still gets the same data capacity. Data capacity per device is no longer a limited resource. The old paradigm is also based on an obsolete interpretation of interference. In current radios, when two signals at the same frequency arrive at the same place there is interference and the information is lost. This is merely a flaw of current designs. Using "smart" antennas, multiple signals at the same frequency can be received without interference. It turns out that multipath "interference" can actually increase capacity, as does motion. It also allows lower power levels to be used. These results fly in the face of traditional electrical engineering, but they are solid physics/mathematical results. (Watch the presentation [fcc.gov] before you argue that I'm wrong.)

    In the next several years we may be in for a radical change in the way radio is used and regulated. These changes will enable "always-on" wearable networked computing.

    -
    • It turns out that multi-path "interference" can actually increases capacity

      This makes intuitive sense. Consider two radios, A and B, communicating from either end of a long hallway; suppose A is sending data to B. It makes sense that the signal reaching B will be stronger than if the two radios were outdoors, since the hallway acts as a waveguide, directing more energy to the receiver. Unfortunately, up to now radios could not take advantage of the extra energy, because it arrives as multiple signals reflected off the walls, sometimes causing destructive interference. But now, thanks to the recent invention of space-time coding, this can be avoided.

      It seems many people misinterpreted the quoted statement above as saying that same frequency signals coming from two different senders, carrying different data streams, will not interfere-- this is not true!
      • same frequency signals coming from two different senders, carrying different data streams, will not interfere-- this is not true!

        Actually it is true - if you use multiple antennas. You can mathematically extract the original signals by looking at subtle differences in the interference at each antenna.
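        A toy numpy sketch of the unmixing idea (the channel-matrix values below are made up for illustration): with as many receive antennas as senders, and a known, invertible channel matrix, the superimposed signals reduce to a solvable linear system.

```python
import numpy as np

# Two senders transmit x1, x2 on the same frequency.  Each of the two
# receive antennas hears a different linear mix, because the path gains
# (the channel matrix H) differ per antenna.  Gains are illustrative.
H = np.array([[1.0, 0.6],
              [0.4, 0.9]])  # H[i, j]: gain from sender j to antenna i
x = np.array([2.0, -1.0])   # the two original symbols
y = H @ x                   # what the antennas receive: a superposition

# With H known (e.g. estimated from training symbols) and invertible,
# the receiver recovers both streams by solving the linear system.
x_hat = np.linalg.solve(H, y)
```

        Real receivers estimate H from known training sequences and must cope with noise and ill-conditioned channels, but the core operation is this linear solve.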

        -
    • You don't need smart antennas or anything expensive at all. There are already mesh routing products like Nokia Rooftop that achieve the 'multiplying bandwidth' phenomenon.

      The mathematics of mesh networks and swarmcast demonstrate an interesting phenomenon: the more nodes that stick their antennas into the cloud, the more routes appear, and there is a virtuous circle of improving performance. This principle is supported by Nokia papers on the 802.16 workgroup's site (http://wirelessman.org/): "Mesh coverage & robustness improve exponentially as subscribers are added" http://wirelessman.org/tga/contrib/S80216a-02_30.pdf

      Instead of heat death from packet congestion, you get a virtuous cycle of greater capacity because more paths are available. Unregulated, and all but unregulatable. Just like speech and eyesight -- except with unlimited range.

      There is a voracious, out-of-control design and chipmaking industry realizing this vision, which will happen with shocking suddenness as hardware manufacturers create the transceivers and home-owners and apartment dwellers just stick them on their roofs. You will buy these at Walmart and in drug stores for $50 about 12 months from now.
      Todd Boyle www.gldialtone.com
  • "Multipath increases capacity, Repeating increases capacity, Motion increases capacity, Repeating reduces energy (safety), Distributed computation increases battery life, Channel sharing decreases latency and jitter."

    People seem to forget technology is the great equalizer when it comes to limited resources. It's why we won't run out of oil in 2010 and why crowding won't remain a problem. It's about using what you have more efficiently, not basing your results on a static ideology when the world you live in is in dynamic progression.
  • Repeating is a method to relay a signal where a direct path does not exist. The idea of inserting repeaters into a path simply to reduce emission levels where a direct path does exist is not going to reduce the energy required to establish communication.

    I guess if you wanted to look hard for a benefit you could say that the field strength will be less at each transmit location. Maybe that's a good thing. Certainly the transmit power and antenna system requirements will be less at each location which would make the equipment last longer and make it much smaller.

    But actually reduce the energy? Come on!

    • Intuitively, sure, repeating is wasteful. In current practice, sure, repeating is wasteful. In practical use, repeating may or may not end up being wasteful, but Reed's theory does have some validity.

      He posits lots of repeaters with small range. Indeed, the reference to Shepard's thesis is only valid with this caveat: Shepard showed that UWB signals at 60 GHz would fade out so quickly that a zillion of them would add up to very little, because all but a very few would be out of range of any given observer. Shepard did not posit, as some have falsely stated, that wide-range UWB signals can overlap infinitely.

      So let's get to Reed's idea. Take a lot of repeaters with low power. You get the signal across via the chain of repeaters. Now view these as a chain of circles on a map. If the circles are small enough, the chain looks like a narrow line; if the circles are larger, it looks like a wide line. The narrower the line, the more lines can coexist without overlap, and thus lots of low-powered repeaters (the signal following a narrow line) provide more net capacity.
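      The circles-on-a-map argument can be made concrete with a deliberately crude model, treating each hop's footprint as a disc whose radius equals the hop length:

```python
import math

def polluted_area(distance_m, hops):
    """Total footprint of covering `distance_m` in `hops` equal
    segments, modelling each hop as a disc whose radius is the hop
    length.  One hop = one big circle; n hops = n small circles."""
    hop_radius = distance_m / hops
    return hops * math.pi * hop_radius ** 2

direct = polluted_area(1000, 1)    # one circle of radius 1000 m
relayed = polluted_area(1000, 10)  # ten circles of radius 100 m
# relayed is exactly direct / 10: the total footprint shrinks as 1/n,
# leaving proportionally more area free for other chains.
```

      That 1/n scaling is why shrinking the circles lets more non-overlapping "lines" fit in the same region, subject to the caveats above about bottlenecks and cooperation.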

      This only works if there are no bottlenecks in the topology, if there are adequate repeaters to meet traffic demand, and if the nodes on the network all cooperate. Those conditions are going to be hard to achieve in practice.

      Still, he is right to point out how obsolete the existing regulatory framework is. There's no Gilderesque free lunch, but there is plenty of room for improvement.
      • I wasn't going to comment on his opinion of regulations. Oh how I wasn't going to go there. He seems to prefer some sort of anarchy.

        You're right that the present framework is obsolete. I see the FCC making baby steps to find better solutions. I sit on the 700 MHz planning committee for Region 12 and view that as one of their baby steps.

        The biggest problem I see with the FCC and their present regulatory system is the lack of enforcement. Well, that and the fact that during the Clinton years they really seemed to be driven by big money. It's all fine and dandy to lay out the regulations, but if you don't have the engineers on staff to monitor and pursue legal action against those who don't abide... well, the regulations mean squat.

    • No. If there's less energy at each location then there is less energy in total.

      Doubling the distance actually requires increasing the power by at least 4x if you don't use a repeater, due to the inverse-square law.

      Putting it through an intermediate repeater means only double the power is needed to go twice the distance.
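      A minimal sketch of that arithmetic, assuming ideal free-space inverse-square propagation with all constants normalized to 1:

```python
def tx_power_needed(distance_m, p_rx_required=1.0):
    """Ideal free-space model: received power falls off as 1/d^2, so
    reaching a fixed receive threshold takes transmit power
    proportional to d^2 (all constants normalized away)."""
    return p_rx_required * distance_m ** 2

d = 1000.0
direct = tx_power_needed(2 * d)   # one hop covering the full 2d
relayed = 2 * tx_power_needed(d)  # two hops of d each
# direct = 4 * tx_power_needed(d) but relayed = 2 * tx_power_needed(d):
# the intermediate repeater halves the total energy for twice the distance.
```

      Real links fall off faster than 1/d^2 indoors and in clutter, which makes the repeater's advantage larger, not smaller.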

  • Am I missing something here? Or are these guys just loopy?

    This article and the one from last week seem to be saying the same thing: that since it's politically inconvenient for spectrum to be limited, the authors will just declare physics to be null and void and there will no longer be such a thing as wave interference.

    Sure, if you can convince everybody to destroy all their old equipment and replace it with new equipment that uses software scanning, you can get more virtual bandwidth out of the same spectrum. But it's not going to be infinite, and a few jerks with a few kilowatts of transmit power are going to be able to cause a bunch of problems with this scheme. And considering how little luck there was getting consensus among shortwave users about trivia such as dropping Morse code requirements for licensing, how much cooperation does anyone anticipate on something as blue-sky as this mess?
  • That way, they get more $$ for it when they auction it off.

    The 'steward' of the public's airwaves has become nothing more than a money-grubbing whore whose main reason for existence is to fill the treasury's coffers.
  • Yes, the Squadron of Orange Geese, oops... The fleet of roaming digipeaters is more efficient than point-to-point connections requiring a lot of power, or than a cellular network requiring a ground-based backbone. But:

    The roaming digipeaters run on power that is much more expensive than cellular base-station power: the battery power of tiny pocket devices. It means that my cellphone will, say, last 1 hour instead of 8, while the collaborating cellphones provide complete coverage without gaps. I am not sure it's worth the battery.

    And second: the business model of cellular as well as wireless Internet providers is to spend their money on equipment and collect fees, so they can invest in cellular networks. A fleet of roaming repeaters may be technologically efficient, but IMO there is no incentive for services provided with such devices, which means that a self-supported community without big-business backing will never buy enough devices to drive prices low. Moreover, the self-supported community is a competitor to the traditional cellular systems, and as such will be suppressed.

    As an illustration: there is voice-over-IP technology. There is 802.11 technology. Show me the 802.11 voice-over-IP pocket phone with a built-in repeater. I fear such a device will never be able to compete.
  • Hopefully I can help clear up some of the extreme misunderstanding of this topic. I am not an expert but I think I understand what is going on better than average.

    1) Data capacity is being measured in bit-meters/second because the important question is how quickly you can transfer data between point A and point B. His claim is that technology exists that can make this total capacity grow linearly with the number of participants (and hence essentially no interference occurs between unrelated connections).

    The traditional technique to move data from point A to point B is to broadcast at point A, on a single fixed frequency slice, with enough power to reach point B. Imagine points A and B on a map, and draw a circle around point A with point B on the circumference. The entire area of the circle has been polluted with the signal. Instead we can use low-powered repeaters and have a chain of small circles, leaving most of the area unpolluted. Together with spread spectrum and clever processing with multiple antennas, we can pack a lot of information into the available physical and bandwidth space.
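    A back-of-envelope spatial-reuse estimate suggests why smaller footprints raise total bit-meters/second. This disc-packing model and its numbers are my own illustration, not taken from the presentation:

```python
import math

def transport_capacity(area_m2, link_bps, hop_radius_m):
    """Crude spatial-reuse estimate: non-overlapping discs of radius r
    allow about area / (pi r^2) simultaneous transmissions; each moves
    link_bps over r meters, so the total is in bit-meters/second."""
    concurrent = area_m2 / (math.pi * hop_radius_m ** 2)
    return concurrent * link_bps * hop_radius_m

city = 1e8  # a 10 km x 10 km region, in m^2
long_hops = transport_capacity(city, 1e6, 1000)  # 1 km hops
short_hops = transport_capacity(city, 1e6, 500)  # 500 m hops
# Halving the hop radius doubles total transport capacity: the count of
# concurrent transmissions quadruples while each hop covers half the
# distance, for a net factor of 2.
```

    Under this model, capacity scales as 1/r, which is the sense in which adding more short-range nodes can grow the whole system's capacity rather than dividing a fixed pie.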

    The claim is not that our current stupid allocation of radio broadcast stations can be used by everyone at the same time, but that there does exist a technology that will work.

    The technology he is talking about is to use low powered high frequency software controlled radios. Each station would be a receiver and relaying transmitter.

    Using some system like IPv6 or something cleverer, data is routed along a tight path from radio to radio until it reaches its destination. Because of the nature of radio (and the inverse-square law), transmitting this way uses far less total power and interferes with far less of the world, because the path is a series of tight little circles instead of one enormous circle whose radius equals the distance between the endpoints.

    The addition of land based cable or optical routing repeaters could scale this even further.

    By using radios that can be controlled by software, we can continuously improve the bandwidth-allocation and routing technology and share the overall spectrum better, without the problem of legacy hardware.

    Together with intelligent processing of interference using multiple antennas and signal-processing techniques, we can scale our wireless data-carrying capacity (bit-meters/sec) several orders of magnitude beyond what we are currently able to use.

    If we build the system in layers, we can add application-layer protocols like voice over inter-radio, video over inter-radio, and so forth. If the protocol is smart it can also be linked to land-based fiber networks and improved further.

    As it is, with only a few people controlling fixed slices of bandwidth, I would guess that we might scale the total wireless information capacity usable by individuals by 6 to 9 orders of magnitude using these techniques.

"Kill the Wabbit, Kill the Wabbit, Kill the Wabbit!" -- Looney Tunes, "What's Opera Doc?" (1957, Chuck Jones)

Working...