
Pricing and Internet Architecture

Frisky070802 writes "The Politech list recently posted a pointer to a new paper (pdf) by UMN prof Andrew Odlyzko, which compares the telecom industry to the historical transportation industries (railroads, bridges, and such). One quote, from the conclusion, is particularly interesting: '... the networking industry [has] devoted inordinate efforts to technologies such as ATM and QoS, even though there was abundant evidence these were not going to succeed. One can go further and say that essentially all the major networking initiatives of the last decade, such as ATM, QoS, RSVP, multicasting, congestion pricing, active networks, and 3G, have turned out to be duds. Furthermore, they all failed not because the technical solutions that were developed were inadequate, but because they were not what users wanted.'"

  • 3G a dud? (Score:4, Informative)

    by SinaSa ( 709393 ) on Saturday January 03, 2004 @08:13PM (#7870006) Homepage
    In traditional /. style, I haven't read the article, so I don't know how this guy can already call 3G a dud. Here in Australia, Hutchison is doing fairly well: almost all of their handsets sold out within a couple of months of their "Three" stores opening, and I'd say that's a pretty big indication of being exactly "what users wanted".
    • Re:3G a dud? (Score:2, Insightful)

      I don't understand this claim -

      The wireless industry, in particular, has often boasted that it managed to avoid the mistakes of the Internet by avoiding the open architecture and flat-rate pricing of the latter.

      Isn't it effectively flat-rate pricing when they give you X minutes for Y dollars a month? Most people pick a plan that gives them more minutes than they'll use, so they never incur the overage charges.

      I think for the majority of customers, it's effectively a flat-rate system.
      • Re:3G a dud? (Score:5, Informative)

        by rsborg ( 111459 ) on Saturday January 03, 2004 @08:52PM (#7870196) Homepage
        Isn't it effectively flat-rate pricing when they give you X minutes for Y dollars a month? Most people pick a plan that gives them more minutes than they'll use, so they never incur the overage charges.

        No. Flat rate means there IS no overage. Flat rate is usually synonymous with "unlimited usage" (though lots of ISPs have their own ideas about what "unlimited" means). Flat-rate service is like the local calling plan from your RBOC (unlimited local calls for a fixed price) or, as the article states, the cable/DSL providers who charge a fixed monthly fee for effectively unlimited bytes. X minutes for Y dollars is by no means flat rate: overage exists, and most mobile users have been burned by it.
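        To make the distinction concrete, a quick sketch in Python (the $40 plan, 500-minute bucket, and $0.40/min overage are made-up numbers, not any real carrier's):

            def flat_rate_bill(minutes_used, monthly_fee):
                # Flat rate: the bill never depends on usage
                return monthly_fee

            def bucket_bill(minutes_used, monthly_fee, included_minutes, overage_per_min):
                # "X minutes for Y dollars": overage kicks in once the bucket is empty
                overage = max(0, minutes_used - included_minutes)
                return monthly_fee + overage * overage_per_min

            for used in (300, 500, 800):
                print(used, flat_rate_bill(used, 40.00), bucket_bill(used, 40.00, 500, 0.40))

        The flat-rate bill is $40 no matter what; the bucket plan jumps to $160 the month you run 300 minutes over.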

      • Re:3G a dud? (Score:3, Insightful)

        by DavidinAla ( 639952 )
        You and other customers might see it that way, but the carrier is ONLY obligated to give you a certain number of minutes for that price. From a business point of view, the issue is whether the provider has limited its liability, so that it never has to give someone an unlimited supply at a flat rate. Just because you commit yourself to a higher number of minutes than you know you want doesn't mean it's a flat rate. It just means that you're willing to commit yourself to paying for more than you'll use -- in order to ha
        • Re:3G a dud? (Score:4, Interesting)

          by tengwar ( 600847 ) <slashdot AT vetinari DOT org> on Saturday January 03, 2004 @09:27PM (#7870312)
          unless there is a carrier I'm not aware of who provides unlimited service for one price.

          It's rare. One company I know of is Vodafone Sweden, for corporate customers who want to be able to predict their comms bill for the next year. It's one of the few cases where it makes sense for both the provider and the customer.

        • Perhaps, but my cell phone, with more minutes than I ever use per month (by about 5 times in a typical month), is still cheaper than unlimited local calls on my local phone service. (I live in a border area; we pay extra to be part of a nearby city's local calling area. Those closer in don't pay as much.) I could go for a cheaper plan, but I like knowing I'll never go over; the last time I was on a cheaper plan, the one time I went over the cost was large, so I'm shy about doing it again.

        • Re:3G a dud? (Score:5, Interesting)

          by LinuxHam ( 52232 ) on Saturday January 03, 2004 @11:07PM (#7870670) Homepage Journal
          but cell phone pricing is NOT an example of flat-rate pricing -- unless there is a carrier I'm not aware of who provides unlimited service for one price

          You know, I was about to reply with "don't they ALL do it?" and I decided to check. I was wrong. Nextel does "unlimited everything" (24x7 cellular and nationwide 2-way radio) for $200/mo. There are 43,200 minutes in a 30-day month. Take out free Sat & Sun, and 7am-7pm Mon-Fri (4*12 hours, really), that leaves 28,800 "anytime minutes" per month.
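          (For what it's worth, the arithmetic works out if you treat a 30-day month as 20 weekdays plus roughly 10 weekend days; a real month has closer to 8 or 9 weekend days, so 28,800 slightly undercounts the weekday minutes.)

              minutes_per_day = 24 * 60
              print(30 * minutes_per_day)   # 43,200 minutes in a 30-day month
              print(20 * minutes_per_day)   # 28,800 once ~10 weekend days are dropped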

          AT&T caps out at 6,300 mins/month. Verizon, 5,500. Cingular, 3,000. T-Mobile has a nice plan with 5,000 anytime minutes plus a three-day weekend for just $129/mo. Looks like Nextel is the only carrier I could find stateside that offers truly unlimited usage plans.

          Thanks for making me look it up, that was interesting!
      • "Isn't it effectively flat-rate pricing when they give you X minutes for Y dollars a month?"

        Not when they economize by cutting convos short.
    • Re:3G a dud? (Score:4, Interesting)

      by Smitty825 ( 114634 ) on Saturday January 03, 2004 @08:19PM (#7870043) Homepage Journal
      Here in the US, it's just taking off. Currently, the pricing is too high for most people to take advantage of what 3G offers.

      It is still advantageous for operators to roll out 3G networks. Spectrum usage is more efficient, so more people can make higher-quality calls in the same spectrum as before.

      Also, ATM is very commonly used in cellular networks. I'm not sure how anyone could claim it is a dud...but, like the parent, I didn't read the article...
      • Re:3G a dud? (Score:5, Informative)

        by TheRaven64 ( 641858 ) on Saturday January 03, 2004 @08:36PM (#7870119) Journal
        I recently looked at getting a 3G phone here in the UK. The only operator (as far as I know) that offers them is, imaginatively enough, called `3'. All of their price plans are geared towards people who want to use video features and make calls. Not only do they not offer any kind of plan aimed at people who just want to send data, I couldn't even find out how much it would cost per MB. I can't help thinking that the biggest market for this kind of thing in the short term is going to be people who need to connect to the 'net on the road, but so far those people have to use GSM/GPRS.
    • by dbIII ( 701233 ) on Saturday January 03, 2004 @08:38PM (#7870133)
      Here in Australia, Hutchison is doing fairly well
      Mobile call pricing in Australia is set very high, and the 3G phones have a price cap - once you go beyond a certain number of calls you don't have to pay any more charges. There have been serious technical issues with Hutchison's 3G network - lots of dropouts, no one getting any sound for hours in some exchange areas, people being able to receive but not send, etc. Whether this comes down to lack of resources, bad planning, poor implementation or the actual hardware isn't clear yet from the stories on the issue. Perhaps they just oversold the service and are getting serious congestion problems?

      They're also selling it like phone sex - the posters each have a photo of a person not wearing much and the line "Call me" in big letters - funny really.

      • I think you'll find that pretty much all cellular providers had problems similar to the ones you described when they started up. If you look at the 3G forums for Australia, you'll notice that complaints about things like that have dropped significantly as time has gone on.
    • Re:3G a dud? (Score:2, Interesting)

      by Anonymous Coward
      The reason it is successful is that it's based on CDMA, which provides more capacity for voice calls and works for users far from the base station, as opposed to TDMA-based GSM, which has a maximum distance beyond which transmission lag puts you out of sync with the base station.

      3G (the European backed UMTS/Wideband CDMA flavour that Hutchison has rolled out) offers little else that users actually want. At its core, it is still a crummy data network dumped on top of a voice network, and voice calls alway
  • Railroads... (Score:5, Interesting)

    by Pig Hogger ( 10379 ) <.moc.liamg. .ta. .reggoh.gip.> on Saturday January 03, 2004 @08:15PM (#7870016) Journal
    I have always been interested in railroads, and as I watch organizations striving to work over large areas, I cannot fail to notice that they run into essentially the same problems railroads ran into 150 years ago, when they found out that they absolutely had to synchronize their operations over vast territories simply to avoid accidents...

    This is one reason, for example, why standard time zones were adopted by the railroads, and why telegraphy was then used to coordinate operations.

    More than 100 years ago, there were elaborate protocols to ensure that instructions were transmitted reliably and double-checked so that no error of communication occurred.

    Of course, the technology used (telegraph keys and, later, the telephone) was not as sophisticated as today's, but the essential principles (fail-safe operation, redundancy checks, retransmission protocols and whatnot) were there.

    It's always fun to watch young pups straight out of school try to solve a problem that was solved more than a century ago by the high-tech industry of the times: the railroads...

    • Pity,

      then came Thatcherism and the privatisation of key services, which made such a mess of the British railways... costs were cut, people were sacked, and every now and then an accident happens. British railways used to be an example of good, competent service... ask any Briton about them nowadays.

      In Spain, this Bush-boot-licking administration is planning to do the same. After sinking the health care, educational and justice systems, we only need trains crashing every now and then so some relative
    • Re:Railroads... (Score:5, Interesting)

      by Animats ( 122034 ) on Saturday January 03, 2004 @08:53PM (#7870201) Homepage
      There are two pre-electronics technologies that anyone designing reliable systems should understand in some detail - railroad signalling and telephone switching. Both were designed to be more reliable than their components. In the relay era, that was essential, because component reliability was mediocre by modern standards.

      The references you need to read are obscure, but exist. For railroad signalling, the technology was mature by 1930. An understanding of either General Railroad Signal or Union Switch and Signal relay-era technology is useful. Both companies produced good books describing their technologies in 1924. There's also "NXSYS", a simulator down to the relay level of New York City subway signalling technology. The key idea to take away from railroad signalling is what "fail-safe" really means and how it is consistently implemented.

      Telephony in the relay era is best understood by studying its most advanced form, Number 5 Crossbar. There are descriptions of the technology in "A Technical History of the Bell System". #5 Crossbar is a transaction-oriented system, in which units of different types do quick transactions to get the job done. Resources of a given type are interchangeable, so losing one unit just reduces call capacity. Resources include originating registers, markers, senders, trunks, translators, billing punches, and trouble recorders. The switch fabric itself is dumb; all the smarts are in the resources. Resources are never tied up for the duration of a call; they're seized from a pool, used for a fraction of a second to a few seconds, and released. That architecture is extremely reliable; no Bell System central office in the relay era was ever down for more than 30 minutes for any reason other than a natural disaster. The key idea to take away from telephony is how interchangeable resources were used to build up a system.
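      As a toy illustration of that pooling idea (purely illustrative Python, obviously not how a relay office was built): units of one resource type are seized for a fraction of a second and released, so a dead unit only shrinks capacity instead of taking the office down.

          import random

          class ResourcePool:
              # Interchangeable units of one type (markers, senders, trunks, ...)
              def __init__(self, units):
                  self.free = units
              def seize(self):
                  if self.free == 0:
                      return False          # all units busy: the attempt is retried later
                  self.free -= 1
                  return True
              def release(self):
                  self.free += 1
              def fail_unit(self):
                  self.free = max(0, self.free - 1)   # a dead unit just reduces capacity

          markers = ResourcePool(units=4)
          markers.fail_unit()                 # lose one marker to a fault; calls keep completing
          in_use = served = delayed = 0
          for _ in range(10_000):             # crude arrival/departure loop
              if in_use and random.random() < 0.5:
                  markers.release(); in_use -= 1
              if markers.seize():
                  in_use += 1; served += 1
              else:
                  delayed += 1
          print(f"served={served} delayed={delayed}")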

      • There are descriptions of the technology in "A Technical History of the Bell System".

        Can you list an ISBN? I can't come up with any reference to this book newer than 1990 and no exact matches at the online book stores.

    • Speaking of standards, look at how all the carriers are pushing SMS interoperability, while European countries have had it for ages. The US has only just started enabling interoperability. Everyone knew how to do it; they just didn't see a need, until now.
      • Think back to the online world of the early 90's. People with access to the Internet were mostly at universities and the corporations who had gotten onto the net. The online services weren't connected yet. It was not unusual for someone to provide multiple e-mail addresses on multiple service providers because they were separate worlds.

        The analogy with SMS interoperability is not a perfect one. However, there is one important similarity. The end-users want to talk to each other. The network effect be
    • It won't work in IT. (Score:4, Interesting)

      by Malcontent ( 40834 ) on Saturday January 03, 2004 @10:48PM (#7870608)
      What worked for railroads will not work for IT because the players have no interest in playing nice with each other. Each company wants to make their own proprietary version of everything and lock it up with patents and DRM.

      Why do we have umpteen different voice and video codecs? Why do we need so many DVD formats? Why didn't MS just use LDAP and Kerberos instead of rolling their own versions of them?
      • by Pig Hogger ( 10379 ) <.moc.liamg. .ta. .reggoh.gip.> on Sunday January 04, 2004 @12:34AM (#7871068) Journal
        What worked for railroads will not work for IT because the players have no interest in playing nice with each other.
        They will. It's inevitable.

        120 years ago, the railroads didn't work together (a different track gauge for each railroad, incompatible couplers, and in England, at one point, three incompatible brake systems) because they had no interest in playing nice with each other.

        Over time, the railroads that played nice with each other had an advantage over the ones that didn't, and legislation eventually did the rest, so nowadays railroads are 100% compatible with each other (to the point that engines from one road can be remote-controlled by engines from another road).

    • It is also instructive to note that the railroads exist as a sad remnant of their former glory, due to government regulation of their innovation and to competition with a government-run monopoly: roads.

      A network is a network is a network. They all have the same issues no matter how different their methods or payloads. They are all subject to the same failings as well, and bureaucratic regulation will kill all of them.

      Innovation is what keeps networks alive, the ability for new players to enter the market
      • Re:Regulation (Score:5, Interesting)

        by Ironica ( 124657 ) <pixel@boo[ ]ck.org ['ndo' in gap]> on Sunday January 04, 2004 @12:00AM (#7870830) Journal
        Railroads exist as a sad remnant of their former glory, due to government regulation of their innovation and to competition with a government-run monopoly: roads.

        Competition with a free road network did a lot to kill off rail in much of the US, but government regulation didn't kill them... it avoided killing people. If you want to talk about the urban streetcar systems, that's another story, but the "regulation" was what the streetcar operators agreed to in order to maintain a monopoly on a given route.

        Regulated travel and transportation is far safer than deregulated. Take a look at airplane accident statistics pre- and post-Reagan deregulation. It's pretty horrifying (and firing all the experienced air traffic controllers didn't help one bit).

        Innovation is what keeps networks alive; the ability for new players to enter the market without hindrance is what allows the greatest innovation.

        And in many cases, it's only through government regulation that new players can enter those markets unhindered. See Sprint/MCI vs. Ma Bell, for instance. How much better did telecom innovation get in the US when the government stepped in and broke down the monopoly? How much has the Telecom Act of 1996 allowed smaller providers to come in and do what the big phone companies are prohibited from doing unless they open their networks?
  • Exactly... (Score:5, Funny)

    by Eric_Cartman_South_P ( 594330 ) on Saturday January 03, 2004 @08:17PM (#7870026)
    Furthermore, they all failed not because the technical solutions that were developed were inadequate, but because they were not what users wanted.

    Perfectly stated. All I want and care for from an ISP is a good stable connection. That's why I've been with AOL for six montH^@@!0%$*ATDT[NO CARRIER]

    • The no carrier joke is quickly becoming a standard one.

      Beowulf Natalie Grits Russia Overlord$&^#@*&#@*$@#%$@#[NO CARRIER]
      • That joke was old 20 years ago!
      • Re:Exactly... (Score:3, Informative)

        by xigxag ( 167441 )
        The no carrier joke is quickly becoming a standard one.

        You're kidding, right? The "NO CARRIER" joke has been around much longer than the Natalie Portman joke. Hell, it's been around longer than Natalie Portman!
        • It's worse than that. Remember the old Bitware software for connecting to BBSs? If you were reading a message online (for you young'uns, meaning while connected to a BBS, not to the internet) and encountered the words "NO CARRIER" in the message body, this dumb program would hang up the modem!

          Come to think of it, one has to wonder if this "feature" was a bug or a practical joke. :)

  • by UnderAttack ( 311872 ) on Saturday January 03, 2004 @08:17PM (#7870031) Homepage
    One issue is that companies do not tell users what they are actually buying. Users do not want to buy "GSM" or "3G" or "ATM". They want a fast network for a good price. Somehow companies have to tell them just that.

    For example, here in the US 3G services are sold by AT&T as "MWave" and by Sprint as "Vision". Neither vendor actually explains to users why they would want these services.

    On the other hand, Verizon is doing pretty well simply by explaining to users that it provides clearer calls and better coverage. Users don't care that part of the trick is 3G and such.
    • Price discrimination is just another form of censorship. It will be seen as damage and routed around. Those that try will fail.

      On the other hand, Verizon is doing pretty well simply by explaining to users that it provides clearer calls and better coverage. Users don't care that part of the trick is 3G and such.

      Verizon also lies on its coverage maps, marking all non-coverage areas as "Roaming". They also include 2G coverage on their 3G maps.

      Sprint claims they have the fastest network, which they don't.

      You have to watch every company; they are all trying to get customers, just like everyone else. I normally would say look at coverag
  • 3G (Score:5, Insightful)

    by Jeff DeMaagd ( 2015 ) on Saturday January 03, 2004 @08:18PM (#7870036) Homepage Journal
    I'd like 3G; I just won't pay $10 a month for internet service on my 100x100-pixel phone, and I'm not buying screen savers and ringers that expire in 90 or 120 days. I'll pay that much for screen savers and ringers that I can keep forever; $1 to $3 isn't too bad compared to the time it would take to make my own, but not for something that the "owner" thinks should just be a temporary thing.
    • Re:3G (Score:5, Funny)

      by Tim C ( 15259 ) on Saturday January 03, 2004 @09:46PM (#7870354)
      I'm not buying screen savers and ringers that expire in 90 or 120 days

      Your screensavers and tones expire?! I think I begin to see why Americans always seem to be so critical and sceptical of advances in mobile phone technology - expiring downloads, network-locked phones, etc; you guys are being screwed.
      • Now, now, don't take one person's comments and generalize them as applying to ALL stateside carriers. The Nextel downloadable rings and wallpapers don't expire. Now, I was bitten because I changed out my phone before I backed up my custom rings and wallpaper and couldn't redownload (bastards!). Now I know.

        I had them for over a year before I damaged my phone beyond repair.
    • You're with Sprint, aren't you? Not only won't they let me buy an unlimited-time ringer, I can't download wallpaper from my computer!
  • by pummer ( 637413 ) <spam.pumm@org> on Saturday January 03, 2004 @08:19PM (#7870045) Homepage Journal
    My inside connections at Verizon tell me the company is preparing to offer DirecTV in 2005 to get themselves known in the TV business. Then, by 2010 when they roll out FTTP (Fiber To The Premises), they'll be able to offer television over that. Is this what consumers want from a communications company?
    • Uh, how exactly do they plan on offering DirecTV? DirecTV is now owned by NewsCorp, and they don't seem to have any incentive to do any deals with Verizon, or to sell the company they just bought.
      • DirecTV has an issue with penetration in urban areas: owing to difficulties with getting a clear view of the southern sky, many/most potential customers cannot use the service. Once you get into the suburbs and (especially) rural areas, you start seeing DirecTV dishes regularly.

        Thus, I suspect that Verizon (and other telcos) would, rather than roll their own digital television service (with the management headaches that entails), partner with DirecTV to run DTV through their fiber with DTV handling progr

  • by twoslice ( 457793 ) on Saturday January 03, 2004 @08:20PM (#7870049)
    From the article under the heading Open Systems and innovation:

    The power to price discriminate, especially for a monopolist, is like the power of taxation, something that can be used to destroy.

    Sounds a lot like Microsoft to me....

  • The interweb has revolutionized the way we gather information. It has become a cheap, simple, and reliable alternative to traditional systems such as ancient 'libraries'. Once, many years ago, people would actually travel from their homes or workplaces, to these 'libraries' and browse the 'library's' limited selection. Today, there are far fewer information borders. I see it as evolution. At one time, people would even write on pieces of paper and have them delivered to other people. This could often
  • by Klatoo55 ( 726789 ) on Saturday January 03, 2004 @08:22PM (#7870058) Homepage
    The problem with the networks that have failed is that they have not been able to improve on the status quo enough. A technology may be superior to the current standard, but it must overcome the laziness of the general public; they don't want to switch unless there is a clear and overwhelming advantage to be had by doing so. I refer to the rotary engine... The advantage of experience and existing support for a technology can overwhelm all but a clearly superior alternative.
    • by DavidinAla ( 639952 ) on Saturday January 03, 2004 @09:05PM (#7870252)
      What you describe isn't really about the "laziness of the public," but rather the laziness (or stupidity?) of the providers. It's not reasonable to expect the public to investigate the advantages of every new thing in every area and make educated decisions. People can only decide they want something when providers explain WHY they should want it.

      For instance, some automakers push things in their ads which don't explain a benefit to the public. I've never known what "dual overhead cams" are. I've never known why I should care that a car has 24 valves. I've never known why I should care about "independent front and rear suspension." I'm sure there is a benefit to all of these, but fewer and fewer people want to be mechanics in order to buy cars. We want to know about specific benefits, not about lists of technology that we don't really understand. We shouldn't have to learn about auto mechanics in order to decide whether we want a certain feature.

      In the same way, consumers of IT products shouldn't have to know what 3G means, for instance. They just have to be sold on a network's ability to transmit a picture or whatever else it might mean to them. That's not laziness. That's simply reasonable in a world where no one can know everything.
  • ATM is a dud? (Score:3, Informative)

    by cavebear42 ( 734821 ) on Saturday January 03, 2004 @08:22PM (#7870059)
    DSL is carried over ATM, so I don't know if I would call that a "dud". I agree that we were hoping to move all forms of networking to ATM and that didn't pan out, but it is still one of the most widely used forms of networking.
    • Re:ATM is a dud? (Score:3, Informative)

      by sxpert ( 139117 )
      Yeah, ATM is used for DSL only to link the modem to the provider, because the transport operator's network is ATM-based for some reason. ATM in itself is not useful in this context, and it eats up to 10% of the available bandwidth in pure unnecessary overhead.
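      The overhead figure is easy to check: each 53-byte cell carries a 5-byte header, and AAL5 pads every packet up to a whole number of cells (this sketch ignores the extra LLC/SNAP bytes, so real overhead is a bit higher still):

          import math

          CELL, HEADER, PAYLOAD = 53, 5, 48
          print(HEADER / CELL)                      # ~0.094: cell headers alone are ~9.4%

          def atm_bytes_on_wire(pdu_len, aal5_trailer=8):
              # bytes transmitted for one AAL5-encapsulated packet
              cells = math.ceil((pdu_len + aal5_trailer) / PAYLOAD)
              return cells * CELL

          for size in (40, 576, 1500):              # common IP packet sizes
              wire = atm_bytes_on_wire(size)
              print(size, wire, f"{(wire - size) / wire:.0%} overhead")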
  • by skaap ( 681715 ) on Saturday January 03, 2004 @08:25PM (#7870069) Homepage Journal
    Learning from experience, users don't actually seem to know what they really want.

    First they decide that they need something, so it gets done;
    then they decide that isn't what they wanted, and now what was made is not good enough.

    This happens every day in the PC world, where we're forced to deal with end users.
    All of the above technologies were created because there was a demand for them, only for people to realise they weren't sufficient for what they wanted to achieve in the first place.
    • Users may not know what they want, but they won't buy what they don't want.

      Users are also getting more fickle. Most of them have been burned by lock-ins, and are starting to be aware of the concept of "compatibility". They've had Napster and like the concept.

      Look at it this way: you know when they've found themselves a "killer app", right? None of those dud technologies fits the bill.

      Vik :v)
  • by dmiller ( 581 ) <djm.mindrot@org> on Saturday January 03, 2004 @08:26PM (#7870072) Homepage
    QoS is far from being a dud - it is a critical part of any VoIP deployment and is now a part of any substantial core network engineering. QoS brokering between ASs (e.g. RSVP) has been a dud so far, but interdomain VoIP is still pretty young so there hasn't been much demand.
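    (Marking traffic for QoS treatment is the easy part, for what it's worth; the hard part is getting networks to honour the marks. A minimal sketch with Linux-flavoured Python sockets, using EF / DSCP 46, the usual voice class; the address and port are placeholders:)

        import socket

        EF_DSCP = 46                      # "Expedited Forwarding", commonly used for voice
        tos = EF_DSCP << 2                # DSCP occupies the top six bits of the old TOS byte

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)   # IP_TOS availability is platform-dependent
        sock.sendto(b"\x00" * 160, ("192.0.2.10", 5004))         # ~20 ms of G.711 payload, RTP framing omitted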

    What about architecture changes that have worked? IPsec, ECN, CIDR (and the many changes that came from it, e.g. BGP4) and MPLS? It is too easy to focus on the things that failed and ignore the things that silently work.
    • Excellent observation. Phones will ALL be VoIP eventually, so QoS will only become more and more necessary.
    • What has failed is the excessive boasts made about such things.

      ATM is not the answer. VPN is not the answer. There is no "The" answer. Each protocol has its place and use.

      Just because I don't like VPNs and QoS doesn't mean they're wrong, it means that I believe they're overused in situations where they don't solve anything. In this way I agree with the author.

      dmiller, you do have a very important point, to wit:

      It is too easy to focus on the things that failed and ignore the things that silently work.

      Exact
    • by anti-NAT ( 709310 ) on Saturday January 03, 2004 @10:49PM (#7870613) Homepage

      I think you are saying that "QoS" is necessary to VoIP, because if VoIP is flakey, the end users won't use it.

      I then think you are really saying that VoIP is a latency sensitive application, so the network has to be engineered to meet the latency requirements of VoIP.

      The issue then is how you meet those latency requirements?

      There are a couple of ways you can do that :

      • Ensure that there is enough capacity in the network that it very rarely gets congested to the point where VoIP's latency requirements cannot be met.
      • Provision less capacity in the network, and then use various managed QoS mechanisms, such as CBQ, to handle the congestion when it occurs. Congestion in this network will occur much more often than in the network with the additional capacity.

      So which solution do you choose?

      As a rule, simplicity usually wins out. Maybe not in the first instance, but over time, things tend towards simplicity. Simplicity tends to be cheaper, and everybody aims for cheaper. There is always a demand in the market for cheaper, and commonly the only way to achieve cheaper is to go simpler.

      Costs of running a network are broken into two areas - capital expenses (i.e., usually the initial setup costs) and operational expenses (i.e., the ongoing running costs).

      Comparing the above solutions, the one thing the second has that the first doesn't have is a lot of active bandwidth management and measuring. This can be very expensive to do, when you consider the number of devices and links within the network. It can also be very complicated, as it increases the number of protocols running in the network, and the number of people who need to be paid to watch and operate the network. The QoS solution is not the simpler of the two solutions. The second solution has higher operational expenses than the first.

      Comparing the two solutions on capital expenses, I'd suggest the initial setup costs of the first solution would only be on the order of about 20% more than the second, accounted for by the additional bandwidth expenses incurred.

      The question to ask then is: how long will it take for the second solution's 20% lower start-up cost to be absorbed by its higher operational expenses?

      My answer is "not all that long". Which indicates that the "throw bandwidth at it" solution, in the longer term, is both simpler and therefore will be cheaper.
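      To show the shape of that trade-off in numbers (only the 20% capital figure comes from the argument above; the opex figures below are pure assumption):

          # hypothetical yearly figures, just to illustrate the break-even
          overprov_capex, overprov_opex = 1.2e6, 0.10e6   # 20% more up front, little to manage
          qos_capex, qos_opex = 1.0e6, 0.25e6             # cheaper build, costlier to operate

          def total_cost(capex, opex, years):
              return capex + opex * years

          for years in range(1, 6):
              a = total_cost(overprov_capex, overprov_opex, years)
              b = total_cost(qos_capex, qos_opex, years)
              print(years, "year(s):", "over-provisioning" if a < b else "managed QoS", "is cheaper")

      With those made-up numbers the extra capital is paid back inside two years, which is the "not all that long" point.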

      As further evidence, consider the Internet. There is very little QoS management on the Internet, with the exception of a recommended default queuing algorithm - Random Early Detection [isi.edu]. The Internet solution is to "throw bandwidth at it". Yet most of the time the Internet provides good enough "QoS" to let people make voice and video calls across it - certainly good enough to sustain voice calls that are equivalent to or better than mobile or cell voice calls, e.g. GSM. Based on that evidence, you don't need to implement QoS technology inside the network to sustain the latency required for typical VoIP applications.
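      For reference, the drop decision RED makes per packet is about this simple (a sketch of the classic algorithm; the thresholds and maximum probability are tunables with arbitrary values here, and the moving-average and count refinements are left out):

          def red_drop_probability(avg_queue_len, min_th=5, max_th=15, max_p=0.1):
              # no drops below min_th, forced drop above max_th, linear ramp in between
              if avg_queue_len < min_th:
                  return 0.0
              if avg_queue_len >= max_th:
                  return 1.0
              return max_p * (avg_queue_len - min_th) / (max_th - min_th)

          for q in (2, 6, 10, 14, 20):
              print(q, red_drop_probability(q))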

      In the Internet, simplicity has won.

      • Costs of running a network are broken into two areas - capital expenses (i.e., usually the initial setup costs) and operational expenses (i.e., the ongoing running costs).

        Strangely enough, this is also the case in transportation. I haven't read the paper yet, though I definitely plan to... and I'll probably work it in as a reference in one of my assignments this or next quarter.

        Comparing the two solutions using capital expenses, I'd suggest the initial set costs of the first solution would only be in the order o
        • But I'd consider additional bandwidth to be an operational expense. Sure, the capital expenses are also going to be higher, but unless you *own* the bandwidth (i.e. you're a backbone provider) you'll have a monthly lease on it.

          And I do too; I had forgotten about that until I re-read my post after it was posted.

          Still, the only "opex" cost associated with pure bandwidth, other than the expense of the bandwidth itself (which you are fundamentally getting your customers to directly pay for), is to have your accoun

      • In the Internet, simplicity has won.

        You're showing a very Western bias there...

        in New Zealand, for example, there isn't bandwidth to throw at the problem. Upstream providers want around USD $115/month per 64 kbit/s channel of CIR bandwidth. Other expensive markets include parts of Russia, South America, Africa... all places with millions of Internet users. Active bandwidth measuring, traffic caps, and QoS will be with us for many years to come.
        • You're showing a very Western bias there...

          Well, Australia is probably considered a Western country :-)

          in New Zealand, for example, there isn't bandwidth to throw at the problem. Upstream providers want around USD $115/month per 64 kbit/s channel of CIR bandwidth.

          I'd suggest a significant part of the reason for that relatively high cost is the requirement for an end-to-end CIR.

          The efficiency and economics of a packet switched network is based on the fundamental assumption of fair and equal sharing o

  • Wrong... (Score:5, Insightful)

    by Gwala ( 309968 ) <adam AT gwala DOT net> on Saturday January 03, 2004 @08:26PM (#7870073) Homepage
    and say that essentially all the major networking initiatives of the last decade

    Funny, because that's the opposite of what I see today. Networking/telecommunications has never been bigger, and apart from a good portion of the net's underlying protocols, we are constantly surrounded by new networking initiatives that have been blindingly successful. Since `94, the internet (as far as public use goes) has been a pretty successful initiative, let alone a lot of the behind-the-scenes initiatives, like enhancing transoceanic cabling.

    The author is incredibly vague in his paper - it's easy to pop off 10 initiatives that failed big time (like satellite phones), but because you're so used to them, you never notice the ones that have been successful (e.g. CDMA/GSM, and 3G is popular outside the US). I would go so far as to say that most telecommunications/networking initiatives have been successful in the last decade, because as a planet we are growing increasingly dependent on communication.

    -Adam
  • by G4from128k ( 686170 ) on Saturday January 03, 2004 @08:27PM (#7870081)
    It's not that users don't want the telcos' acronym soup of next-gen features; it's that they don't want to pay for those features. Providers are desperately seeking the fabled "killer app" that makes subscribers shell out another $29.95/mo. But consumers are tired of expanding monthly bills, and it doesn't help when companies slather on an encyclopedia of restrictions, fees, and service charges.
    • Most of the features the telcos add are things that are just not well suited to the small form factor of a cellphone (text messaging is fine on the receiving end, but I don't want Sega-thumb from having to push all the damn buttons to enter a reply).

      Whatever happened to the cellphone equivalent of the plain black telephone (occasionally available in hotline-red, ghastly-green, or jesus-what-shade-is-that-gray)?
      • by G4from128k ( 686170 ) on Saturday January 03, 2004 @09:10PM (#7870264)
        Most of the features the telcos add are things that are just not well suited to the small form factor of a cellphone

        Amen! The user interface for cellphones epitomizes the worst possible combination of design compromises -- trying to deliver a cognitively rich array of features in an inscrutably tiny screen space. Customers demand the smallest, lightest possible handset and then are disappointed when the screen is unreadably small, the buttons are unusably close-packed, and the battery life (under real use) is pathetically short. Perhaps when eyeglass screens and virtual keyboards appear, we will be able to enjoy full internet services in a visually large space.
    • by fermion ( 181285 ) on Saturday January 03, 2004 @09:26PM (#7870306) Homepage Journal
      I guess for me it is not the expanding bills, as those come with additional services. It is the long contracts and, at least for cell phones, the difficulty of upgrading.

      For the ISP, the problem is a long list of restrictions. You can do this, can't do that. You have to install this software, and we can redirect you. My favorite is that on SWB, Yahoo will take you to a useless ad page (which you may customize) rather than to yahoo.com, which is actually useful.

      For the cell phone, I would use the new services, but it is so hard to upgrade. I have to buy a new phone, sign up for two years, and maybe even pay an activation fee. What the hell do they think? That after several years of staying with the same company I am going to renew a plan and then quit after a few months? They have to create a reasonable path so that old customers can migrate to their new services.

      • For the cell phone, I would use the new services, but it is so hard to upgrade. I have to buy a new phone, sign up for two years, and maybe even pay an activation fee. What the hell do they think? That after several years of staying with the same company I am going to renew a plan and then quit after a few months? They have to create a reasonable path so that old customers can migrate to their new services.

        Have never had a problem upgrading with Cingular. Every 12-18 months, I wander into a convenient Cingul
    • by rmarll ( 161697 ) on Saturday January 03, 2004 @09:37PM (#7870335) Journal
      At my last cell company, the *killer app* apparently was software to strip caller ID info from the data stream and then charge me to turn off the filter.

      I currently have a mailbox with 3 available message slots... Not that I mind so much, but does the extra space for a few more messages really cost $5.95 a month?

      I feel like someone has crapped in my well so they can sell me bottled water.
      • Damn, your cell provider sucks... mine is nice enough to provide me with caller ID info if I answer a call that has blocking activated. I just have to go to the web billing interface a few hours later and look at who called me. (This worked the last time I got a call from a number with blocking activated; I don't know if it still works.)

  • IP networking destroyed ATM, but not before an entire industry took its chances trying to market this stuff into progressively smaller markets.

    3G also suffers from the "not IP" dilemma, but also because it is not clear exactly what 3G is in a fragmented cell network.

  • gap people (Score:4, Interesting)

    by Anonymous Coward on Saturday January 03, 2004 @08:33PM (#7870108)
    "but because they were not what users wanted."

    *and* the users could get something they did want.

    possibly that doesn't need to be emphasized, but sometimes it does. to a degree the net is flexible and allows a number of ways to do things. if it was an oldschool lockdown situation, any of those failed technologies may have "succeeded". not because they were good solutions, but because they were the only ones available.

    don't like what your local pop40 station plays? tune in somafm or whatever. we didn't have that option before, and a lot more people listened to local just for the 1 in 20 songs they liked.

    the trick for user studies (there's got to be a better term than that, but it's better than "consumer") is to be aware of where people go when they don't use your system.

    ie, how many people don't have a land line telephone? every year a lot more people go to just cel and cable. but most of them are "new" customers fresh out of college, so the telcos don't see them in disconnection stats. there's lots of research holes like that one.

    unemployment figures are full of them. up here there's a guesstimated 200,000+ who left school and then never showed up as employed or on welfare. that's a hella lot of people the gov't doesn't know the whereabouts of, and they don't go into our unemployment figures because they were never listed as working...
    • ie, how many people don't have a land line telephone? every year a lot more people go to just cel and cable. but most of them are "new" customers fresh out of college, so the telcos don't see them in disconnection stats. there's lots of research holes like that one.

      And then there's folks like me at my old apartment, who didn't use a landline, but the phone company still saw me as a customer because they could charge me the regular phone rate on top of my DSL charges. Couldn't figure out where to route my
  • strange (Score:4, Interesting)

    by segment ( 695309 ) <sil@@@politrix...org> on Saturday January 03, 2004 @08:37PM (#7870127) Homepage Journal
    ATM, ... have turned out to be duds. Furthermore, they all failed not because the technical solutions that were developed were inadequate, but because they were not what users wanted.'"

    Define "user". I know this guy is not referring to some average joe fiddling with ATM. Hell, the average joe thinks a cell is where he's going to be if he uses Kazaa too long.

    interface ATM1/0.2 point-to-point
    ! point-to-point ATM subinterface carrying a single PVC
    description PVC to Kungfunix
    ip address 192.168.1.1 255.255.255.252
    no ip directed-broadcast
    ip access-group from_Kun in
    ip access-group to_Kun out
    ! VCD 3 on VPI 0 / VCI 33, AAL5 with LLC/SNAP encapsulation
    atm pvc 3 0 33 aal5snap

    Oh yeah, I'm sure the average user is going to bypass DSL or cable and go straight for the big guns. Sure, run an ISP in their own house... User? Define

    • Sure, run an ISP in their own house... User? Define
      You just did.

      The user here is the ISP.

      Wasn't that hard, was it?
      • THAT I know, but it should have been clarified either in the write-up or by the /. editors posting the story. Perception is a bitch, and the way I see it, the intro (the /. intro anyway) makes it seem as if the average joe blow would know or even care about ATM, QoS, etc. - hell, as if the average person not working at an ISP even knows what CLECs or ILECs are.
  • ATM... (Score:3, Informative)

    by An Anonymous Hero ( 443895 ) on Saturday January 03, 2004 @08:42PM (#7870147)
    the networking industry [has] devoted inordinate efforts to technologies such as ATM and QoS

    The paper seems quite light on the subject ("ATM" only occurs twice)...

    but indeed Marconi sank billions of cash into it. [lightreading.com]

    Not everyone was happy ;-) [stanford.edu]

  • by trybywrench ( 584843 ) on Saturday January 03, 2004 @08:45PM (#7870163)
    ATM may never have reached the desktop, but it is a very good backbone protocol. It was designed from the start for fast switching and has QoS features built into the protocol. Things like Gigabit Ethernet and 10 Gigabit Ethernet are available, but Ethernet was never designed as a WAN protocol and lacks features that ATM has.

    I would say over half of the tier-1 ISPs are running ATM on their backbones. That would make ATM a very successful technology in the Internet.
    • by anti-NAT ( 709310 ) on Saturday January 03, 2004 @11:09PM (#7870686) Homepage

      Cisco aren't [cisco.com]

      Juniper aren't either [juniper.net]

      Neither of them does, because either:

      • they can't build it, as the cell-per-second processing load is too high for current technology, or
      • they can't afford to build it, as customers won't pay for it: it would be too expensive, thanks to the cost of solving the first point.

      They don't even go to OC-48c, or 2.5 Gbit/s, speeds with ATM.

      ATM is being phased out of carrier backbones because it is overly complicated, and therefore overly expensive for what carriers need. Packet Over Sonet/SDH (POS) or Ethernet is taking over.

      Just because a technology is being used doesn't make it successful, in particular when compared to its original design goals. It may only mean that there was no alternative at the time. As soon as something cheaper yet equally or more effective comes along (e.g. POS, 10 Gbps Ethernet), the less effective technology will be replaced and/or avoided.

  • by G4from128k ( 686170 ) on Saturday January 03, 2004 @08:47PM (#7870171)
    The layered approach to internet infrastructure is a great technological solution for decoupling the physical mechanisms for moving data, the protocols for managing data movements, and the high-level applications that rely on that data. Layers create natural zones of standardization and enable any application to run on any network.

    But that technological architecture is a business-model nightmare. All of the costs reside in the lowest, physical layers. All those wires, fibers, amplifiers, and switches cost big bucks. Unfortunately, all of the value lies in the highest, application layers. Users want the application and don't care about the physical infrastructure. A layered architecture guarantees that users don't have to care, because the lower layers are interchangeable and invisible.

    The result is cut-throat price competition among infrastructure service providers (and the associated miles of dark fiber, negative earnings, high debt, and bankruptcies). Meanwhile, the application providers reap the profits while the infrastructure providers can't justify the expense of solving the last mile problem.
    • So eliminate this cut-throat competition with the equivalent of the Open Source movement. Hmm, Open Communications movement?

      Anyway, hows about users get shown how to solve the last-mile problem among themselves with wireless connectivity? Wireless local networks for free...

      Vik :v)
  • Focus Groups (Score:5, Insightful)

    by halo8 ( 445515 ) on Saturday January 03, 2004 @08:50PM (#7870185)
    Reduce the problem to its simplest equation:

    the telcos THOUGHT we wanted it because the focus groups told them "sure, I'd like that on my cell phone." So... instead of getting Joe Sixpack, Bob the dentist, and Suzy the suburban homemaker to do the focus group...
    Get...
    Jim, the out-of-work IT guy (bitter, sarcastic, REALISTIC), who will say "I just want a cell phone that's as cheap and usable as my home phone, but I'll pay a SMALL bit more to have it be portable."

    Because as much as Joe, Bob, and Suzy like these neat gadgets, after the first week they're not going to use them, and they're not going to pay for them.
    • Providers : Ron Popeil :: 3G : Hair in a Can?
    • So what you're saying is, Bob the dentist et al. were making the decisions on whether or not to implement QoS or ATM for IT network design?

      That actually makes a lot of sense.

      "Bill, we need to implement an entirely new standard over our backbone. Let's call up marketing and have them do a focus group of a cross-section of America."

      "But the average Joe doesn't know anything about network architecture!"

      "You don't want to keep your job, do you, Bill?"
    • Re:Focus Groups (Score:4, Interesting)

      by Nessak ( 9218 ) on Saturday January 03, 2004 @10:19PM (#7870499) Homepage
      You are right. I was in a focus group a while ago (1999) for a fiber-to-the-home trial by a large cable operator. In the final group meeting they asked the 10 of us whether they should continue the rollout. I was the most technical of the group.

      Everyone else was very enthusiastic about it. My response was that no one in their right mind would pay more for fiber and a trench in their front yard if the speeds would not be much faster than DSL/cable and the usage was just as restricted. For that reason I thought it was a bad idea. (It really wasn't all that impressive compared to my superfast Cablevision cable ISP.)

      So yes, I agree. Either they really need to do better at focus groups, or they need to ask people who have a clue about tech and know how much various services are really worth. (Fast internet on a small phone, without getting a USB/RJ45 IP connection, and at a $$$ price, is NOT worth it and will fail.)
  • by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Saturday January 03, 2004 @09:15PM (#7870279) Homepage Journal
    • First, pick a technology that has never been seriously deployed. It greatly helps if none of the readers have any practical, real-world experiences with the subject.
    • Second, always compare with an only marginally related industry or discipline. There has to be enough of a connection to convince the readership, but not enough of one to disprove your preconceived notions.
    • Third, always tell the audience that they have The Right Thing. It makes them happy. It also makes the usual sponsors of such work (the ones who run the status quo) very happy. Happy enough to pay you, for example.

    Seriously, multicasting is enabled on most of the major backbones, yet almost none of the major ISPs supply it - even to broadband customers - at any price. UUNET is one of the few that does. Their links aren't cheap, and from all accounts it can be very hard to get multicasting enabled, simply because a good number of their front-line support people don't know anything about it.
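    (For anyone who hasn't played with it: the application side of multicast is trivial; it's inter-ISP routing and billing that never happened. A minimal Python sketch, with an arbitrary group and port, that usually round-trips on a single Linux host via multicast loopback:)

        import socket, struct

        GROUP, PORT = "239.1.2.3", 5007     # 239/8 is the administratively scoped range

        # Receiver: bind the port and ask the kernel to join the group
        recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        recv.bind(("", PORT))
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
        recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        # Sender: just address the datagram to the group; the network does the fan-out
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
        send.sendto(b"hello, group", (GROUP, PORT))

        print(recv.recvfrom(1024))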

    QoS is likewise seriously hindered. Oh, it's used in the field: the transatlantic link between the UK and the US has CBQ (Class-Based Queueing) enabled to maximise the throughput of important traffic, simply because there's so much of it.

    Britain's JANET network has a highly extensive network of web caches. The theory being that one of the biggest loads on the transatlantic link is web traffic, and that the same site is often accessed repeatedly (eg: for University coursework), so that the most efficient solution is to cache everything.

    While not strictly "QoS", caching can reduce access times for a web page at peak time from maybe an hour to down to 15 seconds, whilst also massively reducing the load on the network.

    RSVP is a different case. That is known to not scale well over very large, complex multicast networks. (Too much overhead.) However, it is great for local networks, and I'm sure that it will gradually filter its way into Universities and mid-sized corporations, where videoconferencing is useful but bandwidth issues make it impossible to do without some QoS.

    ATM is used by many xDSL companies, as it is a very efficient way to run a fixed point to a fixed point. To say it's not used is absurd and shows a degree of ignorance. It's also very popular in Europe, where people perhaps put a little more investment into infrastructure.

    Quick note: I'm a little irritated by hearing some American politician label maglev trains as "sexy science fiction" and "stupid". To me, it's part of a worrying trend I'm seeing in all too much of the US, where there is an apparent phobia of making any actual progress in anything. To me, progress is the certain bit. What happens to those who reject it - that's not so certain.

    How does this fit in? There's only so much bandwidth. Sure, Lucent is up to 3 Terabits per second, but with collapsing R&D funds and Lucent in enough of a financial mess, don't expect either a rollout, or a refinement, any time soon.

    That's the absolute upper cap. The real limit is much smaller. Backbone connections are probably not much more than four or five hundred gigabits per second. (That is to say, about the capacity of two or three hundred well-made Pentium IV-based PCs.)

    A relatively small Beowulf cluster could totally saturate a decent chunk of the Internet backbone. Most cluster-based computers, such as the Origin 3000 or the Altix 3000, with sufficient network links, could easily max out the capacity of any part of the Internet, without much effort.
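    (Taking those rough figures at face value, the sums look like this; both numbers are the guesses above, not measurements:)

        backbone_gbps = 450      # "four or five hundred gigabits per second"
        per_host_gbps = 1.5      # what a well-built, well-connected PC of the day might source
        print(backbone_gbps / per_host_gbps, "hosts to saturate it")   # ~300 machines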

    Why isn't the technology used? Because the customer doesn't want it? The customer has never been offered it!! Very, very few customers even know about it!! And ISPs, in particular, are keen to keep it that way. There is much more money to be made from serving people badly, because the customer'll keep paying for improvements and/or support. The ISPs can gouge the more foolish for years


    • Britain's JANET network has a highly extensive network of web caches. The theory being that one of the biggest loads on the transatlantic link is web traffic, and that the same site is often accessed repeatedly (eg: for University coursework), so that the most efficient solution is to cache everything.

      Not any more it doesn't... webcache.ja.net passed away Dec 2002 [ja.net].
      • Cheaper to just throw bandwidth at the problem, and then avoid the operational costs of futzing around with proxy servers, with their inherent disk space, OS patch, proxy software patch, hardware failure, etc. problems.

        As is common in life, so in networking: complexity is the enemy.

    • by Ironica ( 124657 ) <pixel@boo[ ]ck.org ['ndo' in gap]> on Sunday January 04, 2004 @12:38AM (#7871092) Journal
      Second, always compare with an only marginally related industry or discipline. There has to be enough of a connection to convince the readership, but not enough of one to disprove your preconceived notions.

      Ok, I haven't RTFP yet, though I definitely plan to. Maybe he doesn't make the link particularly clear. But the analogies between transportation and telecommunication networks have long fascinated me, since shortly after I abruptly left the tech support field to get a Master's in Transportation Planning. (For example, if you consider that the basic traffic system has to be collision-avoidance based rather than collision-detection based, it explains a little about why transportation networks tend to be relatively inefficient and have pretty high overhead. You can't retransmit a car.)

      Quick note: I'm a little irritated by hearing some American politician label maglev trains as "sexy science fiction" and "stupid". To me, it's part of a worrying trend I'm seeing in all too much of the US, where there is an apparent phobia of making any actual progress in anything. To me, progress is the certain bit. What happens to those who reject it - that's not so certain.

      I did a search for this in the paper and didn't see it, so where is it from? I'm curious which politician that was, and which project they were talking about. Mostly because it sounds like they're quoting my advisor ;-).

      But the fact is, maglev in particular is a somewhat inappropriate technology. Over shorter distances, it's wasted; you spend the entire trip either accelerating or decelerating. Over longer distances, though, it's much more expensive and difficult to provide, not to mention it's hard to find a solid stretch of right-of-way that you can take over preemptively full-time. Maglevs pretty much have to be fully grade-separated, and building an elevated track is about 10x the cost of building it on the ground (generally speaking; I don't know if there are any special considerations with building elevated maglevs).

      It is a fun idea, but from everything I've seen it's not a practical component in our existing transportation infrastructure. It might be eventually, but at the moment, it's got a lot of issues.
  • by anti-NAT ( 709310 ) on Saturday January 03, 2004 @10:00PM (#7870404) Homepage

    Because I have glossed through it (a number of months ago), and none of the comments up until now show any evidence of people actually understanding Prof Odlyzko's arguments.

    The goal of ATM was to replace network stacks such as TCP/IP, as evidenced by all the different QoS options available (VBR, CBR, UBR, etc.) and all the AAL layers (1-5; I've heard an AAL6 might be coming). Switched Virtual Circuits were supposed to be the dominant way connections were set up.

    Why has it failed? There are primarily two reasons:

    • It is primarily, if not always, deployed by telco customers as Permanent Virtual Circuits. The telcos love SVCs, as they can then charge per connection setup. Customers love PVCs, as it is then a fixed-price service. The customer won, so the SVC mechanisms within ATM are somewhat redundant, as are the SVC signaling mechanisms.
    • The dominant application of ATM is to run TCP/IP over it. This is a waste of resources, as TCP provides a lot of the facilities ATM was intended to provide. ATM is incredibly over-engineered for the most common service it is used for, namely link-layer (layer 2), point-to-point, best-effort connections.

    Another technical restriction of ATM comes from the 53-byte cell size. As bit rates increase, the number of cells per second increases, which increases the number of cell headers per second the ATM device has to process, which in turn increases the computational requirements of the device. This is putting huge demands on CPU/ASIC technology, to the point where it is becoming impossible to build an ATM interface that can operate fast enough. For example, you can already get 10 Gbps SONET and Ethernet interfaces, but I'm not aware of any 10 Gbps ATM interfaces. They may exist, but they are "late to market" and very expensive compared to alternative 10 Gbps technologies.

    On a related note, the header-per-second processing issue is also going to be a problem for Ethernet in the near future, which is one of the reasons why jumbo (9000-byte) Ethernet frames are slowly being adopted.
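    (The scaling argument in numbers: header-processing rate is just line rate divided by unit size, so small cells lose badly as speeds climb.)

        def units_per_second(line_rate_gbps, unit_bytes):
            return line_rate_gbps * 1e9 / (unit_bytes * 8)

        for gbps in (2.5, 10):
            print(f"{gbps} Gbps: "
                  f"{units_per_second(gbps, 53):,.0f} ATM cells/s, "
                  f"{units_per_second(gbps, 1500):,.0f} standard Ethernet frames/s, "
                  f"{units_per_second(gbps, 9000):,.0f} jumbo frames/s")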

    Finally, a note to those who think ATM is successful just because it is being used: you really need to compare the original goals of the technology against how it is commonly used. As ATM typically isn't used at all for what it was designed for, it is a design failure, and an over-engineered one at that.

    We all complain about how much our broadband Internet access costs. Unfortunately, ATM has contributed significantly to those high costs, because the vendors who have sold ATM want to recoup their R&D costs for all the features of ATM that are never used, so they charge high prices for ATM technology. There are a few things ATM does that other technologies don't, and there haven't been any alternatives, so we have been stuck with ATM, and stuck paying for its over-engineering.

  • by StandardCell ( 589682 ) on Saturday January 03, 2004 @10:05PM (#7870432)
    About 5 years ago, when I was working for a telecom consortium in Canada, one of the guys who was an expert in ATM told me that most deployments of ATM at the time were in purely synchronous mode, due to the complexity of configuring the equipment to handle various types of traffic. Of course, what you ended up with was a very expensive switch with basically redundant capabilities.

    ATM had a lot of promise but it's really an unnecessary technology relative to the amount of bandwidth available. Tons of fiber still lies dark. SONET switches and Ethernet are basically all that's going in these days for medium and long haul. Even for synchronous traffic, fast asynchronous transport can make the asynchronous nature of the medium transparent.
  • Multicasting (Score:4, Insightful)

    by TheSync ( 5291 ) on Saturday January 03, 2004 @10:59PM (#7870641) Journal
    Multicasting, as a standard service, has never seriously been brought onto the Internet because of the difficulty of billing between ASes. There has never been an effective agreement for this (unlike for unicast flows). You can imagine the trouble of not knowing how many packets leave your network for each one that comes in. Plus, ISPs did not want to cannibalize their existing unicast customers, who might spend less with multicasting.

    Moreover, multicast routing has never reached a level of technical competence, in part because of the billing problem. No one ever really pushed Cisco to make things like PIM Sparse Mode work properly, and as of 1-2 years ago it still barely worked.

    That brings us to legacy equipment, like dial-in routers and DSLAMs, that is not multicast-enabled. Turning on multicast everywhere you would need it to make it a useful service would require something akin to the IPv6 switchover (which also, uh, isn't happening fast).

    Multicast is alive and well in intra-AS niches like satellite and DTV IP datacasting, as well as for special large Internet customers on specific backbones.

  • I've never understood why multicasting isn't used, or even pushed, by the media types. It would be so efficient to have audio or even video broadcasts over multicast that you could do it from home (with a typical broadband connection).

    Many radio stations (for example) have a way to "tune in" online, but it's always unicast, so every slight increase in the user base brings an equal increase in bandwidth.
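    (The scaling problem in numbers; the stream bitrate and listener counts here are hypothetical:)

        stream_kbps = 128                        # one audio stream
        for listeners in (10, 100, 1_000, 10_000):
            unicast_mbps = listeners * stream_kbps / 1000
            multicast_mbps = stream_kbps / 1000  # one copy leaves the source, whatever the audience
            print(f"{listeners:>6} listeners: unicast {unicast_mbps:g} Mbps, multicast {multicast_mbps:g} Mbps")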
  • Can anyone give a count of DSL users in North America?

    Here in Toronto, a while ago most ISPs were disconnected one morning, and tech support said it was an ATM problem. At the backbone level, ISPs take their connections off ATM routers; just look at Cisco's line of ATM routers and switches. There's demand behind all that.

    QoS is used on many routers by many providers to improve VoIP, and by some providers to improve gaming and other low-latency applications.

    This guy who wrote that ATM is a
  • by argoff ( 142580 ) on Sunday January 04, 2004 @01:25AM (#7871347)
    One big distinction between the information age and the railroad/canal/lighthouse examples is that there is a huge difference between information and other commodities. Unlike physical commodities, information can be copied without depriving the originator of that information, and it is extremely easy to change its form and type at any given instant. In addition, it is always independent of the medium. For those reasons alone, the price discrimination that he discussed at length (for content, at least) will not be workable in the information age unless you literally become a police state.
