Does the Internet Need a Major Capacity Upgrade?

wiggles writes "According to the Chicago Tribune, the recent surge of video sites such as YouTube and Google Video is pushing the limits of the Internet's bandwidth, or soon will be. Pieter Poll, chief technology officer at Qwest Communications, says that traffic volumes are growing faster than computing power, meaning that engineers can no longer count on newer, faster computers to keep ahead of their capacity demands. Further, a recent report from Deloitte Consulting raised the possibility that 2007 would see Internet demand exceed capacity. Admittedly, this seems a bit sensationalist, but are we headed for a massive slowdown of the whole Internet?"
This discussion has been archived. No new comments can be posted.

  • The answer is... (Score:3, Insightful)

    by markov_chain ( 202465 ) on Friday February 23, 2007 @07:20PM (#18129458)
    Yes!
  • by KingSkippus ( 799657 ) * on Friday February 23, 2007 @07:20PM (#18129472) Homepage Journal

    Qwest is one of the companies speaking out [com.com] against net neutrality. The CEO even went so far as to call it "really silly [chicagotribune.com]." Could it be that the CTO's comments are politically motivated?

    I, for one, think so.

  • by ADRA ( 37398 ) on Friday February 23, 2007 @07:25PM (#18129526)
    What a well-prepared talking point. I, however, take the other approach.

    If I'm offered 5 Mbit/s from my cable provider, that is an obligation for them to fill my order. If they can't fulfill my expectations, then they shouldn't have offered the service to begin with. If telco XYZ is getting bitten for overselling their lines, that sure as hell isn't my problem as a consumer. What I do with my 5 Mbit/s is my own business. I could use the internet to check my email (10 KB), or surf the web a while (2 MB), or download a YouTube video (200 MB?).

    Why should my internet operator, the guys protected up the ass by common carrier protections, dictate my internet surfing activities?
  • by EllynGeek ( 824747 ) on Friday February 23, 2007 @07:29PM (#18129582)
    Right. Put a bounty on spammers, and in a few weeks' time the problem is solved.
  • by dada21 ( 163177 ) * <adam.dada@gmail.com> on Friday February 23, 2007 @07:29PM (#18129584) Homepage Journal

    If I'm offered 5 Mbit/s from my cable provider, that is an obligation for them to fill my order. If they can't fulfill my expectations, then they shouldn't have offered the service to begin with. If telco XYZ is getting bitten for overselling their lines, that sure as hell isn't my problem as a consumer. What I do with my 5 Mbit/s is my own business. I could use the internet to check my email (10 KB), or surf the web a while (2 MB), or download a YouTube video (200 MB?).


    You're correct -- but they weren't offering 5 Mbit/s at all times (if you read your contract/service agreement). If you wanted 5 Mbit/s guaranteed always, no-holds-barred, you should have asked to modify the contract. They might charge you quite a bit more, though :)

    Why should my internet operator, the guys protected up the ass by common carrier protections, dictate my internet surfing activities?

    I personally am against common carrier protections, but tort law is so screwed up that the elite mercantilists wrote their own law to protect themselves. If tort law made sense (from a free market perspective, let's say), then we wouldn't need common carrier protections.
  • by Doc Ruby ( 173196 ) on Friday February 23, 2007 @07:38PM (#18129696) Homepage Journal
    Studies of actual traffic congestion mitigation techniques have consistently demonstrated that increasing capacity is a much cheaper and more reliable remedy than QoS on backbones, with the extra benefit of the raw capacity itself. A "quality traffic to quality networks" scheme would require a whole extra architectural layer to route through several different Internet links based on realtime route-quality decisions, rather than leveraging the full capacity of the Net to route anywhere at any time based on local congestion conditions or other overall strategies.

    These whines are in fact "special consideration" pressure for the telcos to get "Net Doublecharge". They don't need service tiers, but they can use them to demand distant endpoints pay protection money. If they can get the protection money from the government, their favorite source of subsidy and protection for over a century, they certainly will. Especially if they've already used up the capacity for private accounts (people) to pay them directly, which makes them look less competitive.
  • by zorkmid ( 115464 ) on Friday February 23, 2007 @07:49PM (#18129816)
    Raise prices until the underclass can't afford it. Then they'll drop off and stop clogging my intraweb tubes.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday February 23, 2007 @07:53PM (#18129872)
    "Net Neutrality" is the way to go.

    Once you start instituting "tiers", you take away ANY incentive to increase the available bandwidth.

    Instead, the "innovation" will go towards extracting the most revenue from the smallest pipes. And "innovation" is in quotes because it won't be real innovation. It will be accounting tricks and tier pricing.
  • by adamruck ( 638131 ) on Friday February 23, 2007 @07:56PM (#18129916)
    I don't understand why people keep equating T1s to fast internet. Your office has the equivalent of about 50x dialup connections for about 60 people. It doesn't take a veteran sysadmin to understand why that is a problem.
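
    A quick back-of-the-envelope check of the parent's figures, assuming a 1.544 Mbit/s T1 and 28.8 or 56 kbit/s dialup modems (illustrative numbers only):

        # Back-of-the-envelope check of the T1-vs-dialup claim above.
        # Assumed figures: T1 line rate 1.544 Mbit/s, dialup at 28.8 or 56 kbit/s.
        T1_KBPS = 1544   # nominal T1 rate in kbit/s
        USERS = 60       # office headcount from the parent comment

        for dialup_kbps in (28.8, 56.0):
            print(f"T1 ~= {T1_KBPS / dialup_kbps:.0f} x {dialup_kbps:g} kbit/s modems")
        print(f"shared by {USERS} people, that is ~{T1_KBPS / USERS:.0f} kbit/s each")

    Either way, sixty people behind one T1 get roughly 26 kbit/s each on average, i.e. less than one dialup modem per person, which is the parent's point.
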
  • by CommunistHamster ( 949406 ) <communisthamster@gmail.com> on Friday February 23, 2007 @08:00PM (#18129964)
    Privatising, like the UK Rail industry, whose CEOs spent so little on track maintenance that trains crashed and people died?
  • by strangedays ( 129383 ) on Friday February 23, 2007 @08:00PM (#18129966)
    This appears to be yet another astroturfing attempt.
    See the Slashdot post "How Would You Deal With A Global Bandwidth Crisis?", posted by Zonk on Thursday February 15, @06:19PM:
    http://ask.slashdot.org/article.pl?sid=07/02/15/1825230/ [slashdot.org]
    Clearly we are going to be treated to this bogus bandwidth crisis bullshit approximately once a week, probably to collect some supportive comments for the need for more control/cost/etc.
    Please don't feed the trolls, or help them lay more Astroturf for Net Neutrality.
  • by mr_mischief ( 456295 ) on Friday February 23, 2007 @08:05PM (#18130000) Journal
    When he's concerned about bandwidth demand outstripping computing power, that's not a fiber count problem. That's a router problem. He's saying the routers aren't gaining capacity to route packets as quickly as the number of packets to route is rising.

    No amount of extra fiber will help if the routers can't keep up. Setting up more routers in the same interconnect centers will bring either bigger routing tables or higher latencies depending on how they're connected to one another. Setting up more interconnects which are more geographically dispersed and which route more directly between the endpoints will help, but that's a very expensive option. New buildings in new areas with new fiber running to them and new employees to man them simply cannot be made into a small investment.

    Mesh networks, P2P content distribution, caching at the local level, multicasting, and some other technical measures can all theoretically help, too. So can spreading out the data centers of the big media providers and routing their traffic more directly that way, but again centralization of data centers saves a lot of money.

    If demand is really growing too fast to handle (I have my doubts about the sky actually falling), one of the best ways to assure that bandwidth demands are met is to slow the increase in demand. The quickest and easiest way to slow the increase in demand for something is to raise its price. That's an ugly thought for all of us on the broadband price war gravy train, but it's basic economics. Let's hope for a technological solution (or a group of them) instead, if it's really a problem and not just hype to hit our wallets in the first place.
  • by hedwards ( 940851 ) on Friday February 23, 2007 @08:26PM (#18130184)
    You do raise some interesting ideas, but wouldn't it make more sense just to fix the spam problem?

    Right now spam takes up an inexcusably large portion of the internet's capacity with meaningless, useless, annoying tripe. (Well, to be fair, spam taking up any portion of the capacity is appalling.)

    The main issue I have with giving up net neutrality is the question of who gets to decide what is high priority and what is low priority. If I got some say in how it was divvied up, that would be much less annoying than companies getting to take bribes for special service.

    Oftentimes I set things to download overnight while I am sleeping, and as far as I am concerned it makes no difference whether a file takes 2 hours to download or a full 8. Being able to have those files download more slowly, and the ones I want while I am actually up and at the computer more quickly, would be quite useful.
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday February 23, 2007 @08:28PM (#18130200) Homepage Journal
    Everything - from the replication of databases or file storage to the distribution of high-end video - is delivered on a point-to-point basis. This simply does not scale. It is inefficient, it is expensive, it is wasteful, it is.... so mindbogglingly primitive. Point-to-point was great when you had telephone switchboard operators. In the days of scalable reliable multicast (SRM) and anycast, when the Internet backbone runs multicast protocols natively (there has been no "mbone" per se since 1996), it is so unnecessary.

    Even if you limit yourself to replicating the distribution points with a protocol such as SRM or NORM (NACK-oriented Reliable Multicast), you eliminate huge chunks of totally unnecessary Internet traffic. However, there is no reason to limit yourself like that. The latency involved over long-distance Internet connections must exceed the interval time between requests for high-demand video content, so by simply waiting a little and collecting a batch of requests, you can transmit to the whole lot in a single go. No need for a PtP connection to each.

    Then there is the fact that video is not the only information that eats bandwidth for breakfast. Static content - PDFs and other large documents - also devour any surplus capacity. So all an ISP needs to do is run a copy of Squid on each incoming line. How hard is that? It takes - what - all of 10 minutes to configure securely and fire up. You then forget about it.

    There are people who would argue that it would impact banner ad revenue. Well, banner ad revenue is typically per-click, not per-view, so that is really a weak argument. Then there is the problem of copyright, as the cache is keeping a copy of text, images, etc. Well, so is your browser. Until a major website sues a Firefox user for copyright infringement for leaving their local cache enabled, it would seem that this is more paranoia than practical. As writers have noted for many centuries, we need fear nothing but fear itself. It is our fear of these solutions that is creating our existing problems. It seems the height of stupidity to create real problems for the sole purpose of avoiding problems that might be entirely fictional. "Better the devil you know" is a weak excuse when the devil you know is unacceptable.
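
    To make the Squid suggestion above concrete, here is a toy sketch (Python, for illustration only) of what an ISP-side cache does: serve repeat requests for the same object locally instead of re-fetching it upstream. A real deployment would use Squid itself and honour Cache-Control/expiry headers; this ignores all of that, plus HTTPS, error handling, and cache eviction.

        #!/usr/bin/env python3
        """Toy caching HTTP forward proxy.  Expects proxy-style requests
        (absolute URLs in the request line), keeps everything in memory,
        and never expires anything -- purely illustrative."""

        import urllib.request
        from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

        CACHE = {}  # url -> (status, content_type, body)

        class CachingProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                url = self.path                # absolute URL when used as a proxy
                hit = url in CACHE
                if not hit:
                    with urllib.request.urlopen(url) as upstream:
                        CACHE[url] = (upstream.status,
                                      upstream.headers.get("Content-Type",
                                                           "application/octet-stream"),
                                      upstream.read())
                status, ctype, body = CACHE[url]
                self.send_response(status)
                self.send_header("Content-Type", ctype)
                self.send_header("Content-Length", str(len(body)))
                self.send_header("X-Cache", "HIT" if hit else "MISS")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            # 3128 is Squid's conventional port.
            ThreadingHTTPServer(("127.0.0.1", 3128), CachingProxy).serve_forever()

    Point a browser's HTTP proxy setting at 127.0.0.1:3128 and the second fetch of any object is served from memory without touching the upstream line.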

  • by IBitOBear ( 410965 ) on Friday February 23, 2007 @08:37PM (#18130260) Homepage Journal
    If the net needs anything it needs Quality of Service routing at the customer access point.

    NO, I am _NOT_ talking about a non-neutral net. I think net neutrality is mandatory.

    What I am talking about is an end to TCP Retransmits in our lifetime. (Ok, that is overstating it a little 8-).

    At my home I put together a QOS gateway that throttled my _outgoing_ packets to the speed of my cable modem _and_ made sure that if it had to drop something it would _not_ drop outgoing "mostly ACK" packets. (e.g. outgoing TCP packets with little or no data payload get best delivery.)

    This action lowered my incoming packet count and got my effective download speed to closely approach the bandwidth tier I am paying for. This was a 3x to 4x improvement in throughput. This, when combined with the lower packet count, implies that previously I was wasting 2 out of every 3 packets due to unnecessary "recovery" (read: useless retransmits). (A rough sketch of this kind of shaping appears after this comment.)

    That cost must, then, have been paid at every step along every trip etc.

    Then I turned on HTTP pipelining on all the (Firefox) browsers in my house (etc).

    I suspect that if we could do something about the waste ratio, and generally speed up each transaction by squelching the noise and getting better effective throughput, "the intertubes" would be a lot clearer and the capacity wouldn't fall apart so readily.

    [aside]
    If we could (pie in the sky) get the porn and ewe-tube traffic onto the mbone with smart caching near the client so that each person didn't have to get each part "the whole way" from the provider even though everybody else is watching the same top-ten clips of the day, we could make more progress. This falls apart because it messes up the charging model for porn and advertising, and ewe-tube gawkers couldn't possibly stand waiting 2 to 6 seconds to join a synchronized swarm...
    [/aside]

    This is very like the whole thing where a guy with half-flat tires is standing around complaining about his gas mileage.

    Collision detect style arbitration falls apart when you saturate the link, and cable providers screwed themselves with the way most cable modems fail to buffer outgoing traffic. Penny wise and pound foolish of them to make the default devices so cheap. Iterate as necessary for businesses and ISPs with their underpowered gateway machines terminating their PPPoE (etc).

    As for the part where that failure to schedule packets at the most basic level will be turned into "demonstrable evidence" for the "need" for non-neutral networks... That will be the "WMDs" of the net neutrality war.
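
    For anyone who wants to reproduce the gateway described above, here is a rough sketch in the spirit of the well-known wondershaper script. It assumes a Linux gateway with iproute2's tc, run as root; the interface name and rates are placeholders to replace with your own, and the "mostly ACK" match is the usual small-TCP-packet heuristic rather than anything exact.

        #!/usr/bin/env python3
        """Cap outbound traffic just below the modem's upstream rate so the
        queue lives in the gateway (not the modem), and give small TCP
        packets (mostly ACKs) first claim on the link.  Sketch only."""

        import subprocess

        DEV = "eth0"        # WAN-facing interface (placeholder)
        UPLINK_KBIT = 400   # set a little below the real upstream rate

        def tc(*args):
            subprocess.run(["tc", *args], check=True)

        # Start clean; ignore the error if no qdisc is installed yet.
        subprocess.run(["tc", "qdisc", "del", "dev", DEV, "root"], check=False)

        # Root HTB qdisc; unclassified traffic falls into class 1:20.
        tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "20")
        tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
           "htb", "rate", f"{UPLINK_KBIT}kbit")

        # 1:10 = high priority (small/ACK packets), 1:20 = everything else.
        tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10",
           "htb", "rate", f"{UPLINK_KBIT // 4}kbit", "ceil", f"{UPLINK_KBIT}kbit", "prio", "0")
        tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20",
           "htb", "rate", f"{UPLINK_KBIT * 3 // 4}kbit", "ceil", f"{UPLINK_KBIT}kbit", "prio", "1")

        # TCP packets with a total IP length under 64 bytes -> high-priority class.
        tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip", "prio", "10",
           "u32", "match", "ip", "protocol", "6", "0xff",
           "match", "u16", "0x0000", "0xffc0", "at", "2",
           "flowid", "1:10")
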
  • by shog9 ( 154858 ) on Friday February 23, 2007 @08:38PM (#18130266)
    Funny, i coulda sworn that email was gonna bring the 'Net to a grinding halt. And then IM was gonna. And then MP3 downloads were gonna. And then file sharing was gonna.

    But hey, far be it from me to question the wisdom of our corporate overlords... if video sites are gonna destroy the 'Net, then We Must Pass Laws!!!1!
  • by rekoil ( 168689 ) on Friday February 23, 2007 @08:39PM (#18130274)
    Let me try to explain the problem from the ISP side (pardon me while I don Les Asbestos Underpantz)...

    What we're seeing is the hazards of changing oversubscription ratios. I'm sure this term is familiar to many of you, but for those who aren't: it's the concept that ISPs know that, on average, each customer will only use a certain portion of the bandwidth that's made available to them. As such, an ISP doesn't have to provision one megabit of backbone capacity for each megabit it sells to a consumer; it might only have to provision at a 1:10 or 1:50 ratio of backbone capacity to sold capacity. There's no way that an ISP could sell bandwidth at a reasonable price without oversubscribing at some point. Without oversubscription your 1.5Mbit DSL line would be $500 a month, not $50. Those in the business know I'm not exaggerating here, given the cost of service provider network equipment and fiber capacity (which continues to fall, but not nearly fast enough).

    What's causing the problem is that those ratios are changing, such that (for example) the 1:10 ratio an ISP built its business model around is now 1:5, thanks to YouTube, iTunes, Bittorrent, WoW, etc, not to mention 0wned machines spewing spam and DoS traffic, which is overtaxing its infrastructure and increasing costs. The ISP can't get away with raising prices, and obviously has to remain profitable, so congestion is the inevitable result.

    Some ISPs, most notably Comcast, have gotten quite aggressive at disconnecting what they perceive as "abusive" customers whose usage is higher than the norm. This is absolutely the wrong way to go about this problem, but the feeling of being caught between the proverbial rock and a hard place is understandable. ISPs simply can't stay in business if customers actually use all the bandwidth they're given, and if we all built our networks such that everyone could, no consumer would pay for it.

    I think it was 1996 when AOL introduced its unlimited dialup service (before that, AOL billed dialup connection time by the hour). Because the user that before was spending an average of, say, 30 minutes a day online was now spending 3 hours a day connected, and because AOL woefully misforecast those ratios, it became next to impossible to connect to AOL for quite a while until they caught up with modem provisioning (that's when I got rid of my AOL account and got my first real ISP account, yay!). Looks like everything old is new again.
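
    To put rough numbers on that squeeze, a toy calculation (every figure below is an illustrative assumption, not a real ISP's):

        # Toy model of the oversubscription argument above.  All numbers are
        # illustrative assumptions, not real ISP figures.
        SOLD_MBIT = 1.5           # advertised downstream per subscriber (Mbit/s)
        TRANSIT_PER_MBIT = 30.0   # assumed monthly backbone/transit cost per Mbit/s ($)

        for contention in (50, 10, 5):   # 1:50, 1:10, 1:5 ratios
            provisioned_mbit = SOLD_MBIT / contention
            cost = provisioned_mbit * TRANSIT_PER_MBIT
            print(f"1:{contention:>2} -> {provisioned_mbit * 1000:4.0f} kbit/s provisioned "
                  f"per subscriber, ~${cost:.2f}/month in transit alone")

    With those assumed numbers, the backbone cost per subscriber doubles when the effective ratio slips from 1:10 to 1:5, which is exactly the squeeze described above.
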
  • by timeOday ( 582209 ) on Friday February 23, 2007 @08:45PM (#18130320)

    Nationalizing it like the UK's health care, where they recently discovered that doctors were letting old people die rather than get treated because the doctors did better financially treating younger, healthier patients? No thanks.
    Check your facts. Life expectancy is higher in the UK than here, and they pay far, far less. The free market is better for most things, but health care - judging by the statistics - doesn't seem to be one of them.

    Anyways, it sounds horrible to "ration" health care, but the fact is, nothing is infinite. Even if all of us did nothing but work in the health care industry, or pay 100% of our salaries to it, there would still be a limit. Where do you draw that line? Do you really want to give a $60K quadruple bypass to an 80 year old with a 50% chance of dying on the table and a 90% chance of living less than 2 years? The US insistence on rationing health care according to ability to pay instead of the expected benefit (measured in expected lifespan and quality of life) is exactly our problem.

    Oh and by the way, I would not support nationalizing the Internet. These little "oh no! We're going to run out of bandwidth!" articles come out two or three times per year ever since I can remember. I would support either regulation or deregulation (I'd have to look into it!) to make the residential market more competitive.

  • by troll -1 ( 956834 ) on Friday February 23, 2007 @08:49PM (#18130346)
    The solution is to nix net-neutrality legislation and allow the consumer and the producer to come to terms on need versus price.

    You're not by any chance a lobbyist for the non-net-neut advocates, are you?

    Net Neutrality is not a business concept, it's based on a theory in computer science that the most efficient and cheapest networks are those based on the principle that protocol operations (i.e. TCP/IP) should occur at the end-points of the network.

    See "End-to-end arguments in system design" by Jerome H. Saltzer, David P. Reed, and David D. Clark: http://web.mit.edu/Saltzer/www/publications/endtoe nd/endtoend.pdf [mit.edu]

    This principle was used by DARPA when it worked on Internet design and it's the reason TCP/IP communications have experienced massive growth.

    It's a principle supported by almost everyone except the backbone owners. Verizon's CEO has said many times that the pipes belong to him and if you're going to make a profit off them then he wants a cut too (referring to Google, Yahoo, Microsoft, et al who oppose Net Neut).

    Compare with mobile carriers, who don't follow the principle of network neutrality: you pay more for cell phones, which use a zero-cost medium (the airwaves), than you do for the Internet, which uses an expensive wired system. And where every service is separately billable. Is that the network of the future you're suggesting is better for us?

    I wouldn't be so opposed to your argument if I could be convinced the telcos weren't running a gnarly scheme to make my ISP bill look like my cell phone bill.

    The net has been so successful perhaps because it was designed and developed in large part, not by private companies, but by scientists and engineers in an academic environment who were mostly employed by the government. Profit was not their goal. You want to give it over to the business folks because you think they can do a better job if they're involved in how the Internet continues to evolve?

    Be careful what you wish for. I'm not necessarily disagreeing with you. But what worries me the most about non net neut is that we're going to be giving companies a large hand in determining not just how the Internet will look in a few years, but ultimately a lot of power to influence how it develops further down the road. I say we tread carefully.
  • by nick.ian.k ( 987094 ) on Friday February 23, 2007 @09:09PM (#18130478)

    The thing is that overselling is selling more than you've (well, the ISP, not *you*) got, and it shouldn't be happening in the first place. Playing this game of "we'll see if we can upgrade as real live usage increases and if we don't, no big deal" is a joke. It's about as stupid as (put your reduction safety hats on, it may not map well!) floating checks: sure, it's pretty likely that the check from person A is going to clear in time for the check you wrote to person B to go through alright, despite the present lack of sufficient funds in your own account. You've thought it out, played it out: person B probably won't deposit your check for a week, and you deposited the one from person A 3 days before physically handing B your check. But when something goes wrong in the in-between of financial institutions, you get bitten in the ass with fees, and deservedly so: you should've been more careful about what you were doing in the first place.

    I guess the part that doesn't stop the ISPs from overselling bandwidth is that they don't face any real-life consequences most of the time. Most customers are content to sit back and take it. The ones who aren't often lack the choice of other providers (and that's discounting non-broadband options... suggesting switching to dial-up is a curmudgeonly suggestion for argumentative types to make and little else) or the capital to start their own ISP. Haven't seen any legal action taken yet, so... yeah. What incentive *do* they have to only sell what they've got, or to maintain capacity, when they're often the only game in a particular part of town?

  • by TubeSteak ( 669689 ) on Friday February 23, 2007 @09:32PM (#18130622) Journal

    Compare with mobile carriers, who don't follow the principle of network neutrality: you pay more for cell phones, which use a zero-cost medium (the airwaves), than you do for the Internet, which uses an expensive wired system.
    1. Cell providers have to pay for a license to use those airwaves
    2. You don't think those cell phone towers put themselves up do you?
  • by mysticalreaper ( 93971 ) on Friday February 23, 2007 @09:45PM (#18130712)
    Thank you for the excellent explanation of how things work from the ISP side. However, I think you have betrayed the ISP side by citing what you did when AOL started to suck:
    because AOL woefully misforecast those ratios, it became next to impossible to connect to AOL for quite a while until they caught up with modem provisioning (that's when I got rid of my AOL account and got my first real ISP account, yay!). Looks like everything old is new again.
    (emphasis mine)

    This is exactly the point. If Qwest is starting to offer shitty service, it's preposterous to blame the customer and then talk as if the internet itself is breaking because of these damnable users.

    If company A is not capable of delivering a good product, I'm sure company B will have something you'd be more interested in.

    Following this logic, you come back to the situation that many of us in Canada and the US are faced with: lack of competitive choices for an ISP, resulting, in this case, in shitty service being blamed on the customer. I hope enormously that no one in government is buying this tortured logic and making policy decisions based on it.
  • by Anonymous Coward on Friday February 23, 2007 @11:15PM (#18131250)
    The problem with that argument is that broadband is not new. It's been around a while. In my area, around 10 years. I first got a cable modem in 1997 and had around 512 kbit/s down / 256 kbit/s up at that time. Now, I get 5 Mbit/s / 384 kbit/s with the same company (although it's changed hands 3 times).

    Yes, they've upgraded their network, but if they still can't keep up with what they offer now, customer demand, and maxed-out network capacities, it's their OWN DAMN FAULT. THEY UNDERESTIMATED THEIR BUSINESS!!!!!!!

    I worked at an ISP for 3 years, and we oversold at a ratio of 4:1. Granted, our target base was mostly rural, but once users got a taste of broadband, it became the norm/standard for them. This was 4 years ago.

    Between Federal funding initiatives, the little taxes you see on your bill for infrastructure upgrades, and outright profits the ISP's are making, if they STILL can't cover the current and upcoming service demands, TOUGH SHIT FOR THEM!

    I lost all sympathies for the ISP's around the time I saw firsthand how the big V* did business. I'm not convinced Internet access shouldn't be a public utility at this point.

    If you haven't noticed, The Corp's are fucking it up. What you are seeing now is only the beginning of what mayhem is coming. I'm hoping Google turns up all that Dark Fiber and pulls a fast one on all the big carriers.

    /I'm only a little bitter
  • by Anonymous Coward on Friday February 23, 2007 @11:56PM (#18131476)
    I'm posting AC because there's no point in registering to post once a year.

    It's not screwing your customers if that's how the business works. There's nothing crooked about it.

    1 broadband customer does not pay for a T1, but 15 do. Your break even point for all your other overhead depends on what that overhead is, and how many customers you have in total; the more customers you have the less it costs per customer to have them. I worked for a small ISP for 7 years and we always explained that we were selling T1 speeds shared with other customers on that circuit. They could expect a peak of 1.5mb, but it wasn't guaranteed - and that was accepted as normal by the customer. That's how broadband service is sold, and I don't know of any providers that guarantee max speeds for residential services. If you want a commercial SLA, you're going to pay out your nose for it.

    Tul
  • by damiangerous ( 218679 ) <1ndt7174ekq80001@sneakemail.com> on Saturday February 24, 2007 @12:47AM (#18131688)
    I could've been misinformed, but I believe most if not all ISPs are considered common carriers.

    They are not. Only the telecommunications network itself is a common carrier. The DSL services layered on top of it (as well as cable, fiber, etc) are considered information services.

    If they weren't, every single illegal download that the RIAA could sue for could also be enacted against the ISP, since they 'allowed' the infringement to take place, or some such.

    That would be true, were not other legislation in place. The Communications Decency Act says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider", which protects ISPs from any libel torts committed by their users.

    The DMCA offers the "safe harbor" provision to ISPs, protecting them from liability for copyright violations by their users as long as they follow the notice and takedown procedures for complaints.

  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Saturday February 24, 2007 @01:16AM (#18131844) Homepage Journal
    Oh, I dunno. Multicast information delivery is significant enough that the entire Internet backbone now deploys PIMv2 multicast routing as standard. You think they did this out of the kindness of their hearts? Hardly. You think they did this for revenue? As most ISPs receive multicast but never forward it to customers, I'd love to see what this revenue would be.

    Multicast information delivery systems have been tried? When? Where? Show me a single ISP that has delivered multicast to residential customers and a single non-trivial example of reliable multicast for information delivery. I consider the mbone utils of VIC and VAT to be trivial examples. Show me something real. A distributed database engine, a replication server that pushes a filesystem to all mirror sites simultaneously, an MPI implementation that uses reliable multicast for collective operations, a multicast SMTP server for sending mail to multiple destinations in a single transfer. If you can't show me the apps and you can't show me real-world residential users, then you can't possibly claim multicast information systems have ever been tried.

    (Oh, and I don't consider multicast mosaic to be a meaningful example, either. It was never widely distributed, by the time anyone knew it even existed the world had switched to other browsers, and those who developed it did not exactly go out of their way to tell anyone it was there or what it did. Even if they had, the big thing of the time was Netscape - Mosaic was essentially dead. Why would anyone use a browser that was inferior in virtually every respect, simply to use a distribution model that their ISP would have blocked anyway?)

  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Saturday February 24, 2007 @01:39AM (#18131942) Homepage Journal
    Tier 1 backbone providers (AT&T, Sprint, and so on) all have PIMv2 enabled on the backbones, probably in sparse mode, along with most tier 2 backbone providers. Dense mode (which is the same model DVMRP used) doesn't make any sense for the sorts of software people are actually using, so most people ignore it. The third PIMv2 method (bi-directional multicast) would however make the most sense if you were to have P2P applications make use of multicast.

    Modern IP multicast - i.e., ignoring DVMRP and MOSPF - isn't too bad when it comes to distribution. Pruning and grafting of branches is now more-or-less solid ground. IGMPv3/MLDv2 support authentication extensions, source-specific multicast and other cool stuff. IPv6 multicast is intended to be interoperable with InfiniBand multicast (though as the number of users of either is extremely limited, this one is of limited value for right now). As a raw transport, it's not shabby. Now, to get anything useful in the way of information systems done, you need to layer a reliable multicast transport on top of that. SRM and NORM are the big two players in this arena, with open source implementations of both. FLUTE was a potential big player for file distribution, but there is a patent on multicast file sharing which has shut down most FLUTE implementations.

    (Possibly one reason P2P software doesn't include a multicast option is precisely because of that patent. It's bad enough being vilified by the **IA lawyertroids, but being sued for patent infringement as well would likely cause some serious problems. It's not through a lack of means. A lack of programmers familiar with multicast might also be a problem.)
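
    To make the "raw transport" concrete, here is a minimal IPv4 UDP multicast sender and receiver. The group and port are arbitrary picks from the administratively scoped 239.0.0.0/8 range, it assumes a multicast-capable network (a LAN will do), and anything like reliability -- the SRM/NORM layer discussed above -- would have to sit on top.

        #!/usr/bin/env python3
        """Minimal IPv4 UDP multicast demo: one send reaches every joined
        receiver.  Unreliable by design; SRM/NORM-style recovery is a
        separate layer.  Run with "recv" on one or more hosts, then run
        the sender on another."""

        import socket
        import struct
        import sys

        GROUP, PORT = "239.1.2.3", 5007   # arbitrary admin-scoped group

        def sender(message: bytes) -> None:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            # TTL 1 keeps the datagram on the local segment.
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            sock.sendto(message, (GROUP, PORT))

        def receiver() -> None:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", PORT))
            # Joining the group makes the kernel emit an IGMP membership report.
            mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                               socket.inet_aton("0.0.0.0"))
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            while True:
                data, addr = sock.recvfrom(65535)
                print(f"{addr[0]}: {data!r}")

        if __name__ == "__main__":
            if sys.argv[1:] == ["recv"]:
                receiver()
            else:
                sender(b"one send, many receivers")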

  • by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Saturday February 24, 2007 @07:48AM (#18133168) Homepage

    The problem with caching is that most of the sites out there use dynamic content.
    But most data is still static (images, stylesheets, external javascript, video, pdfs, etc.) including most of the stuff that's actually high bandwidth, and all of that can be cached (both in your browser and in proxies) quite nicely. That's the way HTTP is designed to work. So, looking at the /. front page, I see that the dynamic content is around 19kB while the cacheable content is more than that in total.

    As a web designer, you should take care to ensure that as much of the content is static as possible (you can do a lot with stylesheets and JavaScript to make the page appear more dynamic than it really is), with the added benefit of making everything look much slicker for repeat visitors. (Indeed, as far as I can see the real problem is the convoluted mess caused by ad servers that seem to insist on trying to defeat both caching and asynchronous page loading, but that's another tale...)
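
    A small sketch of that design point using only Python's standard library; the suffix list and max-age values are illustrative choices, not recommendations:

        #!/usr/bin/env python3
        """Serve files while explicitly marking static assets as cacheable,
        so browsers and intermediate proxies can reuse them instead of
        re-fetching.  Sketch only; values are illustrative."""

        from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

        # Anything with these suffixes is treated as static and long-lived.
        STATIC_SUFFIXES = (".css", ".js", ".png", ".jpg", ".gif", ".pdf", ".mp4")

        class CacheFriendlyHandler(SimpleHTTPRequestHandler):
            def end_headers(self):
                if self.path.endswith(STATIC_SUFFIXES):
                    # Static assets: any cache may hold them for a day.
                    self.send_header("Cache-Control", "public, max-age=86400")
                else:
                    # Everything else: caches must revalidate before reuse.
                    self.send_header("Cache-Control", "no-cache")
                super().end_headers()

        if __name__ == "__main__":
            ThreadingHTTPServer(("", 8080), CacheFriendlyHandler).serve_forever()
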
  • by asuffield ( 111848 ) <asuffield@suffields.me.uk> on Saturday February 24, 2007 @09:33AM (#18133546)

    There's no way that an ISP could sell bandwidth at a reasonable price without oversubscribing at some point.


    I disagree. ISPs are perfectly capable of selling bandwidth at a reasonable price. The problem is that they are currently selling unreasonable packages, where the price is far too low for the advertised capacity. That's not because they've set their prices too low, but because they wanted to advertise larger capacity - so they just made the numbers bigger by lying about them. The result is an ISP that just sucks - cutting costs everywhere they can, which gives us a "tech support" line that goes to an Indian callcenter where you get told that they aren't going to do anything about your problem, and a program of banning all the people who try to use the capacity they were sold.

    Without oversubscription your 1.5Mbit DSL line would be $500 a month, not $50. Those in the business know I'm not exaggerating here, given the cost of service provider network equipment and fiber capacity (which continues to fall, but not nearly fast enough).


    Closer to $200-$300, although it depends exactly what you're buying. If you use a leased line instead of DSL (higher reliability but higher operating costs) and include a real SLA, that'll easily push the price up over $500. The bandwidth itself is only about half that (although it's hard to find somebody who will sell you real bandwidth without a business-type SLA).

    ISPs simply can't stay in business if customers actually use all the bandwidth they're given, and if we all built our networks such that everyone could, no consumer would pay for it.


    So the solution is for the ISP to sell the product that they're actually providing. Don't sell "8Mbit DSL". Sell a service that's clearly labelled as "512kbit DSL, plus up to 10Gb per month of 8Mbit bursts", or whatever numbers you can arrange. People would be happy to buy a service like that. They aren't so happy about buying a service that's "8Mbit bursts but when we decide you're using too much we'll just cut you off and keep your money".

    Make real, sensible rules about what people can transfer, that aren't overcommitted. Implement them via traffic shaping and stick to them. Problem solved.
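
    Taking the package described above at face value (a 512 kbit/s floor plus 10 GB of 8 Mbit bursts per month, reading "Gb" here as gigabytes), a quick check of what it actually commits the ISP to:

        # What a "512 kbit/s + 10 GB/month of bursts" package commits the ISP
        # to on average.  Figures are the example numbers above; 1 GB is taken
        # as 10**9 bytes for simplicity.
        SECONDS_PER_MONTH = 30 * 24 * 3600
        BASELINE_KBIT = 512
        BURST_QUOTA_BYTES = 10 * 10**9

        burst_avg_kbit = BURST_QUOTA_BYTES * 8 / 1000 / SECONDS_PER_MONTH
        print(f"10 GB of bursts spread over a month ~= {burst_avg_kbit:.0f} kbit/s average")
        print(f"worst-case sustained load per subscriber ~= "
              f"{BASELINE_KBIT + burst_avg_kbit:.0f} kbit/s")

    Even a subscriber who uses the whole allowance averages only about 540 kbit/s sustained, which is a number an ISP can honestly provision for and shape around, rather than hoping nobody uses the "8Mbit" they were sold.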
