'Selfish Routing' Slows the Internet

Smaz writes "Science Blog reports that a little love could speed things up on the Net: 'Self-interest can deplete a common resource. It seems this also applies to the Internet and other computer networks, which are slowed by those who hurry the most. Fortunately, say computer scientists at Cornell University in Ithaca, N.Y., there is a limit to how bad the slowdown can get. And after developing tools to measure how much the performance of a particular network suffers, they say, the way to get improved performance on the Internet is the same as the way to maintain air and water quality: altruism helps.'"
  • by Anonymous Coward on Friday February 14, 2003 @04:08PM (#5305256)
    If you've got to rely on the goodwill of others to get by, you're totally screwed.
    • by AntiNorm ( 155641 ) on Friday February 14, 2003 @04:50PM (#5305616)
      Really? [savekaryn.com]
    • What you're really relying on is the selfishness of the hardware. If the hardware itself did something different, then the people who bought it would live with that. A case in point is Ethernet devices.

      Each of these has an altruistic collision avoidance method: when a collision happens, stop sending and wait a random amount of time before sending again. A selfish ethernet device would always immediately attempt to send under the assumption that the other device would be waiting, and it would get to go first. But of course, that's very bad for the network, so it's not done.

      The fact that we've got selfish routers is not a sign that they're selfish, per se, but that selfish routing was somewhere near the most effective means of communication that anyone could think of at the time these devices were invented.
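
As a rough illustration of the backoff behaviour described above, here is a minimal Python sketch. The slot time, cap, and structure are illustrative assumptions, not the actual 802.3 algorithm or any real driver's code.

```python
import random

def backoff_delay(collisions, slot_time=51.2e-6, cap=10):
    # Truncated binary exponential backoff, roughly in the spirit of classic
    # Ethernet: after the n-th collision, wait a random number of slot times
    # drawn from 0 .. 2^n - 1 (exponent capped). Values are illustrative.
    k = min(collisions, cap)
    return random.randint(0, 2 ** k - 1) * slot_time

# A "selfish" device would return 0 here and retransmit immediately every
# time; if every device did that, colliding senders would never separate
# in time and the shared segment would stay jammed.
for n in range(1, 5):
    print(f"after collision {n}: wait {backoff_delay(n) * 1e6:.1f} microseconds")
```
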
    • My reaction to this story is 'well, duh'. If it costs you nothing extra, of course you will choose a route for your traffic without considering the effects on others. It's like the classic analogy of a train seat which has room enough for two: a third passenger coming along is likely to squeeze onto the end of the seat, squashing the other two, because *for him* it is better than standing.

      The answer is for routing costs to accurately reflect the contention for resources. If a particular route gets crowded, charge slightly more for sending packets down it. Routers can negotiate in real time to set prices and find the cheapest route for their data. Quality of service guarantees can be implemented by purchasing bandwidth (or options to use bandwidth) in advance.

      You won't eliminate selfish behaviour; the way to keep things running smoothly is to make sure people pay for the cost of the resources they use (and no more). Then it will be in their own interests to consider the effect on others, and to avoid overusing already congested routes.
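
A tiny sketch of the pricing idea in the comment above, under the assumption of a made-up price function that rises as a link fills: routers simply pick the cheapest total route, so crowded links price themselves out.

```python
def link_price(utilization, base=1.0):
    # Hypothetical congestion price: cheap when idle, rising sharply as the
    # link approaches saturation (utilization is a fraction in [0, 1)).
    return base / max(1e-6, 1.0 - utilization)

def cheapest_route(routes):
    # routes: route name -> list of link utilizations along that route.
    return min(routes, key=lambda r: sum(link_price(u) for u in routes[r]))

routes = {"direct": [0.9], "detour": [0.4, 0.5]}
print(cheapest_route(routes))  # 'detour': two half-empty hops beat one crowded hop
```
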
  • by Qinopio ( 602437 ) on Friday February 14, 2003 @04:09PM (#5305257) Homepage
    another resource depleting mechanism known as "Slashdotting"
  • by creative_name ( 459764 ) <paulsNO@SPAMou.edu> on Friday February 14, 2003 @04:15PM (#5305312)
    ...that this isn't the guys at Cornell just trying to capture more bandwidth for themselves? Seems like a good idea to me.

    Me: Don't use as much bandwidth and everyone will go faster!
    World: Hey! That seems like a good idea.
    Me: (aside) Mwuhahahaha
  • by Florian Weimer ( 88405 ) <fw@deneb.enyo.de> on Friday February 14, 2003 @04:15PM (#5305316) Homepage
    Research networks are particularly good at this sport: for example, the German Research Network (DFN) has a strict anti-peering policy. GÉANT, a European research network, appears to accept only links to a single research network operator in each member country.

    Of course, the most important aspect of such networks is that the bandwidth they offer is helpful in Dick Size Wars at supercomputing conferences, so it's not a terrible loss for the Internet at large.
  • by hackwrench ( 573697 ) <hackwrench@hotmail.com> on Friday February 14, 2003 @04:15PM (#5305317) Homepage Journal
    Somehow the only conclusion I could draw from the article is that using the network slows it down. Right, so could somebody explain what the article is trying to say?
    • Re:I'm confused (Score:3, Interesting)

      How does this get modded insightful?

      The article is not saying that using the Internet slows it down (that much is obvious). It's saying that with different routing techniques and the same level of use, it could go faster. So, using it slows it down, but so does building a bad infrastructure for it.
      • Re:I'm confused (Score:5, Insightful)

        by zackbar ( 649913 ) on Friday February 14, 2003 @04:46PM (#5305590)
        I'm confused too.

        The article states that computers test the routes, and pick the least congested route to use. Thus, it slows everything down for everyone.

        What should it do? Pick the MOST congested route?

        Either I'm just confused, the author didn't understand the situation correctly, or the whole thing is BS.
        • by Pharmboy ( 216950 )
          I'm confused too. The article states that computers test the routes, and pick the least congested route to use. Thus, it slows everything down for everyone.

          What should it do? Pick the MOST congested route? Either I'm just confused, the author didn't understand the situation correctly, or the whole thing is BS.


          Thank you. I was sitting there reading it, thinking "this sounds like a load of shit. Either I am a blithering idiot (entirely possible) or this article is worthless."

          It sounds like a purely academic exercise, underwritten by someone with too much money, that has NO practical application.

          Glad to know I'm not alone in the confusion.
          • Re:I'm confused too! (Score:5, Informative)

            by Zork the Almighty ( 599344 ) on Friday February 14, 2003 @06:08PM (#5306110) Journal
            Actually, just think about it from a larger perspective. There are many independent routers out there, and they each decide how to route their traffic simultaneously. Now, imagine that the least congested path (#1) is only slightly better than other potential paths. The problem is that _everyone makes the same decision_ and chooses this one path for their traffic. The result is congestion on the one popular path everyone chose. If that was the only effect, nobody would really care - but here's the catch: at the next time interval the same thing is likely to happen again! Everyone chooses #2 on the list, since #1 is now toast. They all crash into each other.

            At the same time, I don't see how their suggestion really helps things that much. If everyone uses the same deterministic algorithm to choose a path, this sort of mass collision is still likely to happen (although it should happen less often with more complicated algorithms). I think that overall network performance would benefit from a little randomness in the routing algorithms. I'm not a CS, so there is probably already a random component that I don't know about.
            • No need for a deterministic algorithm:

              1. Choose the nth least congested path, where n is the statistical "sweet spot".
              2. Add randomness, so that your actual choices oscillate around n.
              3. Include logic to keep your random deviations from getting too far from n, where "too far" is "unacceptably sucky".
              4. For great justice!!!
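
A rough sketch of that recipe; the sweet spot, jitter bound, and congestion numbers are all invented for illustration.

```python
import random

def pick_path(paths, sweet_spot=1, max_deviation=1):
    # paths: list of (name, congestion) pairs. Rank them by congestion, then
    # choose an index that oscillates randomly around `sweet_spot`, clamped so
    # the deviation never gets "unacceptably sucky".
    ranked = sorted(paths, key=lambda p: p[1])
    index = sweet_spot + random.randint(-max_deviation, max_deviation)
    index = max(0, min(len(ranked) - 1, index))
    return ranked[index][0]

paths = [("A", 0.2), ("B", 0.3), ("C", 0.7)]
print(pick_path(paths))  # different routers land on A, B, or C instead of all piling onto A
```
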

          • Re:I'm confused too! (Score:4, Interesting)

            by JWSmythe ( 446288 ) <jwsmythe@noSPam.jwsmythe.com> on Friday February 14, 2003 @09:00PM (#5306894) Homepage Journal
            Good.. I was thinking we're idiots too.. Either that, or I need to start routing all my traffic down the most congested pipes to watch it go faster. :)

            I've worked with our provider a bit with routing. We have mirrored servers in colos around the country. If one city is congested, we move traffic *AWAY* from the congestion. Usually our traffic makes a difference for everyone else. I can have 500Mb/s added or removed from any given city within an hour, without flinching. Of course, before I do something like that, I put in a call first.. "Hey, can this city take 500Mb/s right now?"

            We wrote a program to take traceroutes from all the cities to various points, and plot them all onto a big network map, with ping times and the like.. We know which cities, peerings, or lines have problems at a glance..

            http://www.voyeurweb.com/network.12.23.2002-11h.png [voyeurweb.com]
            Warning: This picture is *BIG*. It's of our networks in Los Angeles, New York, Tampa, between each other, and to all of the root nameservers.. It makes a rather extensive map that is 11580x2669. It won't fit on your screen. Save it, and take it into your favorite image editing software to view it..

            This map is a little old (Dec 23, 2002 at 11am), but it gives a good impression of what the networks immediately around our servers looked like, and how they interact with each other.. Shitty networks stand out in red.. I definitely wouldn't want to send MORE of my traffic that way. Sometimes we don't have a choice. If your ISP uses a shitty provider, we have to send it that way..

        • You're confused. The problem comes when every computer picks its route based purely on which one the usual congestion metrics make look best.

          As you doubtless know, you can't simply say "This link will carry this much traffic and it is currently carrying this much, so I should start sending traffic down this other link." While that can be a useful rule to apply to one's fair queueing system, it is hardly complete. One usually tests for link saturation not only by measuring packet flow but also by observing responses to pings and similar traffic (obviously, ping by itself is no metric at all, because various routers along the way may be heavily loaded and prioritizing other traffic higher), and thus you get a certain view of the network which is fairly useful for sending traffic where it needs to go.

          However, no matter what you do to a link, unless it has multiple channels you can only send one piece of information down the pipe at a time. The higher-speed the link, the less this matters, of course, but it is still an issue: when a link is heavily loaded, low-bandwidth traffic might not suffer a decrease in throughput, but latency often increases.

          One relatively simple technique to increase the quality of the internet experience by making it more responsive would be to send traffic which does not require a low-latency link (most especially file transfers; I'm mostly talking P2P here, which is probably where a majority of bandwidth is going now, though I've done no research) over a fairly highly loaded link which nonetheless has enough free bandwidth to carry the traffic. Packets might sit in the queue longer, but what's four or five seconds of lag when you're talking about a file transfer that takes at least minutes and sometimes days?

          Meanwhile traditionally interactive traffic like a shell (via ssh or telnet or what have you) or a game might not take up that much bandwidth but it's more important that it be transferred over low-latency links. 50ms makes the difference between natural reactions and missing everyone in many first person shooters, for example.
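
A toy rendering of that split, with invented links and port lists (nothing here reflects a real router configuration): interactive traffic keeps the low-latency link, bulk transfers go wherever there is headroom.

```python
# Hypothetical links and port classifications, purely to illustrate the
# latency-vs-throughput split described above.
LINKS = [
    {"name": "fast",   "latency_ms": 10,  "free_mbps": 5},
    {"name": "loaded", "latency_ms": 120, "free_mbps": 40},
]
INTERACTIVE_PORTS = {22, 23, 27015}   # ssh, telnet, a game server (illustrative)

def choose_link(dst_port, needed_mbps):
    candidates = [l for l in LINKS if l["free_mbps"] >= needed_mbps]
    if dst_port in INTERACTIVE_PORTS:
        return min(candidates, key=lambda l: l["latency_ms"])
    # Bulk transfers tolerate a few extra seconds of queueing, so keep them
    # off the low-latency link and use whatever has the most headroom.
    return max(candidates, key=lambda l: l["free_mbps"])

print(choose_link(22, 0.1)["name"])    # ssh session -> 'fast'
print(choose_link(6881, 2.0)["name"])  # P2P transfer -> 'loaded'
```
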

    • Re:I'm confused (Score:5, Informative)

      by Randolpho ( 628485 ) on Friday February 14, 2003 @04:36PM (#5305501) Homepage Journal
      The author is not trying to say "those bastards over at network X are selfish and they're slowing us down" or anything like that. He's trying to point out that a fundamental aspect of internet routing, the concept of forwarding a packet via the fastest route to the destination, can in many cases slow down performance if the fastest route gets congested.

      Frankly, I'm surprised this is considered news; I learned it in a networking course on my way to a CS degree. I can only assume that the author is trying to push a new algorithm for congestion control and is using "selfish routing" as a marketing scheme. The thing is, I can't seem to find the suggested reprieve.

      Ahh, here it is:
      Roughgarden has a suggestion that wouldn't be expensive to implement. Before deciding which way to send information, he says, routers should consider not only which route seems the least congested, but also should take into account the effect that adding its own new messages will have on the route it has chosen. That would be, he says, "just a bit altruistic" in that some routers would end up choosing routes that were not necessarily the fastest, but the average time for all users would decrease.
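
One way to read that suggestion as a toy sketch: the "selfish" chooser compares current delays, while the "just a bit altruistic" one evaluates each route as it would be after its own traffic lands. The two latency functions are the standard textbook (Pigou-style) example, not anything taken from the paper.

```python
# Route A is a short link whose delay grows with the flow on it; route B is a
# longer path with essentially constant delay. Numbers are illustrative.
ROUTES = {
    "A": lambda flow: flow,   # delay proportional to load
    "B": lambda flow: 1.0,    # fixed delay, plenty of spare capacity
}
CURRENT_FLOW = {"A": 0.8, "B": 0.0}

def selfish_choice():
    # Pick whichever route looks fastest right now, ignoring my own impact.
    return min(ROUTES, key=lambda r: ROUTES[r](CURRENT_FLOW[r]))

def slightly_altruistic_choice(my_flow):
    # Pick the route that will be fastest once my own flow has been added.
    return min(ROUTES, key=lambda r: ROUTES[r](CURRENT_FLOW[r] + my_flow))

print(selfish_choice())                  # 'A' (0.8 looks better than 1.0)
print(slightly_altruistic_choice(0.4))   # 'B' (my 0.4 would push A to 1.2)
```
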
      • Re:I'm confused (Score:5, Informative)

        by Zeinfeld ( 263942 ) on Friday February 14, 2003 @05:54PM (#5306047) Homepage
        Frankly, I'm surprised this is considered news; I learned it in a networking course on my way to a CS degree. I can only assume that the author is trying to push a new algorithm for congestion control and is using "selfish routing" as a marketing scheme.

        Yep, if you have three available routes A, B, C with bandwidths 10, 4 and 1, the selfish router would send all traffic through route A in every case. An altruistic router would make a random choice between A, B, C such that A was chosen 2/3rds of the time and B, C were chosen in proportion 4:1 the rest of the time (a quick sketch of this split follows at the end of this comment).

        You can then tweak further by using traffic information. If the system is unloaded then use A all the time.

        The same observation applies to the problem where traffic alternates between two routes rather than dividing itself evenly. That is elementary control theory. The problem is that the response has too high a gain factor - in effect the gain factor is infinite - so instead of being shared across the routes the system goes into oscillation.

        There is an obvious solution to that problem: you measure the change in the traffic statistics and moderate your response to changes.

        This is the sort of thing the IETF should be doing. Unfortunately the IETF has been out to lunch for many years now. They have failed to respond with any urgency to most of the issues facing the net. Most of the participants seem to use it as a substitute social life rather than as a place to get things done.
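
The proportional 10:4:1 split mentioned a few paragraphs up, as a quick sketch (route names and weights come straight from that example; everything else is illustrative):

```python
import random
from collections import Counter

ROUTES = {"A": 10, "B": 4, "C": 1}   # relative bandwidths from the example above

def pick_route():
    names = list(ROUTES)
    return random.choices(names, weights=[ROUTES[n] for n in names])[0]

# Over many flows the traffic splits roughly 10:4:1 instead of all landing on A.
print(Counter(pick_route() for _ in range(15000)))
```
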

        • Re:I'm confused (Score:2, Insightful)

          by obnoximoron ( 572734 )
          The same observation applies to the problem where traffic alternates between two routes rather than dividing itself evenly. That is elementary control theory. The problem is that the response has too high a gain factor - in effect the gain factor is infinite - so instead of being shared across the routes the system goes into oscillation.

          The control theory you refer to is for linear systems with feedback. Routing is a highly nonlinear system and the analysis is much harder. However, the basic concept of high gain leading to oscillation is roughly the same. Multicommodity flow theory researchers have been working on flow allocation and stability for years. Recently this work has caught the attention of the MPLS crowd in the IETF.

          You are right about IETF inertia though. I have given up on any bold progressive thinking in IETF for now with their attitudes such as "If it basically works, why fix it?"
          • The control theory you refer to is for linear systems with feedback. Routing is a highly nonlinear system and the analysis is much harder. However, the basic concept of high gain leading to oscillation is roughly the same. Multicommodity flow theory researchers have been working on flow allocation and stability for years. Recently this work has caught the attention of the MPLS crowd in the IETF.

            Actually it is even easier to send the system into oscillation if you have a non-linear system. But explaining the ins and outs in a slashdot post...

            The frustrating thing is that an organization led by academics has so little academic input. The only academic habit they observe is lethargy.

            You are right about IETF inertia though. I have given up on any bold progressive thinking in IETF for now with their attitudes such as "If it basically works, why fix it?"

            The IETF attitude is to resist ideas as long as they can, then when someone loses patience and goes ahead without them complain about commercial interests having no respect for the standards process. In fact there is plenty of respect for standards processes, but not much for the specific IETF process.

            You can tell how backward the institution is simply by looking at an RFC, they look like a Nigerian letter asking for assistance with a money transfer.

            The whole NOMCOM system is a sick joke. The obvious purpose of the mechanism is to make sure that the IAB and IESG are accountable to no-one. A cabal of 15 people meeting in secret with no basic accountability is much less likely to upset the status quo with a dramatic move than a democratic system of elections. Democratic elections would mean a real risk of a change of power. The NOMCOM system means that bad ADs and ADs who have blatantly abused their power can continue to be reappointed, there is no way for the membership as a whole to reject them.

            If the IETF had balls they would have pushed through a program for completion of the IPSEC, DNSSEC and IPv6 protocols five years ago and then moved ahead with a strategy for deployment. Today they would be aggressively considering how to address the problem of spam. As it is, DNSSEC is undeployable in the large zones, and the IESG has been content to let the WG chair filibuster fixes. IPSEC is a mess; the ISAKMP/IKE scheme is a dog's breakfast, a scheme to negotiate the scheme for negotiation. The only thing that has happened to IPv6 is that we are closer to running out of address space and everyone is moving to NAT regardless of the IETF's opinion of them.

            At the same time groups like OASIS have been completing standards in 18 months...

    • I think the important issue is the degradation behavior as more traffic is added. Essentially, if you add 1% more traffic to a particular network route, it may overload it so much that the average speed drops by much more than 1% (say 10%). Thus, everyone would go faster if they didn't insist on trying (and therefore overloading) the fastest route.

      To use a droll real-world example, consider the following:
      1. You get given a new project on top of your two existing projects.
      2. You spend some time on the new project, attend some meetings, etc.
      3. You fall behind on the second project because you're doing some work on the third.
      4. The second project manager schedules a meeting to find out why you're behind schedule.
      5. You catch up on the second project, but fall behind on #1 and #3
      6. The first and third project managers now schedule meetings to find out why you're late.
      7. Soon, you have no time to do any work because you're busy explaining why you're so busy!

      The article is suggesting that if some people used alternate routes, then the primary routes wouldn't get overloaded and therefore would get more done.

  • by scotay ( 195240 ) on Friday February 14, 2003 @04:17PM (#5305339)
    Eventually the system will settle to an equilibrium that mathematicians call a Nash flow, which will be, on the average, slower than the ideal.

    If nobody goes for the blond, we all get laid. Somebody go tell the routers.
    • Re:Thanks Ron Howard (Score:4, Interesting)

      by PetWolverine ( 638111 ) on Friday February 14, 2003 @04:27PM (#5305429) Journal
      And just as in A Beautiful Mind Nash's friends suspected him of coming up with a plan that would allow him to get the blonde, people will suspect Cornell of coming up with this plan to get more bandwidth. Also just as in the movie (/book, which I haven't read yet) that's probably not the case...or let's hope it's not.
      • > (/book, which I haven't read yet)

        It's been over a year and it was a long, long book (but well worth the read), but I don't recall the "blonde strategy" ever being in those pages. The blunt "let's just go have sex" scene was in there however. :p

        I'd attribute it to dressing up the screenplay.
      • I have 60 GB or so of MP3s [dhs.org] that you need.

        Funny you should be posting to an article on selfish routing....

    • Re:Thanks Ron Howard (Score:4, Informative)

      by JoeBuck ( 7947 ) on Friday February 14, 2003 @05:57PM (#5306069) Homepage

      As has been pointed out [variagate.com], the movie got the Nash equilibrium principle entirely wrong. Since a cheater can benefit by going for the blonde at the last minute, after the other guys have already committed themselves, it's not an equilibrium.

  • Another article (Score:4, Informative)

    by aengblom ( 123492 ) on Friday February 14, 2003 @04:17PM (#5305341) Homepage
    Cnet's got a write up [com.com] on this too.
  • by jj_johny ( 626460 ) on Friday February 14, 2003 @04:18PM (#5305349)
    Attention Science Blog - We have things called protocols and such. Please use specific terms.

    Maybe I am just a lowly CCNP, but is this all just a theory paper about the problems with "routing", or were there specifics about current routing protocols that should be updated or current practices that should be changed? Please help; everyone knows that the current routing could be better, but theory stuff just does not help us much.

    • by orthogonal ( 588627 ) on Friday February 14, 2003 @04:29PM (#5305450) Journal
      Maybe I am just a lowly CCNP

      No, it's no longer "CCNP"; the Soviet Socialists are now calling themselves the nationalists, the Union is gone, and the country's just named Russia.

      But thanks, "Comrade". We'll open a dossier on you anyway.
    • Yes it was a very light article but if you had followed the links, well not so much links as URLs, you would have found this. http://www.cs.cornell.edu/timr/ and this http://www.cs.cornell.edu/People/eva/eva.html

      Which, although I have not even started to read it yet, appears to have more than enough detail to satisfy almost anyone. Have fun. I know I will. :)
    • by ninewands ( 105734 ) on Friday February 14, 2003 @05:56PM (#5306064)
      It's not so much a theory piece as it is a GROSS misunderstanding, on the author's part, of the design principles behind the internet in the first place.

      The internet isn't, wasn't, never has been intended to be a high-performance network. It IS and was intended to be a high-availability network (read ... capable of surviving a nuclear attack) ...

      One of the ways the 'net accomplishes this is by detecting damage and routing around it by trying to always use the "lowest cost" route from point A to point B. A significant factor in "lowest cost" is least time.

      By always seeking to use the fastest (or most efficient by some other measure than time) route from point A to point B, performance levels on the 'net get leveled out and really fat pipes draw lots of traffic, while "pin-holes" don't.

      For the life of me I can't understand just what the hell the author's complaint is ... it reads, to me, like he's complaining because the defined routing protocols work THE WAY THEY'RE SUPPOSED TO. Well, DUHH!

      Just my US$0.02
  • defaultuser@kaazalite.com

    'Cool! One meg left! .......huh? WTF?!!? Disconnected?! You dirty SOB!..FUUuuuuuuuccCCCKKK!'

    • Please send this article to defaultuser@kaazalite.com

      Don't worry, I read it. But I'm still not changing.

      Although I had a sad revelation last night, after saying to a friend "Yeah, hopefully when I get back from work tomorrow night those music videos will be finished." I then realized the interest of my Friday night is determined by whether or not my Utada Hikaru MTV Unplugged (JP) videos will be completed.

      I then realized I must get out more. Good thing my girlfriend gets back on Thursday...
  • by juanfe ( 466699 ) on Friday February 14, 2003 @04:20PM (#5305367) Homepage
    Given the growth of walled gardens, of email attacks, of DoS, of more traffic channeled through fewer fat pipes owned by fewer public/non-profit organizations, is this still possible?
  • by captainboogerhead ( 228216 ) on Friday February 14, 2003 @04:22PM (#5305379) Journal

    It seems the researchers at Pinko U finally realize that routers have always been programmed using the enlightened-self-interest model of bandwidth utilization. It's time to shut them down.

    The last thing we need is lazy, welfare-dependent internet backbones sitting around all day watching The Dukes of Hazzard and drinking Lite Beer. If the altruists win this round, AOL transforms from the gated suburb of the internet into the "Projects". Aren't we taxed enough?

  • by floppy ears ( 470810 ) on Friday February 14, 2003 @04:22PM (#5305385) Homepage
    It basically says that network congestion is like congestion on highways. If everybody is trying to change lanes all the time, they might save a bit of time for themselves, but on the whole they will slow down traffic for everybody.

    In theory, this may slow down the internet by something like 50-60% at most. Nobody really knows how well the Internet conforms to the mathematical model, however. Any benefit from trying to fix the problem might be outweighed by the cost of implementing a solution.
    • by Smidge204 ( 605297 ) on Friday February 14, 2003 @04:43PM (#5305564) Journal
      It's funny you should mention how internet traffic is like highway traffic.

      There's an amusing, and somewhat interesting, article written up on how you can single-handedly relieve traffic congestion here:

      http://www.amasci.com/amateur/traffic/traffic1.html

      It's basically the same idea: If a few people just give a little slack, everybody wins out.
      =Smidge=
      • I've known about this phenomenon for quite a while (it makes perfect logical sense), having commuted in rush hour traffic for many years.

        Scary how much thought this guy has put into it... but I bet the transportation department would fund him to do a study.
      • I read this link at work about two weeks ago. On the way home, during a typical NH-I93 traffic jam, traffic was moving around 10 mph. I stayed in the right lane and kept about 8 car lengths of space ahead of me. The guy behind me was FURIOUS (cause he wasn't fast? ..ouch). He was flashing his high beams (still light out) and flicking his car horn in 1/8 second bursts. It was extremely entertaining at 10 mph. Everyone around me was going the same exact speed, I just had about enough space for 8 cars in front of me. Granted, for about 8 miles I had to actually stop about 4 times, which when you have a standard transmission is very nice to avoid. Anyways, the guy put his blinker on after about fifteen minutes or so of this, and of course the person in the left lane let him right in... he went in for the psych-out bumper clip on my car, and then he slowed Waaaaaay down, flipped me off in his rear-view mirror (ow, you got me, pal!) and then he TORE OFF. For 8 car lengths. I swear to God, with the bell curve as steep as it is, I can't figure out why I'm not a greedy politician making ubermillion dollars a year. Oh wait - I am not old enough yet. Buwahahahaaaa
    • It's those damn rubbernecking packets! You'd think they'd never seen a collision before. Move along, nothing to see here...
  • DL managers (Score:5, Funny)

    by zephc ( 225327 ) on Friday February 14, 2003 @04:23PM (#5305394)
    this is why I hate download managers, especially ones that create dozens of connections to download segments of large files.

    My flatmate does that with eDonkey on TWO of his computers and squashed our bandwidth for a week (downloading pr0n of course)
    • Re:DL managers (Score:3, Interesting)

      by bgarrett ( 6193 )
      Download managers aren't really the problem, except when you don't have the bandwidth to sustain parallel downloading. If you have enough pipe, parallel DLs ARE faster than a single serial download.

      The problem the paper is describing is at the larger "router's eye view" scale, where multiple routes out to the rest of the network exist, and where only the fastest route is used - the other two pipelines are basically starved of packets.
  • by Jugalator ( 259273 ) on Friday February 14, 2003 @04:25PM (#5305415) Journal
    I suppose this is the heart of the article, btw:

    "if routers choose the route that looks the least congested, they are doing selfish routing. As soon as that route clogs up, the routers change their strategies and choose other, previously neglected routes. Eventually the system will settle to an equilibrium that mathematicians call a Nash flow, which will be, on the average, slower than the ideal. "

    Now, hasn't there been a problem, a long time ago in early Internet history, where parts of the internet entered a state of self-oscillation? I recall this was later fixed, to a point, by revising some protocols.

    I remember it basically as the problem where lots of routers (for some reason) started sending packets to one path, it got very congested, all routers switched to another, congested, etc.

    I only have very vague memories since I took the course where I heard it some years ago. Perhaps I'm only full of bullshit. :-)
    • It's a continuing process, really. There are several protocols and algorithms to reduce congestion (not the least of which is a higher-level protocol known as TCP ;)). I think what's going on here is the author is trying to push a new method to control congestion at the network (routing) level.
  • by guacamolefoo ( 577448 ) on Friday February 14, 2003 @04:28PM (#5305437) Homepage Journal
    If the "altruistic" behavior results in a better network, then isn't there a benefit for the altruistic behavior? Doesn't it cease being altruistic if there is a benefit? Aaggh! I'm caught in another Prisoner's Dilemma with an uncertain number of moves!

    Where's my Dawkins? (That's twice today I've thought of him).

    GF.
    • This is a very interesting statement.

      I regularly tell people that they should be polite, if not for altruistic reasons then for selfish reasons. People have accused me of being extraordinarily polite: "Yes sir, No sir, Yes Ma'am, No Ma'am," etc. I regularly get treated better by the checker who is half my age, or the fast food cashier, or any number of other individuals; far better than the person in front of me or behind me in line.

      So, am I polite for altruistic reasons, or for selfish reasons? It's an interesting question. I would have to say for both reasons.

      • I have found that being polite has several wonderful benefits:
        • the other party tends to reciprocate, making the interaction civil and therefore less stressful
        • often, the other party is so pleased to encounter someone who extends a bit of courtesy that he or she will perform services not usually rendered (oh, here, let me take care of that for you -- it's no problem -- have a great day!)
        • it shortens unwanted interactions. If you want to talk to me and I don't particularly want to talk to you (typical for telemarketers for instance), then if I'm polite and clear you'll finish the transaction quickly. But if I'm rude, the transaction takes longer to complete and may be reinitiated multiple times.
        Courtesy -- it's not just for dates. ;-)
    • Okay, so tell me: what is the difference between a Taoist and an enlightened Machiavellian?

      Does it matter if someone consistently does something altruistic for selfish or selfless reasons? The outcome is the same.

      If you really want to get into the motives, then why not just say that acting in the best interest of the commons is itself "selfish" because it is a safer strategy for a better outcome. Sure, you can play the short-term selfish game and come out ahead (maybe), but you will be surrounded by people who resent your success. The lesson in competition is not "improve performance" but rather "sabotage your competition".

      Altruism is the reverse: by supporting your "competition" (called "complementors" in the newspeak) you may risk losing an advantage (as in the prisoner's dilemma) in the short term, but employing simple strategies like "tit-for-tat" in an environment that is biased towards altruism will eventually lead to maximal outcomes for the population taken as a whole (the "rising tide floats all boats" analogy, here properly applied, for once). It really is inevitable, because the feedback of the "game" allows the participants to learn what will work best for them.

      In a human social context, you would hope that that learning would move from the purely intellectual to the personal. If you can learn to do good for selfish reasons, it might occur to you that doing good has value in and of itself.

  • by Chocolate Teapot ( 639869 ) on Friday February 14, 2003 @04:29PM (#5305454) Homepage Journal
    I would have had first post but it got stuck in a jam in Toronto.
  • Somewhat interesting (Score:5, Informative)

    by rabtech ( 223758 ) on Friday February 14, 2003 @04:30PM (#5305463) Homepage
    It appears that they are claiming routers pick the fastest route to push packets down, which can in turn cause that route to become congested, thus slowing it down, and then the router picks a new route, causing it to become congested and slow down, and so on.

    Supposedly, if the router picked the fastest AND least congested route, then some packets might take a little longer to get to their destination, but the overall latency of the internet would decrease.

    In theory. In reality, I don't know how much peering arrangements change the equation. You see, if you are a network provider, you have two goals with peering: dump enough traffic onto your peer points so that you are exchanging about equal amounts with your peer AND get traffic that isn't bound for your network OFF your network as quickly as possible.

    In practice, this means ISPs who peer have a large incentive to route packets coming from peer partner A directly to peer partner B, without regard for what that does to the latency of the packet, nor the congestion of the peering partners. The peered packets become more like a hot potato, bouncing around peer points until they actually arrive near the destination network. That lowers overall efficiency as well. (Companies like Internap don't peer for this reason; they pay for all connection points even though they have enough traffic to get peering points for free. They cost more, but they have very low latency, packet loss, etc.)

    • And you've worked for internap for how long?
    • You see, if you are a network provider, you have two goals with peering: dump enough traffic onto your peer points so that you are exchanging about equal amounts with your peer AND get traffic that isn't bound for your network OFF your network as quickly as possible.

      Hi,

      This depends entirely on your policy decisions. For example, the traffic engineering that I do at my place of work is based around a cold-potato routing policy rather than hot; that is, we will carry our traffic to the point closest to its destination, thus keeping it in our network for as long as possible rather than vice versa.

      There are arguments both sides of each issue, and it really depends on one's own topology and decision-making criterion.

      ~cHris
  • as long as (Score:3, Insightful)

    by geekoid ( 135745 ) <dadinportland@y[ ]o.com ['aho' in gap]> on Friday February 14, 2003 @04:34PM (#5305488) Homepage Journal
    'the internet' is faster than my connection to it, does it really matter?
    • 'the internet' is faster than my connection to it, does it really matter?

      Well, you're looking at this from the perspective of a "home user", where you're a leaf on the Internet. That's okay and all, but there are more than a few Slashdotters here that actually have to think about this kind of stuff.

      With a home connection you're a "leaf" -- you're on the outside and you only have one route to the internet -- your ISP. That is your "gateway" and there's nothing you can do about it.

      The article's really talking about major ISPs, and how they peer with other networks. They have multiple routes to a single destination, and these routing tables can get pretty big when you're a big player on a big hub. I'm by no means an expert on the details of it, but I do appreciate the thought that goes into such decisions.

      When you want to send a packet to NYC from wherever you are, your network only has one choice: send it to the ISP and let them figure out what to do with it.

      Your ISP may or may not have a choice what to do with it though. If they're a multi-homed place (meaning they have 2 or more connections) they have to decide what route to pass it off to. One of them might only require 1 hop to get to NYC, and the other might require 3. The simple answer is to just send it to the 1 hop route, but in the grand scheme of things it's better to send it to the 3-hop route sometimes because it's less congested.

      That's just the tip of the iceberg really, but that might enlighten the issue a bit. It's certainly NOT a concern for a home user, so I can understand your post -- but to anybody that runs an ISP it's a fairly interesting topic.

  • by rrkap ( 634128 ) on Friday February 14, 2003 @04:40PM (#5305541) Homepage

    This is essentially a pricing problem.

    Here's a quote from the original 1968 paper that used the term

    The tragedy of the commons develops in this way. Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.

    As a rational being, each herdsman seeks to maximize his gain. Explicitly or implicitly, more or less consciously, he asks, "What is the utility to me of adding one more animal to my herd?" This utility has one negative and one positive component.

    1. The positive component is a function of the increment of one animal. Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly + 1.

    2. The negative component is a function of the additional overgrazing created by one more animal. Since, however, the effects of overgrazing are shared by all the herdsmen, the negative utility for any particular decisionmaking herdsman is only a fraction of - 1.

    Adding together the component partial utilities, the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another.... But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit -- in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.

    There are two common solutions to this kind of problem. Regulate use of the common resource or sell it. Because of the structure of the internet, it is hard to fairly price bandwidth and no good regulatory scheme has developed, so I don't see any other answer than living with it.

    • Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality.

      Herein lies the inherent limit of the tragedy of the commons: The ability to load the common resource correlates with the trust the users feel in the common's stability. As long as I can drive another cow onto the pasture without fear that the neighbor will shoot the cow or me, I will. But once I feel I can't trust the shared resource--once the social stability is gone--I'll stop using the common and find some other way, because the cost (to me) of the common is no longer a fraction of the value I derive from it, but possibly total loss.

      Alternately, if social stability deteriorates radically, it may catch the users of the common off-guard before they can do their subconscious value calculations. If I drive one more cow onto the commons and it starts a stampede, killing many of the cattle (mine and my neighbors'), and killing some of those neighbors, and starting violent feuds and vendettas among us who used the common, the cost of the common maximizes.

      Either way, the tragedy of the commons is eventually fulfilled, and everyone stops using the shared resource. Self-limiting, at least until the next time.

      What does this mean for the net? It will get progressively worse until "death of the net (mpegs at 11!)", after which we survivors will crawl out of our IPv6 bomb shelters and rule the Earth.

      • Either way, the tragedy of the commons is eventually fulfilled

        No. It's not inevitable. Shared resources do not inevitably die. Besides, on the internet, ISPs tend to act as policemen; any user trying to abuse the net also abuses the ISP and gives them a bad name, and if it gets bad enough, the ISP itself loses its feeds.

        • No. It's not inevitable. Shared resources do not inevitably die.

          Without intervention--the goodwill and self-restraint of the participants, or externally-imposed restraints, or a change of resource costing--it is inevitable. However, you're quite right in that there are restraining factors. The net, as a commons, has fences and gates which are guarded by the ISPs. They can impose pricing changes and enforce use restrictions (d/l caps, for instance) that can extend the life of the shared resource.

          So your point, that the "social stability" of the net is guarded by some of its users (ISPs), is well made. In engineering speak, the net degrades more gracefully than a complete collapse (which is why I was attempting to mock the whole "end of the net" line of thought, even as I used it as an extreme example).

          • Without intervention--the goodwill and self-restraint of the participants, or externally-imposed restraints, or a change of resource costing--it is inevitable.

            Yes. In practice there nearly always are these restraints. The actual tragedy of the commons is something of a myth I think, historically; I'm not aware of any definitive example.

            • There is some archaeological evidence [www.sfu.ca] of the tragedy of the commons in the Mediterranean and Mesopotamia. The general tenor of this evidence is that many of mankind's earliest cradles were well forested with coniferous or deciduous trees. Since these trees were a common resource, everyone cut them freely for fuel and building materials. Now, these regions of the world are scrubland with only tended crop trees (nuts, olives) and very few stands of wild wood.
    • Here's a quote from the original 1968 paper that used the term

      Perhaps if you had read the article you would know the phrase comes from William Forster Lloyd (1794-1852).

      The tragedy of the Commons was used as a political weapon in the class warfare of the Victorian era. Those with Scottish ancestry might know this as 'the clearances'; in England they were the enclosures.

      Basically the aristocracy transferred the common land from public ownership to private ownership. Since they wrote the acts of parliament they gave themselves the best deal. The result being a transfer of wealth from the poor to the rich.

      The deck was stacked so that the aristocracy quickly got control of the small proportion of the land that went to the peasants. It was similar to the land grab that made Bush rich. They bought a sports team then started building a bigger stadium using the pliant local council to confiscate large amounts of land at below market rates which were then used for development and sold for a vast profit.

      So the tragedy of the commons is not a politically neutral term. Also, the real tragedy for the peasants came when the aristocracy used it as an excuse for exploitation. It's a bit like the plans for privatising social security: there is a problem there, but it is being used as an excuse for a political agenda, not as something that is to be addressed for its own sake.

      When tragedy of the commons is used in relation to the Internet it is usually to justify some form of corporate or governmental control.

  • by mdouglas ( 139166 ) on Friday February 14, 2003 @04:46PM (#5305595) Homepage
    "Routers have many ways to decide. Sometimes they send out test packets and time them."

    it isn't RIP, OSPF, EIGRP, or BGP. I don't know IS-IS, but I strongly suspect these people are talking out of their asses.
  • by chill ( 34294 )
    Wouldn't a decent implementation of QoS help this situation?

    Instead of a router choosing the fastest or least-congested route for a packet, it could also factor in things like what type of packet/service it is.

    NNTP, e-mail, and other non-interactive, non-realtime packets could be shunted down secondary pipes -- you'd never notice most of it anyway.

    QoS on IPv4 doesn't really have the granularity for this, and it seems most routers on the 'net ignore those bits anyway.

    I believe this was one of the things that IPv6 was supposed to address.
  • by Walker ( 96239 ) on Friday February 14, 2003 @04:51PM (#5305622)
    In many ways (but certainly not all), Internet traffic is similar to automobile traffic. Packets are discrete objects, like cars, and not continuous like a river or radio signal. Analysis of automobile traffic has already discovered properties like this. There are many simulations showing that if we all kept 3 car lengths between us and the next car, we would avoid the accordion effect and get to work significantly faster.
  • by BusDriver ( 34906 ) <tim@muppetz.com> on Friday February 14, 2003 @04:53PM (#5305637) Homepage
    This article makes no sense from a proper real world routing perspective.

    Any provider who is doing anything slightly serious will be using BGP4 routing for their EGP. It does NOT send out magic packets to find best paths. It learns routes from its peers and will choose the best route based on a defined set of decisions. Routers do not keep a list of "neglected routes." If one route goes away, the router will simply pick the next best path.

    Read more about BGP4 on Cisco's website [cisco.com]. You will find it has little in common with this article or the one linked in the story.

    Good routing relies on good admins with a well defined routing policy. There is no such thing as a "selfish" router.

    Tim
    • "If one route goes away, the router will simply
      pick the next best path."

      That's the point! The article says that according to mathematical theory, this approach is not ideal.

      Basically, if some packets are sent along alternate routes that are actually slower, those individual packets may arrive later, but statistically the packets as a whole will arrive faster.
    • by BeBoxer ( 14448 ) on Friday February 14, 2003 @10:00PM (#5307077)
      if I could.

      I think whoever wrote this article is far removed from the real world. They are finding theoretical problems with the routing protocols we would like to be running. As you pointed out, pretty much the entire backbone is using BGP4 to make routing decisions. And BGP4 doesn't really have any measure of how congested links are, nor how long the latency is. The basic measure of BGP4 is how many different providers (called AS's or Autonomous Systems) a packet might have to traverse.

      Hmmm, the router says, is the best route thru C&W->AT&T->Bob's_ISP or just Level3->Bob's_ISP? I'll pick the two hop route. Sure, we all do some manual tuning, where the engineer says "I know the L3->Bob link is slow, so I'll make it look like L3->L3->Bob", but BGP4 is fundamentally a really stupid protocol. In theory. In practice, it works fine almost all of the time.

      The most telling quote from the article is this:


      They also found that doubling the capacity of the system would provide the same benefits as a managed system.


      No shit Sherlock. I could've told you that five years ago. Why do you think QoS is still facing an uphill struggle? It's far cheaper and easier to just keep cranking up the bandwidth than to replace BGP4 with something smarter, or to deploy QoS protocols Internet wide.

      Don't get me wrong, I think they are doing great research. It's good to try and figure out what might go wrong with next-gen protocols before they get deployed. But I don't think they are talking about problems on today's Internet.
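
A crude picture of the decision being described, purely for illustration: the AS names are made up, and real BGP weighs several attributes before AS-path length, but the point is that congestion never enters the comparison.

```python
# Candidate AS paths to the same destination prefix; the shorter AS path wins.
CANDIDATES = {
    "via C&W":    ("CW", "ATT", "BobsISP"),
    "via Level3": ("Level3", "BobsISP"),
}

def bgp_like_choice(candidates):
    return min(candidates, key=lambda name: len(candidates[name]))

print(bgp_like_choice(CANDIDATES))  # 'via Level3', however congested that link may be

# The manual tuning mentioned above amounts to AS-path prepending: padding a
# path ("L3->L3->Bob") so it looks longer and loses this comparison on purpose.
```
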
  • by urbazewski ( 554143 ) on Friday February 14, 2003 @04:55PM (#5305663) Homepage Journal
    This is not the main point of the article but:

    The Tragedy of the Commons , often cited by environmentalists, describes 14th-century Britain, where each household tried to gain wealth by putting as many animals as possible on the common village pasture. Overgrazing ruined the pasture, and village after village collapsed.

    The "tragedy of the commons" that Hardin's article is devoted to is increasing world population. What evidence is there for overgrazing in England before as opposed to during and after the forced transition to private ownership? Most cultures with a common land tradition also have a set of rules for governing land use that avoids such tragedies, for example, irrigation systems in Bali where the farmer who gets the water last controls the water flow. Ones that didn't solve the problem of overuse of resources are conspicuous by their non-existence (Easter Island, some settlements in the Southwest US, some populations on islands in the South Pacific ).

    The 'tragedy of the commons' is one of the most misunderstood and overused metaphors of our times. The idea that a system with resources held in common is necessarily unworkable is false --- what is needed is institutions that effectively manage common resources, and such institutions have emerged repeatedly and continue to exist. Often it is when these cultures come into contact with market-oriented societies that the traditional systems are undermined and collapse. Often what happens is not "the tragedy of the commons" but "the tragedy of failed privatization", in which a traditional management system is destroyed without establishing a viable alternative.

    How does this relate to the internet? It's a cautionary tale --- be very very careful when introducing monetary incentives into a system that has previously relied on cooperation and cultural norms.

    blog-O-rama [annmariabell.com]

    • This summary of the Balinese water temple system is from an article [biu.ac.il] by Bradley J. Ruffle and Richard H. Sosis that looks at the use of religious practice to encourage cooperation via field experiments in kibbutzim.
      It follows that multinational corporations and foreign institutions investing in the developing world and dependent on collaboration with the indigenous people may profit from preserving indigenous ritual practices and the environment in which they take place. The well-documented water temple system of Bali represents a case in point (see Lansing, 1991, for an authoritative study). A lake in a volcanic crater on the island as well as the rains that run off of the volcano irrigate Bali's rice fields. The Balinese have developed what has proven to be an ingenious cooperative system of aqueducts to supply water in equitable amounts to the surrounding farmers. At the heart of this coordinated effort lies an indigenous religion that worships, among other deities, Dewi Danu, the goddess of the waters emanating from the volcano in whose honor an immense temple stands at the volcano's summit. Smaller temples for worship are located at every branch of the irrigation system and at the fields onto which the aqueducts empty.

      The wisdom and success of the Balinese water temple system became clear when the Asian Development Bank imposed a farming alternative on the Balinese in the 1980s. The Asian Development Bank concluded in 1988 that, "The substitution of the "high technology and bureaucratic" solution proved counter-productive and was the major factor behind the yield and cropped areas declines experienced between 1982 and 1985 ... The cost of the lack of appreciation of the merits of the traditional regime has been high. Project experience highlights the fact that the irrigated rice terraces of Bali form a complex artificial ecosystem which has been recognized locally over centuries" (quoted from Lansing, 1991, p. 124).

      Lansing, J. S. (1991) "Priests and Programmers: Technologies of power in the engineered landscape of Bali ", Princeton: Princeton University Press. Leviatan, U., H. Oliver, J. Quarter (1998)

      blog-O-rama [annmariabell.com]

    • Most cultures with a common land tradition also have a set of rules for governing land use that avoids such tragedies

      A commons is a resource that everyone is free to use as they wish. What history really shows is that commons are (ha ha) relatively uncommon. Societies do not leave resources as commons because of the problems that result from doing so. Instead they come up with systems that limited what individuals could do with such resources. Sometimes that meant a system of private property, where individuals got to control some part of the resource, sometimes a system of collective property, where some collective body would control the resource, but in almost every case resources were not left as commons.

      Of course you are right that western observers often failed to recognise the systems of management set up in other societies, especially when these did not resemble the kinds of systems that had arisen in Europe.
  • by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Friday February 14, 2003 @04:58PM (#5305684) Homepage Journal

    The problem is not that service providers pick the route that gets the packet to its destination quickest; it's that they pick the route that gets the packet off their network the fastest. Those two are not the same thing at all. Think about it geographically. Let's say I'm a square network and I receive a packet at the northern end of my western border destined for somewhere to my northeast. I know that the quickest way to get it to its destination is to move it east across my own network and deliver it to my eastern neighbor. However, I also know that if I pass it on to my northern neighbor it will still get there without coming to me again, and my northern neighbor is closer. So, if I'm a selfish bastard, what do I do? I ship it northward, minimizing the time that it spends on my own network but increasing the total time before it reaches its destination. If everyone does this same sort of "hot potato" routing, total load on the network increases for everyone. In fact, my northern neighbor might very well be doing the same for packets lying to our southwest. We'd both be better off if we'd "play nice" but since we're both trying to be selfish we both lose.

    Yes, folks, it's an instance of the prisoners' [brynmawr.edu] dilemma [vub.ac.be] and these researchers are not the first [gildertech.com] to notice the fact [zdnet.com.au].
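
The square-network story above, reduced to a toy calculation with invented hop counts:

```python
# Hops spent on my own network for each exit, and hops remaining after the
# hand-off. Purely illustrative numbers for the "square network" story above.
MY_HOPS        = {"east exit": 3, "north exit": 1}
REMAINING_HOPS = {"east exit": 2, "north exit": 5}

def hot_potato():
    # Selfish: minimize the time the packet spends on my network.
    return min(MY_HOPS, key=MY_HOPS.get)

def cooperative():
    # Minimize the end-to-end path instead.
    return min(MY_HOPS, key=lambda e: MY_HOPS[e] + REMAINING_HOPS[e])

for choice in (hot_potato(), cooperative()):
    print(choice, "-> total hops:", MY_HOPS[choice] + REMAINING_HOPS[choice])
# hot potato hands it north (6 hops end to end); cooperating carries it east (5)
```
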

    • Even worse, the network to your north might hand it back to you a bit farther east, and the taken path might go NE,SE,NE,SE,etc. (Could this really happen with BGP? Or is it uncommon to have multiple connections to the same AS at different points in your AS? I don't know the details of BGP...)
  • First thought: What do oysters have to do with internet?
    Second thought: OOPS! SELfish...
    Third thought: ??????
    Fourth thought: Profit!
  • by acomj ( 20611 ) on Friday February 14, 2003 @05:04PM (#5305730) Homepage
    This is a classic example of the prisoner's dilemma problem.

    Basically, if everyone acts unselfishly, they all do better. But from each individual's perspective, they do better by acting selfishly, so it all falls apart. It's interesting stuff, and the prisoner's dilemma game algorithms are interesting too (a quick sketch follows below).

    Prisoner's Dilemma [drexel.edu]

    Play the dilemma game online [plocp.com]
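
    For anyone who wants to poke at it, here is a minimal sketch of the iterated game with the standard textbook payoffs and two toy strategies (tit-for-tat and always-defect). It is only an illustration, not anyone's published model.

```python
# Iterated prisoner's dilemma with the usual textbook payoffs (higher is better).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then copy whatever the opponent did last round
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each side only sees the other's past moves
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # defection pays once, then both lose: (9, 14)
```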

  • What happens in an ideal situation when traffic becomes congested in a city? They build more roads. Or more lanes. Or build up more mass transit. What is the commonality between all of these? They move more people.

    Messing with routing seems to be the same as the DOT messing with shuffling cars and metering lights. Instead of focusing on how we can change all these routing patterns, why don't we just "build more roads"? I realize it isn't exactly trivial to do that, and that the backbones might be pretty tough, but what about all that "dark fiber" that is supposedly just lying around? That's the equivalent of not using an open freeway in a major city during rush hour! We've already got the road; we just don't use it.

    Wouldn't doing that just open up more bandwidth for people, at least locally?

    • The problem is what the ends of the "dark fiber" connect to.

      It isn't a freeway between, say, New York and Washington D.C. that's dark and unused; it's the freeway between Safeway and Bob's convenience store.

      Does that make any sense? The dark fiber is dark because it's not needed, given what it connects to.
  • In my town, back in the day, we had a total of 2 local ISPs whose NOCs were less than a block apart. I worked at one and a friend at the other. We tried like hell to get the 2 companies to string some Cat5 between the 2 buildings and ease the load on each of our T1s. They wouldn't do it, wouldn't even talk about it. So whenever I sent an email to one of my friends who subscribed to the other ISP, it got to travel out our T1, halfway across the country and back, and down their T1. Stupid.

    G
  • by Billly Gates ( 198444 ) on Friday February 14, 2003 @06:10PM (#5306123) Journal
    IPv6 supports better QoS, so if the fastest route is congested the router can more easily find out and select an alternative route.

    Internet2 has an extremely fast backbone and is based on IPv6. This will help greatly, since the backbone of the current Internet can be quite congested at times. Let's hope it's implemented soon, as the current problem would then likely go away.
  • I mean, the metrics a network uses to determine the best route are not necessarily what is fastest or what is closest... they can be completely arbitrary.

    Lowest latency, least used, least hops, least dollar cost, etc.
    Some networks try to offload traffic to other networks as fast as possible. Others try to get data as close to the destination as possible before offloading it. In both cases, everything would work fine, if only everyone played by the same rules.
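
    A trivial sketch of that point, with made-up routes and numbers: the "best" next hop changes entirely depending on which metric you optimize.

```python
# Three candidate routes to the same destination; all figures are invented.
routes = [
    {"via": "peer-A",  "hops": 3, "latency_ms": 40, "cost_per_gb": 0.00},
    {"via": "peer-B",  "hops": 2, "latency_ms": 70, "cost_per_gb": 0.02},
    {"via": "transit", "hops": 5, "latency_ms": 25, "cost_per_gb": 0.08},
]

# Pick the "best" route under each metric: three metrics, three different winners.
for metric in ("hops", "latency_ms", "cost_per_gb"):
    best = min(routes, key=lambda r: r[metric])
    print(f"optimizing {metric:<12} -> send via {best['via']}")
```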
  • OK, this has to be the most convoluted article I've ever read. They're effectively saying: don't use the best route, pick another, because your extra traffic may break the best route.

    We diagrammed a sample network here in the office to try to explain what we just read to ourselves. We picked 5 cities (New York, Chicago, Los Angeles, Dallas, and Miami) and drew direct routes between Miami, LA, and NY to each other. Chicago gets routes to NY and LA. Dallas gets routes to everything but Chicago.

    We then contemplated what a packet from LA to NY would be looking at.

    On our mythical network, we have the following ping times.

    LA -> NY 20ms
    LA -> Chicago -> NY 25ms
    LA -> Dallas -> NY 40ms
    LA -> Miami -> NY 60ms

    So, we shouldn't be selfish and take the LA->NY route? We should direct our traffic LA->Dallas->NY? If that route is already slow or congested, what good does that do? Now, instead of using a perfectly good route, we're killing a congested one.

    If LA->NY is the best/fastest at the time, use it. If/when it becomes more congested, it will no longer be the best choice, and the new best choice will be chosen.

    Not everyone is going to be using YOUR best choice all the time. It's very doubtful that Miami will be routing through LA to get to NY; if it does, it's because Miami->NY is already overloaded. But as it usually works, for Miami->NY there is already a second-best choice (Miami->Dallas->NY).

    No matter how we look at it, this doesn't make any sense. Here's a sample of the lines for our example.

    LA->NY OC192
    LA->Chicago OC48
    Chicago->NY OC48
    LA->Dallas OC48
    Dallas->NY OC24

    So, we'll leave the LA->NY route empty, and keep dumping our load onto the lesser routes?

    I do like the idea, though, of keeping the best choice (LA->NY) open for myself while everyone else chooses the second-best route. Go ahead and flood those OC48s; I'll use the OC192 that no one else uses. :)
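
    For what it's worth, here is a rough sketch of this example with one twist: the direct LA->NY latency is made load-dependent (the traffic model and the numbers are invented, not from the article). With dynamic metrics the selfish choice does eventually self-correct; the researchers' point, as I read it, is about how far from optimal things can get before that happens.

```python
# Crude load-dependent latency for the direct LA->NY link; not a real
# queueing model, just "latency grows as the link fills".
def direct_latency(load_gbps, base_ms=20.0, capacity_gbps=10.0):
    utilization = min(load_gbps / capacity_gbps, 0.99)
    return base_ms / (1.0 - utilization)

VIA_CHICAGO_MS = 25.0  # the fixed alternative from the table above

for load in (0.0, 1.0, 2.0, 4.0, 8.0):
    d = direct_latency(load)
    choice = "LA->NY direct" if d < VIA_CHICAGO_MS else "LA->Chicago->NY"
    print(f"direct link carrying {load:3.1f} Gb/s: direct = {d:5.1f} ms -> pick {choice}")
```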

    • The problem is that routers don't update often enough to notice when one route is getting congested. They will keep using the LA -> NY route even if it is so congested that LA->Chicago->NY would be faster. Ideally, you'd load-balance the routes based on how much traffic you were sending over them.
  • by obnoximoron ( 572734 ) on Friday February 14, 2003 @06:22PM (#5306184)
    of the main paper: http://www.cs.cornell.edu/timr/papers/indep_full.pdf and others.

    1. Their basic idea is to model decentralized routing as a Nash game and then compare the worst-case performance of this game with the best achievable by ANY algorithm, decentralized or not. This sort of comparison is common in the field of competitive analysis.

    2. Assuming that hop latency increases linearly with the traffic on it, selfish routing causes the average packet latency to be no more than 4/3 times that of ideal optimal routing. This worst-case figure had earlier been called "the Price of Anarchy" by Papadimitriou, a famous researcher in algorithmic complexity whom every CS student loves to hate :P

    3. Similar Prices of Anarchy have been derived by them for when the hop latency increases nonlinearly with the additional traffic on it.

    4. The worst case is always achievable with a simple network of 2 nodes connected by parallel links. This is exactly the example used in networking courses and textbooks to illustrate the oscillation problem caused by selfish routing. The paper says that using this simple network as an example is justified, since the worst case can always be analysed with it (a toy calculation of this example follows below).

    5. Instead of optimizing routing to try to reach the minimum possible average latency, you can keep the routing selfish but double each link's capacity and achieve the same result.
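
    Here is the toy calculation promised in point 4: Pigou's two-link network, with one unit of traffic, one link of constant latency 1 and one link whose latency equals the fraction of traffic on it. It reproduces the 4/3 figure numerically (my own sketch of the standard example, not code from the paper).

```python
# Pigou's example: x = fraction of traffic on the load-dependent link.
def average_latency(x):
    return x * x + (1 - x) * 1.0   # x*l2(x) + (1-x)*l1, with l2(x)=x and l1=1

# Nash (selfish) outcome: the load-dependent link is never worse than the
# constant-latency one, so everyone piles onto it (x = 1).
nash = average_latency(1.0)

# Socially optimal split, found by brute force over x in [0, 1].
best_x = min((i / 1000.0 for i in range(1001)), key=average_latency)
opt = average_latency(best_x)

print(f"Nash average latency   : {nash:.3f}")
print(f"optimal split x = {best_x:.2f} -> average latency {opt:.3f}")
print(f"price of anarchy       : {nash / opt:.3f}")   # -> 1.333...
```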
    • Thank you. It surprised me how many people misunderstood the article _and_ got modded way up. It's not usually that bad.

      I doubt that a linear or quadratic latency vs. traffic volume relationship is anywhere near accurate. Queuing theory says it's more like Q/(1-Q), where Q = arrival rate / processing rate. See this web site [new-destiny.co.uk]. I'd be surprised if their linear or quadratic models give very useful results. If they're running a computer simulation anyway, why not get it right and use a reasonable queuing theory result? Hmm, I'm sure it's easier to model links with infinite capacity that just get slower and slower than it is to write a model that deals with dropping packets.
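
      To make the comparison concrete, here is a quick table of delay (in units of the service time) under a linear model versus the rho/(1-rho) blow-up from M/M/1 queueing; the slope and utilization values are arbitrary.

```python
# Linear latency model vs. the M/M/1-style blow-up as utilization rises.
def linear_delay(rho, slope=1.0):
    return slope * rho

def mm1_delay(rho):
    # average number in an M/M/1 system; diverges as utilization -> 1
    return rho / (1.0 - rho)

for rho in (0.1, 0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"rho={rho:4.2f}  linear={linear_delay(rho):5.2f}  M/M/1={mm1_delay(rho):7.2f}")
```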
  • Jeepers, always such high quality thinking going on at Valentine's! And just about any other arbitrarily important date, I suppose. Here's an interesting article [guardian.co.uk] from the Guardian about the science that gets press on this day of love.

  • by earthforce_1 ( 454968 ) <earthforce_1@y a h oo.com> on Friday February 14, 2003 @08:40PM (#5306827) Journal
    Is unfortunate proof that altruism breaks down on a large scale. This is the fundamental flaw of socialism: humans evolved from simian ancestors who lived in small tribal groups. We are altruistic up to groups of about 75 or so individuals; beyond that it breaks down.

    I have seen videotape of a psychology experiment, where an individual feigned a serious medical problem and keeled over in the middle of the street. When the test subject tried this on a busy urban thoroughfare, large passing crowds actually stepped over the guy. But in a small village, shopkeepers rushed out onto the street to try and help him.

    There was a famous murder case in NYC where dozens of neighbours reportedly heard a woman begging for help as her life was snuffed out by a sadistic killer over a period of time. Nobody reported it or tried to intervene; they all assumed somebody else would do something about it. This resulted in the passage of a law, which as I recall was the subject of the final Seinfeld episode.
  • Altruism (Score:2, Interesting)

    by jbl81 ( 458835 )
    Altruism is not the way we keep air and water clean. Air and water quality are public goods (in the economics sense of the term), and keeping them clean is a collective action problem. It's straightforward game theory to show that the rational choice, in a system where you have no reason to trust people, is to make sure you don't get screwed before you have a chance to "get yours".

    The way people and governments get out of a collective action problem (like an arms race, or like EMU monetary/fiscal policy, etc) is not through altruism, but through formal cooperation. In order to ensure that everyone cooperates, you need to (1) clearly define what constitutes cooperation, (2) make it transparent (obvious) who is cooperating and who is not, and (3) decide on mechanisms for enforcement.
  • Maybe I did not understand the article, but chances are that maybe I did!

    With distributed routing, every router makes its own decisions. SPF is used; assume OSPF for now. Routers basically set weights on their interfaces/ports. There are two types of weights: static and dynamic. For static weights there is not much a router can do except obey a (lazy) administrator's decision. Dynamic weight setting gives a router some freedom: it may set its interface weights depending on the available bandwidth. It could even penalize congestion by choosing very high weights for loads of more than, say, 95% of the link capacity.

    But there is a small problem commonly known as "oscillation". Consider two links, A and B, connected to a router. The router finds that A is congested, so it sets a high weight on interface A. This shifts traffic from link A to link B. At some point link B becomes loaded, so the router sets interface B's weight high.
    Question: where does link B's traffic go now? Right, back to link A! This is oscillation.
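
    A tiny simulation of that oscillation, under the crude assumptions that all traffic follows the lower-weight link and that weights are recomputed from the previous interval's load (the capacity, demand, and 95% threshold are made up):

```python
CAPACITY = 10.0   # same capacity on links A and B (made-up units)
DEMAND = 9.8      # total traffic the router must forward

def weight(load):
    # penalize congestion: the weight jumps once utilization passes 95%
    return 100 if load / CAPACITY > 0.95 else 1

load = {"A": DEMAND, "B": 0.0}          # start with everything on link A
for step in range(6):
    w = {link: weight(load[link]) for link in load}
    chosen = min(w, key=w.get)          # all traffic shifts to the lighter link
    load = {link: (DEMAND if link == chosen else 0.0) for link in load}
    print(f"step {step}: weights={w} -> all traffic moves to {chosen}")
# The chosen link alternates B, A, B, A, ... : the oscillation.
```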

    MPLS/IP:
    In MPLS/IP networks it is possible to do load balancing based on the utilization of the links. Since the traffic is carried over a virtual circuit, it uses the same path for as long as the LSP exists, so there are no unnecessary oscillations.

    Offline Weight Optimization:
    Bandwidth is the resource; customers produce demand. The objective function could be, for example, to minimize the maximum link utilization. There are some constraints, for example that the total demand routed over a link must not exceed its capacity, and so on. How this global (entire-network) optimization problem is solved is not the big deal; the big deal is the result: the solution provides a set of weights which, when set on the interfaces, leads to a load-balanced and better-utilized network (a brute-force toy version follows at the end of this comment).

    Point: humans may be greedy, but mathematics is generous!
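
    And here is the brute-force toy version promised above: search over link weights on a 3-node network and keep the setting that minimizes the maximum link utilization. The topology, capacities, and demand are invented; a real network would use a proper optimizer rather than brute force.

```python
from itertools import product

CAP = {"S-T": 10.0, "S-A": 10.0, "A-T": 10.0}   # link capacities
DEMAND = 12.0                                    # traffic from S to T

def max_utilization(w):
    """Route DEMAND by shortest path under weights w, splitting on ties (ECMP)."""
    direct_cost = w["S-T"]
    via_a_cost = w["S-A"] + w["A-T"]
    load = {link: 0.0 for link in CAP}
    if direct_cost < via_a_cost:
        load["S-T"] = DEMAND
    elif via_a_cost < direct_cost:
        load["S-A"] = load["A-T"] = DEMAND
    else:                                        # equal cost: split the demand
        load["S-T"] = DEMAND / 2
        load["S-A"] = load["A-T"] = DEMAND / 2
    return max(load[link] / CAP[link] for link in CAP)

best = min(
    (dict(zip(CAP, ws)) for ws in product(range(1, 4), repeat=3)),
    key=max_utilization,
)
print("best weights:", best, "-> max utilization", max_utilization(best))
# Equal-cost weights win: splitting 12 units over both paths gives 0.6,
# versus 1.2 if everything follows a single shortest path.
```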

"Your stupidity, Allen, is simply not up to par." -- Dave Mack (mack@inco.UUCP) "Yours is." -- Allen Gwinn (allen@sulaco.sigma.com), in alt.flame

Working...