'Selfish Routing' Slows the Internet 249
Smaz writes "Science Blog reports that a little love could speed things up on the Net. "Self-interest can deplete a common resource. It seems this also applies to the Internet and other computer networks, which are slowed by those who hurry the most. Fortunately, say computer scientists at Cornell University in Ithaca, N.Y., there is a limit to how bad the slowdown can get. And after developing tools to measure how much the performance of a particular network suffers, they say, the way to get improved performance on the Internet is the same as the way to maintain air and water quality: altruism helps."
If there's anything the Internet has taught me... (Score:5, Funny)
Re:If there's anything the Internet has taught me. (Score:4, Insightful)
Re:If there's anything the Internet has taught me. (Score:2, Informative)
That's totally wrong, especially on the internet. (Score:3, Insightful)
Each of these has an altruistic collision avoidance method: when a collision happens, stop sending and wait a random amount of time before sending again. A selfish ethernet device would always immediately attempt to send under the assumption that the other device would be waiting, and it would get to go first. But of course, that's very bad for the network, so it's not done.
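For the curious, here's a minimal Python sketch of that backoff idea, loosely modeled on Ethernet's binary exponential backoff. The slot time and cap are illustrative constants, not a faithful 802.3 implementation:

import random

def backoff_delay(collisions, slot_time=51.2e-6, max_exponent=10):
    """Binary exponential backoff: after the nth collision, wait a random
    number of slot times in [0, 2^n - 1]. Constants are illustrative."""
    n = min(collisions, max_exponent)
    return random.randint(0, 2**n - 1) * slot_time

# A "selfish" device would return 0 here and always retransmit
# immediately: winning individually, but melting the shared wire.
for collisions in range(1, 6):
    print(f"after {collisions} collision(s): wait {backoff_delay(collisions):.6f}s")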
The fact that we've got selfish routers is not a sign that they're selfish, per se, but that selfish routing was somewhere near the most effective means of communication that anyone could think of at the time when they were invented.
Re:If there's anything the Internet has taught me. (Score:2)
The answer is for routing costs to accurately reflect the contention for resources. If a particular route gets crowded, charge slightly more for sending packets down it. Routers can negotiate in real time to set prices and find the cheapest route for their data. Quality of service guarantees can be implemented by purchasing bandwidth (or options to use bandwidth) in advance.
You won't eliminate selfish behaviour; the way to keep things running smoothly is to make sure people pay for the cost of the resources they use (and no more). Then it will be in their own interests to consider the effect on others, and to avoid overusing already congested routes.
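A toy sketch of that pricing idea (the price function, numbers, and route table are all invented for illustration): each route quotes a price that rises with its current load, and a sender simply picks the cheapest.

def price(route):
    """Hypothetical congestion price: base cost plus a premium that
    grows as the route approaches capacity."""
    utilization = route["load"] / route["capacity"]
    return route["base_cost"] * (1 + 3 * utilization**2)

routes = [
    {"name": "A", "capacity": 10.0, "load": 9.0, "base_cost": 1.0},
    {"name": "B", "capacity": 4.0,  "load": 1.0, "base_cost": 1.2},
]

# A self-interested sender now avoids the crowded route A purely
# because it costs more; no altruism required.
cheapest = min(routes, key=price)
print(f"send via {cheapest['name']} at price {price(cheapest):.2f}")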
The phenomenon is dwarfed by... (Score:4, Funny)
Are we sure... (Score:5, Funny)
Me: Don't use as much bandwidth and everyone will go faster!
World: Hey! That seems like a good idea.
Me: (aside) Mwuhahahaha
Research networks (Score:5, Funny)
Of course, the most important aspect of such networks is that the bandwidth they offer is helpful in Dick Size Wars at supercomputing conferences, so it's not a terrible loss for the Internet at large.
I'm confused (Score:5, Funny)
Re:I'm confused (Score:3, Interesting)
The article is not saying that using the Internet slows it down (that much is obvious). It's saying that with different routing techniques and the same level of use, it could go faster. So, using it slows it down, but so does building a bad infrastructure for it.
Re:I'm confused (Score:5, Insightful)
The article states that computers test the routes, and pick the least congested route to use. Thus, it slows everything down for everyone.
What should it do? Pick the MOST congested route?
Either I'm just confused, the author didn't understand the situation correctly, or the whole thing is BS.
Re:I'm confused too! (Score:2, Insightful)
What should it do? Pick the MOST congested route? Either I'm just confused, the author didn't understand the situation correctly, or the whole thing is BS.
Thank you. I was sitting there reading it, thinking "this sounds like a load of shit. Either I am a blithering idiot (entirely possible) or this article is worthless."
It sounds like a purely academic exercise that is being underwritten by someone with too much money, and that has NO practical application.
Glad to know I'm not alone in the confusion.
Re:I'm confused too! (Score:5, Informative)
At the same time, I don't see how their suggestion really helps things that much. If everyone uses the same deterministic algorithm to choose a path, this sort of mass collision is still likely to happen (although it should happen less often with more complicated algorithms). I think that overall network performance would benefit from a little randomness in the routing algorithms. I'm not a CS, so there is probably already a random component that I don't know about.
Re:I'm confused too! (Score:2)
1. Choose the nth least congested path, where n is the statistical "sweet spot".
2. Add randomness, so that your actual choices oscillate around n.
3. Include logic to keep your random deviations from getting too far from n, where "too far" is "unacceptably sucky". (A sketch follows below.)
4. For great justice!!!
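A minimal Python sketch of steps 1-3, assuming per-path congestion figures are already known; the sweet spot n and the allowed deviation are just tunable guesses here.

import random

def pick_path(congestion, n=1, max_dev=1):
    """Rank paths by congestion, then pick around the nth-least-congested
    one with a small random deviation clamped to +/- max_dev."""
    ranked = sorted(range(len(congestion)), key=lambda i: congestion[i])
    dev = random.randint(-max_dev, max_dev)        # step 2: randomness
    idx = max(0, min(len(ranked) - 1, n + dev))    # step 3: clamp to the band
    return ranked[idx]

congestion = [0.9, 0.4, 0.6, 0.2]   # illustrative per-path load figures
choices = [pick_path(congestion) for _ in range(10)]
print(choices)  # oscillates around the 2nd-least-congested path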
Re:I'm confused too! (Score:4, Interesting)
I've worked with our provider a bit on routing. We have mirrored servers in colos around the country. If one city is congested, we move traffic *AWAY* from the congestion. Usually our traffic makes a difference for everyone else. I can have 500Mb/s added or removed from any given city within an hour, without flinching. Of course, before I do something like that, I put in a call first: "Hey, can this city take 500Mb/s right now?"
We wrote a program to take traceroutes from all the cities to various points, and plot them all onto a big network map, with ping times and the like.. We know which cities, peerings, or lines have problems at a glance..
http://www.voyeurweb.com/network.12.23.2002-11h.p
Warning: This picture is *BIG*. It's of our networks in Los Angeles, New York, Tampa, between each other, and to all of the root nameservers.. It makes a rather extensive map that is 11580x2669. It won't fit on your screen. Save it, and take it into your favorite image editing software to view it..
This map is a little old (Dec 23, 2002 at 11am), but it gives a good impression of what the networks immediately around our servers looked like, and how they interact with each other. Shitty networks stand out in red. I definitely wouldn't want to send MORE of my traffic that way. Sometimes we don't have a choice. If your ISP uses a shitty provider, we have to send it that way.
Re:I'm confused (Score:2)
As you doubtless know, you can't simply say "This link will carry this much traffic and it is currently carrying this much, so I should start sending traffic down this other link." While that can be a useful rule to apply to one's fair queueing system, it is hardly complete. One usually tests for link saturation not only by measuring packet flow but also by observing responses to pings and similar traffic (ping by itself is no metric at all, because various routers along the way may be heavily loaded and prioritizing other traffic higher). That gives you a view of the network which is fairly useful for sending traffic where it needs to go.
However, no matter what you do to a link, unless it has multiple channels you can only send one piece of information down the pipe at a time. The higher-speed the link, the less this matters, of course, but it is still an issue; when a link is heavily loaded, low-bandwidth traffic might not suffer a decrease in throughput, but latency often increases.
One relatively simple technique to increase the quality of the internet experience by making it more responsive would be to send traffic which does not require a low-latency link (most especially file transfers, I'm mostly talking P2P here which is where probably a majority of bandwidth is going now, though I've done no research) over a fairly highly loaded link which nonetheless has enough free bandwidth to carry the traffic. Packets might sit in queue longer, but what's four or five seconds lag when you're talking about a file transfer that takes at least minutes and sometimes days?
Meanwhile, traditionally interactive traffic like a shell (via ssh or telnet or what have you) or a game might not take up that much bandwidth, but it's more important that it be transferred over low-latency links. 50ms makes the difference between natural reactions and missing everyone in many first-person shooters, for example.
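A sketch of that split in Python. The port numbers and link figures are invented, and real traffic classing would live in the router's queueing discipline rather than application code; this just shows the decision:

# Illustrative well-known ports; a real classifier would look at much more.
LOW_LATENCY_PORTS = {22, 23, 27015}   # ssh, telnet, a game server
links = {
    "fast":   {"latency_ms": 10,  "free_bw_mbps": 2},   # lightly loaded
    "loaded": {"latency_ms": 300, "free_bw_mbps": 40},  # queued but fat
}

def choose_link(dst_port):
    """Interactive traffic takes the low-latency link; bulk transfers
    can tolerate queueing delay and go where spare bandwidth is."""
    if dst_port in LOW_LATENCY_PORTS:
        return min(links, key=lambda l: links[l]["latency_ms"])
    return max(links, key=lambda l: links[l]["free_bw_mbps"])

print(choose_link(22))    # -> fast    (shell session)
print(choose_link(6881))  # -> loaded  (a P2P transfer)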
Re:I'm confused (Score:5, Informative)
Frankly, I'm surprised this is considered news; I learned it in a networking course on my way to a CS degree. I can only assume that the author is trying to push a new algorithm for congestion control and is using "selfish routing" as a marketing scheme. The thing is, I can't seem to find the suggested reprieve.
Ahh, here it is:
Re:I'm confused (Score:5, Informative)
Yep, if you have three available routes A, B, C with bandwidths 10, 4 and 1, the selfish router would send all traffic through route A in every case. An altruistic router would make a random choice between A, B, C such that A was chosen 2/3rds of the time and B, C were chosen in proportion 4:1 the rest of the time.
You can then tweak further by using traffic information. If the system is unloaded then use A all the time.
The same observation applies to the problem where traffic alternates between two routes rather than dividing itself evenly. That is elementary control theory. The problem is that the response has too high a gain factor; in effect the gain factor is infinite, so instead of being shared across the routes the system goes into oscillation.
There is an obvious solution to that problem: you measure the change in the traffic statistics and moderate your response to changes.
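Here's a rough Python sketch of both ideas, assuming the static bandwidths 10, 4, 1 from above: pick routes at random in proportion to capacity, and damp the feedback by smoothing load measurements instead of reacting instantly (the smoothing constant is made up).

import random

bandwidths = {"A": 10, "B": 4, "C": 1}

def altruistic_choice():
    """Weighted random pick: A about 2/3 of the time, B and C 4:1 after that."""
    routes, weights = zip(*bandwidths.items())
    return random.choices(routes, weights=weights, k=1)[0]

def smoothed(old_estimate, new_sample, alpha=0.2):
    """Exponentially weighted average: a finite gain factor, so a burst of
    traffic nudges the estimate instead of slamming it (alpha is arbitrary)."""
    return (1 - alpha) * old_estimate + alpha * new_sample

picks = [altruistic_choice() for _ in range(1500)]
print({r: picks.count(r) for r in bandwidths})  # roughly {'A': 1000, 'B': 400, 'C': 100}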
This is the sort of thing the IETF should be doing. Unfortunately the IETF has been out to lunch for many years now. They have failed to respond with any urgency to most of the issues facing the net. Most of the participants seem to use it as a substitute social life rather than as a place to get things done.
Re:I'm confused (Score:2, Insightful)
The control theory you refer to is for linear systems with feedback. Routing is a highly nonlinear system and the analysis is much harder. However the basic concept of high gain leading to oscillation is roughly the same. Multicommodity flow theory researchers have been working on flow allocation and stability for years. Recently this work has caught the attention of the MPLS crowd in the IETF.
You are right about IETF inertia though. I have given up on any bold progressive thinking in IETF for now with their attitudes such as "If it basically works, why fix it?"
Re:I'm confused (Score:2)
Actually it is even easier to send the system into oscillation if you have a non-linear system. But explaining the ins and outs in a slashdot post...
The frustrating thing is that an organization led by academics has so little academic input. The only academic habit they observe is lethargy.
You are right about IETF inertia though. I have given up on any bold progressive thinking in IETF for now with their attitudes such as "If it basically works, why fix it?"
The IETF attitude is to resist ideas as long as they can, then when someone loses patience and goes ahead without them complain about commercial interests having no respect for the standards process. In fact there is plenty of respect for standards processes, but not much for the specific IETF process.
You can tell how backward the institution is simply by looking at an RFC, they look like a Nigerian letter asking for assistance with a money transfer.
The whole NOMCOM system is a sick joke. The obvious purpose of the mechanism is to make sure that the IAB and IESG are accountable to no-one. A cabal of 15 people meeting in secret with no basic accountability is much less likely to upset the status quo with a dramatic move than a democratic system of elections. Democratic elections would mean a real risk of a change of power. The NOMCOM system means that bad ADs and ADs who have blatantly abused their power can continue to be reappointed, there is no way for the membership as a whole to reject them.
If the IETF had balls they would have pushed through a program for completion of the IPSEC, DNSSEC and IPv6 protocols five years ago and then moved ahead with a strategy for deployment. Today they would be aggressively considering how to address the problem of spam. As it is, DNSSEC is undeployable in the large zones, and the IESG has been content to let the WG chair filibuster fixes. IPSEC is a mess; the ISAKMP/IKE scheme is a dog's breakfast, a scheme to negotiate the scheme for negotiation. The only thing that has happened to IPv6 is that we are closer to running out of address space, and everyone is moving to NAT regardless of the IETF's opinion of them.
At the same time groups like OASIS have been completing standards in 18 months...
The key is in the degradation curve (Score:2)
To use a droll real-world example, consider the following:
The article is suggesting that if some people used alternate routes, then the primary routes wouldn't get overloaded and therefore would get more done.
Re:I'm confused (Score:2)
Thanks Ron Howard (Score:5, Funny)
If nobody goes for the blond, we all get laid. Somebody go tell the routers.
Re:Thanks Ron Howard (Score:4, Interesting)
Re:Thanks Ron Howard (Score:2)
It's been over a year and it was a long, long book (but well worth the read), but I don't recall the "blonde strategy" ever being in those pages. The blunt "let's just go have sex" scene was in there however.
I'd attribute it to dressing up the screenplay.
Re:Thanks Ron Howard (Score:2)
Funny you should be posting to an article on selfish routing....
Re:Thanks Ron Howard (Score:4, Informative)
As has been pointed out [variagate.com], the movie got the Nash equilibrium principle entirely wrong. Since a cheater can benefit by going for the blonde at the last minute, after the other guys have already committed themselves, it's not an equilibrium.
Another article (Score:4, Informative)
Could the bloody writer be specific (Score:5, Interesting)
Maybe I am just a lowly CCNP, but is this all just a theory paper about the problems with "routing", or were there specifics about current routing protocols that should be updated or current practices that should be changed? Please help; everyone knows that current routing could be better, but theory stuff just does not help us much.
Re:Could the bloody writer be specific (Score:5, Funny)
No, it's no longer "CCNP"; the Soviet Socialists are now calling themselves the nationalists, the Union is gone, and the country's just named Russia.
But thanks, "Comrade". We'll open a dossier on you anyway.
Re:Could the bloody writer be specific (Score:2)
In Soviet Russia.... ahh, forget it, it's too easy.
Re:Could the bloody writer be specific (Score:3, Informative)
Which, although I have not even started to read it yet, appears to have more than enough detail to satisfy almost anyone. Have fun; I know I will.
Re:Could the bloody writer be specific (Score:4, Insightful)
The internet isn't, wasn't, and never has been intended to be a high-performance network. It IS and was intended to be a high-availability network (read
One of the ways the 'net accomplishes this is by detecting damage and routing around it by trying to always use the "lowest cost" route from point A to point B. A significant factor in "lowest cost" is least time.
By always seeking to use the fastest (or most efficient by some other measure than time) route from point A to point B, performance levels on the 'net get leveled out and really fat pipes draw lots of traffic, while "pin-holes" don't.
For the life of me I can't understand just what the hell the author's complaint is.
Just my US$0.02
Please send this article to (Score:5, Funny)
'Cool! One meg left! .......huh? WTF?!!? Disconnected?! You dirty SOB!..FUUuuuuuuuccCCCKKK!'
Re:Please send this article to (Score:3, Funny)
Don't worry, I read it. But I'm still not changing.
Although I had a sad revelation last night, after saying to a friend "Yeah, hopefully when I get back from work tomorrow night those music videos will be finished." I then realized the interest of my Friday night is determined by whether or not my Utada Hikaru MTV Unplugged (JP) videos will be completed.
I then realized I must get out more. Good thing my girlfriend gets back on Thursday...
Is altruism still possible on the Internet, tho? (Score:5, Interesting)
Objectivists Unite (Score:4, Funny)
It seems the researchers at Pinko U finally realize that routers have always been programmed using the enlightened-self-interest model of bandwidth utilization. It's time to shut them down.
The last thing we need is lazy, welfare-dependent internet backbones sitting around all day watching The Dukes of Hazzard and drinking Lite Beer. If the altruists win this round, AOL transforms from the gated suburb of the internet into the "Projects". Aren't we taxed enough?
For those too lazy to read the article ... (Score:5, Interesting)
In theory, this may slow down the internet by something like 50-60% at most. Nobody really knows how well the Internet conforms to the mathematical model, however. Any benefit from trying to fix the problem might be outweighed by the cost of implementing a solution.
Re:For those too lazy to read the article ... (Score:5, Interesting)
There's an amusing, and somewhat interesting, article written up on how you can single-handedly relieve traffic congestion here:
http://www.amasci.com/amateur/traffic/traffic1.
It's basically the same idea: If a few people just give a little slack, everybody wins out.
=Smidge=
Re:For those too lazy to read the article ... (Score:3, Funny)
Scary how much thought this guy has put into it... but I bet the transportation department would fund him to do a study.
Re:For those too lazy to read the article ... (Score:2)
Re:For those too lazy to read the article ... (Score:2)
DL managers (Score:5, Funny)
My flatmate does that with eDonkey on TWO of his computers and squashed our bandwidth for a week (downloading pr0n of course)
Re:DL managers (Score:3, Interesting)
The problem the paper is describing is at the larger "router's eye view" scale, where multiple routes out to the rest of the network exist, and where only the fastest route is used - the other two pipelines are basically starved of packets.
Re:DL managers (Score:3, Insightful)
LAN -> switch -> DSL modem -> ISP -> world
We have sDSL with all routable IPs, at about 80-100K in any direction.
I have no way to throttle anything when he is running eDonkey, downloading 5-10 movies at once with over a dozen connections for each. I don't believe eDonkey allows any kind of throttling, unlike Kazaa.
I lost entire messages over AIM while he was doing that shit.
My http server is set up to allow only 5 connections max now, since someone a few months ago started leeching movies from me with FlashGet, killing my own overall speed.
Sure, it's not related to the article, but when I saw 'selfish' and 'routing' I had to rant a bit.
Hasn't something similar happened in the past? (Score:5, Interesting)
"if routers choose the route that looks the least congested, they are doing selfish routing. As soon as that route clogs up, the routers change their strategies and choose other, previously neglected routes. Eventually the system will settle to an equilibrium that mathematicians call a Nash flow, which will be, on the average, slower than the ideal. "
Now, hasn't there been a problem a long time ago in early Internet history where parts of the internet entered a state of self-oscillation? I recall this was later fixed, to a point, by revising some protocols.
I remember it basically as the problem where lots of routers (for some reason) started sending packets down one path, it got very congested, all routers switched to another, that got congested, and so on.
I only have very vague memories, since I took the course where I heard about it some years ago. Perhaps I'm only full of bullshit.
Re:Hasn't something similar happened in the past? (Score:2)
Re:Hasn't something similar happened in the past? (Score:3, Interesting)
Re:Hasn't something similar happened in the past? (Score:2)
No... RIP is the only protocol that depends solely on hop count. BGP has numerous variables with which it judges the best path; congestion and bandwidth are two of them.
It really is up to the Admin to decide which path is the most desirable by defining things such as cost, and possibly modifying the routing policy in other ways as well.
Re:Hasn't something similar happened in the past? (Score:2)
OK, with all that said: traffic engineering with BGP can be fun, and it's always a question of how much memory the routers have. BGP tables are BIG and router memory is way too small; think 32 megs or so for the current 110k-line BGP tables. (And Telstra is responsible for most of those. The whole internet would thank them if they thought of aggregating their routes and buying fargin bigger pipes instead of making the rest of the world backhaul for them.)
Altruism is actually selfish (Score:5, Funny)
Where's my Dawkins? (That's twice today I've thought of him).
GF.
Re:Altruism is actually selfish (Score:2)
This is a very interesting statement.
I regularly tell people that they should be polite, if not for altruistic reasons then for selfish ones. People have accused me of being extraordinarily polite: "Yes sir, No sir, Yes Ma'am, No Ma'am," etc. I regularly get treated better by the checker who is half my age, or the fast food cashier, or any number of other individuals; far better than the person in front of me or behind me in line.
So, am I polite for altruistic reasons, or for selfish reasons? It's an interesting question. I would have to say for both reasons.
Politeness (Score:2)
A question... (Score:2)
Does it matter if someone consistently does something altruistic for selfish or selfless reasons? The outcome is the same.
If you really want to get into the motives, then why not just say that acting in the best interest of the commons is itself "selfish" because it is a safer strategy for a better outcome. Sure, you can play the short-term selfish game and come out ahead (maybe), but you will be surrounded by people who resent your success. The lesson in competition is not "improve performance" but rather "sabotage your competition".
Altruism is the reverse: by supporting your "competition" (called "complementors" in the newspeak) you may risk losing an advantage (as in the prisoner's dilemma) in the short term, but employing simple strategies like "tit-for-tat" in an environment that is biased towards altruism will eventually lead to maximal outcomes for the population taken as a whole (the "rising tide floats all boats" analogy, here properly applied, for once). It really is inevitable, because the feedback of the "game" allows the participants to learn what will work best for them.
In a human social context, you would hope that that learning would move from the purely intellectual to the personal. If you can learn to do good for selfish reasons, it might occur to you that doing good has value in and of itself.
You selfish bastards! (Score:5, Funny)
Somewhat interesting (Score:5, Informative)
Supposedly, if the router picked the fastest AND least congested route, then some packets might take a little longer to get to their destination, but the overall latency of the internet would decrease.
In theory. In reality, I don't know how much peering arrangements change the equation. You see, if you are a network provider, you have two goals with peering: dump enough traffic onto your peer points so that you are exchanging about equal amounts with your peer AND get traffic that isn't bound for your network OFF your network as quickly as possible.
In practice, this means ISPs who peer have a large incentive to route packets coming from peer partner A directly to peer partner B, without regard for what that does to the latency of the packet, nor the congestion of the peering partners. The peered packets become more like a hot potato, bouncing around peer points until they actually arrive near the destination network. That lowers overall efficiency as well. (Companies like Internap don't peer for this reason; they pay for all connection points even though they have enough traffic to get peering points for free. They cost more, but they have very low latency, packet loss, etc.)
Re:Somewhat interesting (Score:2)
Re:Somewhat interesting (Score:3, Informative)
Hi,
This depends entirely on your policy decisions. For example, the traffic engineering that I do at my place of work is based around a cold-potato routing policy rather than hot; that is, we will carry our traffic to the point closest to its destination, thus keeping it in our network for as long as possible rather than vice versa.
There are arguments on both sides of each issue, and it really depends on one's own topology and decision-making criteria.
~cHris
as long as (Score:3, Insightful)
Re:as long as (Score:2)
Well, you're looking at this from the perspective of a "home user", where you're a leaf on the Internet. That's okay and all, but there are more than a few Slashdotters here that actually have to think about this kind of stuff.
With a home connection you're a "leaf" -- you're on the outside and you only have one route to the internet -- your ISP. That is your "gateway" and there's nothing you can do about it.
The article's really talking about major ISPs, and how they peer with other networks. They have multiple routes to a single destination, and these routing tables can get pretty big when you're a big player on a big hub. I'm by no means an expert on the details of it, but I do appreciate the thought that goes into such decisions.
When you want to send a packet to NYC from wherever you are, your network only has one choice: send it to the ISP and let them figure out what to do with it.
Your ISP may or may not have a choice of what to do with it, though. If they're a multi-homed place (meaning they have 2 or more connections) they have to decide which route to pass it off to. One of them might only require 1 hop to get to NYC, and the other might require 3. The simple answer is to just send it to the 1-hop route, but in the grand scheme of things it's sometimes better to send it to the 3-hop route because it's less congested.
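A toy Python sketch of that decision with made-up numbers: once queueing delay is counted, fewer hops isn't automatically faster.

routes = [
    # illustrative figures: per-hop latency plus congestion queueing delay
    {"name": "1-hop", "hops": 1, "per_hop_ms": 15, "queue_ms": 80},
    {"name": "3-hop", "hops": 3, "per_hop_ms": 15, "queue_ms": 5},
]

def effective_latency(r):
    return r["hops"] * r["per_hop_ms"] + r["queue_ms"]

best = min(routes, key=effective_latency)
print(best["name"], effective_latency(best), "ms")  # 3-hop wins: 50 < 95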
That's just the tip of the iceberg really, but that might enlighten the issue a bit. It's certainly NOT a concern for a home user, so I can understand your post -- but to anybody that runs an ISP it's a fairly interesting topic.
Tragedy of the Commons (Score:4, Interesting)
This is essentially a pricing problem.
Here's a quote from the original 1968 paper that used the term
There are two common solutions to this kind of problem. Regulate use of the common resource or sell it. Because of the structure of the internet, it is hard to fairly price bandwidth and no good regulatory scheme has developed, so I don't see any other answer than living with it.
Re:Tragedy of the Commons (Score:2)
Herein lies the inherent limit of the tragedy of the commons: The ability to load the common resource correlates with the trust the users feel in the common's stability. As long as I can drive another cow onto the pasture without fear that the neighbor will shoot the cow or me, I will. But once I feel I can't trust the shared resource--once the social stability is gone--I'll stop using the common and find some other way, because the cost (to me) of the common is no longer a fraction of the value I derive from it, but possibly total loss.
Alternately, if social stability deteriorates radically, it may catch the users of the common off-guard before they can do their subconscious value calculations. If I drive one more cow onto the commons and it starts a stampede, killing many of the cattle (mine and my neighbors'), killing some of those neighbors, and starting violent feuds and vendettas among us who used the common, the cost of the common maximizes.
Either way, the tragedy of the commons is eventually fulfilled, and everyone stops using the shared resource. Self-limiting, at least until the next time.
What does this mean for the net? It will get progressively worse until "death of the net (mpegs at 11!)", after which we survivors will crawl out of our IPv6 bomb shelters and rule the Earth.
Re:Tragedy of the Commons (Score:2)
No. It's not inevitable. Shared resources do not inevitably die. Besides, on the internet, ISPs tend to act as policemen; any user trying to abuse the net also abuses the ISP and gives them a bad name, and if it gets bad enough, the ISP itself loses its feeds.
Re:Tragedy of the Commons (Score:2)
Without intervention--the goodwill and self-restraint of the participants, or externally-imposed restraints, or a change of resource costing--it is inevitable. However, you're quite right in that there are restraining factors. The net, as a commons, has fences and gates which are guarded by the ISPs. They can impose pricing changes and enforce use restrictions (d/l caps, for instance) that can extend the life of the shared resource.
So your point, that the "social stability" of the net is guarded by some of its users (ISPs), is well made. In engineering speak, the net degrades more gracefully than a complete collapse (which is why I was attempting to mock the whole "end of the net" line of thought, even as I used it as an extreme example).
Re:Tragedy of the Commons (Score:2)
Yes. In practice there nearly always are these restraints. The actual tragedy of the commons is something of a myth, I think, historically; I'm not aware of any definitive example.
Re:Tragedy of the Commons (Score:2)
Re:Tragedy of the Commons (Score:2)
Perhaps if you had read the article you would know the phrase comes from William Forster Lloyd (1794-1852).
The tragedy of the Commons was used as a political weapon in the class warfare of the Victorian era. Those with Scottish ancestry might know this as 'the clearances', in England they were the enclosures.
Basically the aristocracy transferred the common land from public ownership to private ownership. Since they wrote the acts of parliament they gave themselves the best deal. The result being a transfer of wealth from the poor to the rich.
The deck was stacked so that the aristocracy quickly got control of the small proportion of the land that went to the peasants. It was similar to the land grab that made Bush rich: buy a sports team, then start building a bigger stadium, using the pliant local council to confiscate large amounts of land at below-market rates, which were then used for development and sold for a vast profit.
So the tragedy of the commons is not a politically neutral term. Also, the real tragedy for the peasants came when the aristocracy used it as an excuse for exploitation. It's a bit like the plans for privatising social security: there is a problem there, but it is being used as an excuse for a political agenda, not as something that is to be addressed for its own sake.
When tragedy of the commons is used in relation to the Internet it is usually to justify some form of corporate or governmental control.
what routing protocol are they referring to? (Score:3)
It isn't RIP, OSPF, EIGRP, or BGP. I don't know IS-IS, but I strongly suspect these people are talking out of their asses.
Re:what routing protocol are they referring to? (Score:2)
At least they admit they have no idea whether their mathematical models mirror real life!
Re:what routing protocol are they referring to? (Score:2)
QoS? (Score:2)
Instead of a router choosing the fastest or least-congested route for a packet, it could also factor in things like what type of packet/service it is.
NNTP, e-mail, and other non-interactive, non-realtime packets could be shunted down secondary pipes -- you'd never notice most of it anyway.
QoS on IPv4 doesn't really have the granularity for this, and it seems most routers on the 'net ignore those bits anyway.
I believe this was one of the things that IPv6 was supposed to address.
Similar to Automobile Traffic (Score:5, Interesting)
This is a very odd article. (Score:5, Informative)
Any provider who is doing anything slightly serious will be using BGP4 routing for their EGP. It does NOT send out magic packets to find best paths. It learns routes from its peers and will choose the best route based on a defined set of decisions. Routers do not keep a list of "neglected routes." If one route goes away, the router will simply pick the next best path.
Read more about BGP4 from Cisco's website [cisco.com]. You will find little in common with this article and the one linked in the story.
Good routing relies on good admins with a well defined routing policy. There is no such thing as a "selfish" router.
Tim
Re:This is a very odd article. (Score:2)
pick the next best path."
That's the point! The article says that according to mathematical theory, this approach is not ideal.
Basically, by sending some packets along alternate routes that are actually slower, an individual packet may arrive later, but statistically the packets as a whole will arrive faster.
I would give you all five of my mod points (Score:4, Informative)
I think whoever wrote this article is far removed from the real world. They are finding theoretical problems with the routing protocols we would like to be running. As you pointed out, pretty much the entire backbone is using BGP4 to make routing decisions. And BGP4 doesn't really have any measure of how congested links are, nor how high the latency is. The basic measure of BGP4 is how many different providers (called ASes, or Autonomous Systems) a packet might have to traverse.
Hmmm, the router says, is the best route thru C&W->AT&T->Bob's_ISP or just Level3->Bob's_ISP? I'll pick the two hop route. Sure, we all do some manual tuning, where the engineer says "I know the L3->Bob link is slow, so I'll make it look like L3->L3->Bob", but BGP4 is fundamentally a really stupid protocol. In theory. In practice, it works fine almost all of the time.
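A Python sketch of just that comparison (BGP's real decision process has many earlier steps, like local preference and MED, none of which appear here; the names come from the example above):

# Candidate AS paths to the same prefix; the shorter AS path wins this step.
candidates = [
    ["C&W", "AT&T", "Bobs_ISP"],   # three AS hops
    ["Level3", "Bobs_ISP"],        # two AS hops
]

best = min(candidates, key=len)
print(" -> ".join(best))  # Level3 -> Bobs_ISP; congestion never consulted

# The manual tuning mentioned above is AS-path prepending: making a
# route look longer so this comparison avoids it.
padded = ["Level3", "Level3", "Bobs_ISP"]  # illustrative prepend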
The most telling quote from the article is this:
They also found that doubling the capacity of the system would provide the same benefits as a managed system.
No shit, Sherlock. I could've told you that five years ago. Why do you think QoS is still facing an uphill struggle? It's far cheaper and easier to just keep cranking up the bandwidth than to replace BGP4 with something smarter, or to deploy QoS protocols Internet-wide.
Don't get me wrong, I think they are doing great research. It's good to try and figure out what might go wrong with next-gen protocols before they get deployed. But I don't think they are talking about problems on today's Internet.
tragedy of the commons (Score:4, Interesting)
The Tragedy of the Commons, often cited by environmentalists, describes 14th-century Britain, where each household tried to gain wealth by putting as many animals as possible on the common village pasture. Overgrazing ruined the pasture, and village after village collapsed.
The "tragedy of the commons" that Hardin's article is devoted to is increasing world population. What evidence is there for overgrazing in England before, as opposed to during and after, the forced transition to private ownership? Most cultures with a common land tradition also have a set of rules for governing land use that avoids such tragedies; for example, the irrigation systems in Bali, where the farmer who gets the water last controls the water flow. Cultures that didn't solve the problem of overuse of resources are conspicuous by their non-existence (Easter Island, some settlements in the Southwest US, some populations on islands in the South Pacific).
The 'tragedy of the commons' is one of the most misunderstood and overused metaphors of our times. The idea that a system with resources held in common is necessarily unworkable is false --- what is needed is institutions that effectively manage common resources, and such institutions have emerged repeatedly and continue to exist. Often it is when these cultures come into contact with market-oriented societies that the traditional systems are undermined and collapse. Often what happens is not "the tragedy of the commons" but "the tragedy of failed privatization", in which a traditional management system is destroyed without establishing a viable alternative.
How does this relate to the internet? It's a cautionary tale --- be very very careful when introducing monetary incentives into a system that has previously relied on cooperation and cultural norms.
blog-O-rama [annmariabell.com]
reference on Balinese water temples (Score:2, Interesting)
Lansing, J. S. (1991) "Priests and Programmers: Technologies of Power in the Engineered Landscape of Bali", Princeton: Princeton University Press. Leviatan, U., H. Oliver, J. Quarter (1998)
blog-O-rama [annmariabell.com]
Re:tragedy of the commons (Score:2)
A commons is a resource that everyone is free to use as they wish. What history really shows is that commons are (ha ha) relatively uncommon. Societies did not leave resources as commons, because of the problems that result from doing so. Instead they came up with systems that limited what individuals could do with such resources. Sometimes that meant a system of private property, where individuals got to control some part of the resource; sometimes a system of collective property, where some collective body would control the resource. But in almost every case, resources were not left as commons.
Of course you are right that western observers often failed to recognise the systems of management set up in other societies, especially when these did not resemble the kinds of systems that had arisen in Europe.
Re:tragedy of the commons (Score:2)
When they arise spontaneously out of agreements between those who rely on the commons, we call them "free markets".
Then when those who rely on the commons take up arms to keep out people who want to use them differently, what do we call them?
definition of "commons" (Score:2)
It's a mistake to posit altruism and the market as the only alternative institutions --- the Balinese example does not rely on altruism, it's consistent with a game theoretic model with rational actors.
As to the benefits of introducing market-mechanisms into the internet, I would pose the following question: how many viruses, worms, etc. would we expect to see released in an environment where there was a potential monetary payoff for such actions?
Enron made a huge mess of the electricity markets in California, partly through fraud and deceit, but mainly because the people who designed the rules of the market didn't think the problem through. Let's not repeat that mistake with the Internet based on some theoretical ideas about the efficiency of markets.
blog-O-rama [annmariabell.com]
They botched others' ideas (Score:4, Insightful)
The problem is not that service providers pick the route that gets the packet to its destination quickest; it's that they pick the route that gets the packet off their network the fastest. Those two are not the same thing at all. Think about it geographically. Let's say I'm a square network and I receive a packet at the northern end of my western border destined for somewhere to my northeast. I know that the quickest way to get it to its destination is to move it east across my own network and deliver it to my eastern neighbor. However, I also know that if I pass it on to my northern neighbor it will still get there without coming to me again, and my northern neighbor is closer. So, if I'm a selfish bastard, what do I do? I ship it northward, minimizing the time that it spends on my own network but increasing the total time before it reaches its destination. If everyone does this same sort of "hot potato" routing, total load on the network increases for everyone. In fact, my northern neighbor might very well be doing the same for packets lying to our southwest. We'd both be better off if we'd "play nice" but since we're both trying to be selfish we both lose.
Yes, folks, it's an instance of the prisoners' [brynmawr.edu] dilemma [vub.ac.be] and these researchers are not the first [gildertech.com] to notice the fact [zdnet.com.au].
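For the game-theory-inclined, here's a Python sketch of that payoff structure with invented per-packet carriage costs: whatever the neighbor does, hot-potato is individually cheaper, so both networks end up paying more than if both had cooperated.

# Invented carriage costs as (my_cost, neighbor_cost) for each pair of choices.
# "hot" = dump the packet on the neighbor early; "cold" = carry it yourself.
COST = {
    ("cold", "cold"): (2, 2),  # both cooperate: cheap for everyone
    ("cold", "hot"):  (4, 1),  # sucker's payoff: I carry their load too
    ("hot",  "cold"): (1, 4),  # temptation: I offload onto them
    ("hot",  "hot"):  (3, 3),  # mutual defection: extra transit for all
}

for mine in ("cold", "hot"):
    for theirs in ("cold", "hot"):
        print(mine, theirs, COST[(mine, theirs)])
# Whatever the neighbor does, "hot" is individually cheaper (1<2, 3<4),
# so both go hot and pay 3 instead of the cooperative 2: the dilemma.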
Re:They botched others' ideas (Score:2)
Re:They botched others' ideas (Score:3)
I didn't say it was going northwest; it was going northeast, and the shortest route would have been "straight across my network". That's all beside the fact that real networks don't have such simple geometry, so line algorithms are utterly irrelevant.
If only. The whole point is that it's not handled that way. Did you look up from your graphics-algorithm textbook to read what I actually wrote?
Did anyone else misread that? (Score:2, Funny)
Second thought: OOPS! SELfish...
Third thought: ??????
Fourth thought: Profit!
Classic Prisoner's Dilemma problem (Score:5, Interesting)
Basically, if everyone acts unselfishly they do better. But from each individual's perspective they do better when they act selfishly, so it all falls apart. It's interesting stuff, and the prisoner's dilemma game algorithms are worth playing with.
Prisoner's Dilemma [drexel.edu]
Play the dilemma game online [plocp.com]
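A minimal iterated version in Python, using the textbook payoff numbers (conventional values, not from the article) and a tit-for-tat player:

# Conventional prisoner's dilemma payoffs as (my_score, their_score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=10):
    hist1, hist2 = [], []          # moves made by each player so far
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = p1(hist2), p2(hist1)   # each reacts to the other's past
        a, b = PAYOFF[(m1, m2)]
        score1, score2 = score1 + a, score2 + b
        hist1.append(m1)
        hist2.append(m2)
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): defection wins, but barely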
Build more lanes? (Score:2)
Messing with routing seems to be the same as the DOT messing with shuffling cars and metering lights. Instead of focusing on how we can change all these routing patterns, why don't we just "build more roads"? I realize it isn't exactly trivial to do that, and that the backbones might be pretty tough, but what about all that "dark fiber" that is supposedly just lying around? That's the equivalent of not using an open freeway in a major city during rush hour! We've already got the road, but we just don't use it.
Wouldn't doing that just open up more bandwidth for people, at least locally?
Re:Build more lanes? (Score:2)
It isn't a freeway between, say, New York and Washington D.C. that's dark and unused; it's the major freeway between Safeway and Bob's convenience store.
Does that make any sense? The dark fiber is dark because it's not needed, because of what it connects to.
Real selfish routing (Score:2)
G
Let's hear it for IPv6 (Score:3, Insightful)
Internet2 has an extremely fast backbone and is based on IPv6. This will help greatly, since the backbone of the current internet can be quite congested at times. Let's hope it's implemented soon, as the current problem will then likely go away.
hasn't this always been the case? (Score:2, Insightful)
all necessarily what is fastest, or what is closest..... it can be completely arbitrary.
Lowest latency, least used, least hops, least dollar cost, etc.
Some networks try to offload traffic to other networks as fast as possible. Others try to get data as close to the destination as possible before offloading it. In both cases, everything would work fine, if only everyone played by the same rules.
Use Poor Routing - Better Performance? (Score:5, Insightful)
We diagrammed a sample network here in the office, to try and explain what we just read to ourselves. We picked 5 cities (New York, Chicago, Los Angeles, Dallas, and Miami), and drew direct routes between Miami, LA, and NY to each other. Chicago gets routes to NY and LA. Dallas gets routes to everything but Chicago.
We then contemplated what a packet from LA to NY would be looking at.
On our mythical network, we have the following ping times.
LA -> NY 20ms
LA -> Chicago -> NY 25ms
LA -> Dallas -> NY 40ms
LA -> Miami -> NY 60ms
So, we shouldn't be selfish and take the LA->NY route? We should direct our traffic LA->Dallas->NY? If this route is already slow or congested, what good does that do? Now instead of using a perfectly good route, we're killing a congested one.
If LA->NY is the best/fastest at the time, use it. If/when that becomes more congested, it will no longer be the best choice, and the new best choice will be chosen.
Not everyone is going to be using YOUR best choice all the time. It's very doubtful that Miami will be routing through LA to go to NY. If they do, it's because Miami->NY is already overloaded. But as it usually works, for Miami->NY there is already a second-best choice (Miami->Dallas->NY).
No matter how we look at it, this doesn't make any sense. Here's a sample of the lines for our example.
LA->NY OC192
LA->Chicago OC48
Chicago->NY OC48
LA->Dallas OC48
Dallas->NY OC24
So, we'll leave the LA->NY route empty, and keep dumping our load onto the lesser routes?
I do like the idea, though: keep the best choice (LA->NY) open for myself. Everyone else chooses the second-best route. Go ahead and flood those OC48s; I'll use the OC192 that no one else uses.
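For what it's worth, here's a toy Python load model of the four LA->NY options above, using the post's ping times plus a made-up linear congestion penalty. It captures the article's point only insofar as the penalty term matters; with light load, the parent is right and LA->NY wins outright.

base_ms = {"direct": 20, "chicago": 25, "dallas": 40, "miami": 60}

def avg_latency(split, penalty=30):
    """Average latency when `split` fractions of traffic take each route.
    The penalty is a hypothetical linear congestion cost per unit of load."""
    return sum(f * (base_ms[r] + penalty * f) for r, f in split.items())

selfish = {"direct": 1.0, "chicago": 0.0, "dallas": 0.0, "miami": 0.0}
spread  = {"direct": 0.7, "chicago": 0.2, "dallas": 0.1, "miami": 0.0}

print(avg_latency(selfish))  # 50.0 ms: everyone piles onto LA->NY
print(avg_latency(spread))   # 39.2 ms: a little spillover beats it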
Re:Use Poor Routing - Better Performance? (Score:2)
Summary of main results. (Score:5, Interesting)
1. Their basic idea is to model decentralized routing as a Nash game and then worst-case compare the performance of this game with the best achievable by ANY algorithm, decentralized or not. This sort of comparison is common in the field of competitive analysis.
2. Assuming each hop's latency increases linearly with additional traffic on it, selfish routing causes the average packet latency to increase by no more than 4/3 of that caused by ideal optimal routing. This worst-case figure had earlier been called "the Price of Anarchy" by Papadimitriou, a famous researcher in algorithmic complexity whom every CS student loves to hate.
3. Similar Prices of Anarchy have been derived by them for when the hop latency increases nonlinearly with the additional traffic on it.
4. The worst case is always achievable with a simple network of 2 nodes connected by parallel links. This is exactly the example used in networking courses and textbooks to illustrate the oscillation problem caused by selfish routing. The paper says that using this simple network as an example is justified, since the worst case can always be analysed with it. (A quick demonstration follows below.)
5. Instead of optimizing routing to try to reach the minimum possible average latency, you can keep the routing selfish but double each link capacity and achieve the same result.
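Point 4's worst case is easy to reproduce in Python. In the classic two-link example, one link has fixed latency 1 and the other has latency equal to the fraction x of traffic on it; selfish traffic all takes the variable link, while the optimum splits it:

# Two parallel links carrying one unit of traffic total.
# Link 1: latency always 1. Link 2: latency x, where x is its traffic share.
def avg_latency(x):
    """Average latency when fraction x takes link 2 and 1-x takes link 1."""
    return (1 - x) * 1 + x * x

nash = avg_latency(1.0)  # selfish: link 2 always looks <= 1, so everyone takes it
opt = min(avg_latency(i / 1000) for i in range(1001))  # best split is x = 0.5
print(nash, opt, nash / opt)  # 1.0, 0.75, ~4/3: the "price of anarchy"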
Re:Summary of main results. (Score:2)
I doubt that a linear or quadratic latency vs. traffic volume relationship is anywhere near accurate. Queuing theory says it's more like Q/(1-Q), where Q = arrival rate / processing rate. See this web site [new-destiny.co.uk]. I'd be surprised if their linear or quadratic models give very useful results. If they're running a computer simulation anyway, why not get it right and use a reasonable queuing theory result? Hmm, I'm sure it's easier to model links with infinite capacity that just get slower and slower than it is to write a model that deals with dropping packets.
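For contrast, a quick Python look at how the M/M/1-style curve blows up where a linear model stays tame (this is the standard queueing-theory shape, nothing from the paper; the linear slope is arbitrary):

def mm1_delay(q):
    """Normalized M/M/1 queueing delay: q/(1-q), q = arrival/processing rate."""
    return q / (1 - q)

def linear_delay(q, slope=2.0):
    """A linear latency model with an arbitrary slope, for comparison."""
    return slope * q

for q in (0.5, 0.9, 0.99):
    print(f"q={q}: M/M/1={mm1_delay(q):6.1f}  linear={linear_delay(q):4.2f}")
# M/M/1 gives 1.0, 9.0, 99.0 -- the hockey stick a linear model misses.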
When Love Comes to Town (Score:2)
Jeepers, always such high quality thinking going on at Valentine's! And just about any other arbitrarily important date, I suppose. Here's an interesting article [guardian.co.uk] from the Guardian about the science that gets press on this day of love.
The existence of so much spam (Score:4, Insightful)
I have seen videotape of a psychology experiment where an individual feigned a serious medical problem and keeled over in the middle of the street. When the experimenter tried this on a busy urban thoroughfare, large passing crowds actually stepped over the guy. But in a small village, shopkeepers rushed out onto the street to try and help him.
There was a famous murder case in NYC where over 100 neighbours heard a woman begging for help as her life was snuffed out by a sadistic killer over a period of time. Nobody reported it or tried to intervene; they all assumed somebody else would do something about it. This resulted in the passage of a law, which as I recall was the subject of the final Seinfeld episode.
Altruism (Score:2, Interesting)
The way people and governments get out of a collective action problem (like an arms race, or like EMU monetary/fiscal policy, etc) is not through altruism, but through formal cooperation. In order to ensure that everyone cooperates, you need to (1) clearly define what constitutes cooperation, (2) make it transparent (obvious) who is cooperating and who is not, and (3) decide on mechanisms for enforcement.
Dijkstra's SPF, MPLS, Offline Weight Optimization (Score:2, Informative)
For distributed routing, every router makes its own decision. SPF is used; assume OSPF now. Routers basically set weights on their interfaces/ports. There are two types of weights: static and dynamic. For static weights there is nothing much a router can do except obey a (lazy) administrator's decision. Dynamic weight setting gives a router some freedom. It may set its interface weights depending upon the available bandwidth. It could even penalize congestion by choosing very high weights for loads of more than, say, 95% of the link capacity.
But there is a small problem, commonly known as "oscillation". Consider two links A and B connected to a router. The router finds out that A is congested, so it sets a high weight on interface A. This leads to a shift of traffic from link A to link B. At some point link B will become loaded. Now the router sets interface B's weight high.
Question: where will the traffic of link B go now? Right. To link A!! This is oscillation.
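A toy Python illustration of one damping fix, with invented thresholds: raise a link's weight only when it's badly congested, lower it only when it's clearly idle, and the dead band between the two thresholds keeps the traffic from sloshing back and forth.

def update_weight(weight, utilization, high=0.95, low=0.50):
    """Hysteresis: penalize only above `high`, forgive only below `low`.
    Thresholds are illustrative. A single threshold (high == low) is the
    infinite-gain case: traffic sloshes between links forever."""
    if utilization > high:
        return weight * 2           # push traffic away
    if utilization < low:
        return max(1, weight // 2)  # invite traffic back
    return weight                   # dead band: do nothing, stay stable

w = 10
for util in (0.97, 0.70, 0.70, 0.40):
    w = update_weight(w, util)
    print(f"utilization {util:.2f} -> weight {w}")
# weights: 20, 20, 20, 10 -- the dead band absorbs mid-range load swings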
MPLS/IP:
In MPLS/IP networks it is possible to do load balancing based on the utilization of the links. The traffic, being virtual-circuit, would use the same path for the duration of its existence as an LSP. No unnecessary oscillations here.
Offline Weight Optimization:
Bandwidth is the resource. Customers produce demand. The objective function, for example, could be to minimize the maximum link utilization. There are some constraints; for example, total demand will not exceed the link capacity, etc. How this global (entire-network) optimization problem is solved is not a big deal; the big deal, however, is the result. The solution provides a set of weights which, when set on the interfaces, leads to a load-balanced and better-utilized network.
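A brute-force Python sketch of the offline idea on a toy triangle network (topology, demand, and capacities all invented): try every small integer weight setting, route the demand on its cheapest weighted path, and keep the weights that minimize the maximum link utilization. Note how the winning weights make the two paths tie, so the load splits.

import itertools

# Toy network: a demand of 8 units from A to C, with two possible paths.
capacity = {"AC": 10, "AB": 10, "BC": 10}
DEMAND = 8

def max_utilization(w_ac, w_ab, w_bc):
    """Route the demand on the cheaper weighted path (splitting on ties,
    like ECMP), then report the worst link utilization."""
    direct, around = w_ac, w_ab + w_bc
    load = {"AC": 0.0, "AB": 0.0, "BC": 0.0}
    if direct < around:
        load["AC"] = DEMAND
    elif around < direct:
        load["AB"] = load["BC"] = DEMAND
    else:  # equal cost: split the demand across both paths
        load["AC"] = DEMAND / 2
        load["AB"] = load["BC"] = DEMAND / 2
    return max(load[l] / capacity[l] for l in load)

best = min(itertools.product(range(1, 4), repeat=3),
           key=lambda w: max_utilization(*w))
print(best, max_utilization(*best))  # a tie setting wins: splitting halves the peak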
Point: Humans may be greedy, but mathematics is generous!
Re:Light Precipitation (Score:3, Interesting)
-former employee
pm
Re:Game Theory (Score:2, Funny)