The Internet

Smart Routers

Lukenary writes: "For years, Cisco and Juniper have been stuck in the 'smart fringes, dumb core' view of routers and the Internet. If Larry Roberts and his new company, Caspian Networks, have their way, all those promises you've heard about the Web being the new entertainment medium may play out. 'Smart' routers will be able to pick out different types of packets (text, voice, media, etc.) and intelligently sequence them to their destinations more efficiently. Broadband that can really stream high-quality multimedia. Worldwide, high-quality IP-based long-distance telephone. Even faster dialup connections." While the Wired reporter doesn't question the greatness of these new routers, what they really mean is that the backbone companies gain greater control over what traffic they will and won't permit, what they'll speed up and slow down, and so on. This is likely to increase their profits at the expense of the health and dynamism of the overall network. ("You're a residential customer: you can't serve data, only consume it!") These are the issues we've looked at before here and here.
  • ...Of IPv6, I have little worry about the immediate impact of this.

    Tunneled traffic will also have to be worked out as a problem, since the flags have to be available in the visible outer header. This is a headache waiting for someone to iron out.

    Also, what about networks other than your ISP? The peering agreements between providers are pretty fast and loose as it is... Have you looked into the problems in enforcing QOS bits outside of even your own network?

    Inertia is not an impervious defense, but it looks like a comforting one for this issue into the near-to-mid future.

    Jeremiah Cornelius

  • 1) While this could theoretically be used for what you might call "censorship", the intention is actually a good one. It is quite sensible to give higher priority to realtime data such as audio and video than to, say, SMTP traffic, and all users would benefit from this.
    2) Unlike IPv6, this doesn't require widespread deployment before it can be used (the chicken-and-egg problem that is delaying IPv6 deployment). Even if you are the only ISP on the planet using it, there will still be some benefit to your users.


  • DoH! Should have been 'or', not 'of' higher priority web browsing.
  • The idea is that priority is decided based on the type of data inside the packet 'stream', instead of simply taking the labels on the packets at face value.
  • What provider was this, if you don't mind me asking? I know that it's stated in many usage agreements, however, it's a matter of public servers vs private ones.
  • Democracy isn't the solution. I'd say that, just like the United States, a republic is more the idea. For the same reasons that every single little law can't have a general vote in the US, the Internet can't have every little priority of every little packet decided by everyone. I certainly don't want MediaOne, my service provider, deciding my bandwidth allotment based on every little packet I send.
  • I'm confused by at least one statement after reading this post. How is this a 'new standard'? There's no standard set forward with this. It's a product that provides these capabilities. In no way do they ever state they want this to be a standard. Then they'd lose their competitive advantage.
  • No argument there. I was simply stating how this was different than a QOS flag within the individual IP packets.

    One could also argue, however, that flagging streams on even a large scale would require only a fractional increase in price compared to the total cost of these kinds of switches.
  • Well, it could always just look for known protocols, such as those used for video vs email, etc, tag that packet, and treat all other packets to that destination port/host with the priority it has 'decided' they should have.

    It's really a vague mapping of what the routers think the data really is. Not the best idea, but if they could flag at least the more common video/audio streaming 'paths', it could help deliver these with lower latency. Personally, I dunno if I'm comfortable with it. If you look at it on a really theoretical level, providers could start to analyze what kind of traffic I'm 'consuming', and hence target my email with yet more spam based on what I've been doing. Blech.. More data to profile on..
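    A port-based heuristic like the one described might look like the sketch below. The port-to-class mapping and the priority levels are illustrative assumptions, not anything Caspian has published:

```python
# Rough sketch of port-based traffic classification, as described above.
# The mapping and the numeric priorities are illustrative assumptions.
KNOWN_PORTS = {
    80: ("web", 1),      # HTTP
    25: ("email", 0),    # SMTP: bulk traffic, lowest priority
    554: ("video", 2),   # RTSP streaming
    5060: ("voice", 3),  # SIP signalling
}

def classify(dst_port):
    """Guess a traffic class and priority from the destination port."""
    return KNOWN_PORTS.get(dst_port, ("unknown", 1))

print(classify(554))   # streaming gets elevated priority
print(classify(6667))  # unknown ports fall back to best effort
```

    Note that this is exactly the weakness discussed in the thread: anything keyed off ports or protocol signatures can be spoofed by applications that want the higher class.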
  • Yea, but the issue I can see is how it's going to actually be enforced. It's a great idea, but beyond the ability to prioritize based on either destination or source, it seems to me that developers will eventually spoof it, and hence negate it.
  • This is a darned good post, I gotta say. IPv6 provides the 'smart network' by simply stating in the actual packets what priority needs to be given, and negates the need for some sort of a predictive algorithm to logically figure out what needs a higher priority.

    My main concern, however, is the fact that application developers would still spoof this out, providing a higher priority for their traffic and hence a higher throughput than needed. Blech...
  • Well, right now, there is quite literally NO intelligence involved beyond a simple netmask to determine where the hell to send the packet. I'm not so sure that introducing *SOME* intelligence is such a bad idea, if it could be enforced somehow.. 8-(
  • Yea, that's what I was thinking, really. The idea is great, but the enforcement of it simply can't exist beyond a simple source/destination-based rule..
  • Exactly what I'm thinking of. And I'm not so sure it'd be so hard to spoof the protocol into thinking it was something else. I'm taking for granted that the routers/switches aren't going to be doing such an in-depth analysis of each packet due to processing considerations, and hence it shouldn't take too long to figure out what makes the higher priority 'kick in'.
  • I'd imagine it'd deal with SSL as a standard 'web' traffic packet. Things like streaming video and audio wouldn't be transferred over SSL, so SSL would have a lower priority, presumably the same as standard port 80 traffic.
  • One has to wonder, if these 'smart routers' ever come to fruition on the internet on a large scale, how long it would take developers to begin to 'camouflage' their applications' data as that of higher-priority purposes, and use this as a 'selling point'. Even if there were really no basis for this, could it simply become a selling point for 'High Priority' instant messaging or web browsing, and hence make the entire idea ineffective?

    As a disclaimer, I'm not saying that this SHOULD happen, simply that I could see developers trying to get their applications to utilize smart switches and routers at a higher priority than they should. Some people just don't know how it all really works, and might be 'sold' on the idea of their emails going through at a higher priority than they really need to.
  • Here is an article that says why "smart networks" are not such a good idea: The Rise of the Stupid Network [rageboy.com].


  • The point is that if traffic is marked as high priority, you let it jump the queue *and bill extra for it*.

    Obviously it would be stupid to award better treatment to some types of packets if they are all charged at the same price.
  • Strangely, I participated in a debate on the concept of "intelligent packets/networks" just a few days ago here in Antalya, Turkey.
    What kills this idea is two things:
    - Routers don't have TIME to be smart. At 40 Gbit/s, a small packet goes by in 30 nanoseconds.
    - People will interconnect only when they think they understand what they interconnect. And people's understanding is VERY limited.

    The KISS principle rules.
  • The first thing anyone does when deploying 'smart routers', aka QoS, is re-mark all traffic that is not from specific applications to best effort - this has the effect of re-writing any 'high priority' settings in the IP Precedence or Type of Service bits.
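    That re-marking step can be sketched as a simple operation on the IPv4 ToS byte (illustrative Python, not any vendor's actual configuration; the trusted-port set is an assumption for the example):

```python
# Sketch of edge re-marking: the upper 3 bits of the IPv4 ToS byte are
# IP Precedence. An operator that distrusts customer markings resets
# everything to best effort unless the flow matches a trusted application.
TRUSTED_PORTS = {5060}  # illustrative: e.g. VoIP signalling the carrier sells

def remark(tos_byte, dst_port):
    """Return the ToS byte this edge router will forward."""
    if dst_port in TRUSTED_PORTS:
        return tos_byte  # keep the customer's marking
    return 0             # best effort: precedence bits wiped

print(remark(0xE0, 80))    # 0 -- a spoofed 'Network Control' marking is erased
print(remark(0xE0, 5060))  # 224 -- the trusted flow keeps its bits
```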
  • A few misconceptions here:

    1) QoS (quality of service, the main use of 'smart routers', and doable today) has nothing to do with censorship. First of all, it is fairly pointless to deploy QoS on only one part of an end-to-end Internet connection, so QoS is not used in the average Internet network. Instead, QoS technology is used on private IP networks used by businesses - either 'true' private networks, which run over leased lines or ATM/Frame Relay virtual circuits; or virtual private networks (VPNs), which in the IP world typically use IPSec or MPLS.

    What happens is that businesses are sold more expensive, more secure, higher-QoS value-added IP services that still save them money over separate ATM/FR services. The carriers rely on these value-added services to make a profit (particularly given the telecoms downturn), particularly compared to basic dialup and ADSL/cable Internet access services. In other words, business users subsidise consumers.

    This is very similar to the airlines - the existence of first and business class, etc, is another form of subsidy for consumers. You should be happy that QoS, VPNs and other value added services are being sold to businesses - they will fund network expansion (as business Internet access has done for the last five or more years) and generally make things faster and better for consumers.

    2) *No new standards are required* - I work for a company that enables carriers to do all this stuff with plain old IPv4 routers. The biggest myth about IPv6 is that it will improve QoS - it won't make any difference, and has only one QoS feature (the flow label) over IPv4, a feature that will require a new end-to-end QoS reservation approach that is unlikely to take off.

    The people who have designed IPv6 have taken immense care to make the transition from IPv4 as easy as possible, e.g. through allowing automatic creation of IPv6 tunnels over IPv4 domains (6to4).

    As with QoS, the transition to IPv6 will be funded by companies - with modern technology, the extra cost of running IPv6 on hosts vs. IPv4 is quite trivial, so even your mobile phone will have IPv6 in time, avoiding the hassles of NAT and making mobile IP much more efficient (roaming from wireless LAN to 3G networks without changing your IP address or having your sessions drop).

    You might like to try reading up on these technologies before forming opinions on them - good places to start are qosforum.com, mplsrc.com and ipv6forum.com.

  • The kind of enforcement of what services you are able to run is already here in Australia. The current broadband carriers ( both of them - we have only 2, with DSL still being introduced ) require you not to run any type of service from your connection as part of the AUP ( Acceptable Usage Policy ), and doing so is grounds for termination.

    A lot of people are quite annoyed with this AUP, as by default, unices run these kinds of services right out of the box.

    While I can understand the need for ISPs to restrict what their clients do, and I respect their distaste for 31337 kiddies running 0-day warez ftp sites on their network, I find it highly vexing that I have to firewall my own machine against my own ISP.
  • by FWMiller ( 9925 ) on Sunday May 20, 2001 @03:50PM (#209340) Homepage

    There's a fundamental flaw with the theory that interior network nodes should be intelligent. The rate of progress in optical bandwidth is increasing faster than Moore's Law. As a result, any intelligence that is built into the core using optical-electrical-optical (OEO) technologies will actually cause the core to get slower (with respect to the overall bandwidth available) over time.

    Something to think about...


  • I was about to say the same thing, but figured I'd search first because someone HAD to notice. Only you did.

  • Did you give that customer the option for a higher bandwidth contract, say at double the price for double the bandwidth? If so, then I would agree the customer is cheating you. But if not, then I think you're in the wrong and the contract is agreed to under duress. If the physical link can handle 10 meg up, and the customer wants to use 10 meg up, and pays you for it, then adjust your core pipe accordingly, make more profits, and be a nice guy. Of course you need to use QOS either way.

  • How much more bandwidth, and how much more cost? I'm just curious if this offering is priced appropriately. Also, if the tier step is too steep, I can see why the customers are wanting to cheat on you. Of course if you were charging per usage, they wouldn't be cheating, although I suspect they might not like that (since they would have to pay for what they get).

  • There was a story a while back about a company (not sure if it was cable or DSL) that was doing similar bandwidth monitoring. When they did find someone going over their levels for a month, even by 40% over, they would also have sales contact them about an "upgrade". But the upgrade was a change from around $35 a month to over $300 a month.

    If you have a big pipe to the consumer, and can monitor their usage accurately, then you really can put them in incremental tiers of service level. If they need (want) twice the bandwidth, then double the bandwidth part of the cost. Let customers set their own levels with their wallets! Why not?

    And if you're going to fall back on saying the software can't do that, yet, then let's get together and go over to the development department and kick some pointy haired arse because the software should have been able to do that right from the start, and any good developer would be able to do that easily if management had wanted it.

  • If I am trying to ftp from my machine to a client across town, why on earth does it need to be bounced across the country, just because we use different ISPs?

    Not that this will actually solve that problem. It just means that all the routers on the path across the country and back will decide to deprioritize your traffic so the connection will be slower.

    The reason it bounces across the country isn't technological, it's 'business'.


  • This problem already exists. This is why (some) people get excited about zero-copy network code for Linux. If your network is 1 Gbps, but the bus between your computer's CPU and RAM is (say) 400 Mbps, then if you copy your data once, you have effectively halved your network throughput.
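    The arithmetic behind that claim, sketched out (the 400 Mbps bus figure is the poster's hypothetical):

```python
# With one in-memory copy, every transmitted byte crosses the CPU-RAM
# bus twice: once for the copy, once for the DMA to the network card.
bus_mbps = 400    # hypothetical memory bus, as in the comment above
nic_mbps = 1000   # gigabit network interface
copies = 1        # one copy from user buffer to kernel buffer

crossings = copies + 1
effective_mbps = min(nic_mbps, bus_mbps / crossings)
print(effective_mbps)  # 200.0 -- the bus, not the NIC, is the bottleneck
```

    Zero-copy paths remove the extra crossing, letting the NIC DMA straight from the user buffer.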
  • The company Adero [adero.com] has been doing global content delivery via "smart" routing for a few years now.
    ...a headache waiting for someone to iron out.

    err... you iron out wrinkles, not headaches.



  • "For years, Cisco and Juniper have been stuck in the 'smart fringes, dumb core' view of routers and the Internet."

    This just shows this guy doesn't have a clue about the lessons that countless thousands have learned the hard way about where complexity can be economically exploited in large scale networks.

    Those of us who have had to deal with the inane complexities of "smart" networks (like, for example, OSI and ATM) recognize that the original Internet philosophy of pushing the intelligence to the edges is the *only* way to build networks that have any staying power.

    Both the Internet Protocol itself and Ethernet have succeeded far beyond what most "experts" predicted specifically because they embody the dumb network idea. Although I don't expect many people here on /. to realize it, we've tried all these grand ideas before, and they've failed miserably each and every time they've been tried.

    There are *really good reasons* why dumb networks are better - I don't have time for the whole rundown, here are a few of the biggies:

    1) Pushing the intelligence to the edge puts it where it can most easily and flexibly be changed. This is a huge win, and it allows each node to accommodate its own needs, as well as adapt reasonably to meet ongoing needs. Smart networks are, almost by definition, static networks. They *may* be appropriate in 100 years when the Internet has a maturity level similar to the switched telephone network. (Of course, that network will then have long been replaced by IP telephony, so you're never really safe, are you?)

    2) It's orders of magnitude cheaper to put these capabilities at the hosts than to put them in the network. Yes, you pay a little performance penalty for that, but remember, we've got Moore's law on our side: CPU cycles to burn, and increasingly intelligent network adapters at the volume prices that make the whole thing work. This is the whole reason why nearly all of us use the "simple, stupid" Ethernet almost exclusively and the elegant smart and complex networks like Token Ring, FDDI, and ATM will be footnotes on the ash heap of history. Because the intelligence of these networks was very expensive and in the wrong place (the core), they were not able to effectively compete with a standard that has evolved from 3 megabits/second to 10 gigabits/second, and will nearly certainly move to terabit speeds in the next few years. (For you flamers out there, I realize that 10Gb Ethernet isn't standardized yet, but the IEEE 802.3ae working group is making good progress on it and some vendors (Foundry for one) are already selling "pre-standard" 10 Gb products. It's likely we'll see a 100 Gb working group formed in the next several months...)

    3) Putting the intelligence in the cheap stuff at the ends is the best way to "future-proof" the network. Centralized planning only works if you have both a crystal ball and a perfect plan, perfectly executed (ask the former Soviet Union.) The real world is (and should be) messier than that, and we *want* networks (well, everyone but the RIAA/MPAA wants them) that accommodate serendipitous re-use.

    Bottom line: History clearly shows that "Dumb" networks are in almost every case far preferable to "Smart" ones. They definitely work better in the real world, with real world economic considerations, and support the freedom to "misuse" the network by using it for things that were not foreseen by the inevitably short-sighted designers of the smarts...
  • Content Delivery Network companies such as Adero and Akamai don't work with the IP layer. Instead, they direct the web browser to the nearest mirror of the content.
  • Except OEO will soon die off for the really big switches, as all-optical switches come into creation. No conversion from and to electronic signals, just movement of photons. If OEO switches get slower as they get bigger, that's fine. Just use all-optical and forget the conversion altogether.
  • No, just Optical to Electrical. If the data is all that matters, just convert it to electrical signals and route accordingly. Don't bother to convert it back. That takes too much time, and if you leave the data there in the first place, it isn't needed to send the data to the next router. Eventually, though, someone will come up with a technology to do DSP on the optical signals without conversion.
  • This might be the first company I've seen outdo StarBridge in the "blatantly obvious BS" category. Slashdot has yet again fulfilled one of its major roles in my life: letting me know about companies I should *avoid* investing in.

  • At my current job at Net.com I'm currently implementing what Caspian is talking about. I'm working on a BRAS - Broadband Remote Access Server which is able to interface with a portal and various services to allow the subscriber to do what they want to do.

    For example, say it's Friday night and you want to watch that new movie that just came out. You log into your ISP's portal and go to the video selection, click on the movie you want, and go watch it. Behind the scenes, the portal tells our box that you need, say, 5Mbps of bandwidth to the video server. Our box will guarantee that you have the bandwidth needed for the video, even if your roommate starts downloading a bunch of porn in the middle of a really exciting action scene.

    This has other applications, for example gaming, video conferencing, or anything where a certain quality of service is required.

    Now the BRAS needs to identify various flows and be able to individually shape flows as needed. Usually all that is needed is to look at either the layer 3 or layer 4 information, but unlike a traditional router, both the source and destination are important.

    The product I am working on can guarantee bandwidth on a per-flow basis, where a subscriber might have multiple flows. That way traffic from a video server, or packets going to other gamers, will have the bandwidth and/or latency needed.

    Our product is controlled via an open API, which is based on Corba and XML. This allows our box to be easily integrated into existing infrastructure (i.e. web portals and billing packages).
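    Per-flow guarantees of this sort are typically built on a rate limiter such as a token bucket per flow. A minimal sketch (an assumed mechanism for illustration, not Net.com's actual implementation):

```python
class TokenBucket:
    """Limit a flow to `rate` bytes/sec, with bursts up to `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # over rate: queue or drop this packet

# One bucket per (src, dst, ports) flow; the 5 Mbps video flow above
# would get its own bucket, untouched by the roommate's downloads.
video = TokenBucket(rate=5_000_000 / 8, burst=64_000)
print(video.allow(1500, 0.0))  # True: a full-size packet fits in the burst
```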

  • I might add that the box I'm working on does this with standard IPv4 and clients running standard software. No custom software on the subscriber's computer is needed. No custom protocols are used. In fact, the specifications are supposedly free (see http://www.net.com/products/broadband/new.index.shtml).

    The article on Caspian is rather sparse on information and mostly marketing fluff, however, the service creation model described is definitely the way things are going.

    As for being able to sniff traffic and analyze packets, the box I'm working on doesn't go beyond layer 4 except for handling L2TP tunnels.

    Another nice thing about the box I'm working on is that adding new protocols and interfaces is a piece of cake. My code already handles all of the various PPPoE, PPP, Ethernet, and ATM encapsulations, and adding new ones like POS (Packet over Sonet) is straightforward. This is due to the fact that it's based on a network processor. No, it doesn't run Linux, nor is it capable of running it, due to its highly specialized design.

    Down the road expect for these routers to look deeper and deeper into the traffic as the network processors become more powerful.

  • My puny P166 gateway does something not quite unlike this, and more... Not hyperfast of course, but the technology to make content/context-based routing decisions is in the Linux kernel in the form of Class Based Queueing (CBQ). There's also the firewall based classifier, which enables you to 'mark' packets with iptables and use those marks for routing decisions. Look for this stuff under the 'advanced router' option.
  • blah. it's already taken place before. the TCP/IP packet has a flag for high priority traffic. that lasted for a few days before everyone figured out the flag and now ALL traffic has the flag set for high priority. superswitches won't do shit -- everyone will just mark their data as high priority as before and life will go on.
  • by Zurk ( 37028 ) <zurktech&gmail,com> on Sunday May 20, 2001 @04:35PM (#209358) Journal
    no you anonymous dumbass. it's not the URGENT field. it's bits 8 through 15 of the IP header -- the TOS field. and it's not part of TCP either...maybe you should reread whatever crap reference you gave me.
    i quote from RFC 791 : Internet Protocol :

    Type of Service: 8 bits

    The Type of Service provides an indication of the abstract parameters of the quality of service desired. These parameters are to be used to guide the selection of the actual service parameters when transmitting a datagram through a particular network. Several networks offer service precedence, which somehow treats high precedence traffic as more important than other traffic (generally by accepting only traffic above a certain precedence at time of high load). The major choice is a three way tradeoff between low-delay, high-reliability, and high-throughput.

    111 - Network Control
    110 - Internetwork Control
    101 - CRITIC/ECP
    100 - Flash Override
    011 - Flash
    010 - Immediate
    001 - Priority
    000 - Routine
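    The precedence values quoted above live in the top three bits of the ToS byte, so extracting one is just a shift and mask (a quick sketch):

```python
# RFC 791 precedence is the top 3 bits of the 8-bit Type of Service field.
PRECEDENCE = {
    0b111: "Network Control", 0b110: "Internetwork Control",
    0b101: "CRITIC/ECP",      0b100: "Flash Override",
    0b011: "Flash",           0b010: "Immediate",
    0b001: "Priority",        0b000: "Routine",
}

def precedence(tos_byte):
    """Return the RFC 791 precedence name for a ToS byte."""
    return PRECEDENCE[(tos_byte >> 5) & 0b111]

print(precedence(0x00))  # Routine
print(precedence(0xE0))  # Network Control
```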
  • Is if this could be taken to the level of moving commonly-used data to multicast channels. How much of the Internet traffic is static content from commonly-accessed sites?

    Maybe in a decade it will be taken for granted that the low bandwidth data comes from your main connection, and most of the packets for the Star Wars Part 9 trailer come from the TV's tuner, broadcast on a TV station's spare digital subchannels to the larger audience. We'll see what happens.
  • Thank you. It is customers like you who mean my own ISP has enough outgoing bandwidth (after their own web hosting) that they can and do allow servers to be run.
  • Ah, but what does it do when it cannot interpret the content from the packet type? Say "hey, this is an SSL packet, who knows, it might contain video or someone's electrocardiogram data, so give it highest priority just in case"?
  • So what you're saying is that your ISP oversells its bandwidth: you don't have enough capacity for everyone to use what they've paid for at the same time.

    Two different things. I haven't noticed any unacceptably slow connections, so I would say they don't oversell their bandwidth.

    But of course they don't buy enough capacity for everyone to use the maximum bandwidth of their connection at once, that would be ridiculous.

    The fact is people don't all use all their bandwidth all the time. They get what they pay for - if they were paying for a reserved channel to the Internet backbones that would cost more than what the ISP paid for that much bandwidth, they need to make a profit somehow.

    So it's a business ethics problem, not one of customer abuse.

    It isn't a problem at all as far as I can see.
  • Not that this will actually solve that problem. It just means that all the routers on the path across the country and back will decide to deprioritize your traffic so the connection will be slower.

    The reason it bounces across the country isn't technological, it's 'business'.

    Business is exactly why that won't happen. How many routers will implement this stuff at once? Not too many. One corporation's segment of the Internet backbone at a time, at the fastest. Result: angry customers - lawsuits and courts interpreting contracts for guaranteed bandwidth broken - cats and dogs, living together - et cetera, until the features are disabled.

    Most of the router buyers won't purchase processing power beyond what is needed for plain vanilla routing anyway, plus processing some access lists at most, and the ballooning of the routing tables is likely to suck up enough router cycles that any multi-homed network will have to pass on the new "features".

    Maybe it will come in as a lump feature everywhere simultaneously along with actual IPv6 usage. I doubt it.
  • Still, video and audio data will be - probably is - transmitted in encrypted form when confidentiality is required, over PPTP links for company "extranets", etc.

    I suspect (hope) that with a secure protocol it would be difficult to classify the traffic inside, as I understand is being proposed.
  • Sorry.

    Next startup please!!!

    So?... how do you expect to analyze packets and do smart routing when everything is encrypted? Huh? Thought so.

    A lot of people in the cypherpunk community want a 100% encrypted network without the capability of wiretapping.

    I think that it could be an interesting idea and I have thought about doing this for the last few years. The second you add encryption you can just throw the idea out the window.

  • A couple of years ago here in Australia, Telstra (the telecomms monopoly) wanted to introduce timed local calls for data while leaving voice calls alone. To do this they would have had to sample and analyze a piece of each call to determine how it should be charged. This, being an invasion of privacy, was thrown out as soon as the public got wind of it.

    Classification of packets by protocol can only really work if the data is analyzed. Doesn't this also constitute an invasion of privacy?

    The classification could also be achieved by only looking at the source / destination ports, but they can easily be changed by the provider of the service.
  • we need an international, independent, governing body for DNS and the internet, not an American-controlled company.

    What, like ICANN? Yeah, that's worked well.
  • >To the untrained eye, Caspian's product, the Apeiro, is a
    >new kind of router. But Roberts says it's not a router at
    >all, because where traditional routers are "dumb" - Roberts'
    >shorthand for the fact that they don't differentiate between
    >the kinds of bits running over a network - his "optical IP
    >superswitch," as he calls it, is smart. It can identify
    >packet types (voice, text, video, et cetera) and priorities,
    >allowing it to determine one packet's relation to others,
    >and expedite traffic
    >in a way that's impossible today.

    Roberts sounds like a jerk. He also sounds like he's trying to do multiprotocol label switching (MPLS). The idea is the originating device adds a short label in a shim layer between the transport medium (ATM, for example) and the IP layer. The label is read by the label switching routers (LSRs) to route the packets through the network. Contrast this with the "best guess" method that's used with regular routers or even the QoS features of ATM.

    The idea is that voice packets, various types of data packets, or anything that has different requirements on latency or jitter can be served on the same network.

    And the "big gorillas" are happily implementing MPLS for their next generation data networks; Alcatel, Juniper, Cisco (although -- surprise! -- they're initially implementing a proprietary approach and just calling it MPLS), and others are all doing this for their core and edge router products.

    But, hey, who am I to rain on this guy's parade?
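    For reference, an MPLS label stack entry is a single 32-bit word per RFC 3032: a 20-bit label, 3 experimental bits commonly used for class of service, a bottom-of-stack flag, and a TTL. Packing one is a few shifts (a sketch; the example values are arbitrary):

```python
def mpls_shim(label, exp, s, ttl):
    """Pack one 32-bit MPLS label stack entry (RFC 3032 layout)."""
    assert label < 2**20 and exp < 8 and s < 2 and ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

# Label 42, class-of-service bits 5 (e.g. voice), bottom of stack, TTL 64:
entry = mpls_shim(42, 5, 1, 64)
print(hex(entry))  # 0x2ab40
```

    The point of the shim is exactly what the comment describes: core LSRs switch on this fixed-position label instead of parsing the IP packet behind it.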

  • The term broadband used to have a meaning. It meant the opposite of baseband. Now it has no meaning, because people like Michael believe the marketing departments' illusion that it means high bandwidth. It does not.

    Now it means nothing. Just like p2p and b2p and p2b2e2g.
  • 2) It's a new standard. It will never fly. The internet hasn't really changed since IPv4 & TCP/IP were implemented over a decade ago. Remember: we need IPv6, and we need "intelligent" routers if we want what people have been promised, the great "information superhighway". However, there are tens of millions of hosts on the internet, and they all have to start using new protocols for packets, and IPv6. Before we start implementing major new changes online, we need an international, independent governing body for DNS and the internet, not an American-controlled company. The internet used to be open and democratic; let's try and make it that way once more.

    Correction. There would be no benefit if you were the only ISP on the planet using this, as all of the data you want to give priority to would be "off-net", not "on-net". Odds are your customers are looking to access multimedia from some other site, not your own network.

  • The difference between the Internet, and TV, radio, and other "mediums" is that it is a Communication medium, whereas TV and radio are Entertainment mediums.

    It is sad to hear that narrow minds in high places are intent on trick f**king the Internet into an Entertainment/Marketing medium.

    I hope that by the time these QoS enabled routers become ubiquitous enough to annoy me, I will be too old and senile to notice.

  • Look, this doesn't have to be an IP-layer bit. This can easily be implemented in an MPLS QOS-style deal, where the packet is encapsulated at the ingress to the network, and stripped at the egress. Try getting round THAT in a TCP header. Layer 2-and-a-half is where the business takes place, not layer 3. As soon as you've left your PPP, Ethernet or NBMA segment, you can kiss your application settings goodbye.

    Not everything exists at layer 3, chaps.

  • I fail to see how a server can "demand heavier use" than a non-server connection, when both have the same bandwidth limit.
  • by Animats ( 122034 ) on Sunday May 20, 2001 @05:13PM (#209374) Homepage
    I read the original article. They're implementing quality of service via something that looks vaguely like circuit-switched pipes. We've seen that before. Anyone remember Tymnet, from the 1960s, the predecessor to X.25? Same idea, but with newer technology. Very telco-oriented, with explicit setup and teardown. Well-defined for billing purposes. Some people think that's a good thing, and some don't.

    The pro and anti QOS issue is quite old. When I first developed fair queueing [fh-koeln.de], it was obvious to me that the queuing system could be biased to be "unfair", and that this would aid in making the Internet a billable transmission system. I deliberately didn't put that in RFC970, because I didn't want to make that happen.

    A truism of modern transmission systems, including voice telephony, is that the billing process costs more than the actual transmission. Worse, once you have a traffic-based billing system in place, prices tend not to decline as rapidly as transmission costs decline. This is the major argument against QOS in the Internet.

    John Nagle
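    Fair queueing of the kind described can be sketched as round-robin service over per-flow queues. The weight parameter below is an illustrative addition, showing how the same machinery becomes the "unfair", billable variant the comment describes:

```python
from collections import deque

class WeightedRoundRobin:
    """Per-flow queues served in rounds. Equal weights approximate
    fair queueing; unequal weights bias the scheduler (the 'unfair'
    variant mentioned above). Flow names are illustrative."""

    def __init__(self):
        self.queues = {}    # flow_id -> deque of packets
        self.weights = {}   # flow_id -> packets served per round

    def enqueue(self, flow_id, packet, weight=1):
        self.queues.setdefault(flow_id, deque()).append(packet)
        self.weights[flow_id] = weight

    def service_round(self):
        """Serve each flow up to its weight; return packets in send order."""
        sent = []
        for flow_id, q in self.queues.items():
            for _ in range(self.weights[flow_id]):
                if q:
                    sent.append(q.popleft())
        return sent

wrr = WeightedRoundRobin()
for i in range(3):
    wrr.enqueue("bulk", f"bulk-{i}", weight=1)     # best-effort flow
    wrr.enqueue("paying", f"voip-{i}", weight=2)   # "premium" flow
print(wrr.service_round())  # the paying flow gets twice the service per round
```

    With all weights equal this is plain fair queueing; the biased version is exactly the billable prioritization the comment warns about.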

  • A Smart `center' means a more complex center, more things to go wrong, lower reliability.

    Routing problems are common enough now. Imagine what will happen if configuring a router involves 10 times as many options to control all that intelligence...

  • It is, however, Layer 4 switching and it has been around for a while by now. Both Cisco and Juniper have their hands quite deep into this pie as well! Let us not forget the number two and three WAN hardware vendors, either. An easier way to write this article would have been to link to any one or two of a hundred old articles on this topic. (Oh, wait, that is what you did.)
  • I hope this is the journalist being clueless rather than the founder of Caspian (PhD from MIT, worked at Lincoln Labs), but just by reading the article one can safely say Caspian won't be anything more than a small blip on a radar screen, not due to the evil Cisco and Jupiter (sic!) plotting against it, but simply because these proposed routers don't offer anything that is fundamentally new.

    There are two reasons packet classification is done at the edges rather than at the core. The first is that if you didn't do it right at the edges, it may be too late to do anything at the core. The second is that the packet rate at the core is such that doing anything fancy simply slows down the router. Yes, you can put the prioritization logic into the silicon; this is what Juniper already has, and probably Cisco will have as well. And yet the fact of life is that on a core interface running at OC-192 (and soon at OC-768), getting all the packets through at wire speed, in conjunction with overallocation (so the average utilization is about 20 to 30%), gives you that 'guaranteed delivery', since most of the packets don't get lost due to queuing or interface pps limits.

    This is the key element in the design of the core: let's keep things simple. Let's not introduce fancy algorithms. Let's throw in some extra bandwidth and some raw power, but other than that: no fancy algorithms, special policies, etc. Reliability is achieved by simple brute force. I don't wish anyone dealing with the Internet core to get involved in what appears to be 'content-sensitive prioritization and routing'. Incidentally, the main reason Juniper beats the shit out of Cisco at the core is wire speed and high density. As simple as that.
  • can you say (cough*) UDP?
  • The two companies are Cisco and Juniper, not Cisco and Jupiter.
  • trying to exceed the contractual bandwidth caps

    Just curious: why don't you use a real cap (as in an actual hard bottleneck, or even one that adjusts dynamically to account for consumed bandwidth) for max speed? It seems this is, or should be, the routers' responsibility; since it's beneficial for _you_ to make sure bandwidth is limited (not the end user), you should keep everyone happy. I guess I'm just saying that preventing problems beforehand is better than letting problems occur and then punishing your (soon ex-)customer.

  • by -tji ( 139690 ) on Sunday May 20, 2001 @04:13PM (#209381) Journal
    The stuff he says about the "smart network" is a big piece of crap, intended to make them look better than the giant, cisco.

    'He designed the Internet to be dumb at the core, so he could keep control at his lab' What a load of crap. The Internet of his day bears little resemblance to the Internet of today. The reason the core doesn't get into complexities is simply CPU power. The edge, with its relatively low bandwidths, was the only place that had enough CPU power to do heavy processing. If you tried to do that in the core, where all the links are aggregated, you could not keep up with the load and do complex processing.

    And the junk about smart routers telling data types apart is REALLY simplistic. Labelling packets for their type of data is not the challenge. Allocating bandwidth per customer and billing per usage are more difficult. And here's a really tough issue to overcome: your ISP, say AT&T, labels your packet high-priority voice data. It zips through their network, then goes through a NAP and gets passed to MCI's network. You don't pay MCI a dime. Why should they honor your priority and preempt their paying customers?

    Also, he tries to make it out to be a big benefit to everyone. As if, my WWW browsing will get faster if they do prioritized switching in the core. But, in reality, today my packets are treated equally with everyone else's. With prioritized routing, I will be at a lower priority than Mr. Deep Pockets at GM, CitiBank, GE, and other high paying customers.

  • by tcc ( 140386 ) on Sunday May 20, 2001 @02:35PM (#209382) Homepage Journal
    ("You're a residential customer, you can't serve data, only consume it!")

    That's okay with me, I wanna consume p0rn not serve it.
  • Can someone in the know explain how this is different from QOS?
  • stream = state
    state = memory
    memory = $$

    And where do you need to put that stream, state, memory, and $$?

    Answer: everywhere in the core!

    This is *precisely* why SMART edges are a good thing. Buffering video is a good thing. Beyond that, identifying interactive flows without retaining state is important.
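    The chain above (stream = state = memory = $$) can be put in rough numbers; every figure here is an illustrative assumption, not a vendor spec:

```python
# Back-of-the-envelope cost of keeping per-flow state in a core router.
flows_per_port = 1_000_000   # concurrent flows on one core port (assumed)
bytes_per_flow = 64          # classification + queue state per flow (assumed)
ports = 16                   # line cards' worth of ports (assumed)

state_bytes = flows_per_port * bytes_per_flow * ports
print(f"Flow state per router: {state_bytes / 2**20:.0f} MiB")
```

    Multiply that by every router in the core, and by the cost of memory fast enough to be touched on every packet, and the $$ line writes itself.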

  • The only place you NEED QoS is in your network, and not in "the protocol". You need QoS properly implemented in ANYTHING that buffers, and at each and every buffer location.

    Any time you make a QoS decision you make it based on your resources and policies, not just on request. As an administrator you need to decide what's important. The routers help you implement that choice today. And you can provide special queue handling or you can redirect the path of the packet (policy based routing). All of this works today.

    A lot of what you're talking about sounds an awful lot like RSVP. It's already here for IPv4. IPv6 solves the addressing shortage, and that's about it.

  • The major problems with ISPs today are as follows:

    1] space
    1a] power

    2] truck rolls

    3] support

    Putting aside [2] right now, I think [1] and [3] are going to be seriously impacted by mechanisms that don't make judicious use of well known ports.

    Looking inside the packet is exceedingly complex, and it subjects the router to all sorts of state changes based on potentially buggy user code at the end station.

    I apologize if my response was short in tone, but we've got to remember why we moved the intelligence out of the core in the first place. Remember, with a telephone you get O(1) stream, fixed rate, and there is no buffering on the ends.

    Today, you can have bunches of streams running simultaneously to your laptop/handheld/desk side. You could be watching a movie while talking to your brother on the phone, reading mail, playing with GNUtella, running IRC/Aimster/...

  • 1) On the topic of encryption, you're talking about something at layer 5 or 6, whereas these routers would be looking at layer 4. At least, that's my view of how they work.

    IPSEC is at layer 3. TLS sits atop layer 4, but in front of the data. If you wish to follow a stream on a core device without looking at port information (implied by others), then you're hosed.

    2) QoS. Again, we are talking about the core. The backbone providers presently use a 'dumb' core. It doesn't care about QoS and can't implement it. They route purely at layer 3, usually using IS-IS as a routing protocol. What Caspian is proposing is to enable the backbone to route at a higher layer - presumably 4 - to prioritize packets, and to keep packets of the same stream together, rather than scattering them all over the place, hoping that they all get to the destination in some useful order.

    First, I don't know what you mean by "scattering them all over the place", but I presume you mean in time, and not in route, since routing on the Internet doesn't change all that much from moment to moment, and when it does change, it does so due to a legitimate outage.

    While it's true that backbone routers route at layer 3, they can and do implement QoS. This is particularly true for the GSR. Go check the web pages at Cisco. For interactive communications, by the way, you do not WANT the packets kept together. Instead you want them transmitted in the core at the same rate they were transmitted by the edge device. If you bunch traffic the humans on either end will notice, and/or you'll need buffering in the end devices to cover it.

    3) The ISP's and backbone providers to a degree can already favor one customer over another. They can adjust BGP costs, set static routes, etc. so that certain traffic flows in a certain way.

    The play for an ISP is to have a small percentage of priority traffic and a high percentage of non-priority traffic. So, what you want to sell is the right to have SOME high priority traffic, like interactive voice or video from a customer and a lot of low priority from that same customer.

    Also, customers who pay for priority service want to know that they're getting it. That means that you need to know where the customer is going to be transmitting high priority traffic (i.e., provisioning bandwidth). That turns out to be a tricky problem solved by RSVP.

    4) The new routers are meant for the backbone/ISP level. Your typical business won't have them.

    You are correct so long as you use the word "typical", since large businesses buy a lot of the same gear as even the largest ISPs. Look at how many companies have 7500s today. Many of those same companies are looking at GSRs.

    And I guess this says something about the technical prowess of Wired, if they gave you these misconceptions.

  • Just raise the numbers of his example and you are back to being wrong that the solution is just more bandwidth. An ATM implementation can offer delay variation (jitter) guarantees whereas alternatives will not. This is not only because of the fixed packet size but because it is a connection-based technology, so it will never make a guarantee that it can't support.

    The downside is of course that ATM lives at both layer 2 and 3. Mechanisms to get IP (also 3) working nicely with ATM such as LANE, MPOA, etc, will break its great QoS features. That means with ATM you end up buying an expensive pipe that you can't use the features of instead of a cheap pipe without the same features. (It's also an expensive pipe with a lot of overhead given the ridiculously small size of ATM cells.)
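    The overhead complaint can be quantified. This sketch uses the standard 53-byte cell with a 48-byte payload and, for simplicity, ignores AAL5 framing:

```python
import math

CELL, PAYLOAD = 53, 48                 # bytes per ATM cell / payload
overhead = 1 - PAYLOAD / CELL          # header cost alone, ~9.4%
print(f"Raw cell overhead: {overhead:.1%}")

# Carrying a 1500-byte IP packet means padding out to whole cells:
cells = math.ceil(1500 / PAYLOAD)      # 32 cells
efficiency = 1500 / (cells * CELL)
print(f"1500 B packet efficiency: {efficiency:.1%}")
```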

  • re your 1. In an ideal world, charging for different classes of service (and volume) would mean that I could run ca. 1989 Internet services practically free of charge: text-only email and a few dozen text newsgroups. Not time-dependent, delay it as much as you like, bury it in your off-peak times. Alas, my pessimism assures me that the marketroids will utterly screw this up as well.
  • It's not "just because you use different ISPs". It's because your ISPs just don't have enough traffic in your town to justify peering with each other locally. Believe me, if there were a significant amount of local traffic, they'd just as soon avoid forwarding halfway across the country.
  • by peccary ( 161168 ) on Sunday May 20, 2001 @04:51PM (#209391)
    They've already done it. Not every protocol in use on the Internet is "TCP-friendly". I won't name names, since, um, I was one of the offenders.

    In a similar vein, there once was a little project to build a graphical hypertext browser, and they didn't like TCP's slow-start algorithm, so they made it open multiple simultaneous connections to the file server to bypass slow-start. They called that thing Netscape, IIRC.
  • The reason your traffic goes all over the place is largely because of the influence of current or former telecom employees in companies providing internet services.

    Let's take a 10-year-old telco, for example. In 1991, if they sold data services, the value was added in getting bits from one place on their network to another. Salespuke: "Hi. MyTelecom will get your bits from your office in New York to your office in LA over our great network. Latency is X, availability is Y"

    Contrast that with the internet. Where does the company find its <buzzword>value proposition</buzzword>? If their excellent fiber (or whatever physical asset) is what adds value, then they want traffic *on* their network, and not hopping off of it.

    What about private peering? Oh. Well, if I'm selling transit to my customers, why am I going to give it for free to (UUNet, Genuity, Qwest, AT&T, etc.) and if I do, because peering is in my interest, well... let's do it in 3 or 4 places. New York... San Jose... maybe D.C. and Chicago if they're lucky. So... if you're sending something from Dallas to Fort Worth... well... sure, your packets have to go through San Jose, and we're wasting bandwidth on our backbone, but if they want to get better connectivity to us, they're going to have to pay for it.


    OK... enough free clues. Time to go back to looking for a job.

  • Cheaper?

    Internet2, if it's cheaper for anyone, is so because of the services and equipment which get donated.
  • Some CEOs think Paradise for Business is when they have the customer locked into their product, with no way out. Not that we know of anyone like this.

    But there is definitely an elitist viewpoint out there, and an inside club for some of these types. I was speaking the other day to my MIS manager, and he recalled dealing with an Upper Level Manager (TM) whose attitude was that if you didn't come from the right kind of school, then you were scum and disposable.

    Whether you know it or not, for some people there is a caste system, in their own minds, and it is good because they are on top. And if you aren't part of it, well, too bad. You were not born lucky.

    This leads us to the viewpoint of "We can do what we want"; it is just that there is less of a social veneer to the whole thing, so they are being less hidden about it. It is more in the open, because they feel there is nothing to stop them. Most of the public have been tamed and domesticated. The wild (but educated) Human is a rare breed indeed these days.

    Check out the Vinny the Vampire [eplugz.com] comic strip

    > Contrast this with the "best guess" method that's used with regular routers or even the QoS features of ATM.

    MPLS/POS QoS still will not rival that of ATM, because MPLS traffic is all variable-sized, and at some points it will need to be buffered, so jitter will increase even for high-QoS streams. (Take the instance where a 30,000 B packet of priority x is just being transmitted and a 30 B packet of priority greater than x arrives at the egress port: that 30 B packet has to wait for the entire 30,000 B packet to be transmitted.) ATM doesn't have this problem, as connections' traffic is all 53 B cells, so there is a small bound on the jitter one will ever experience. The problem with ATM is that it is IP-unaware without MPLS. Many MPLS solutions will, in fact, maintain ATM through the core and will only have POS/MPLS LERs. See the "Best Of" article in the latest Network World, where they named AT&T's IP-aware ATM network as the best out there.
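    The head-of-line blocking in the example can be put in numbers; the line rate below is an assumed OC-48-class speed, not a figure from the article:

```python
LINK_BPS = 2.5e9  # assumed OC-48-class line rate

def serialization_us(size_bytes, bps=LINK_BPS):
    """Time a packet of the given size occupies the wire, in microseconds."""
    return size_bytes * 8 / bps * 1e6

# A high-priority arrival must wait out whatever is already being sent:
print(f"30,000 B packet: {serialization_us(30_000):.1f} us of blocking")
print(f"53 B ATM cell:   {serialization_us(53):.3f} us of blocking")
```

    The fixed cell bounds the wait at a fraction of a microsecond; a maximum-size variable packet can impose hundreds of times that.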
  • by dalzell ( 190300 ) on Sunday May 20, 2001 @07:34PM (#209396)
    The problem with your argument is that for intelligence, you still need OEO conversion. The all-optical part is the data path only. To distinguish/analyze packets or to set up connections and tunnels still requires OEO conversion.
  • by Papa Legba ( 192550 ) on Sunday May 20, 2001 @03:48PM (#209397)
  • The use I would expect to see these put to is a shakedown of ISPs by the bandwidth providers (Worldcom, et al.). With them being able to tell what a packet contains and speed it up or slow it down accordingly, it is not any kind of leap to do it based on packet source. This means they will be able to sell you a T1, but if you want the premium upgrade, that will cost you. The premium upgrade will contain an automatic speed-up of one step for packets originating in your IP range. Want another step? That's another "premium" package purchase. If you are AOL and want your packets to route faster than the packets from Mindspring, you just need to get the next "premium" upgrade. That way you can run ads saying that your network is faster than theirs. It will make the final days of the ISP wars a bidding adventure.

    Writers of software could kick in on this too. If the packet contains a Word document, then give it a speed step; Microsoft pays for it, while StarOffice gets nothing. The individual effect is negligible, but the overall impression people will get is that Office is faster than its competition.

    The ultimate effect will be that the larger providers and software publishers will be able to pay to get increased performance on the net. The little guy will be squeezed out of the market by an inability to pay for bandwidth priority.

    And let's not even talk about paying to have your competition slowed down on the net...

  • I don't think anyone can deny that we need some change in the way packet routing happens now. If I am trying to ftp from my machine to a client across town, why on earth does it need to be bounced across the country just because we use different ISPs? The main problem I see with this system is that there really is no way to stop the large backbone providers from selling 'priority' access for particular streams. In other words, your for-pay info from foo.com will be a lot faster than your not-for-pay look at Slashdot. I think that if you CAN abuse it, a corporation will figure out a way to do it, no?
  • This is very much like the concept of Internet2 [psu.edu], which a whole slew of universities, and I believe some non-profit organizations, take part in. They have their own private backbone, "Abilene", and they have a priority scheme implemented. And what if someone breaks the rules? If they get caught, they are kicked off. Which is a great deterrent, since this network is much more efficient than the standard internet, and cheaper.
  • Most of the problem with net congestion is that there aren't enough streets. No company wants to foot the bill of laying new pipes or connecting their equipment with anybody else's. Just imagine if you wanted to drive to visit your friend across town in Philly: instead of driving down the street to his house, you have to go through New York or Chicago. That's not too smart now, is it? That's what happens with your net traffic. It's too bad the various governments don't pay for new pipes like they do for streets.

    My traffic from work here in Philly got sent down through Washington, DC before coming back up to Philly to go to the local university.
  • Those are called dilberries... From the old English "dil" = "asshole".
  • > 1) While this could theoretically be used for what you might call "censorship", the intention is actually a good one. It is quite sensible to give higher priority to realtime data such as audio and video than it is for, say, SMTP traffic, and all users would benefit from this.

    Crepe, this is a good one. Why should I have to put up with all those wanking bandwidth hogs streaming stuff to my detriment. If they set the highest volume to the lowest priority, then it might have some benefit. Streaming stuff can be buffered. It should be.

    > 2) Unlike IPv6, this doesn't require widespread deployment before it can be used (the chicken-and-egg problem that is delaying IPv6 deployment). Even if you are the only ISP on the planet using it, there will still be some benefit to your users.

    Horsefeathers... It won't get used to my benefit. Just for more moronic flash and streaming.

  • The problem with QOS and associated schemes is that it costs more to manage the bandwidth than the bandwidth costs. Each service type needs its own routing table. Routers that don't support this type of routing disappear from the network, leading to a service that is not utilising all the available hardware. Read the fat-pipe hypothesis. Modern routing protocols can already choose optimal routes for traffic, but telcos only interconnect their networks at designated points rather than with multipoint connectivity.
  • Where does this go in the OSI Model? Damn CCNA...

  • That's a good point, although I'd think that in order to spoof a protocol, to the extent that a QOS-capable router would assign it some higher priority than it would otherwise get, the protocol being modified would have to be extremely close to the target protocol (the one being spoofed); so much so that it would be unlikely to be successful.

    Spoofing one protocol into another to the point where a router couldn't distinguish the two is far more complex than spoofing the source or destination of packets.

    Presumably, we're not talking about something simple like wrapping FTP inside SSH. I guess it's conceivable that someone might wrap FTP or some other protocol inside RealNetworks PNM, or something (that's the first example that came to mind), but I'm not sure how much of a value-add even that would be for a developer, to have his transfer given that marginally higher packet priority. I guess it's an issue that could be debated extensively...

    As I compose this, I'm convincing myself that perhaps protocol spoofing might be a more substantial problem... hmmm. Well, I can see a scenario where a vendor sells an FTP client and server product set, and can guarantee a higher transfer rate than with any competing FTP client/server packaged product set. Customers would only realize a benefit when using this hypothetical setup across similar systems, for example enterprise remote offices.


  • by hillct ( 230132 ) on Sunday May 20, 2001 @02:48PM (#209406) Homepage Journal
    The point was made that network dynamism will be reduced. While this is certainly true, in that new protocols will be slower to take hold, because the introduction of new protocols would require each router to be re-tuned to handle them at a suitable priority, this is really no different from current firewalls. If you assume that the first thing a network engineer is going to do when he gets one of these QOS-capable routers is lock down his network, in essence firewalling each subnet, then the hypothesis will be accurate.

    If, on the other hand, the majority of network engineers are smart enough to know that while QOS is important, it only has business value where the benefit it offers meshes with the services offered by the provider in question, things look different. Sure, the first thing every network engineer will do as soon as he or she gets hands on one of these is lock down a test environment, but hopefully they will be smart enough to see that if, for example, their company doesn't provide VOIP services, there's no point in tuning the routers to handle it (unless they're just trying to be neighborly or something).

    The example given, about choking off upstream traffic for residential broadband customers, is however completely valid. But this is already being done, although not with the level of granularity with which it could be done.

    While router based QOS is neat, it's really only a tiny step forward. We need IPv6 before QOS really becomes a reality. Router based QOS is just no substitute for protocol based QOS.


  • no servers WHATSOEVER allowed, i.e. they explicitly prohibited http and ftp. So far, so good. I don't know if the cable operators in my neck of the woods are stupid, or overworked, or don't really care, but it all works out the same. Supposedly we are limited to one IP unless we beg for more, because of some kind of naming scheme. At least here in SW Ontario, @Home's network doesn't do the checking they claim they do. Also, no problems running ftp and http servers at all. And from the amount of portscanning traffic that I see in my logs, I am assuming there are a LOT of servers running on 24.x.x.x...
  • real email addy: kl3pto@hotmail.com : MPLS (Multiprotocol Label Switching) enables you to do the exact same thing using your existing high-end routers (that is, if you use Cisco or Juniper equipment). MPLS does this by utilising the TOS bits in the IP header. Check this out: http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120limit/120s/120s5/mpls_te.htm#xtocid152642
  • I hear ya..

    This idea has already been implemented. We are hearing market-man speak. Custom queueing in a Cisco router can be done on a very granular level. At the core level (i.e. between autonomous systems), however, this will become more of an issue, as we have seen in the discussion about BGP routing and its sheer memory requirements. BTW, most of the world ain't getting their data via ATM. Most of our WAN infrastructure is still Frame Relay, which has fewer built-in QOS features. In a nutshell, if we were talking about a revolutionary idea, Cisco would have done it already. To truly improve the performance of broadcast-type apps, we need caching, not queueing.
  • So these routers can analyze my packets and "determine the most efficient way to transmit them?" Is anyone else just a little bit uncomfortable with this?
    I agree with those who say the Internet is not running up to spec, but I'd sure like to see a thorough discussion on how the technology can avoid actually looking at the content, not just the content type. I agree with michael (for once) that this introduces an ability for ISPs which may not be in the best interests of free expression of ideas - something I was led to believe the internet would bring us.
  • 1) On the topic of encryption, you're talking about something at layer 5 or 6, whereas these routers would be looking at layer 4. At least, that's my view of how they work.

    2) QoS. Again, we are talking about the core. The backbone providers presently use a 'dumb' core. It doesn't care about QoS and can't implement it. They route purely at layer 3, usually using IS-IS as a routing protocol. What Caspian is proposing is to enable the backbone to route at a higher layer - presumably 4 - to prioritize packets, and to keep packets of the same stream together, rather than scattering them all over the place, hoping that they all get to the destination in some useful order.

    3) The ISP's and backbone providers to a degree can already favor one customer over another. They can adjust BGP costs, set static routes, etc. so that certain traffic flows in a certain way.

    4) The new routers are meant for the backbone/ISP level. Your typical business won't have them. The biggest barrier is going to be replacing the existing hardware with the new stuff. That's a lot of hardware to replace, and it doesn't look like an ISP can replace things piecemeal.

    If people had bothered to read the Wired article, a lot of these questions would never have been asked.

  • The whole idea of smart routers is nice, but it has two major problems.

    1) It is another form of corporate censorship. Before the days of big ISPs (I used to use a ma-and-pa operation!), a host was a host was a host: i.e., if you had an IP, be it dialup or a T1, you could use your bandwidth as you pleased. Granted, an FTP server on a 33.6k connection was sad, but that was your choice. Now that bandwidth at the doorstep exists, we're limited in how we can use it: if @Home had their way, all you'd be able to use is their "premium content".

    2) It's a new standard. It will never fly. The internet hasn't really changed since IPv4 & TCP/IP were implemented over a decade ago. Remember: we need IPv6, and we need "intelligent" routers if we want what people have been promised, the great "information superhighway". However, there are tens of millions of hosts on the internet, and they all have to start using new packet protocols and IPv6. Before we start implementing major new changes online, we need an international, independent governing body for DNS and the internet, not an American-controlled company. The internet used to be open and democratic; let's try and make it that way once more.

  • How hilarious - and how slashdot that 24 posts didn't notice...
    "I'm not downloaded, I'm just loaded and down"
  • Considering the vast majority of broadband ISPs don't even filter for ports 137/139, let alone actively look for them and warn customers that they're sharing their entire Windows machine with the world, do you think they go checking up? I doubt it. They're probably not going to notice, unless you get slashdotted...
    "I'm not downloaded, I'm just loaded and down"
  • Very true.

    To back this with some numbers and an example:

    • Today's backbone linespeeds are in the order of 10 Gbit/s
    • Humans are sensitive to jitter/delay down to the order of 1 ms
    How does QoS work? It puts packets in buffers in a smart way and sends them out at other times to make sure jitter and delay are as low as possible. Since the human resolution is about 1 ms, the buffers need to be able to store roughly 1 ms of network traffic.

    10 Gbit/s * 1 ms = 10 Mbit = 1.25 Mbyte

    So, each port on a QoS router in the backbone should have a buffer of over a megabyte behind it? And in 1 ms the logic in the router has to sort and shuffle the contents of that entire buffer?

    Let's also calculate the duration of a packet of about 12 kbit (that's a 1500-byte packet, close to the MTU):

    12 kbit / 10 Gbit/s = 1.2 µs

    Jitter on the network is on the order of a few packet durations. Compare this to the resolution of a human. On 10 Gbit/s networks, this really isn't an issue anymore.

    Where is the real problem? Where the network is slow. Where is that? Near your own modem: on the edge of the network. QED.
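    The arithmetic above, restated as a sketch (same assumptions as the comment: a 1 ms jitter budget and roughly MTU-sized packets):

```python
LINK_BPS = 10e9          # 10 Gbit/s backbone link
JITTER_BUDGET_S = 1e-3   # ~1 ms human-perceptible threshold

buffer_bits = LINK_BPS * JITTER_BUDGET_S         # traffic to buffer per port
print(f"Buffer per port: {buffer_bits / 8 / 1e6:.2f} MB")

mtu_bits = 1500 * 8                              # one MTU-sized packet
duration_us = mtu_bits / LINK_BPS * 1e6
print(f"Packet duration: {duration_us:.1f} us")  # vs. the 1000 us budget
```

    A megabyte-plus buffer sorted every millisecond per port, versus per-packet delays three orders of magnitude under the human threshold: that is the core of the argument.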

  • QOS, or "quality of service", doesn't mean "we're going to make your service better and better"; it means "we're going to give you what you paid for."

    And it's not draconian, it's business. Anyone who doesn't devolve to the profit-maximizing model will not be profitable, he will be out-competed, and his perhaps shinier technology will be marginalized, then forgotten.

    "This is why in 2001 there are no flying cars, only five oil companies, six banks, and one internet."
  • by Phasedshift ( 415064 ) on Sunday May 20, 2001 @03:39PM (#209435)
    This isn't such a revolutionary idea.

    It says in the article:

    "The Apeiro technique is based on a standard called multi-protocol label switching, which Roberts has tweaked and renamed D/MPLS - the D is for dynamic."

    Basically, in a large network the border routers would take parts of the IP packet header (dest/source IP address, etc., just like MPLS), along with parts specific to D/MPLS (probably pulling certain information from the payload of packets, among other things), and stick them in the ATM cell header, allowing things to be switched at layer 2.

    If you're just grabbing data from the header of the IP packet, then MPLS already does this (mind you, it could be better). However, as far as being able to look at portions of the payload of packets and switch them based on that data: this is not feasible for a backbone router. The latency this would add would be unacceptable in most cases (versus the benefits), unless the data you're looking for is always at a certain offset and length, etc.

    Besides, the important part is that you can implement custom queueing on gateway and border routers, along with MPLS allowing you to do the same thing (minus a few "features", but not enough to make this anything revolutionary). Custom queueing on a Cisco will give certain types of traffic priority over others. I.e., it can give packets with a destination port (TCP/UDP) of 1720 (H.323) the highest priority, while giving FTP traffic (port 21, etc.) lower priority. How to set it up on a Cisco (and information on it) is at http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cgcr/qos_c/qcpart2/qccq.htm

    You can also do nifty things like give priority to traffic from a certain source subnet (custom queueing can't do that, but using route-maps, etc., it can be done).
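    The port-based prioritization described (H.323 on port 1720 ahead of FTP on port 21) can be sketched as a strict-priority queue keyed on destination port; the port-to-priority table here is illustrative, not Cisco's actual mechanism:

```python
import heapq

# Illustrative port -> priority map (lower number = served first),
# mirroring the H.323 / FTP example above.
PRIORITY = {1720: 0, 21: 2}
DEFAULT_PRIORITY = 1

class PortPriorityQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break within a priority class

    def enqueue(self, dst_port, packet):
        prio = PRIORITY.get(dst_port, DEFAULT_PRIORITY)
        heapq.heappush(self._heap, (prio, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PortPriorityQueue()
q.enqueue(21, "ftp-data")
q.enqueue(1720, "h323-setup")
q.enqueue(80, "http-get")
print(q.dequeue())  # "h323-setup": port 1720 outranks everything queued
```

    Note that strict priority like this can starve the low classes entirely; Cisco's custom queueing avoids that by serving each queue a byte quota per round instead.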

"Let every man teach his son, teach his daughter, that labor is honorable." -- Robert G. Ingersoll