A New Way to Look at Networking

Van Jacobson gave a Google Tech Talk on his ideas of how a modern, global network could work more effectively, and with more trust in the data that changes many hands on its journey to its final destination. Watch the talk on Google's site. The man is very smart and his ideas are fascinating; he has the experience and knowledge to see the big picture and what can be done to solve some of the new problems we face. He starts with the beginning of the phone networks, then briefly explains the origins of the ARPAnet and its evolution into the Internet we use today. He explains the problems faced when using the phone networks for data, and how they were solved by recognizing that a new problem had arisen and needed a new, different solution. He then explains how the Internet has changed significantly from its start in research centres, schools, and government offices into what it is today (lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient).
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by charnov ( 183495 ) on Sunday May 06, 2007 @09:57AM (#19009895) Homepage Journal
    There is no reason you can't multicast across a large segmented network, i.e. the internet, and get good delivery. Radio, television, audio, phone, and movies are all latency-sensitive but not particularly bit-sensitive, so you can drop some packets here and there. That also means that some things would need QoS (VoIP) while others would need intelligent caching and buffering (movies, etc.).
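The QoS split described above can be sketched as a strict-priority dispatcher: latency-sensitive traffic always goes out before bulk traffic. This is a minimal illustration only; the traffic classes and priority values are assumptions for the sketch, not anything from the talk.

```python
import heapq
from itertools import count

class QosQueue:
    """Strict-priority packet queue: VoIP beats video, video beats bulk.
    Class names and priorities are illustrative assumptions."""
    PRIORITY = {"voip": 0, "video": 1, "bulk": 2}

    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(
            self._heap,
            (self.PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue("bulk", "movie-chunk-1")
q.enqueue("voip", "rtp-frame-1")
q.enqueue("bulk", "movie-chunk-2")
print(q.dequeue())  # rtp-frame-1: the VoIP packet jumps the bulk queue
```

A real scheduler would also need rate limiting so bulk traffic is never starved, but the ordering above is the core of strict-priority QoS.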
  • Internet is not TV (Score:5, Insightful)

    by Dun Malg ( 230075 ) on Sunday May 06, 2007 @10:05AM (#19009937) Homepage
    "(lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient)"

    The first part is true, but does not necessarily lead to the conclusion in the second. There is a huge, very important IF that belongs between them. Specifically, "if the recipients are all prepared to receive those bytes at the same time". The problem with the conclusion is that the evaluation of the "if" part is nearly always "they're not". This is yet another case of "if the internet were like television, it'd be more efficient". Yes, but it would then no longer be the internet people like. The great promise of the internet is information on demand. All this bullcrap about broadcast, push, and the like, it's all the efforts of 20th century throwbacks trying to fit the internet into their outdated worldview of "producers" and "consumers". They need to quit it. Broadcast is a square peg and the internet is a round hole. Every time anyone suggests putting the two together, they simply look like a bloody idiot.
  • by The Living Fractal ( 162153 ) <banantarr@hot m a i l.com> on Sunday May 06, 2007 @10:45AM (#19010201) Homepage

    lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient

    So, sending identical packets to everyone is somehow more bandwidth efficient than sending packets to only those who want them? Doesn't that seem backwards to anyone else? Furthermore, couldn't you define broadcasting as precisely the act of sending identical bytes to many consumers?! I'm teh confused.

    TLF
  • by Yvanhoe ( 564877 ) on Sunday May 06, 2007 @11:23AM (#19010503) Journal
    The Internet is information on demand, but given a large number of demands, many of them are redundant. For instance, it would make a lot of sense for a local ISP to cache the Google homepage. Likewise, when modifying that homepage, it would make sense for Google to broadcast a signal to all ISPs to update their caches, or even to broadcast the new homepage to everyone. It is even more interesting for the homepages of news websites.

    I think that in order to see the benefits of broadcasting data, you have to take the point of view of the ISPs and service providers, not the end user's. Today, ISPs transmit every request from their users to the service provider, and the service providers answer each user request. For a dynamic web like online shops or search engines, there is no alternative. But for semi-static websites like news sites, a system of caches synchronized at the ISP level by a regular broadcast from the server can actually save a lot of bandwidth for both the ISP and the service provider.
    Remember the problem Slashdot had with software like NewsTicker when it first provided an RSS feed. That is the kind of problem this wants to solve, if I understand correctly.

    Disclaimer: I didn't watch the hour-long video with no transcript. Give me a text and save this bandwidth already, dammit!
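The ISP-side cache with broadcast updates described in that comment might look roughly like this toy sketch. The URL, version strings, and the push mechanism are all hypothetical; the point is only that thousands of user reads collapse into one origin fetch per ISP, and that the origin can push a new version instead of waiting for cache misses.

```python
class IspCache:
    """A toy ISP-level cache: serves users locally, fetches from the
    origin only on a miss, and accepts pushed (broadcast) updates."""

    def __init__(self):
        self._store = {}
        self.origin_fetches = 0

    def push(self, url, body):
        # Origin broadcasts a fresh copy; no user request needed.
        self._store[url] = body

    def get(self, url, origin):
        if url not in self._store:       # miss: one origin fetch, then cached
            self._store[url] = origin[url]
            self.origin_fetches += 1
        return self._store[url]

origin = {"https://news.example/front": "v1"}   # hypothetical news site
caches = [IspCache() for _ in range(3)]         # three ISPs

# 1000 users behind each ISP read the front page: 1 origin fetch per ISP.
for cache in caches:
    for _ in range(1000):
        cache.get("https://news.example/front", origin)

# The origin updates the page and broadcasts it to every ISP cache,
# instead of letting 3000 stale reads trickle back as fresh requests.
origin["https://news.example/front"] = "v2"
for cache in caches:
    cache.push("https://news.example/front", "v2")

print(sum(c.origin_fetches for c in caches))  # 3 origin fetches for 3000 reads
```

The hard part a real system adds is consistency: deciding how stale a cached page may be between broadcasts, which is exactly where the dynamic sites mentioned above stop fitting the model.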
  • by Kjella ( 173770 ) on Sunday May 06, 2007 @02:17PM (#19011815) Homepage
    ...but the more he talked, the more it reminded me of some half-breed between Akamai and Freenet.
    Basically, he's speaking of named resources: a URL would be a key, like KSKs in Freenet.
    Content would self-verify; that's basically CHKs in Freenet.
    Then you need to add security on top, which pretty much amounts to SSKs.

    Only in his case, he wasn't talking about making the end nodes treat information this way but rather the core of the Internet, and it didn't involve anonymity. But the general idea was the same: to grab content from a persistent swarm of hosts that don't need a connection to the original source. Unfortunately, most of the examples he gives are simply false, like the NY Times front page. If I want up-to-the-minute news, everybody needs to pull fresh copies off the original source all the time, reducing it to a caching proxy. Any sort of user-specific or interactive content won't work. Take Slashdot, for example: I've got my reading preferences set up, which means my content isn't the same as yours. My front page also contains a link to my home page, which is not the same as yours. Getting a posting form and making a comment wouldn't be possible. Any kind of feedback like Digg, YouTube, or article ratings isn't possible. Counters wouldn't be possible. The only thing where it'd work is reasonably static fire-and-forget content, and even then there's the problem of knowing what junk to keep. Notice that when asked about BitTorrent he said it only worked for big files, so the idea is that everyone will have some datastore where they keep small files until someone needs them. The only good example is the Olympic broadcast, which is exactly the same content at exactly the same time. Oh wait, that's classic broadcast. Classic broadcast works best in a broadcast model? Who'd have thought.
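The CHK-style self-verification mentioned above can be sketched in a few lines: name the content by its own hash, so any untrusted node can serve the bytes and any receiver can check them without contacting the origin. This assumes SHA-256 as the naming hash and is an illustration of the principle, not Freenet's actual key format.

```python
import hashlib

def chk_name(content: bytes) -> str:
    """A CHK-style name: the hash of the content itself, so the
    name and the data verify each other."""
    return hashlib.sha256(content).hexdigest()

def fetch(name: str, swarm: dict) -> bytes:
    """Fetch from an untrusted store and self-verify the result."""
    data = swarm[name]
    if chk_name(data) != name:          # the self-verification step
        raise ValueError("content does not match its name")
    return data

page = b"<html>Olympic broadcast</html>"   # hypothetical content
name = chk_name(page)
swarm = {name: page}                       # untrusted intermediate cache

assert fetch(name, swarm) == page          # verified without the origin

swarm[name] = b"tampered"                  # a bad node alters the bytes
try:
    fetch(name, swarm)
except ValueError:
    print("tamper detected")
```

Note that this only authenticates immutable content, which is exactly the objection above: a hash-named object can never change, so fresh or per-user pages need a signed, mutable naming layer (the SSK analogue) on top.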
  • by btaylor ( 1066810 ) on Sunday May 06, 2007 @03:10PM (#19012189)
    I see one problem with his idea of ignoring where data comes from.

    Corporations make money by restricting access to information.

    It doesn't seem that it will be possible for them to continue to do that with this model, so I don't think any of this will come to pass any time in the near future.
  • Re:8 months ago (Score:2, Insightful)

    by eugene ts wong ( 231154 ) on Sunday May 06, 2007 @05:20PM (#19013225) Homepage Journal
    8 months seems pretty new to me. I notice that many of our discussions seem to focus on 1984. Wake up people! A lot has happened since then, and now it's a brave new world.
