A New Way to Look at Networking
Van Jacobson gave a Google Tech Talk on his ideas about how a modern, global network could work more effectively, with more trust in the data that changes many hands on its journey to its final destination.
Watch the talk on Google's site
The man is very smart and his ideas are fascinating. He has the experience and knowledge to see the big picture and what can be done to solve some of the new problems we have. He starts with the beginning of the phone networks and then goes on to briefly explain the origins of the ARPAnet and its evolution into the Internet we use today.
He explains the problems that were faced when using the phone networks for data, and how they were solved by realizing that a new problem had arisen and needed a new, different solution. He then goes on to explain how the Internet has changed significantly from its beginnings in research centres, schools, and government offices into what it is today (lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient).
Multicasting on a segmented network (Score:5, Insightful)
Internet is not TV (Score:5, Insightful)
The first part is true, but does not necessarily lead to the conclusion in the second. There is a huge, very important IF that belongs between them. Specifically, "if the recipients are all prepared to receive those bytes at the same time". The problem with the conclusion is that the evaluation of the "if" part is nearly always "they're not". This is yet another case of "if the internet were like television, it'd be more efficient". Yes, but it would then no longer be the internet people like. The great promise of the internet is information on demand. All this bullcrap about broadcast, push, and the like, it's all the efforts of 20th century throwbacks trying to fit the internet into their outdated worldview of "producers" and "consumers". They need to quit it. Broadcast is a square peg and the internet is a round hole. Every time anyone suggests putting the two together, they simply look like a bloody idiot.
Someone educate me please. (Score:3, Insightful)
So, sending identical packets to everyone is somehow more bandwidth efficient than sending packets to only those who want them? Doesn't that seem backwards to anyone else? Furthermore, couldn't you define broadcasting as precisely the act of sending identical bytes to many consumers?! I'm teh confused.
TLF
Re:Internet is not TV (Score:4, Insightful)
I think that in order to see the benefits of broadcasting data, you have to take the ISPs' and service providers' point of view, not the end user's. Today, ISPs transmit every request from their users to the service provider, and the service providers answer each user request individually. In the case of dynamic sites like online shops or search engines, there is no alternative. But in the case of semi-static websites like news sites, a system of caches synchronized at the ISP level, kept fresh by a regular broadcast from the server, could actually save a lot of bandwidth for both the ISP and the service provider.
Remember the problem Slashdot had with software like NewsTicker when it first provided an RSS feed. That is the kind of problem this wants to solve, if I understand correctly.
Disclaimer: I didn't watch the hour-long video, which has no transcript. Give me text and save this bandwidth already, dammit!
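The caching scheme the parent describes could be sketched roughly like this (a hypothetical illustration; the class and URL names are made up, not anything from the talk). The origin "broadcasts" a fresh copy of a semi-static page to every subscribed ISP cache, and subsequent user requests are answered locally without touching the origin:

```python
# Hypothetical sketch of an ISP-level cache kept fresh by server broadcasts.
# All names here are invented for illustration.

class IspCache:
    def __init__(self):
        self.store = {}          # url -> (version, content)
        self.origin_fetches = 0  # how often we had to fall back to the origin

    def on_broadcast(self, url, version, content):
        """Origin pushes a fresh copy to every subscribed ISP at once."""
        self.store[url] = (version, content)

    def get(self, url, fetch_from_origin):
        """Serve a user request from the cache, falling back to the origin."""
        if url in self.store:
            return self.store[url][1]
        self.origin_fetches += 1
        content = fetch_from_origin(url)
        self.store[url] = (0, content)
        return content

cache = IspCache()
cache.on_broadcast("news.example/front", 1, "<html>headlines v1</html>")

# A thousand users behind this ISP request the front page...
for _ in range(1000):
    page = cache.get("news.example/front", lambda u: "<html>origin copy</html>")

# ...and the origin was contacted zero times instead of a thousand.
assert cache.origin_fetches == 0
```

The point is that one broadcast replaces N identical origin responses, which is exactly the bandwidth saving claimed for semi-static content.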
Fairly interesting talk... (Score:3, Insightful)
Basically, he's speaking of named resources, where a URL would be a key, like KSKs in Freenet.
Content would self-verify; that's basically CHKs in Freenet.
Then you need to add security to it, which pretty much amounts to SSKs.
Only in his case, the talk wasn't about making the end nodes treat information this way but rather the core of the Internet, and it didn't involve anonymity. But the general idea was the same: to grab content from a persistent swarm of hosts that don't need a connection to the original source. Unfortunately, most of the examples he gives are simply false, like the NY Times front page. If I want up-to-the-minute news, everybody needs to pull fresh copies off the original source all the time, reducing the whole thing to a caching proxy. Any sort of user-specific or interactive content won't work. For example, take Slashdot. I've got my reading preferences set up, which means my content isn't the same as yours. Also, my front page contains a link to my home page, which is not the same as yours. Getting a posting form and making a comment wouldn't be possible. Making any kind of feedback like Digg, YouTube, article feedback, etc. isn't possible. Counters wouldn't be possible. The only thing where it'd work is reasonably static fire-and-forget content, and even then there's the problem of knowing what junk to keep. Notice that when asked about BitTorrent he said it only worked for big files, so the idea is that everyone will have some datastore where they keep small files until someone needs them. The only good example is the Olympic broadcast, which is exactly the same content at exactly the same time. Oh wait, that's classic broadcast. Classic broadcast works best in a broadcast model? Who'd have thought.
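The "self-verifying content" idea mentioned above (CHK-style naming) can be sketched in a few lines. This is an illustration in the spirit of content-hash keys, not Freenet's actual key format: the name of a piece of content is a hash of its bytes, so any host in the swarm can prove a copy is authentic without ever contacting the original source.

```python
# Minimal sketch of self-verifying, content-hash naming (CHK-like).
# Illustrative only; not Freenet's real key format.

import hashlib

def chk_name(content: bytes) -> str:
    """Derive a name from the data itself: the SHA-256 of its bytes."""
    return hashlib.sha256(content).hexdigest()

def verify(name: str, content: bytes) -> bool:
    """Check that data fetched from an untrusted host matches its name."""
    return chk_name(content) == name

original = b"front page, 10:00 edition"
name = chk_name(original)

assert verify(name, original)                    # honest copy passes
assert not verify(name, b"tampered front page")  # altered copy fails
```

This also shows why such names fit static content but not dynamic content: any edit to the bytes produces a different name, so an "up-to-the-minute" page would need a new name every minute.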
Re:Superb talk: "data dissemination" not mcast/cac (Score:2, Insightful)
Corporations make money by restricting access to information.
It doesn't seem that it will be possible for them to continue to do that with this model, so I don't think any of this will come to pass any time in the near future.
Re:8 months ago (Score:2, Insightful)