A New Way to Look at Networking
Van Jacobson gave a Google Tech Talk on some of his ideas about how a modern, global network could work more effectively, with more trust in the data that changes hands many times on its way to its final destination.
Watch the talk on Google's site
The man is very smart and his ideas are fascinating. He has the experience and knowledge to see the big picture and what can be done to solve some of the new problems we have. He starts with the beginning of the phone networks and then goes on to briefly explain the origins of the ARPAnet and its evolution into the Internet we use today.
He explains the problems that were faced while using the phone networks for data, and how they were solved by realizing that a new problem had arisen and needed a new, different solution. He then goes on to explain how the Internet has changed significantly from its beginnings in research centres, schools, and government offices into what it is today (lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient).
8 months ago (Score:1, Informative)
Re:8 months ago (Score:4, Funny)
Re: (Score:2, Insightful)
meapplicationdeveloper (Score:1)
Re: (Score:1)
3 hours ago (Score:1)
Re: (Score:1)
eh... (Score:2)
If this means airwaves, same as TV, sure. Why not, since the whole thing is one big infomercial swamp already. Otherwise, it also means guaranteed next-packet delivery, without any pauses, resends, or spinning cursors. And the internet is not going to deliver that, sorry.
Multicasting on a segmented network (Score:5, Insightful)
Re: (Score:1)
Also at 7:01 he says "I'm an old fart" but the subtitle gives this as "I'm an old part".
Who cares?
I'm confused.. (Score:2, Informative)
Re: (Score:1)
Re:Decline of text (Score:5, Informative)
Then I started watching, and at some point noticed I watched the whole thing, without skipping anything.
I think he gives a good talk, and it kept me interested the whole way.
It's a very nice insight he has there; too bad it flies way over Slashdotters' heads (well, it's just that almost all of them probably didn't even read the whole thing).
By the way, I summarized [google.com] his ideas (as I understood them, which may not be the same as he explained them).
Re: (Score:2)
But doesn't this have several major issues?
1) all the freenet problems - that is:
1a) what if people don't want to share bandwidth, or just some specific content? At one end, you have the freenet solution where you either share or don't use it (with that driving
Re: (Score:2)
The idea is that the "network stack" would implement this not just in end-points but in the network infrastructure as well.
Routers and peers alike.
Today you already have
Internet is not TV (Score:5, Insightful)
The first part is true, but does not necessarily lead to the conclusion in the second. There is a huge, very important IF that belongs between them. Specifically, "if the recipients are all prepared to receive those bytes at the same time". The problem with the conclusion is that the evaluation of the "if" part is nearly always "they're not". This is yet another case of "if the internet were like television, it'd be more efficient". Yes, but it would then no longer be the internet people like. The great promise of the internet is information on demand. All this bullcrap about broadcast, push, and the like, it's all the efforts of 20th century throwbacks trying to fit the internet into their outdated worldview of "producers" and "consumers". They need to quit it. Broadcast is a square peg and the internet is a round hole. Every time anyone suggests putting the two together, they simply look like a bloody idiot.
Re: (Score:3, Interesting)
bittorrent (Score:2)
I think bittorrent is the internet's answer to the broadcast problem. BitTorrent is intrinsically adapted to the way the internet works. The data most sought by people will be found on more nodes around the net; less popular data can be downloaded directly from the primary servers.
Re:!!!! (Score:1)
What about bittorrent though? Uses the same TCP/IP protocol, trusts the data, not the source (like he says we must do), answers to th
Re: (Score:3, Informative)
On the contrary, it sounds to me like he's describing an egalitarian network where anybody who is connected to the internet can inject data into it with very little hosting overhead (because the data will be cached inside the network).
The first question at the end of the presentation was, what does he think of Bittorrent? And his problems with it are:
My personal take on this... (Score:2)
let major choke points choke due to not wanting to spend money on equipment; share the costs like a co-op if necessary amongst Tier 1s to keep the backbones and choke points scaled up. People who run traceroutes see Tier 1 choke points now.
2) In the Tier 1 core, make sure it is DWDM folding many layers of SONET/ATM into a multi-channel frequency spectrum; at some point plan on phasing out asynchronous communications and go wit
Re: (Score:2)
I haven't watched the video (hey, this is slashdot), but a system which required me to have ~500GB of local cache wouldn't be out of the question for me. My pipe isn't too big because of where I live, but I've got plenty of storage. If I could keep that pipe full all the time (basically by having my system automatically receive stuff I'm likely to be interested in), that could work.
On the other hand, given how oversold the networks are, it definitely would have to be broadcast/multicast based.
Re:Internet is not TV (Score:4, Insightful)
I think that in order to see the benefits of broadcasting data, you have to take the ISPs' and service providers' point of view, not the final user's. Today, ISPs transmit every request from their users to the service provider, and the service providers answer each user request. In the case of a dynamic web like online shops or search engines, there are no alternatives. But in the case of semi-static websites like news sites, a system of caches synchronized at the ISP level via a regular broadcast from the server could actually save a lot of bandwidth for both the ISP and the service provider.
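A minimal sketch of that ISP-level idea (my own toy illustration; the names, the dict-based store, and the broadcast mechanism are all assumptions, not anything from the talk): the ISP keeps a cache keyed by content name, refreshed by the publisher's broadcast, so repeated subscriber requests never have to leave the ISP's network.

```python
class IspCache:
    """Toy ISP-level cache: content is keyed by name and refreshed
    by the publisher's periodic broadcast, not by user requests."""

    def __init__(self):
        self.store = {}          # name -> content bytes
        self.origin_fetches = 0  # how often we had to hit the origin server

    def broadcast_update(self, name, content):
        # Publisher pushes a fresh copy once; all later subscriber
        # requests for `name` are served locally.
        self.store[name] = content

    def request(self, name, origin):
        if name not in self.store:  # cache miss: fall back to the origin
            self.origin_fetches += 1
            self.store[name] = origin[name]
        return self.store[name]

# Hypothetical origin server and content name, purely for illustration.
origin = {"nytimes/front": b"<html>today's headlines</html>"}
cache = IspCache()
cache.broadcast_update("nytimes/front", origin["nytimes/front"])

# A thousand subscribers request the page; none of them hits the origin.
for _ in range(1000):
    cache.request("nytimes/front", origin)
print(cache.origin_fetches)  # -> 0
```

The design choice this illustrates is the inversion the comment describes: the origin's outbound traffic scales with the broadcast schedule, not with the number of readers.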
Remember the problem Slashdot had with software like NewsTicker when it first provided an RSS feed. That is the kind of problem this wants to solve, if I understand correctly.
Disclaimer: I didn't watch the hour-long video with no transcript. Give me a text version and save that bandwidth already, dammit!
Re: (Score:2)
Congratulations, you just invented the proxy! Yay! And it doesn't make sense because the "popular part" of the web is not static anymore. Even Google's simple homepage lets me sign in and customizes the page for ME. Most news sites do this as well. Google's logo on the other hand is
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
if the recipients are all prepared to receive those bytes at the same time
You are totally right that the recipients will want those bytes at different times, BUT this is not like current television or radio broadcasts, where once broadcast the data no longer exists. The data intended to be broadcast (in the example, an Olympic event) is stored at the same time it is broadcast, and not only by the producer or creator of the data but by every other party (or at least the proxy-like ones) receiving it. It's a bit like BitTorrent: the more interested in g
Re: (Score:1)
Re: (Score:1)
Your concern regarding everyone accessing the data at different times is dealt with in the video, and a lot of other interesting ideas are suggested as well.
Good ideas (Score:4, Interesting)
A bit off topic, but there are two things that I want to see happen: a complete upgrade to IPv6 and the creation of an alternative 'public Internet' based on emerging long distance wifi and software that lets people volunteer to be part of this new open grid, and optionally share some bandwidth bridging the 'real' Internet.
It may seem pointless to want both higher performance (multi-casting UDP, essentially infinite IP address space) and low performance and ad-hoc systems, but please consider: the UK and USA seem to be going down the wrong path of surveillance and citizen control, the Internet may someday be viewed as something that the public just should not have because it is too free a source of information. I hope that I am wrong about this, but this unpleasant possible repressive future is a possibility.
Re: (Score:2)
Re: (Score:2)
It may seem pointless to want both higher performance (multi-casting UDP, essentially infinite IP address space) and low performance and ad-hoc systems, but please consider: the UK and USA seem to be going down the wrong path of surveillance and citizen control, the Internet may someday be viewed as something that the public just should not have because it is too free a source of information. I hope that I am wrong about this, but this unpleasant possible repressive future is a possibility.
Although finding alternative methods of networking and sharing information is always a welcome step, ultimately the only thing that will stop the progression of Orwellian "security at all costs" government control is education and subsequent action by the general populace. Otherwise the second your ad-hoc network challenges the censored internet they'll be driving past your home with wi-fi scanners and knocking on your door shortly thereafter. More to the point, it shouldn't be necessary for such action. W
Re: (Score:2)
I disagree with last second point however: I try to find good sources of information to share with people I know that contradict the spin that we see on the news. A small effect, but enough people take the effort it is effective.
I also make it a habit to frequently contact my elected representatives for both things that they do that I don't like, and even the rare compliment when they get something right
Broadcasting (Score:1)
Van Jacobson's quotes (Score:5, Interesting)
Re: (Score:2)
Someone educate me please. (Score:3, Insightful)
So, sending identical packets to everyone is somehow more bandwidth efficient than sending packets to only those who want them? Doesn't that seem backwards to anyone else? Furthermore, couldn't you define broadcasting as precisely the act of sending identical bytes to many consumers?! I'm teh confused.
TLF
Re:Someone educate me please. (Score:4, Informative)
I think he's trying to push the internet into a bittorrent/usenet type of model. Instead of everyone grabbing a copy from the original server and eating up the bandwidth on the major backbones, we get the information from a more local server that has a cached copy. I believe at the ISP level, he's trying to reduce WAN usage and keep things on the LAN. To an extent I think ISPs are favoring this already: bittorrent is kinda frowned on, but they let you download TV shows off the usenet server with an nzb file.
Re: (Score:1)
Re: (Score:2)
Yeah, this was sort of my reaction as I watched the talk: it does sound like BitTorrent. Basically, instead of everyone fetching a page from a server, the server acts like a sort of 'permanent seed', while the data can be had from any location on the 'net where it happens to reside. Then it is hash-checked, and so forth. Quite like BitTorrent, but with a single worldwide tracker for everything... ok, perhaps I'm taking the anal
Re: (Score:2)
Transcript? (Score:1)
lots of identical bytes being redundantly pushed (Score:1)
Item potent, oc, you hear? (Score:2)
Re: (Score:2)
And if it did try distributing trackers, you'd be in the Nutella world, where...
Re: (Score:2)
"When Copernicus first wrote his paper on planetary motions, the predictions that he gave were really crummy, compared to the tolemaic predictions"
Somebody forgot the 'p'.
Re: (Score:1)
Superb talk: "data dissemination" not mcast/cache (Score:5, Informative)
The article submitter didn't seem to "get" what Van Jacobson was saying, though, as the talk had almost nothing to do with broadcasting or multicasting. Indeed, Van Jacobson actually pointed out why multicasting and broadcasting are inappropriate in most situations in this new world (they carry an implicit time sync), useful only as accelerators on LANs or in other special cases. The slightly-off article description may have misdirected some of the posts here, since not everybody reads TFA, and even fewer sit through an extended talk. It wasn't about broadcast or multicast at all, except in passing.
Maybe it'll help to summarize his thrust briefly.
What he said was that the network underneath doesn't actually matter, and that the wires and fibre underneath don't actually matter either -- TCP/IP has abstracted away from them. However, the client-server model on which TCP/IP is based is no longer strictly relevant either, because it is founded on a somewhat obsolete concept, the "conversation". The vast bulk of our Internet traffic is no longer "conversations", but "data dissemination" (the migration of identified data objects from place to place), and actual conversations are just a special case of that.
Data dissemination is utterly different from conversation as a communications paradigm, and that's what he's getting at. Fully identified, self-validating items of data as discrete entities are really where our focus needs to be, and how they get to us is rather immaterial, or abstracted away. *Where* they come from (i.e. the actual server to which we connect) is quite immaterial too -- getting it from a passing plane would be as good as from a known server, when you can rely on data identity. Furthermore, if the data items were fully self-descriptive, then many of the current problems like spam would go away as well. What's more, the nodes of the network would be able to work more intelligently (and hence more efficiently) too, if they were aware of data identity rather than just treating everything as a conversation.
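That "self-validating item" point can be sketched in a few lines (a toy illustration of content addressing, not VJ's actual protocol; the data and the peer dicts here are invented): if a datum's name is the hash of its bytes, any node at all can serve it, and the receiver verifies integrity without trusting the source.

```python
import hashlib

def name_of(data: bytes) -> str:
    # The content's name *is* its SHA-256 digest, so the name commits
    # to the bytes themselves, not to any particular host.
    return hashlib.sha256(data).hexdigest()

def fetch(name: str, peer: dict) -> bytes:
    """Fetch `name` from an arbitrary, untrusted peer and verify it."""
    data = peer[name]
    if name_of(data) != name:
        raise ValueError("corrupt or forged data")
    return data

original = b"Olympic 100m final, frame 1"
name = name_of(original)

honest_peer = {name: original}                 # could be a passing plane
lying_peer = {name: b"totally different bytes"}

assert fetch(name, honest_peer) == original    # any honest source will do
try:
    fetch(name, lying_peer)                    # tampering is detected
except ValueError as e:
    print(e)  # -> corrupt or forged data
```

This is the same trick BitTorrent uses per piece: trust the data (via its hash), not the host it came from.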
That's a very brief summary and can't hope to do the talk justice. Go listen! He's dead right.
mod parent up! (Score:4, Informative)
One thing I pulled most out of it was the analogy to '60s and '70s networking, and how it is only after a technology has been adopted that we see what it's used for.
When the telephone was invented, Bell didn't know what it would be used for; it's a strange concept, but he really didn't know what a "phone call" was. He just knew he could transmit voice. Not only that, but you had to have wires to connect people, so there was this very expensive business of putting wires everywhere. What happened was that people used those wires to hold conversations. To establish a conversation you had to have a path between two nodes. This encouraged a monopoly, because the best known way to make paths was to control all the wires.
When the idea of what TCP/IP was to become was introduced, people thought it was lunacy. What they were proposing was adding all this crap onto your data to explicitly name your destination, so that it could travel any path to reach its conversation partner. The networking researchers didn't get it, because they already had implicit addresses by way of making the path. It turns out that the supposed inefficiency solved several problems simply by construction. Being able to take any path meant not caring about the underlying topology.
What Van Jacobson is proposing is another abstraction: essentially adding another layer of "crap" that will allow us to ignore the underlying network. He mentions how several technologies are working towards these ends to some degree, like bittorrent and the Akamai CDN, but I think he is advocating something like a new protocol. This new protocol would then end up solving some of our current problems simply by construction. Broadcast and one-to-one would become the same thing. Whether you are sending a secure email (pgp signed and named) or downloading the front page of the nytimes, you could rely on the nature of the new protocol to deliver you authentic data, no matter where it comes from.
Personally I think it's genius, and I'd like to follow the progress of such a protocol if it exists. I just got done watching the talk, so I'll be googling around for a little while, I suppose.
Re: (Score:2)
It's not a protocol. It's a bunch of ideas meant for researchers (or grad students, in his talk). It's meant to get people to think about what they are doing, or, to be more precise, what their goal should be. He mentioned some protocols as examples of partial implementations of his ideas; maybe you could start there.
Re:Superb talk: "data dissemination" not mcast/cac (Score:2, Insightful)
Corporations make money by restricting access to information.
It doesn't seem that it will be possible for them to continue to do that with this model, so I don't think any of this will come to pass any time in the near future.
Re: (Score:1)
Re:Superb talk: "data dissemination" not mcast/cac (Score:1)
Fairly interesting talk... (Score:3, Insightful)
Basically, he's speaking of named resources; a URL would be a key, like KSKs in Freenet.
Content would self-verify; that's basically CHKs in Freenet.
Then you need to add security into it, which pretty much amounts to SSKs.
Only in his case, he wasn't talking about making the end nodes treat information this way but rather the core of the internet, and it didn't involve anonymity. But the general idea was the same: grab content from a persistent swarm of hosts that don't need a connection to the original source. Unfortunately, most of the examples he gives are simply false, like the NY Times front page. If I want up-to-the-minute news, everybody needs to pull fresh copies off the original source all the time, reducing the whole thing to a caching proxy. Any sort of user-specific or interactive content won't work. For example, take slashdot. I've got my reading preferences set up, which means my content isn't the same as yours. Also, my front page contains a link to my home page, which is not the same as yours. Getting a posting form and making a comment wouldn't be possible. Any kind of feedback like digg, youtube, article feedback, etc. isn't possible. Counters wouldn't be possible. The only place it'd work is reasonably static fire-and-forget content, and even then there's the problem of knowing what junk to keep. Notice that when asked about BT he said it only worked for big files, so the idea is that everyone will have some datastore where they keep small files until someone needs them. The only good example is the Olympic broadcast, which is exactly the same content at exactly the same time. Oh wait, that's classic broadcast. Classic broadcast works best in a broadcast model? Who'd have thought.
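The objection about user-specific pages can be made concrete (a toy illustration, stdlib only; the page contents and user names are invented): once a page is personalized, each user's copy hashes to a different name, so a content-addressed cache gets no reuse out of it.

```python
import hashlib

def content_name(page: bytes) -> str:
    # In a content-addressed scheme, the cache key is the hash of the bytes.
    return hashlib.sha256(page).hexdigest()

static_page = b"<html>front page, same for everyone</html>"

def personalized(user: str) -> bytes:
    # Even a tiny per-user difference produces entirely different bytes.
    return b"<html>front page, hello " + user.encode() + b"</html>"

# The static page yields one name no matter how often it's requested:
# every request after the first is a cache hit.
names_static = {content_name(static_page) for _ in range(3)}

# The personalized page yields a distinct name per user: zero cache reuse.
names_dynamic = {content_name(personalized(u)) for u in ("alice", "bob", "carol")}

print(len(names_static), len(names_dynamic))  # -> 1 3
```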
Re: (Score:2)
So take the nytimes for example: if you are looking for up-to-date information, you could get it from anyone who has it. Just bec
Re: (Score:1)
What is the next logical step then?
We can break down content into these named, secure objects. I may contact
Re: (Score:2)
Now today, over 90% of internet traffic is for named content. Yes there is interactive/conversatio
Re: (Score:1)
Remember...that's server side communications in a conversation based system...
If I walk into a mall I don't have to buy everything...I pick what I want...
I think he only addresses one part of the problem. (Score:1)
We need not only the data but the intelligence to manage that data spread around. I can't think of any news or information site that doesn't r
Re:I think he only addresses one part of the probl (Score:1)
When I'm logged into slashdot and browsing news, there's only a very small part of the page that is customized specifically to me. Everything else is exactly the same content everyone else is seeing. Currently the web browser makes separate queries to pull down the images in the page, which are mostly the same for everyone. Perhaps under VJ's scheme the parts of the text specific to me would have to come pretty much directly from slashdot's site, but they could contain references to all the common con
Re:I think he only addresses one part of the probl (Score:1)
In your example of tracking users on Slashdot, parts of that could fall back on the current conversational protocols. It should probably be possib
Akamai (Score:2)
Akamai Technologies is really very much in the business of solving the main problem Jacobson describes. Yes, lots of people want the same information. Jacobson is a very bright man, and he got pretty much everything right except: "You can't Akamize dynamic content." Yes, you can -- unless live feeds of sporting events (NCAA March Madness) aren't considered dynamic enough.
That said, there probably is room for a truly op
Re: (Score:2)
I didn't watch the video, but usually when people talk about "dynamic content" they mean content that is generated on the fly, personalized to a particular user. So, as an example, you typically can't wholesale cache a page generated that way for a user with Akamai - it doesn't make any sense.
Suddenly the Senator Makes More Sense (Score:1)
"It would be nice here to not have big trucks, just little cars."
And suddenly I realized that Senator Stevens had gotten a lecture that he completely misunderstood.
Grante
Water (Score:1)
If you want a drink you don't specify where the water has to come from... you open the tap and out it comes... but when you want to make lemonade or distilled water, let's say... you have to have software on your end (pot and fire) to get out of it what you want...
But that presents the problem of too much bandwidth usage at old usage
Dissemination (Score:1)