A New Way to Look at Networking

Van Jacobson gave a Google Tech Talk on some of his ideas about how a modern, global network could work more effectively, with more trust in the data that passes through many hands on its journey to its final destination. Watch the talk on Google's site. The man is very smart and his ideas are fascinating; he has the experience and knowledge to see the big picture and what can be done to solve some of the new problems we have. He starts with the beginning of the phone networks, then briefly explains the origins of the ARPAnet and its evolution into the Internet we use today. He explains the problems that were faced while using the phone networks for data, and how they were solved by realizing that a new problem had arisen and needed a new, different solution. He then explains how the Internet has changed significantly from its start in research centres, schools, and government offices into what it is today (lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient).
  • 8 months ago (Score:1, Informative)

    by Lars T. ( 470328 )
    The talk was held on Aug 30, 2006.
    • by MarkByers ( 770551 ) on Sunday May 06, 2007 @08:32AM (#19009761) Homepage Journal
      8 months old?! Shame. I guess that most of the information about the history of the telephone network is out-of-date already.
      • Re: (Score:2, Insightful)

        8 months seems pretty new to me. I notice that many of our discussions seem to focus on 1984. Wake up people! A lot has happened since then, and now it's a brave new world.
        • <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
          <html>
          <head>
            <style type="text/css">
          <!-- /* <![CDATA[ */
          @import "/branding/css/tigris.css";
          @import "/inst.css";

          /* ]]> */
          -->
          • I was intending to be humorous with my comment. Were you kidding around too? Or were you just trying to emphasize that CSS is good? I agree that CSS is good.
    • The summary was posted on May 06, 2007 at 08:20.
  • "...where broadcast would be more appropriate and efficient"

    If this means airwaves, same as TV, sure. Why not, since the whole thing is one big info-mercial swamp already. Otherwise, it also means guaranteed next-packet delivery, without any pauses, resends, or spinning cursors. And the internet is not going to deliver that, sorry.
    • by charnov ( 183495 ) on Sunday May 06, 2007 @08:57AM (#19009895) Homepage Journal
      There is no reason you can't multicast across a large segmented network, i.e. the internet, and get good delivery. Radio, television, audio, phone, and movies are all latency-sensitive but not particularly bit-sensitive, so you can drop some packets here and there. That also means that some things would need QoS (VoIP) while others would need intelligent caching and buffering (movies, etc.).
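      For concreteness, a minimal sketch of joining and sending to an IP multicast group with Python's standard socket API (the group address and port are arbitrary placeholders, not anything from the talk):

        import socket
        import struct

        GROUP = "224.1.1.1"   # arbitrary example multicast group
        PORT = 5007

        def sender(payload):
            """One send reaches every subscriber; the network does the fan-out."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
            sock.sendto(payload, (GROUP, PORT))

        def receiver():
            """Join the group and read datagrams; lost packets are simply
            skipped, which suits latency-sensitive, bit-tolerant media."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", PORT))
            mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            while True:
                data, _ = sock.recvfrom(65535)
                print(data)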
  • How is this anything new? Everything in that summary was already covered by Tanenbaum in his excellent book on networks - of course it's easier to hear it from someone if you're too lazy to read :p
  • Internet is not TV (Score:5, Insightful)

    by Dun Malg ( 230075 ) on Sunday May 06, 2007 @09:05AM (#19009937) Homepage
    "(lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient)"

    The first part is true, but does not necessarily lead to the conclusion in the second. There is a huge, very important IF that belongs between them. Specifically, "if the recipients are all prepared to receive those bytes at the same time". The problem with the conclusion is that the evaluation of the "if" part is nearly always "they're not". This is yet another case of "if the internet were like television, it'd be more efficient". Yes, but it would then no longer be the internet people like. The great promise of the internet is information on demand. All this bullcrap about broadcast, push, and the like, it's all the efforts of 20th century throwbacks trying to fit the internet into their outdated worldview of "producers" and "consumers". They need to quit it. Broadcast is a square peg and the internet is a round hole. Every time anyone suggests putting the two together, they simply look like a bloody idiot.
    • Re: (Score:3, Interesting)

      by linvir ( 970218 )
      Just because he says broadcast would be more efficient, doesn't mean that he thinks we should go back to television. Believe it or not, Van Jacobson isn't a 20th century throwback shill for Big Media, or a bloody idiot.
    • Broadcast is a square peg and the internet is a round hole. Every time anyone suggests putting the two together, they simply look like a bloody idiot


      I think BitTorrent is the internet's answer to the broadcast problem. BitTorrent is intrinsically adapted to the way the internet works: the data most sought by people will be found on more nodes around the net, while less popular data can be downloaded directly from the primary servers.
       

      • Everything he says applies to server>client. Producers>consumers. And he proposes changing the current conversational model to one that multicasts the same data to many consumers, supporting this with findings that 99% of data is structured that way. I guess that's wishful thinking though, because he works at Google, the massive meta-producer of data we all consume.

        What about bittorrent though? Uses the same TCP/IP protocol, trusts the data, not the source (like he says we must do), answers to th
        • Re: (Score:3, Informative)

          by kubalaa ( 47998 )

          Everything he says applies to server>client. Producers>consumers.

          On the contrary, it sounds to me like he's describing an egalitarian network where anybody who is connected to the internet can inject data into it with very little hosting overhead (because the data will be cached inside the network).

          What about bittorrent though?

          The first question at the end of the presentation was, what does he think of Bittorrent? And his problems with it are:

          1. Only people actively downloading or seeding content
        • 1) Optical-only routing at the backbone, and traffic monitoring to ensure Tier 1s don't let major choke points choke due to not wanting to spend money on equipment; share the costs like a co-op amongst the Tier 1s if necessary to keep the backbones and choke points scaled up. People who run traceroutes see Tier 1 choke points now.

          2) In the Tier 1 core, make sure it is DWDM, folding many layers of SONET/ATM into a multi-channel frequency spectrum; at some point, plan on phasing out asynchronous communications and go wit
    • by rthille ( 8526 )

      I haven't watched the video (hey, this is slashdot), but a system which required me to have ~500GB of local cache wouldn't be out of the question for me. My pipe isn't too big because of where I live, but I've got plenty of storage. If I could keep that pipe full all the time (basically by having my system automatically receive stuff I'm likely to be interested in), that could work.

      On the other hand, given how oversold the networks are, it definitely would have to be broadcast/multicast based.
    • by Yvanhoe ( 564877 ) on Sunday May 06, 2007 @10:23AM (#19010503) Journal
      Internet is information on demand, but given a large amount of demands, some of the demands are redundant. For instance, it would make a lot of sense for a local ISP to cache the google homepage. Also, when making a modification on said homepage, it would make sense for Google to broadcast a signal to all ISPs to update their caches, or even to broadcast the new homepage to everyone. It is even more interesting in the case of the homepage of news websites.

      I think that in order to see the benefits of broadcasting data, you have to take the ISPs' and service providers' point of view, not the final user's. Today, ISPs transmit every request from their users to the service provider, and the service providers answer each user request. In the case of a dynamic web like online shops or search engines, there is no alternative. But in the case of semi-static websites like news sites, a system of caches synchronized at the ISP level by a regular broadcast from the server could actually save a lot of bandwidth for both the ISP and the service provider (a toy sketch follows this post).
      Remember the problem Slashdot had with software like NewsTicker when it first provided an RSS feed; that's the kind of problem this wants to solve, if I understand correctly.

      Disclaimer: I didn't watch the one-hour-long video with no transcript. Give me a text and save this bandwidth already, dammit!
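      A toy sketch of that synchronized-cache idea (all class and method names here are hypothetical): the origin pushes one update to every subscribed ISP cache instead of answering each user individually.

        class OriginServer:
            """Toy origin that notifies subscribed caches when the page changes."""
            def __init__(self, page):
                self.page = page
                self.caches = []           # ISP-level caches subscribed to updates

            def publish(self, new_page):
                self.page = new_page
                for cache in self.caches:  # one "broadcast", not N user fetches
                    cache.update(new_page)

        class IspCache:
            """Toy ISP cache that serves users locally from its synced copy."""
            def __init__(self, origin):
                self.copy = origin.page
                origin.caches.append(self)

            def update(self, new_page):
                self.copy = new_page       # users now read locally

            def serve(self):
                return self.copy           # no request ever reaches the origin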
      • ISP to cache the google homepage. Also, when making a modification on said homepage, it would make sense for Google to broadcast a signal to all ISPs to update their caches, or even to broadcast the new homepage to everyone.

        Congratulations, you just invented the proxy! Yay! And it doesn't make sense because the "popular part" of the web is not static anymore. Even Google's simple homepage lets me sign in and customizes the page for ME. Most news sites do this as well. Google's logo on the other hand is
    • Watch the last 3 minutes: he answers a question on multicast, explaining why local multicast is sufficient (global UDP multicast is impossible, but not necessary) and offering ideas for local multicast at the room level, enterprise level, etc. I have always liked UDP for some applications (e.g., broadcasting distributed game or VR data), but I always used it locally.
    • Re: (Score:3, Informative)

      I run an IRC server of sorts, and over 90% of my outgoing bandwidth bill is due to identical information being sent at the same time to many clients. Not a day goes by I don't wish there was some sort of error-correcting multicast protocol.
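      For concreteness, the arithmetic behind that bill (a sketch, not the poster's actual server): unicast egress scales linearly with the audience, while a single multicast datagram would not.

        def unicast_egress(message_bytes, clients):
            # Every client gets its own copy of the same bytes on the wire.
            return message_bytes * clients

        def multicast_egress(message_bytes, clients):
            # One copy leaves the server; routers replicate it downstream.
            return message_bytes

        # A 2 KB message to 10,000 clients: ~20 MB unicast vs. 2 KB multicast.
        print(unicast_egress(2048, 10000), multicast_egress(2048, 10000))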
    • Re: (Score:3, Interesting)

      by boteeka ( 970303 )

      if the recipients are all prepared to receive those bytes at the same time

      You are totally right that the recipients will want those bytes at different times, BUT this is not like current television or radio broadcasts, where once broadcast they no longer exist. The data intended to be broadcast (in the example, an Olympic event) is stored at the same time it is broadcast, and not only by the producer or creator of the data but by every other party (or at least the proxy-like ones) receiving it. It's a bit like BitTorrent: the more interested in g

    • Note that neither he nor I said that broadcasting all bits was the ideal solution. In retrospect I probably shouldn't have mentioned the word at all.
    • Seeing as you posted 45 minutes after the Slashdot story was posted, and the video is an hour and 21 minutes long, may I assume you didn't watch the whole thing?

      Your concern regarding everyone accessing the data at different times is dealt with in the video, and a lot of other interesting ideas are suggested as well.
  • Good ideas (Score:4, Interesting)

    by MarkWatson ( 189759 ) on Sunday May 06, 2007 @09:17AM (#19010003) Homepage
    I 'browsed' some of this video and bookmarked it for later: Van Jacobson's background is awesome.

    A bit off topic, but there are two things that I want to see happen: a complete upgrade to IPv6, and the creation of an alternative 'public Internet' based on emerging long-distance wifi and software that lets people volunteer to be part of this new open grid, optionally sharing some bandwidth to bridge to the 'real' Internet.

    It may seem pointless to want both higher performance (multicast UDP, essentially infinite IP address space) and low-performance, ad-hoc systems, but please consider: the UK and USA seem to be going down the wrong path of surveillance and citizen control, and the Internet may someday be viewed as something that the public just should not have because it is too free a source of information. I hope that I am wrong about this, but this repressive future is a real possibility.
    • Something particularly cool, starting around the 1 hour, 4 minute time index in the video: the idea of both naming data resources and versioning. When you "put something out there" on the internet, it is immutable; you can supersede it with a new version, but older versions are still there - sort of making data on the Internet like our personal or workgroup Subversion repositories.
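      One way to picture that put-and-supersede behaviour (the scheme is illustrative, not VJ's actual design): every put is immutable, and a new version shadows the old one without deleting it.

        store = {}  # (name, version) -> content; nothing is ever overwritten

        def put(name, content):
            version = 1 + max((v for (n, v) in store if n == name), default=0)
            store[(name, version)] = content
            return name, version

        def get(name, version=None):
            if version is None:               # default to the latest version...
                version = max(v for (n, v) in store if n == name)
            return store[(name, version)]     # ...but old versions stay readable

        put("nytimes/front-page", b"Monday edition")
        put("nytimes/front-page", b"Tuesday edition")
        assert get("nytimes/front-page") == b"Tuesday edition"
        assert get("nytimes/front-page", 1) == b"Monday edition"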
    • It may seem pointless to want both higher performance (multicast UDP, essentially infinite IP address space) and low-performance, ad-hoc systems, but please consider: the UK and USA seem to be going down the wrong path of surveillance and citizen control, and the Internet may someday be viewed as something that the public just should not have because it is too free a source of information. I hope that I am wrong about this, but this repressive future is a real possibility.

      Although finding alternative methods of networking and sharing information is always a welcome step, ultimately the only thing that will stop the progression of Orwellian "security at all costs" government control is education and subsequent action by the general populace. Otherwise the second your ad-hoc network challenges the censored internet they'll be driving past your home with wi-fi scanners and knocking on your door shortly thereafter. More to the point, it shouldn't be necessary for such action. W

      • Good point on the ease of censorship of an alternative Internet. Thanks.

        I disagree with your second point, however: I try to find good sources of information to share with people I know that contradict the spin we see on the news. A small effect, but if enough people take the effort, it is effective.

        I also make it a habit to frequently contact my elected representatives for both things that they do that I don't like, and even the rare compliment when they get something right :-)
  • This is something that's been on my mind for a long time. In fact, I thought that's how streaming was done, because I couldn't understand why the load would increase so much as more people watched. This should be especially true for internet radio/TV, and for the ads that really suck up the bandwidth, which is why I block them (the ads, that is). I didn't know that each stream was being fed to only one viewer/listener. It always seemed kind of odd to do it that way.
  • by diegocgteleline.es ( 653730 ) on Sunday May 06, 2007 @09:24AM (#19010053)
    Based on all the measurements I'm aware of, Linux has the fastest & most complete stack of any OS (source [lemis.com])
  • lots of identical bytes being redundantly pushed to many consumers, where broadcast would be more appropriate and efficient

    So, sending identical packets to everyone is somehow more bandwidth efficient than sending packets to only those who want them? Doesn't that seem backwards to anyone else? Furthermore, couldn't you define broadcasting as precisely the act of sending identical bytes to many consumers?! I'm teh confused.

    TLF
    • I think he's trying to push the internet toward a BitTorrent/Usenet type of model. Instead of everyone grabbing a copy from the original server and eating up bandwidth on the major backbones, we get the information from a more local server that has a cached copy. I believe that at the ISP level, he's trying to reduce WAN usage and keep things on the LAN. To an extent I think ISPs are favoring this already: BitTorrent is kinda frowned upon, but they let you download TV shows off the Usenet server with an NZB file.

      • It's indeed a bit like P2P, but at the level of the IP nodes: normal P2P doesn't reduce traffic, it shares it amongst peers. I believe he wants to get rid of the ISP altogether. The problem, obviously, is how to finance it all and get people to store others' data.
      • I think he's trying to push the internet into a bittorrent/usenet type of model.

        Yeah, this was sort of my reaction as I watched the talk -- it does sound like BitTorrent. Basically, instead of everyone fetching a page from a server, the server acts like a sort of 'permanent seed', while the data can be had from any location on the 'net that has it. Then it is hash-checked, and so forth. Quite like BitTorrent, but with a single worldwide tracker for everything... ok, perhaps I'm taking the anal

    • Broadcasting makes sense when you're talking about wireless networks instead of wired networks. Isn't everybody getting wireless these days?
  • Is there a transcript of the video available (e.g. just the subtitles pulled out)? It's a bit tedious watching this when reading it will take 1/10th of the time of the video...
  • haven't we solved that with proxies?
  • Wow, some great subtitling here. The guy programs in oc and apparently speaks about item potent data packets...
    • by Handyman ( 97520 )
      More gems:

      ...if I'm connected to SourceForge.net, then the version of nome that I'm pulling over is the most recent nome, because...

      And if it did try distributing trackers, you'd be in the Nutella world, where...
      • by nharmon ( 97591 )
        I liked this one:

        "When Copernicus first wrote his paper on planetary motions, the predictions that he gave were really crummy, compared to the tomaic[...] predictions"

        Somebody forgot the 'p'. :-)
    • There were tons of errors, and a few times the subtitler just used a '-' because they had no idea what he said. The person doing the subtitles was definitely not a networker; things like SHA1 they skipped entirely.
  • by Morgaine ( 4316 ) on Sunday May 06, 2007 @12:27PM (#19011441)
    I enjoyed this talk very much. It was more than just a statement of Van Jacobson's thoughts on data dissemination. It showed his analysis of the relationship between infrastructure and application across two generations of networking, and it pointed out very nicely why it's time now for phase 3: we've moved our usage goalposts compared to when the IP network was designed. Great stuff, and I agree completely.

    The article submitter didn't seem to "get" what Van Jacobson was saying, though, as the talk had almost nothing to do with broadcasting or multicasting. Indeed, Van Jacobson actually pointed out why multicasting and broadcasting are inappropriate in most situations in this new world (they carry an implicit time sync), and should only be used as accelerators on LANs or in other special cases. The slightly wrong article description may have misdirected some of the posts here, since not everybody reads TFA, and even fewer sit through an extended talk. It wasn't about broadcast or multicast at all, except in passing. :-)

    Maybe it'll help to summarize his thrust briefly.

    What he said was that the network underneath doesn't actually matter, and that the wires and fibre underneath don't actually matter either -- TCP/IP has abstracted away from them. However, the client-server model on which TCP/IP is based is no longer strictly relevant either, because it is founded on a somewhat obsolete concept, the "conversation". The vast bulk of our Internet traffic is no longer "conversations", but "data dissemination" (the migration of identified data objects from place to place), and actual conversations are just a special case of that.

    Data dissemination is utterly different to conversation as a communications paradigm, and that's what he's getting at. Fully identified, self-validating items of data as discrete entities are really where our focus needs to be; how they get to us is rather immaterial, or abstracted away. *Where* they come from (i.e. the actual server to which we connect) is quite immaterial too -- getting it from a passing plane would be as good as from a known server, when you can rely on data identity (a sketch of that idea follows below). Furthermore, if the data items were fully self-descriptive, then many of the current problems like spam would go away as well. What's more, the nodes of the network would be able to work more intelligently (and hence more efficiently) too, if they were aware of data identity rather than just treating everything as a conversation.

    That's a very brief summary and can't hope to do the talk justice. Go listen! He's dead right. :-)
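    A minimal sketch of what "fully identified, self-validating" can mean in practice (illustrative, not VJ's actual proposal): name the object by the hash of its bytes, and any recipient can check the data no matter who supplied it.

      import hashlib

      def name_of(content):
          # The name commits to the bytes: trust the data, not the source.
          return hashlib.sha256(content).hexdigest()

      def fetch(name, untrusted_supplier):
          """Accept the object from anyone -- a server, a cache, a passing
          plane -- and verify it against its own name before using it."""
          content = untrusted_supplier(name)
          if hashlib.sha256(content).hexdigest() != name:
              raise ValueError("data does not match its name; discard it")
          return content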
    • mod parent up! (Score:4, Informative)

      by enjahova ( 812395 ) on Sunday May 06, 2007 @01:14PM (#19011781) Homepage
      You give a pretty good short summary of a long and interesting talk.

      The thing I pulled out of it most was the analogy to '60s and '70s networking, and how it is only after a technology has been adopted that we see what it's used for.

      When the telephone was invented, Bell didn't know what it would be used for; it's a strange concept, but he really didn't know what a "phone call" was. He just knew he could transmit voice. Not only that, but you had to have wires to connect people, so there was this very expensive business of putting wires everywhere. What happened was that people used those wires to have conversations. To establish a conversation you had to have a path between two nodes. This encouraged a monopoly, because the best known way to make paths was to have control of all the wires.
      When the idea of what TCP/IP was to become was introduced, people thought it was lunacy. What they were proposing was adding all this crap onto your data to explicitly name your destination, so that it could travel any path to get to its conversation partner. The networking researchers didn't get it, because they already had implicit addresses by way of making the path. It turns out that the supposed inefficiency solved several problems simply by construction. Being able to take any path meant not caring about the underlying topology.

      What Van Jacobson is proposing is another abstraction: essentially adding another layer of "crap" that will allow us to ignore the underlying network. He mentions how several technologies, like BitTorrent and Akamai's CDN, are working towards these ends to some degree, but I think he is advocating something like a new protocol. This new protocol would then end up solving some of our current problems simply by construction. Broadcast and one-to-one would become the same thing. Whether you are sending a secure email (PGP-signed and named) or downloading the front page of the NYTimes, you could rely on the nature of the new protocol to deliver you authentic data, no matter where it comes from.

      Personally I think it's genius. I'd like to follow the progress of such a protocol if it exists. I just got done watching the talk, so I'll be googling around for a little while, I suppose.
      • "Personally I think its genius, I'd like to follow the progress of such a protocol if it exists. I just got done watching the talk so I'll be googling around for a little I suppose."

        It's not a protocol. It's a bunch of ideas meant for researchers (or the grad students in his talk). It's meant to get people to think about what they are doing -- or, to be more precise, what their goal should be. He mentioned some protocols as examples of partial implementations of his ideas; maybe you could start there.
    • I see one problem with his idea of ignoring where data comes from.

      Corporations make money by restricting access to information.

      It doesn't seem that it will be possible for them to continue to do that with this model, so I don't think any of this will come to pass any time in the near future.
      • The data contains all the information, so why not an authentication system: DRM for data. If you can validate that the data is from Company X, then the data should be able to validate that you are allowed to see it. You could still have a copy of the data on your server; you just couldn't use it.
    • I got it, I just posted this late at night with a poor description. In any case people are watching and talking about it, which was my goal.
  • by Kjella ( 173770 ) on Sunday May 06, 2007 @01:17PM (#19011815) Homepage
    ...but the more he talked, the more it reminded me of some half-breed between Akamai and Freenet.
    Basically, he's speaking of named resources, where a URL would be a key, like KSKs in Freenet.
    Content would self-verify; that's basically CHKs in Freenet.
    Then you need to add security into it, which pretty much amounts to SSKs (a sketch of the three follows this post).

    Only in his case, the talk wasn't about making the end nodes treat information this way, but rather the core of the internet, and it didn't involve anonymity. But the general idea was the same: to grab content from a persistent swarm of hosts that don't need a connection to the original source. Unfortunately, most of the examples he gives are simply false, like the NY Times front page. If I want up-to-the-minute news, everybody needs to pull fresh copies off the original source all the time, reducing it down to a caching proxy. Any sort of user-specific content, or interactive content, won't work. For example, take Slashdot. I've got my reading preferences set up, which means my content isn't the same as yours. Also, my front page contains a link to my home page, which is not the same as yours. Getting a posting form and making a comment wouldn't be possible. Any kind of feedback like Digg, YouTube, article feedback, etc. isn't possible. Counters wouldn't be possible. The only thing where it'd work is reasonably static fire-and-forget content, and even then there's the problem of knowing what junk to keep. Notice that when asked about BT he said it only worked for big files, so the idea is that everyone will have some datastore where they keep small files until someone needs them. The only good example is the Olympic broadcast, which is exactly the same content at exactly the same time. Oh wait, that's classic broadcast. Classic broadcast works best in a broadcast model? Who'd have thought.
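    Roughly how those three Freenet key types differ, sketched with stdlib primitives (a simplification of the real formats; HMAC stands in for the public-key signatures real SSKs use):

      import hashlib
      import hmac

      def chk(content):
          # Content Hash Key: the name is the hash of the data itself,
          # so the content self-verifies.
          return "CHK@" + hashlib.sha256(content).hexdigest()

      def ksk(phrase):
          # Keyword Signed Key: derived from a human-readable string,
          # so anyone who knows the phrase can compute the key.
          return "KSK@" + hashlib.sha256(phrase.encode()).hexdigest()

      def ssk(owner_secret, doc_name):
          # Signed Subspace Key: binds a document name to an owner, so
          # only the holder of the secret can publish under that name.
          tag = hmac.new(owner_secret, doc_name.encode(), hashlib.sha256)
          return "SSK@" + tag.hexdigest()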
    • I don't think his examples are false; you aren't looking at what this shift would mean. It essentially eliminates the distinction between broadcasters and one-to-one "conversations." Everybody would have the capability of broadcasting or sending to just one person. The data would be named and signed, so as long as you trust the signature and you know what you are looking for, you can get the data.
      So take the NYTimes for example: if you are looking for up-to-date information you could get it from anyone who has it. Just bec
      • The parent here does a good job explaining that VJ's model is a *superset* of the conversational model we have now. The GP, however, has a very good point that dynamic content relies on the conversational model.

        What is the next logical step then?

        We can break down content into these named, secure objects. I may contact /. to get the home page using the conversational model we are used to. The page I get back, however, can have my customized content wrapped around references to the common, static pieces.
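        A toy sketch of that split (names and format are hypothetical): the personalized shell stays conversational and small, while the heavy shared pieces are named objects any cache on the path could satisfy.

          # Shared, immutable pieces, addressed by name (placeholder names):
          static_store = {
              "obj:css":    b"body { color: green }",
              "obj:story1": b"<article>Van Jacobson talk</article>",
          }

          def fetch_named(name):
              # In VJ's world this lookup could be answered by any nearby cache.
              return static_store[name]

          # The per-user shell is the only part that must come from the origin:
          shell = {"user": "kubalaa", "parts": ["obj:css", "obj:story1"]}

          def render(shell):
              return b"\n".join(fetch_named(n) for n in shell["parts"])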
    • I think you kind of missed the point. He first talked about how originally networking research tried to use the phone system as it was to deliver data. That didn't work so well because in the amount of time it takes to set up a phone call, you can send gigabits of data, so if your conversation is only milliseconds long, it's horribly inefficient. What we needed was a more appropriate model, i.e. packet-switched networks.

      Now today, over 90% of internet traffic is for named content. Yes there is interactive/conversatio
    • "Any sort of user-specifc content, or interactive content won't work. For example take slashdot. I've got my reading preferences set up, which means my content isn't the same as yours."

      Remember... that's server-side communication in a conversation-based system...

      If I walk into a mall I don't have to buy everything... I pick what I want...

  • I think a fully transparent networking paradigm is of paramount importance to both data and the software that manages that data. Adding application-layer logic to routers that effectively makes data I want (or need) available everywhere would (in my mind) restrict the things we can do with the net. I'm a bottom-up kind of guy.

    We need not only the data but the intelligence to manage that data spread around. I can't think of any news or information site that doesn't r
    • When I'm logged into slashdot and browsing news, there's only a very small part of the page that is customized specifically to me. Everything else is the exact same content that everyone else is seeing. Currently the web browser does separate queries to pull down the images in the page, which are mostly the same for everyone. Perhaps under VJ's scheme the text parts that are specific to me would have to come pretty much directly from slashdot's site, but it could contain references to all the common con

    • I don't think he plans for the "request/respond" protocols to be used everywhere in a network. The issue is when many users are requesting the same data. Take for example when a site gets "slashdotted," it'd be more efficient if there weren't several thousand requests for the same exact index.html from some host being individually transmitted through the network.

      In your example of tracking users on Slashdot, parts of that could fall back on the current conversational protocols. It should probably be possib
  • That was one of the very few useful talks I've *ever* seen on shortcomings in the Internet.

    Akamai Technologies is really very much in the business of solving the main problem Jacobson describes. Yes, lots of people want the same information. Jacobson is a very bright man, and he got pretty much everything right except: "You can't Akamize dynamic content." Yes, you can -- unless live feeds of sporting events (NCAA March Madness) aren't considered dynamic enough.

    That said, there probably is room for a truly op
    • he got pretty much everything right except: "You can't Akamize dynamic content." Yes, you can -- unless live feeds of sporting events (NCAA March Madness) aren't considered dynamic enough.

      I didn't watch the video, but usually when people talk about "dynamic content" they mean content that is generated on the fly, personalized to a particular user. So, as an example, you typically can't wholesale cache a page generated that way for a user with Akamai - it doesn't make any sense.

  • At one point in his talk, Van Jacobson talks about segmenting. He mentions that segmenting solves a problem that is metaphorically like having trains and cars on the same city streets; one doesn't want to wait around in a car for a train to clear the intersection. Then he says something that I'd heard before but had never made the connection:

    "It would be nice here to not have big trucks, just little cars."

    And suddenly I realized that Senator Stevens had gotten a lecture that he completely misunderstood.

    Grante
  • This is the water system...translated for data...that's the only way to make it work...All the information (non-specific-user-defined) would have to be in the system at all times...

    If you want a drink you don't specify where the water has to come from...you open the tap and out it comes...but when you want to make lemonade or, let's say, distilled water...you have to have software on your end (pot and fire) to get out of it what you want...

    But that presents the problem of too much bandwidth usage at old usage
  • Some of these issues seem to be addressed (or are being attempted, at least in the early stages) by metalink [metalinker.org] which was discussed at http://slashdot.org/article.pl?sid=07/02/25/144209 [slashdot.org] a few months ago, but I don't think people really understood what it was.
