Interview with Sun's Tim Bray and Radia Perlman 76

ReadWriteWeb writes "To celebrate the 15th anniversary of the World Wide Web, Richard MacManus interviewed two senior engineers from Sun Microsystems - Tim Bray (Director of Web Technologies) and Radia Perlman (Distinguished Engineer). The interview discusses the past and future of the Web, including the impact that Sun's servers have had over the years. Also discussed is the reason why Tim and Radia believe that P2P won't be a driving force on the Web going forward. Radia thinks that having central sites where people can register is key to making the Web scalable and more secure."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • P2P (Score:5, Insightful)

    by Rob T Firefly ( 844560 ) on Wednesday August 09, 2006 @11:20AM (#15874041) Homepage Journal
    Tim and Radia believe that P2P won't be a driving force on the Web going forward. Radia thinks that having central sites where people can register is key to making the Web scalable and more secure.
    I'll say. Nothing feels more scalable and secure than when I register and login to all my favorite P2P trackers.
    • Re:P2P (Score:2, Interesting)

      by Anonymous Coward
      P2P is a dead technology, plain and simple. It can't work in a secure network, for several reasons.

      1. P2P requires holes in firewalls. You cannot use P2P applications safely through a firewall; you must allow incoming connections.

      2. P2P and a distributed attack look identical. There's no way to tell the difference between a P2P application and a worm attacking a network. As such, allowing P2P applications to exist necessarily lessens the security of the network by allowing worms to hide in the P2P traffic.
      • Re:P2P (Score:3, Insightful)

        by Aladrin ( 926209 )
        You've forgotten 1 very very important thing:

        People like it.

        All the technical reasons in the world don't matter if people prefer it to everything else. Until you have actually created and properly hyped a better 'technology', then P2P is here to stay.
      • Re:P2P (Score:5, Insightful)

        by morgan_greywolf ( 835522 ) on Wednesday August 09, 2006 @12:08PM (#15874461) Homepage Journal
        P2P is a dead technology, plain and simple. It can't work in a secure network, for several reasons.


        Who said anything about the Internet being a secure network?

        Look, the Internet, by its very nature, is inherently insecure. It cannot be secure. Only networks where resources can be controlled and managed can be considered secure. You can only secure your own private network, and if that network is connected to the Internet, even via a firewall, its security must be considered at least compromisable, if not already compromised (this depends on how important security is to your network -- U.S. military and civilian intelligence consider air gap security to be the only security that is acceptable in relation to the Internet and their classified systems). P2P or no P2P.

        As for holes in the firewall -- any service your network provides to the public internet requires holes in your firewall. If you don't like that, then don't run services on your public facing connections. *shrug*

  • that decentralization was the driving force to even create ARPANET and TCP/IP. If we centralize everything again we might save some overhead on administration and traffic, but when one or multiple nodes fail, the internet will still be there. If you centralize everything in, say, the USA and that country decides to implement the Great Firewall, you're pretty much boned.

    Well, that's what I think of it... Isn't Sun almost dead?
    • Re:And I thought... (Score:5, Informative)

      by mrogers ( 85392 ) on Wednesday August 09, 2006 @11:29AM (#15874118)
      There's a difference between decentralising the infrastructure and decentralising the control. Radia Perlman's thesis [vendian.org] is a good example: a robust, decentralised routing protocol made possible by a centralised PKI.
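The split mrogers describes (decentralised routing, centralised trust) can be sketched in a few lines: every node signs its link-state announcements, and receivers verify before feeding them into routing. This toy sketch uses an HMAC with a shared key as a stand-in for real public-key signatures certified by a central CA; all names here are illustrative, not from Perlman's thesis.

```python
import hashlib
import hmac
import json

# Stand-in for a CA-issued credential; a real deployment would use
# public-key signatures certified by the central CA, not a shared secret.
CA_KEY = b"toy-ca-key"

def sign_announcement(node_id, neighbours):
    """A node signs its link-state announcement before flooding it."""
    msg = json.dumps({"node": node_id, "neighbours": sorted(neighbours)}).encode()
    tag = hmac.new(CA_KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag

def verify_announcement(msg, tag):
    """Receivers verify the signature before using the announcement for routing."""
    expected = hmac.new(CA_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = sign_announcement("A", ["B", "C"])
print(verify_announcement(msg, tag))         # genuine announcement accepted
print(verify_announcement(msg + b"X", tag))  # tampered announcement rejected
```

The routing computation itself stays fully distributed; only the key certification is centralised, which is the point of the "robust routing via centralised PKI" design.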
    • Fortunately the US is not installing the great firewall. Sadly, other countries like China are. The people within those countries are SOL for some content. I also do not think that TFA is talking about centralizing everything to the point of no fault tolerance.
      • > Fortunately the US is not installing the great firewall.

        Well, in many places in the USA, schools and libraries are required to use filters
        to remove "bad" WWW sites. Btw, the list of "bad" sites is secret, unless you
        resort to reverse engineering. Oh wait, I forgot about the DMCA.

        > Sadly, other countries like China are.

        US companies are providing the technology and know-how, but hey, "let the market decide".

          The United States protects the freedom of the Internet. If you want to use a computer that is 100% purchased and maintained by the government, they have a right to control access. If I used your computer, you would probably not want me doing certain things on it either. This is very different from a nation like China, where the whole country's access is blocked for many different things.

          Most oppression software is not American, but I still disagree with selling to certain actors. At a special event la

        • > Fortunately the US is not installing the great firewall.
          [ ... ]
          > Sadly, other countries like China are.
          US companies are providing the technology and know-how, but hey, "let the market decide".

          Presumably that was meant more or less sarcastically. The question I'd ask is whether you can figure out a way of providing only technology that can't be abused in such ways (and yes, IMO, the great firewall of China is an outright abuse of the technology). While it's applied to a much larger number of

          • > Presumably that was meant more or less sarcastically.

            Yes, it was meant that way ;-) The USA produces so much wonderful technology, only to have
            it abused so much.

            > The question I'd ask is whether you can figure out a way of providing only technology that
            > can't be abused in such ways (and yes, IMO, the great firewall of China is an outright abuse
            > of the technology).

            Most technologies, as you know, can be used for evil, but that does not mean that the technology
            in itself is evil. However, some techn
  • I guess having a centralized server that is prone to attacks does make the internet more secure? How could I have been so stupid.
    • To make the internet more reliable and secure, maybe we could have a whole bunch of centralized servers, all spread out.
    • Re:Oh I get it (Score:3, Informative)

      by dc.wander ( 415024 )
      I wouldn't be so condescending about the suggestion... Radia Perlman has accomplished more for modern networking and the internet than you probably will in your lifetime. She is more than just a "Sun employee." She is the inventor of the Spanning Tree Protocol, among other things http://en.wikipedia.org/wiki/Spanning_tree_protocol [wikipedia.org].

      Maybe check out her book, Interconnections, on Amazon to get a feel for the type of work she does.
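For readers who only know STP as a checkbox on a switch: the core idea is that bridges elect a root (lowest bridge ID wins), then keep exactly one loop-free path from the root to every bridge and block the redundant links. A minimal sketch of that election, with made-up bridge IDs (not an implementation of the actual 802.1D BPDU exchange):

```python
from collections import deque

def spanning_tree(links):
    """Toy spanning-tree election: the lowest bridge ID becomes the root,
    then a BFS from the root keeps one loop-free forwarding path per bridge."""
    bridges = sorted({b for link in links for b in link})
    root = bridges[0]                          # lowest ID wins the election
    adj = {b: set() for b in bridges}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        cur = queue.popleft()
        for nxt in sorted(adj[cur]):
            if nxt not in seen:
                seen.add(nxt)
                active.add(frozenset((cur, nxt)))  # this link forwards
                queue.append(nxt)
    return root, active  # any link not in `active` would be blocked

# A triangle of bridges: one of the three links gets blocked to break the loop.
root, active = spanning_tree([(1, 2), (2, 3), (1, 3)])
print(root, len(active))  # prints: 1 2
```

The real protocol reaches the same result with distributed message passing rather than a global computation, which is what made it deployable on 1980s bridges.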
    • Do you know what the chain of command is? It's the chain I'm going to beat you with till you listen to my commands
      Personally I prefer Jayne's version. "Do you know what the chain of command is? It's the chain I'm going to beat you with until you realize who's in ruttin' command here!"
  • I SWEAR the subject line said "Rhea Perlman" when I first read it.
  • "they own your arse and every search query you ever use" Hang on...
  • by buffoverflow ( 623685 ) on Wednesday August 09, 2006 @11:53AM (#15874336)
    This was a disappointment. I was really hoping for a lot more out of this interview. Two brilliant interviewees (one of whom is arguably the most influential and groundbreaking female engineer ever to work in this industry, the other the creator of one of the most prevalent markup languages used); an interesting topic (I'd like to know what these two think of the past 15 years and, more importantly, what they see to come); and finally, a simpering imp of an interviewer.
    Let the two with the IQs & overly impressive resumes do the talking. MacManus, I'm really hoping you're leaving all the good stuff for part 2. I didn't see much in the way of a single worthwhile question or topic. The writing was dry and elementary.
    Mr. MacManus.. When you get people of this caliber to speak to you, don't treat it like a freshman project for the campus paper. Please do something before you release part 2... Or just toss that page into the fire before you embarrass yourself any more.

    (P.S. It never hurts to plug your interviewees work either... "Interconnections" kicks ass...)
    • Two brilliant interviewees, (one of which is arguably the most influential and groundbreaking female engineer to ever work in this industry

      I have to disagree. No disrespect to Ms. Perlman intended, but I think the term "groundbreaking" more accurately describes the work of Admiral Grace Hopper [wikipedia.org]. I will give you however, that Ms. Perlman is arguably the most influential and groundbreaking female engineer currently working in this industry.

      • Grace Hopper's main achievement was inventing COBOL. "Groundbreaking" is not the word I'd use....
          • At a time when the only programming was done using assembler??? Nope, COBOL was a major step forward. :)
          • 'At a time when the only programming was done using assembler???'

            And FORTRAN and LISP and ALGOL 58 were mainstream languages.

            • How many of those ran on IBM mainframes? Seriously, outside of Fortran I can't think of one that I would expect to see there.
              • LISP was originally implemented on the IBM 704.
                I believe that an ALGOL 58 implementation was begun at IBM, but I don't know if it was ever successfully finished (as you note, FORTRAN was the standard there). A derivative (MAD) was implemented on the IBM 704.
          • by fm6 ( 162816 )
            COBOL was not the first high-level programming language, not by a long shot. There were already languages that knew how to interpret formulas (FORTRAN), process complex data structures (LISP) and even primitive forms of block structuring (Algol). The one big idea that COBOL added to the mix was that source code should resemble natural language (IF X EQUALS 3 OR 4 ADD 1 TO X). Hopper had to have been pretty ignorant about the sheer ambiguity of natural language to make this mistake.
        • Think about what COBOL represented at the time. Adm. Hopper didn't just invent a new high-level language - she invented the concept of high-level programming languages, and the first compiler as well.
          • Invented high-level languages? Compilers? Have you perchance heard of FORTRAN? Algol? Both are older than COBOL.
            • I may have failed to correct the idea that she invented COBOL - but since you were the one who suggested she did, you'll have to share the blame for that. :-)

              Adm. Hopper's actual invention was A, the first compiler and the first of the so-called "third-generation" of "English-like" programming languages. A was released commercially as FLOW-MATIC, which later led to COBOL.

              • Flow-Matic wasn't the first compiled language either. That honor belongs to Fortran, which was first developed in 1953. Every reference I've ever seen credits John Backus with inventing the compiler.

                And distinguishing between Flow-Matic and COBOL is not useful, since both languages have the design flaw I'm criticizing.

                You're getting your info from Wikipedia aren't you? Well, the entry on Flow-Matic is accurate enough, but is easy to misread. It says that Flow-Matic was the first "English-like compiled l

                • You're getting your info from Wikipedia aren't you?

                  Yep. (Yeah, I know...)

                  Well, the entry on Flow-Matic is accurate enough, but is easy to misread. It says that Flow-Matic was the first "English-like compiled language". Which is perfectly true, but not the same thing as being "the first compiled language".

                  Yes, but I'm not talking about FLOW-MATIC when I refer to the first compiler, I'm talking about A-0. From Adm. Hopper's Wikipedia entry [wikipedia.org]:

                  ... A pioneer in the field, she was the first programmer of t

                  • Why should I blame Wikipedia? You're the one who's lending authority to "facts" edited by anonymous bozos with no indication as to where they got their information.
                    • You're the one who's lending authority to "facts" edited by anonymous bozos with no indication as to where they got their information.

                      Are you suggesting that an anonymous slashdot poster is somehow more credible? Why is that? (I'll ignore the "bozos" part - although I can't help but find it amusing that someone with a sig like yours is resorting to name-calling instead of citing better sources of information...)

                      You want more references, just Google for "invented the compiler" (include the quotes) - ever

    • the other is the creator of one of the most prevalent markup languages used

      You mean XML? Bray didn't "create" it. He was a key member of the committee that designed it. Calling him the "creator" devalues the other members of the committee, especially Jon Bosak [wikipedia.org], who defined the need for a simplified SGML and drove the project to create it [sun.com].

    • Agreed. Why would you paraphrase an interview with two incredibly intelligent and interesting people, instead of just giving us the interview verbatim? We don't give a flying fuck about what the interviewer has to say, so his commentary is irritating and irrelevant.
    • Strong agree. I think Tim Bray may be overrated, but Radia Perlman is on my list of "listen to anything they say" people since I heard her at Usenix this year. An incisive and original thinker. (And funny, as in her anecdote of having someone try to explain the difference between a bridge and a switch.) But this interview gets nothing out of her.
    • Ouch! If it's any consolation, part 2 delves into the future of the Web a bit more. Topics discussed will include: Web-connected devices, Web Office, how Sun fits into 'web 2.0', and I pick Tim's brain about Atom (an alternative to RSS that Tim helps drive) and GData. I haven't written that up yet (it took me 3 hours to do part 1, I might add). I'll try to do better in pt 2 ;-) Incidentally, what would you consider a "worthwhile question or topic"? I could always follow up with Tim and Radia, if there'
      • Mr. MacManus... Glad to make your acquaintance. I know this reply is a couple of days late, so I hope you get it before you've completed part 2 of this piece.
        First off, I must apologize for the "simpering imp" comment. I have a great deal of respect for most writers, as I do quite a bit of it and know exactly how difficult a profession it is. All that aside, while I maintain my original stance, I'm not one to poke holes in others work without providing anything constructive in return. First, I must admit th
        • Thanks buffoverflow, your comments are helpful. I will indeed adopt the Q&A style next time. I should also mention that I got very short notice about having the chance to interview Tim and Radia (literally I was told of the opportunity the same day I conducted the interview). So I didn't have much time to prepare questions. It's fair to say my interests are in the Web (Tim's focus) than in the security/networking side of things (Radia's focus), so the questions probably were slanted to the Web.

          Live and learn.

  • by nascarguy27 ( 984493 ) <nascarguy27@nosPAM.gmail.com> on Wednesday August 09, 2006 @11:53AM (#15874340)
    IMHO, the central server structure is the way to go. The entity that owns the central server(s) can concentrate security on those server(s) and thus provide verification that you downloaded what you wanted. You can also track payments and such more easily with a central server structure. With P2P, you never know what you are going to get until you run the file, and it's harder to track for licensing purposes and the like. P2P has been shown to be faster in some applications, but with people getting faster and faster connections to the internet, the speed advantage is going to be less in the future.
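The "verification that you downloaded what you wanted" point boils down to a trusted site publishing a cryptographic digest and the client checking it, whatever transport actually delivered the bytes. A minimal sketch (the digest value and file contents here are made up for illustration):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# The trusted central site publishes this digest alongside the download link.
published_digest = sha256_of(b"official release contents")

def verify_download(data: bytes, expected: str) -> bool:
    """Reject the file unless its digest matches the published one."""
    return sha256_of(data) == expected

print(verify_download(b"official release contents", published_digest))  # True
print(verify_download(b"tampered contents", published_digest))          # False
```

Note that only the digest needs to come from the trusted server; the payload itself could arrive from untrusted peers and still be verified, which is the design choice BitTorrent-style systems make.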
    • IMHO, The central server stucture is the way to go. The entity that owns the central server(s) can concentrate security on those server(s) and thus provide verification that you download what you wanted.

      There's no need for a single central server for this purpose. If anything, a really big site becomes enough of a headache to manage at all that in a lot of cases, there seems to be nobody who understands its overall structure well enough to be at all sure they've provided even minimally adequate securi

  • by Zigurd ( 3528 ) on Wednesday August 09, 2006 @12:19PM (#15874533) Homepage
    "You have no privacy, get over it." - Scott McNealy

    Although McNealy spent a lot of time and ink explaining his point of view, and claiming he was taken out of context, he never backed off that statement. In fact, he clarified it this way: "If there were no audit trails and no fingerprints, there would be a lot more crime in this world. Audit trails deter lots of criminal activity. So all I'm suggesting, given that we all have ID cards anyhow, is to use the biometric and other forms of authentication that are way more powerful and way more accurate than the garbage we use today."

    The part that is wrong about this is that audit trails are for government and corporate operations, to make sure they are honest and within the law, and within the bounds of their investors' and constituents' contracts. Applying the same controls to individuals is oppressive, and McNealy should not have been surprised to find out many people objected to his view.
  • These two experts are talking from the corporate world angle.

    Tracking every minute detail about your customer and being able to control them is #1 priority.

    P2P as we know it is not even an option for business and corporate use. Audit trails, logging and control with recall capability is what they are talking about and is what is wanted by control freaks in the corporate world.

    And they are right, that is what the corps want. Ignore the fact that most people HATE logging in at a site to access things and do n
  • Great, I thought - an interview with one of the brightest people I've ever worked with: must be full of insight and wisdom.

    Don't even waste your time reading it. Just a couple of dull, out-of-context remarks about P2P that the interviewer picked out of what I hope was a rather more interesting conversation. Who is Richard MacManus - and why?
  • Radia thinks that having central sites where people can register is key to making the Web scalable and more secure.

    Central sites?
    Hmm... I thought Sun's slogan was, "The network is the computer".

    • Actually, if you read TFA, you would have seen that she wasn't really talking about a few ginormous servers for everyone to connect to. Instead, she's specifically talking about the anonymity of P2P, and that, for the corporate world to embrace it, the anonymity will have to go away. One way to do this is to have 'centralized' gateways to which you authenticate yourself, which, in turn, take care of the P2P transmission of data. Think layers of networks talking to each other.

      Her ideal is having central

  • Google is based on a network of x-number (say 500,000) of low-grade server PCs.
    They layer on a highly redundant, fault tolerant, hot-computer-swappable,
    massively distributed file system.

    This is a much smarter solution for reliability than centralization. Further
    decentralization (even across corporate boundaries) would lead to even less risk of
    information loss.

    Consider that one single corporation, even with massive decentralization, is still
    vulnerable to a single legal attack by a single misguided corporatio
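The "highly redundant, massively distributed file system" idea in the parent comment reduces to placing each chunk on several nodes so that losing any one node loses nothing. A toy placement sketch (the hash-ranking policy and all names here are illustrative, not how Google's actual file system chooses replicas):

```python
import hashlib

def replica_nodes(chunk_id: str, nodes: list, k: int = 3) -> list:
    """Pick k replica holders for a chunk by ranking nodes on a
    hash of the (chunk, node) pair -- a deterministic toy placement policy."""
    def rank(node):
        return hashlib.sha256(f"{chunk_id}:{node}".encode()).hexdigest()
    return sorted(nodes, key=rank)[:k]

nodes = [f"server{i}" for i in range(10)]
holders = replica_nodes("chunk-42", nodes)
print(len(holders))  # prints: 3

# If one replica holder dies, the chunk survives on the other two, and
# re-running placement over the survivors picks a replacement deterministically.
survivors = [n for n in nodes if n != holders[0]]
print(len(set(replica_nodes("chunk-42", survivors)) & set(holders)))  # prints: 2
```

Deterministic placement like this is also what lets a "logically centralized" service be physically spread out: any front-end can recompute where a chunk lives without consulting a master for every read.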
    • I think you need to make a distinction between logical and physical centralization.

      It's possible (as your Google example points out) to have a physically decentralized system which is logically "centralized," at least insofar as it can be made to look like a monolithic system.

      It's this sort of thing which seems to have a lot of possibilities in the future. Having all your eggs in one basket is just asking for trouble (just ask Napster, or the people who had their websites run out of New Orleans datacenters
    • Dhu,

      That's what Sun has been doing for the last 3 years or so, changing their business model away from big iron to scalable commodity-based systems. Today you can get a Sun Galaxy at a lower cost than a Dell....

      But there are still a lot of customers where a large-scale system is a way better fit than a cluster of 2- to 4-way systems, ask any bank about their core banking system :)

      Cheers
  • http://www.jxta.org/ [jxta.org]

    Does Radia even know about this? One of the few projects Sun funds that hasn't been canned, because it actually makes money.

  • Those who want to sell us a centralized internet conveniently forget why the internet was created in the first place and why even before that the old centralized configurations were traded in for decentralized computing in the '70s and '80s. But it's always lucrative to sell you all new stuff, and if you're a server manufacturer there's not much profit margin in P2P...
  • Given some of the comments about wanting more context, I've now done a podcast of the entire interview [readwriteweb.com].
