The Internet

Will BXXP Replace HTTP?

Stilgar writes: "Seems like one of the Internet architects, Marshall Rose, is at it again. This time he has invented the Blocks Extensible Exchange Protocol (BXXP), which seems like a *much* nicer alternative to the aging HTTP (if the IETF will give it a chance). Check out the story at NetworkWorldFusion news." From the article: "One special feature of a BXXP connection is it can carry multiple simultaneous exchanges of data - called channels - between users. For example, users can chat and transfer files at the same time from one application that employs a network connection. BXXP uses XML to frame the information it carries, but the information can be in any form including images, data or text."
  • It may be that there are great advantages, but completely changing a protocol that has been the mainstay for so long is difficult, if not impossible!
    kick some CAD [cadfu.com]
  • If BXXP is significantly more complicated than HTTP, I don't see it replacing HTTP. HTTP (and HTML) became widely popular because they are very simple to write code for. If I have a quick program to throw together that makes a socket connection to a remote server to retrieve data from a custom CGI query, I'm going to use HTTP, because it's a simple process of sending a URL as a request and then reading the result of that request until EOF (a rough sketch follows). If BXXP requires abstractions such as XML and the overhead of writing multithreaded code just to pull down a single document, then I'll stick to HTTP, thank you.

    BXXP may find a niche, but I doubt it will replace HTTP.
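    To make that "simple process" concrete, here is a minimal sketch in Python (the host, port and path are made up, and real code would want error handling):

        import socket

        # Open a TCP connection to the web server (hypothetical host and port).
        sock = socket.create_connection(("www.example.com", 80))

        # HTTP/1.0 style: send one request, then read the response until EOF.
        sock.sendall(b"GET /cgi-bin/query?x=1 HTTP/1.0\r\nHost: www.example.com\r\n\r\n")

        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:              # server closes the connection when it is done
                break
            chunks.append(data)
        sock.close()

        response = b"".join(chunks)   # status line + headers + body, all in one blob

    That is the entire client. The comment's worry is that the equivalent BXXP client would, at minimum, also have to deal with XML framing and channel setup before it saw a single byte of the document.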

  • Yes, it's funny, I'll concede that. But when you think about it, it's the ideal acronym.

    Blocks
    Extensible
    Exchange
    Protocol.
    That looks like BEEP to me. It's easier to pronounce than "ay she cheat chee peep" or however you end up mispronouncing "HTTP".

    And as for "beex"... "bee ix bix"... "beep ex pexip"... ah, screw it, I'm calling it beep anyways! You can't make me call it "bxxp"!

    (Hell, you can't even help me call it that... I'd rather dictate a thesis on the merits of Peter Piper picking pickled peppers. :) )
  • Just to shout out for a bit: in an age where browser source is open (Mozilla) as well as the servers (Apache), if there is a better protocol, implement it! Hell, most of what is trendy in HTML is there only because Netscape/Micro$~1 thought they'd throw it in. If you document it, and release working models of the software into the wild, it will get used.

    The same thing goes for DNS entries. The only thing stopping people up to now has been the fact that no one wants to foot the bill for alternative root servers. It makes you wonder if something like the distributed Gnutella mentality would work for DNS lookups.

    I guess I'm saying ditch this 'they' shit. Do it, and if it's a good idea, or if a ton of people start to use it, you can bet someone's going to try and capitalize on it.

    If you build it, they will come.
  • Multicasting isn't easy. Multicasting is not something that just needs to be implemented; it is still a subject of research, trying to find efficient ways to handle millions of endpoints on the Internet. The current experimental implementation (MBone) implements some experimental multicasting algorithms - some of them involve flooding to find out who wants something.

    The question is who should manage receiver lists - the server, the ISP's routers or multiple levels of routers - and how to do that without creating megabyte-sized lists that can't be handled by routers without slowing them down immensely. Another question is how the start/stop of receiving a stream should be handled - regularly initiated by the server/router or by the client.

    It's not for economic reasons that multicast hasn't caught on (hell, there'd be lots of economic reasons to push it on everyone!), it's that there really isn't any widely usable multicast mechanism ready yet.

  • This is functional programming in my book: you describe what to do, not how to do it - in other words, a specification.

    OOP and functional programming are not contradictory.

    Of course I wouldn't call this programming at all, or a protocol. I would call BXXP a meta-protocol.

    - Steeltoe
  • There are tons of tools out there that can speak HTTP; in fact, HTTP is so simple that you can do it by hand (I know, I've been both client and server).

    And as far as 'multiple channels' that can be done with multiple HTTP connections over TCP. BXXP may be a little better, but it's got to be a lot better for people to want to use it over HTTP.
  • Actually, I don't think I'm confused. You've hit the nail on the head: why create a protocol in the *application* layer, when there are protocols that solve these problems in lower-level layers?

    I'm sure there are good reasons to re-implement these in the application layer. But what are they?
  • BXXP (or BEEP) is a good start, but not quite what I want or need as an e-commerce developer.

    Currently, if I want to design a "fully" interactive site that behaves the same way an application would, I'd have to write some kind of Java GUI, for portability reasons, that communicates back and forth with some central server running a specialized program that listens on some unused port and processes my cryptic and hastily thrown together messages. However, this is a lot of work, and with my message structure changing every few days it's extremely difficult to manage, since you have to change both the server and client code.

    What I'd like to see is a new protocol with its own server - an application server vs. the current web server. I envision something like an asynchronous connection where both ends pass standard messages back and forth in real time, something similar to the messages passed back and forth between an OS and an application. It would also have to have what everybody is screaming for: one persistent connection, encryption, user validation, and a whole slew of other things I can't think of now.

    The main problem is agreeing on a standard messaging system that isn't OS specific, allows users to format data however they wish, and still provides a means for expansion without dozens of revisions before a stable and workable model is created.

    Along with this there would need to be an update to web programming languages, yet another Java package, or maybe even a whole new net language. This would turn the current web browser into an application browser, where one program could act as anything you wanted it to be. I suspect this is what MS is doing or intends to do, however I doubt they'll open the specifics to everyone, at least not without a signed non-disclosure agreement.

    Well, with all that out of the way, all we need to do is come up with some acronym. I recommend nssmp (not so simple messaging protocol), or how about wrcwicp (who really cares what it's called protocol)?
  • by Fzz ( 153115 ) on Tuesday June 27, 2000 @09:01AM (#972843)
    I was at the IETF meeting in March where Marshall presented BXXP, and I think there's some misunderstanding about what BXXP really is. It isn't a complete protocol in the way that HTTP is. It's more of a protocol framework, which you layer new protocols on top of.

    HTTP was designed as a single-purpose protocol. Because it's understood by firewalls, etc., it gets used for just about everything, even if it's not really appropriate.

    BXXP aims to provide a well-thought-out common building block for a whole class of new network protocols. If HTTP were being designed after BXXP, the obvious thing would have been to layer HTTP on top of BXXP.

    So, really BXXP isn't intended to replace anything. It's intended to make it easier to deploy new protocols in the future.

    -Fzz

  • You and others keep making these "example" XML documents, but who is to say that a BXXP document will look anything like them? Why couldn't it be:

    <bxxp>
    Content-Type: text/html
    Content-Length: whatever
    Other-Headers: Other-Info

    <HTML>
    my HTML document here
    </HTML>
    </bxxp>

    There's no reason you need any more of a wrapper than one tag. All these assumptions are totally baseless. It _could_ be bloated, or it _could_ take no more than 13 extra bytes per request.

    ---
    Tim Wilde
    Gimme 42 daemons!
  • Comment removed based on user account deletion
  • Actually, Dan Bernstein, the author of qmail [cr.yp.to], already has QMTP [cr.yp.to] (Quick Mail Transfer Protocol) ready as a replacement for SMTP.
  • Firewalls are there for a reason - if you tunnel other protocols over HTTP, you are bypassing the firewall and had better have a very good reason for doing so. Lots of vendors do this, including Microsoft with DCOM and many CORBA vendors as well, but that's not much of an excuse.

    Firewall-hostile is a better term for protocols that carry many different types of data.

    As a QoS weenie (see http://www.qosforum.com/) I also don't like the way that HTTP can carry many different types of data requiring different QoS (performance) levels, e.g.:

    - DCOM over HTTP: needs fairly low latency
    - CGI transactions: ditto
    - RealAudio over HTTP: needs low loss, consistent bandwidth
    - static pages and images: no particular QoS needed

    The only way to handle this is to classify every type of HTTP interaction separately, using URLs to map packets into Gold, Silver, and Bronze QoS levels. This is feasible but fragile - as soon as the web-based app or website is updated, you may have to re-write these rules.

    Even worse, HTTP/1.1 puts all interactions with a given host into a single TCP session (a good idea in many ways), which makes it a waste of time to prioritise some packets ahead of others - the receiving TCP implementation simply waits until the out of order packets arrive, sending a duplicate ACK in the meantime. Severe packet re-ordering may even lead to slow start in the sending TCP (three duplicate ACKs means slow start from a single packet window size).

    Similar issues apply to security - you might want to encrypt only the transaction data and not bother encrypting images, for example, or route some data via a more secure network. SSL does solve some of these problems but requires application support.

    Oh well, enough ranting - HTTP is a great protocol in many ways, but there are costs as well as benefits to abusing it to do things it was never meant to do...
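    A rough sketch of the URL-based classification described above, in Python (the patterns and class names are invented for illustration; in a real network they would drive router or proxy marking rules):

        import re

        # Hypothetical mapping from URL patterns to QoS classes.
        QOS_RULES = [
            (re.compile(r"\.(rm|ra|mp3)$"),       "Gold"),    # streaming: low loss, steady bandwidth
            (re.compile(r"/cgi-bin/|\.cgi\b"),    "Silver"),  # transactions: low latency
            (re.compile(r"\.(gif|jpe?g|html?)$"), "Bronze"),  # static content: best effort
        ]

        def classify(url):
            for pattern, qos_class in QOS_RULES:
                if pattern.search(url):
                    return qos_class
            return "Bronze"   # default: best effort

        # The fragility the comment mentions: rename a script or move an image
        # and the matching rule silently stops applying.
        print(classify("/cgi-bin/order.cgi"))   # Silver
        print(classify("/images/logo.gif"))     # Bronze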
  • Do you have references on the reliable unicast protocols being worked on for IPv6 and Linux?

    There are many reliable multicast protocols around, some of which should work for reliable unicast as well, e.g. RAMP (http://www.metavr.com/rampDIS.html). See http://www.faqs.org/rfcs/rfc2357.html for criteria for reliable multicast protocols; it has some good references.

    There is also T/TCP, a variant of TCP which is intended for request/reply transactions - it lets you send data with the initial SYN request and get data back with the ACK, so it's not much less efficient than UDP-based protocols. It would be ideal for HTTP/1.0 but has not really been deployed much.

    RADIUS uses its own reliable unicast approach over UDP, and is widely deployed, but it's not a separate re-usable protocol. See www.faqs.org for details.

    Some problems with reliable unicast are:

    - congestion control - there's no congestion window so it's very easy to overload router buffers and/or receiving host buffers, leading to excessive packet loss for the sender and for other network users

    - spoofing - it's even easier to spoof these protocols than with TCP (I think this is why T/TCP never took off)

    As for BXXP - I agree about difficulty of understanding what's going on. QoS-aware networks and firewalls all prefer one application mapping to one (static) port.
  • One big reason for multicast not catching on is the ability for a single multicast sender to cause congestion at many different points of a network, along with the problems of multicast network management. Until multicast becomes manageable its adoption will be quite slow. A useful paper on this is at
    http://www.winsock2.com/multicast/whitepapers/managing.htm

    There's some interesting work on fixing this, though - I forget the details but a recent IEEE Networking magazine had a special issue on multicast including info on why it's not been deployed yet.

    Also, for small groups, there is a new protocol called SGM that runs over unicast UDP and unicast routing - the idea is that if you want to multicast to a group of 5-10 or so people, and there's a large number of such groups, you're better off forgetting about IGMP and multicast routing, and just including the list of recipients in the packet. Very neat, though it still requires router support. Some URLs on this:

    http://www.computer.org/internet/v4n3/w3onwire-b.htm

    http://www.ietf.org/internet-drafts/draft-boivie-sgm-00.txt

    http://icairsvr.nwu.icair.org/sgm/
  • .... look at how long it's taking for HDTV or IPv6 to be adopted! Moving from a legacy communications protocol will probably take at least 5-10 years in a best case scenario.

    Except that the difference between rolling out HDTV and a replacement for http is where the changes take place. With HDTV you have to upgrade the equipment at the broadcast and receiver end. You also have to deal with limited bandwidth for broadcasting that makes it difficult to serve both technologies at the same time. With a protocol change, you can transition more easily by including both standards in your browser and on the websites. How many sites already keep multiple formats on hand? Do you want frames or no frames? Do you want shockwave or no shockwave? Would you like to view the video in RealPlayer or Quicktime? I can update my browser for free. How much does that new HDTV box cost?

    carlos

  • by Phexro ( 9814 ) on Tuesday June 27, 2000 @08:13AM (#972861)
    on the other hand, http is fairly simple, is proxy and firewall friendly, and it's ubiquitous. http is going to be here for quite some time, simply because of the amount of deployment - bxxp or no bxxp.

    of course, look at how fast http superseded gopher.

    of slightly more interest to me is the security implications of bxxp. since it's two-way, it could be difficult to filter, and opens up all sorts of interesting possibilities for propagation of virii and spam.

    --

  • From what I understood, HDTV is a failure because it doesn't solve problems.

    When you ask what's wrong with TV now, you find "content", "it's boring",...

    The problems are with the content (that's why you are reading Slashdot instead of watching TV), not the image quality.

    1050 or 1250 horizontal lines are not enough to justify spending the money at both the transmitting and the receiving ends.
    Of course, prices would go down after massive uptake, but there is not enough initial adoption.

    Digital TV, on the other hand, promises more channels on the same medium. The content will be as bad or worse, but there will be more of it. So Digital TV has more of a chance to replace PAL, SECAM & NTSC.
    __
  • by account_deleted ( 4530225 ) on Tuesday June 27, 2000 @08:14AM (#972865)
    Comment removed based on user account deletion
  • For some years now, the W3C has been playing with the idea of URNs or URIs (I don't remember which). These are an evolution of URLs that don't specify a machine location. But I have never found out whether there are suitable implementations.

    Hacking HTML for multiple destinations is at most a kludge.

    A promising thing is at Freenet. You specify what document you want and say nothing about its location.

    About load balancing, I remember that the WBI web proxy (somewhere at IBM's AlphaWorks) had an option to do a ping for every link in the page and insert an icon marking the slow ones from the fast ones. I found it interesting until I realized that the final layout of the page had to wait for the slowest ping.
    __
  • and neither should anything else. Datagrams are by nature unreliable. Anyone who believes otherwise needs to go right now to someone in protocol research and tell them the solution to the Two Generals problem.

    NFS is stateless and works by sending the same datagram repeatedly until it receives an acknowledgement. That is, IMHO, a terrific use of UDP and shows the lack of need for a so-called reliable datagram protocol. For a datagram protocol to be reliable, it will have to send (and cache) some sort of packet number and then send a response back. You could already do the same thing by doing your own packet-number caching and acknowledgement with UDP (rough sketch below).
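    A rough sketch of "do your own packet numbering and acknowledgement over UDP", as described above (the address, port and timeouts are made up):

        import socket
        import struct

        def send_reliably(payload, addr, seq, retries=5, timeout=1.0):
            """Send one datagram and resend it until the peer echoes our sequence number."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            packet = struct.pack("!I", seq) + payload        # 4-byte sequence number + data
            for _ in range(retries):
                sock.sendto(packet, addr)
                try:
                    ack, _ = sock.recvfrom(4)
                    if struct.unpack("!I", ack)[0] == seq:   # peer acknowledged this datagram
                        return True
                except socket.timeout:
                    continue                                 # no ACK yet: resend, NFS-style
            return False

        # send_reliably(b"read block 7", ("fileserver.example", 2049), seq=7)

    Which is the commenter's point: the application can add exactly as much reliability as it needs, without a new "reliable datagram" protocol underneath.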

  • There's an RFC covering an extension to TCP for LFPs (long, fat pipes). It allows larger packets to be sent by increasing the space used to hold packet size info (I think it uses some unused field). I don't know which RFC exactly, but it should be easy to look up.

    The reason this is an issue is because a link that can transfer a huge amount of data but with high latency will end up being limited by having to wait for ACKs before advancing the window. Of course you have more ACKs to wait for if you have more, smaller packets. The easy fix is to increase the packet size. You have to counter this however with the reliability of the link, or you end up constantly resending a huge amount of data.

  • Actually, it really is me. There's a bug in the system - I can't log in correctly, so I have to "shotgun" the system by posting without logging in - which is why all my posts for the past week have been +2'd by default. Sorry... normally I wouldn't inflict this on people... however, on with the show...

    I'm going to skip over you calling me a troll. I may be misinformed. I may be wrong. But I'm not a troll.

    Now, first, you got it backwards - TCP is not rate-limited per se; however, if you consider that your max RWIN is typically 64k, and that it'll take 5ms to ack it, this means that 64 * (1000/5) = 12800, or a little over 12MB/s. That's a good LAN connection! However, if you increase it to about 100ms, which is my typical ping for my cable modem, my maximum bandwidth is a mere 640KB/s. Ow. Biiig difference. So, there's where my numbers come from. Now, about the persistence...

    Persistence does you no good as you still need to request the data. Now, let's assume you have a 50kb webpage to download. We'll say the html is 5k, and there are 3 graphics on the page, 15k each.

    TCP handshake will take about 150ms. This means 50ms to get there, 50ms to get back, and another 50 to send the final ACK. Now for the first request, another 50ms to send it. We're at 200ms. The server gets the request for the HTML page; we'll say it takes 5ms to build it and pipe it back out, all at once. At 255ms, we have the page. Now, we'll open 3 new connections for the images - 150ms for each, in parallel. Now we're at 405ms. Images each take another 5ms to grab and 50ms to come back: 460ms. Rendering begins on your system by your browser. Now, we want to close the connection - the server already closed its remote connection, probably with an RST pkt on the last piece of data. So another 50ms to send your RST pkt, 50ms to ack that, and now we're done. Grand total: 560ms.

    Now, let's assume we could have done this via UDP all at once... again, 50ms to send the query, 5ms to retrieve all 4 parts, another 50 to send it back. At 105ms, your system begins rendering. As this is a stateless connection, there's no need to close it - if it failed, we'd retry. Congrats, you're done at a smoking 105ms vs. 560ms (tallied up in the sketch below).

    Now then, about me not being informative...
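    Restating the back-of-the-envelope model above in code - all the numbers are the commenter's, including the assumption that the three image connections are opened in parallel:

        RTT_HALF = 50      # ms one way, per the comment
        SERVER   = 5       # ms of server processing

        # HTTP over TCP, following the comment's model
        handshake  = 3 * RTT_HALF                  # 150 ms
        first_page = RTT_HALF + SERVER + RTT_HALF  # request out, build page, HTML back: 105 ms
        img_setup  = 3 * RTT_HALF                  # three new connections, in parallel: 150 ms
        img_fetch  = SERVER + RTT_HALF             # 55 ms
        teardown   = 2 * RTT_HALF                  # 100 ms
        print(handshake + first_page + img_setup + img_fetch + teardown)   # 560 ms

        # Hypothetical "everything in one UDP exchange" from the comment
        print(RTT_HALF + SERVER + RTT_HALF)        # 105 ms

    A later reply in the thread disputes the window-size reasoning; the round-trip counting here is where the 560 ms vs. 105 ms comparison comes from.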

  • by elandal ( 9242 ) on Tuesday June 27, 2000 @12:21PM (#972872) Homepage
    That's not an SMTP problem. It's a problem with RFC822-based email ("Standard for the Format of ARPA Internet Text Messages"). SMTP is a *transport* protocol that does the job quite well.

    And actually, MIME extensions allow for multipart email, where each part can be encoded differently. I think that works pretty well, too: you can send a bunch of stuff, all of it gets bundled into a single file, which again is transferred to the recipient using transport protocols, and the recipient is then free to do whatever he wants with the bundle - usually opening it with a program that knows how to handle such bundles (a mail user agent) is a reasonable option. Using software that tries to run every file it gets its hands on is another thing, unrelated to this.
  • However, if you increase it to about 100ms, which is my typical ping for my cable modem, my maximum bandwidth is a mere 640KB/s. Ow. Biiig difference. So, there's where my numbers come from.

    Well, that would be true if morons designed TCP/IP, but fortunately that sort of protocol hasn't been used since XMODEM. TCP/IP will continue to transmit packets without waiting for an ACK of the previous one. This is referred to as a "sliding window" protocol. Of course, it will transmit only so many packets before it has to wait, which is the "window size".

    Look up "sliding window" in your TCP/IP book ("Internetworking with TCP/IP" by Comer is the usual recommendation) or you might even try a web search.


    --
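    The arithmetic both comments are circling is the bandwidth-delay product: with a sliding window, steady-state throughput is capped at roughly the window size divided by the round-trip time. Using the 64 KB window and the RTTs from the thread:

        def tcp_throughput_cap(window_bytes, rtt_seconds):
            """Upper bound on TCP throughput imposed by the receive window."""
            return window_bytes / rtt_seconds

        print(tcp_throughput_cap(64 * 1024, 0.005) / 1024)   # 12800 KB/s on a 5 ms LAN
        print(tcp_throughput_cap(64 * 1024, 0.100) / 1024)   # 640 KB/s at 100 ms RTT

        # A larger window (e.g. via the TCP window-scale option) raises the cap;
        # past that point the link speed, not the protocol, is the limit.

    So the 640 KB/s figure is real for a fixed 64 KB window, while the reply is also right that the sliding window - not per-packet stop-and-wait - is what gets you there at all.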

  • by Ron Harwood ( 136613 ) <harwoodr@NOSPam.linux.ca> on Tuesday June 27, 2000 @07:58AM (#972877) Homepage Journal
    Will be for multi-stream transfering of pornography, right?
  • Not to cast aspersions upon fellow Slashdot readers, but it seems that many of us don't read beyond the headline. WTF!? There are almost a hundred posts to this story, and like 5 that actually understand why this will be useful. Slow down, and try to understand what's going on.

    As I see it, replacing HTTP is probably not going to be the first application of the BXXP protocol. In order to see the beauty of BXXP, you must consider the plethora of existing protocols (SMTP, HTTP, FTP, AOL-IM, ICQ...), none of which would be seriously hurt by a minor increase in overhead. Using a common language means that you don't have to write an RFC822 (or other RFC) parser for every application that wants to send mail, or request a file, or send an informational message/status update. You can parse the XML and get the relevant info with much less effort using common code. You could share common headers between email and instant messenger clients. They're similar enough to speak IM to IM, or IM to mail server... Shared libraries == less programming. Shared libraries == fewer bugs (theoretically).

    I speak from experience. I'm working on a client/server framework for a project that I've been doing for too long, and I've reached the end of my original communication protocol's useful life. I've switched over to an XML-based format, and I'm happy with it. If I'd had BXXP to start out with, I could have avoided writing two different communications protocols, and spent that time working on better things. (A sketch of the shared-parsing idea follows.)
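    A sketch of the "parse once, dispatch anywhere" idea (the message format here is invented for illustration; BXXP defines its own framing):

        from xml.dom.minidom import parseString

        def parse_message(xml_text):
            """One shared parser for mail-ish and IM-ish messages alike."""
            doc = parseString(xml_text)
            msg = doc.documentElement
            return {
                "kind": msg.getAttribute("kind"),
                "to":   msg.getAttribute("to"),
                "body": msg.firstChild.data.strip() if msg.firstChild else "",
            }

        # The same code handles both; only the "kind" attribute differs.
        print(parse_message('<message kind="im" to="bob">lunch?</message>'))
        print(parse_message('<message kind="mail" to="bob@example.org">status report attached</message>'))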
  • by sunset ( 182117 ) on Tuesday June 27, 2000 @12:34PM (#972882) Homepage
    After reading the article, and as one who has done a fair bit of programming using TCP, I can only see this as a solution to a non-problem. Once you've figured out your client/server application requirements, implementing what you need on top of TCP or UDP in a suitable way is just not that big a deal.

    I wish all these brilliant minds would work on providing unique and interesting Internet content, which is the part that is sorely needed now.

  • by cxreg ( 44671 ) on Tuesday June 27, 2000 @08:01AM (#972884) Homepage Journal
    Don't you think that BEEP is more pronounceable than BXXP? Plus, it would be fun to tell people that I write in BEEP. We could even use the censorship icon for related Slashdot stories =)
  • by LaNMaN2000 ( 173615 ) on Tuesday June 27, 2000 @08:01AM (#972885) Homepage
    HTTP is a universally accepted legacy protocol. Unless BXXP is adopted by MS and Netscape for inclusion in their latest browser releases, the average Internet user will probably not have the opportunity to even see it in action. It is nice to think that technically superior methodologies/products will ultimately render older, less efficient ones obsolete; but look at how long it's taking for HDTV or IPv6 to be adopted! Moving from a legacy communications protocol will probably take at least 5-10 years in a best-case scenario.
  • The main reason I like it is that it's one of those protocols where you can just telnet to a port and type, and do something useful. SMTP is similar, as is IRC, whereas with FTP you need to set up netcat or something to listen. (It's also more complicated.)

    I doubt it would be practical to talk directly to bxxp. Plus, it seems to be jumping on the XML bandwagon.
  • by _xeno_ ( 155264 ) on Tuesday June 27, 2000 @08:15AM (#972888) Homepage Journal
    Unless BXXP is adopted by [Microsoft's] and Netscape['s latest browsers]...

    And by Apache, and in Java... Don't forget the server end. The radio in my car can pick up many different frequencies, but unless someone is actually broadcasting on a given frequency, I'll be getting static.

    Apache 2.0 including BXXP support would go a long way towards it being used, as over half the websites in the world are run on Apache. Support in the Java java.net package for BXXP URL connections would also help enable BXXP in a wide variety of applications.

    Support for new technologies client-side is nice - but unless there's also support server-side, the technology can't be used.

    HDTV is taking a long time to be adopted simply because of the expense of purchasing a set - at $1000 a set, it's not surprising people aren't rushing to get one. Yeah, it'll take time - but all it takes is a few sites to start using BXXP, a few services, and a few web browsers to support that, and eventually it can come into its own as an Internet protocol. But it may be coming sooner than you think.

  • Look! They've already raised $12 million! This must be an internet con! Just watch out for their sponsorship of some reunion concert....

    Seriously, this seems like good-looking technology, but I don't think file transfer is a good application of it - with file transfer, you want packets to be as bare as possible, with as much data as possible. No XML wrapping.
  • For those of you who don't actually read the articles, here was an interesting tidbit:
    Standardization of BXXP would be a boon to Invisible Worlds, a start-up founded by Rose that is developing BXXP-based intranet search and data management applications for large corporations. Several Internet luminaries are affiliated with Invisible Worlds including Carl Malamud, who helped get the Securities and Exchange Commission's Electronic Data Gathering, Analysis and Retrieval database online, Internet book publisher Tim O'Reilly and UUNET founder Rick Adams.
    I am not saying this is a good or a bad thing, it's just interesting. Draw your own conclusions.

    It will be interesting to see if all these highly intelligent people can get together and make money.

    __________________________
  • by slim ( 1652 )
    ... whereas with FTP you need to set up netcat or something to listen. (It's also more complicated) ...

    Telnet is a very handy way of diagnosing FTP problems. If you're not happy with setting up listeners for the data connections, use PASV instead of PORT.

    I'm all for protocols with which you can do this -- and FTP is one of them!

    Unfortunately, the need for secure protocols is going to make this more difficult as time goes on. Don't expect something like OCSP to be human-readable.
    --
  • Comment removed based on user account deletion
  • VNC will do some of these things already. It just isn't particularly efficient as far as bandwidth is concerned. The X protocol will also do; it's easier on the bandwidth, but also more complicated and less powerful (if you use it as intended, and not to emulate another protocol).

    You can run both VNC and X11 inside a web browser using Java these days - not that I see a need to do that.

  • ...Or beep://beep.roadrunner.com, or beep://theclown, or beep://ing.darned.beeping.protocols...

    (Just couldn't resist :)

  • The bibliography for BXXP seems only to consider the old Internet protocols, and doesn't include anything like IIOP. [whatis.com]

    It sounds to me like they're recreating what IIOP provides, and with the added cost that you need to encode data in the rather wasteful XML format.

    I half figure that someone will eventually build an "XML over IIOP" scheme that will allow compressing it.

    The Casbah [casbah.org] project's LDO [casbah.org] provides another alternative.

  • Actually, I think somebody should document this little suggestion. Seeing how things have a tendency to develop in our line of work, it's entirely possible that "BEEP" might end up being the accepted pronunciation.

    I'd enjoy it more than "bee-eks-eks-pee" -- "aych-tee-tee-pee" is hard enough to say four times fast.

    Of course, we all know geeks would _never_ go with "inside jokes", right?
  • Actually, we could build something significantly better than the Snow Crash Multiverse with current standards (i.e. Ditch the major security holes everywhere)

    Yeah, no kidding. Anyone else ever think it's funny how programs in the cyberpunk genre have complex GUIs for their security holes? I mean slashing through ICE with claws and blades? What the hell does THAT actually accomplish code-wise, especially when it's all done with generic interoperable third-party tools?

    --
    I wish I could turn off the auto +1 w/o enabling cookies and logging in...
  • by / ( 33804 )
    HTTP and HTML are two completely different beasts -- the former is a protocol while the latter is a markup language (as one discovers by expanding the respective abbreviations) for describing files which happen to be commonly transferred by the former. BXXP will happily transfer HTML files. At this point, you should realize that your question doesn't make any sense.
  • by dublin ( 31215 ) on Tuesday June 27, 2000 @09:38AM (#972911) Homepage
    It will be interesting to see if all these highly intelligent people can get together and make money.

    They already have, to some degree. I know Rose worked with Adams in forming PSI; O'Reilly, Rose, and Malamud (I think) were involved in the ahead-of-its-time GNN, arguably the world's first Web "destination" (although they didn't manage to make the transition to portal with the arrival of the search engines); and Rose and Malamud have worked together on a number of projects, from Malamud's Internet TownHall and radio.com (both now defunct) to the free tpc.int fax bypass network, which failed to generate much interest, although I think it soldiers on in a few locales.

    In any case, Rose is one of the best protocol jocks in the world, so in general his suggestions should be taken seriously. One of the most enjoyable classes I ever took was his on Internet mail protocols back at the "old" Interop years ago, back when it was a get-together for the people actually building the Internet rather than a slick merchandise mart with suits hawking the latest lock-in.
  • by CMiYC ( 6473 )
    You're implying that BXXP's only purpose (or ability) is to replace HTTP. The article clearly states that replacing HTTP is just one of the things it can do. BXXP provides a new way to create protocols, so things like Gnutella, Napster, and Freenet could be developed more quickly.

    So yes users have a very good chance of seeing BXXP in action, however, I think you are correct in that it won't replace HTTP (yet).


    ---
  • ... that we're going to have to actually use the prefix "bxxp://".

    I mean, for people who just surf around, I'm sure IE and Netscape will quickly adapt to be able to fill that in themselves, and that just typing "Yahoo" will be enough to find either bxxp://www.yahoo.org or http://yahoo.com depending on how the defaults are set.

    But for those of us writing CGIs, this kind of sucks. Sometimes the "http" has to be very explicit. It's a simple matter to know when to use http://, ftp://, telnet://, etc., because the protocols are so unrelated, but with these related ones, it will be a headache.

    Of course, if bxxp can handle http, I guess this problem shouldn't exist at all. It still will.
  • TCP has a lot of connection overhead, with the three-way handshake and four-way disconnect. And, with small datasets, it is a waste to send too many TCP segments.

    For example, a webpage may contain 4k of text, and say, 20 2k images. Why make 21 TCP connections, when you can make one, and send the whole page in one XML stream? That is a big overhead savings, and the page will load MUCH faster. Even for something like slashdot comments, where there's about 200k of text and maybe 10 images, that's still a savings, although not as much.

    Simply bandwidth-wise (assuming 0 RTT), a TCP/IP segment header is 40 bytes. A connection requires 3 packets to set up + 4 to disconnect = 280 bytes per connection. A sequence of XML tags would probably be smaller, in addition to reducing wait time and processor load (the arithmetic is spelled out below).
    ---
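    The overhead arithmetic from the comment, spelled out (it counts only the TCP/IP headers spent on setup and teardown, using the 40-byte figure above):

        HEADER_BYTES  = 40       # TCP/IP header per segment
        SETUP_PACKETS = 3        # three-way handshake
        CLOSE_PACKETS = 4        # four-way disconnect

        per_connection   = (SETUP_PACKETS + CLOSE_PACKETS) * HEADER_BYTES   # 280 bytes
        many_connections = 21 * per_connection                              # 5880 bytes of pure overhead
        one_connection   = 1 * per_connection                               # 280 bytes

        print(many_connections - one_connection)   # 5600 bytes saved, before counting the saved round trips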

  • A lot of people say it "dubya-dubya-dubya"...

    Wait...Oh no!

    George W. Bush has stolen Al Gore's thunder! Gore may have invented the internet, but the World Wide Web is George W. Bush's middle name!


    ---
    Zardoz has spoken!
  • Actually, content was the whole point behind the protocol. We were trying to solve a class of problems, all driven by content requirements. Examples are the SEC's EDGAR database [invisible.net], a variety of other "deep wells", and a class of problems ranging from mapping network topology to creating personalized "maps" (views) of the Internet. See here [mundi.net] for more on the philosophy behind the content requirements.

    The protocol emerged from long discussions about how to solve these content problems. We tried as hard as possible to reuse existing protocol infrastructure, but quickly found that there were no protocols that handled the metadata problems we were trying to attack.

    The (IMHO) brilliant thing Marshall did was to build two levels into the solution. BXXP is the general-purpose framework [mundi.net] that was used for the Simple Exchange Profile [mundi.net] application we were going for in the first place. The nice thing was that BXXP works for a broad range of other applications, such as asynchronous messaging. [ietf.org]

    The bottom line is why reinvent the wheel more than once?

    Carl

  • You can send e-mail in MIME and uuencode as a hack to solve the problem of getting files from one user to another.
    It's a hack solution for a rare (but obnoxious) problem.
    [Oh ok I'll e-mail you an mp3 of the audio recorded at the last meeting]

    The idea of normal e-mail is to send text. Not HTML, and certainly not MsWord.

    Windows helps promote this problem (not Microsoft... just an example of "one world, one OS" being bad. Microsoft is today's example; Linux may be tomorrow's bad guy in this respect... maybe some day Apple).
    Basically, a user sees a file format that is native to his operating system, mistakenly believes it's "normal", and sends that file in e-mail.
    If he has less than a 50-50 chance of reaching someone who can actually use that file, then he'll get a clue and stop. But if he has a greater than 60% chance of reaching a user who supports it, then he'll just assume the others are losers.

    For the most part it's annoying. It's not e-mail and it doesn't do the job.

    Now, with e-mail viruses, I'd hope that even Windows users would say NO! to Windows files, simply because this shouldn't be common practice in the first place. True, Windows makes it EASY, but that is still with the idea of two executives using common (or compatible) software with an agreed-on format and an agreed-on goal (such as e-mailing the database of a daily budget, or an audio file of an interview in a format both sides support [such as mp3]).
    If both users use Windows and Microsoft Office then hey, that's great, go for it. But if one side uses Linux with KDE Office and the other a Mac with AppleWorks, then it's a matter of finding a common format between AppleWorks and KDE Office.

    Anyway... sending MsWord files as a way to send text is a bad thing. But when there is a greater than 60% chance that the guy on the other end can read MsWord files, it doesn't occur to the sender that sending e-mail in actual text will work 100% of the time.

    PS. yes I throw away ALL MsWord files unread
  • Already been done. The standard way people accomplish this is to use URLs like http://www.foo.com/image.gif, and have their DNS server at foo.com rotate what IP addresses it gives out, pointing to machines that are also called www1.foo.com, www2.foo.com, etc.. The simple round-robin model just rotates between them; fancier load balancing servers from a variety of vendors check which servers are least busy.

    Another approach that some fancy load-balancers use is to always give the same IP address, but fake out the packet requests using NAT or other routing tricks, or having the web servers themselves change their IP addresses dynamically. It's a bit uglier, but browsers and DNS resolvers often cache their results, so a web server might die or get overloaded but users are still requesting pages from it because they've cached its IP address - this way you spread out the load more evenly at the cost of more dynamic address-shuffling translation work by the load balancing server.
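    A minimal client-side sketch of the round-robin idea (www.foo.com is the comment's example name; the resolver simply hands back whatever address list foo.com's DNS server chooses to rotate):

        import itertools
        import socket

        def address_cycle(hostname):
            """Resolve a name and cycle through all the A records it returns."""
            _, _, addresses = socket.gethostbyname_ex(hostname)
            return itertools.cycle(addresses)

        # servers = address_cycle("www.foo.com")   # e.g. rotates between www1, www2, ...
        # next(servers) yields a different address each call; in practice the DNS
        # server does the rotating, so even clients that look up once spread the load.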

  • by Uruk ( 4907 )
    Does it offer the user more than they have?

    Or alternatively, can it make us more money by screwing our competitors out of marketshare?

    Is it simpler to maintain? (XML is nasty!)

    It can be if you want it to be. It doesn't have to be. It can be quite elegant, really.

    What's the learning curve?

    XML? If you know HTML, I can teach you XML in about 5 minutes really. For protocols, who really cares what the learning curve is? PHB says to developer, "You will support this", and once it's supported, it's completely transparent to the user. Only the developer has to bother to learn the protocol. And if they built it around XML, it probably just ain't that hard.

    What's the cost to switch? (Time & Cash)

    Potentially huge. Potentially nothing. Depends on who you are. For some people, it will require downloading a new version of a browser. For others, millions on new software licenses for their crappy proprietary web servers, plus the work of building support for this into them.

    Can a 5 yr old explain it to an adult?

    Can a 5 year old explain the latest FPS to an adult? That didn't stop their acceptance and humongous sales. :)

  • So what you're saying is that BXXP is an abstract class, and doesn't really do anything by itself, it just exists to be "inherited" by other classes and used in that way.

    Whatever happened to functional programming? Why is earth going into an OOP shithole ever since java showed up?

  • What the hell are you talking about?
    You're griping about "HTTP", which is a *protocol*, regarding menus and DHTML, which is a *FORMAT*.

    And you top it all off with "XML is a [...] truly robust protocol". Except again, XML is a FORMAT, not a protocol!

    I really, really hope people aren't paying you money to design websites if you can't tell the difference between a protocol and a format.
  • Comment removed based on user account deletion
  • BXXP looks good, but while we're creating a new standard, might I suggest a few things?

    What we really need is a protocol that can, upon receipt of a single authenticated request, determine the speed that the remote end is running at, and then rapidly chunk out an entire page in a single request - instead of a few images here, a few javascript files there, and don't forget the stylesheet in the LINK tag!

    It is obvious we are quickly moving into a high-bandwidth network where consumers will routinely have access to multi-megabyte streams. The TCP protocol is, by design, limited to a mere 780kb/s. You cannot go faster due to network latency and the max size of the RWIN buffer. Therefore, it's obvious this protocol needs to be UDP.

    Security is also a concern - we need some way to authenticate that the packet came from where it said it did before we blast out a chunk of data - as likely there won't be time for the remote host to respond before we've sent out most of the request to it by then.. so some form of secure handshake is needed. If you could make this handshake computationally expensive for the client but not the server, so much the better for defeating any DDoS attacks.

    But really.. we need a reliable file transfer protocol that supports serialized queuing and higher speeds than we do now.. and that means either ditching TCP, or setting up parallel streams between each.. with all the overhead that goes with that solution.

    BXXP doesn't do that, unfortunately.. and if we're going to define a new protocol, let's plan ahead?

  • by artdodge ( 9053 ) on Tuesday June 27, 2000 @08:24AM (#972946) Homepage
    Why multiple connections are a bad idea:
    1. Because multiple TCP connections do not share congestion information, it is possible for a greedy user to get more than his "fair share" of a pipe by using many parallel connections
    2. Each TCP socket takes up system (kernel) resources on the server. Forcing the server to maintain a larger number of connections will cause its overall performance to take a hit.
    3. It introduces throttling/aggregation difficulties for proxies (see RFC2616's requirements for maximum upstream/inbound connections from a proxy; a server throwing lots of parallel connections at a compliant proxy will itself be sorely disappointed, and will waste the proxy's resources [see 2], ergo degrade its performance).
    Why MUX (HTTP-NG's name for this idea) is a bad idea:
    1. (If not done intelligently) it introduces artificial synchronization between responses, i.e., your quick-to-serve GIF could get stuck "behind" a slow-to-serve Perl-generated HTML page in the MUX schedule
    2. As a general case of [1], creating the MUX schedule and framing the data is hard, and difficult to parameterize (Where do progressive GIFs chunks go in the schedule? What if the browser doesn't support progressive rendering of HTML? What size chunks should be used in the schedule - a low bandwidth client might prefer smaller chunks for more "even" fill-in of a document, while a high bandwidth client would prefer bigger chunks for greater efficiency)
    3. DEMUXing is more work for the client at the application-protocol level.
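    A toy illustration of the scheduling problem in point 2: interleave several responses over one connection in fixed-size chunks, and the chunk size immediately becomes a tuning knob (everything here is invented for illustration):

        def mux(responses, chunk_size):
            """Round-robin the pending responses onto one stream, chunk by chunk."""
            pending = dict(responses)
            while pending:
                for name in list(pending):
                    chunk, rest = pending[name][:chunk_size], pending[name][chunk_size:]
                    yield name, chunk            # a real MUX would frame this with a channel id
                    if rest:
                        pending[name] = rest
                    else:
                        del pending[name]

        # Small chunks: the quick GIF and the slow Perl page fill in "evenly".
        # Big chunks: fewer frames, but the GIF can get stuck behind the page.
        for name, chunk in mux({"page.html": b"x" * 10, "logo.gif": b"y" * 4}, chunk_size=3):
            print(name, chunk)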
  • The TCP protocol is, by design, limited to a mere 780kb/s.

    WHAT? I've moved over 10Mbytes/sec over TCP! The receive window in certain alleged operating systems *cough|windows|cough* may be too small for high speed over the 'Net, but any real OS can automatically adjust the TCP windows to be big enough to fill the pipe.
  • by Spasemunki ( 63473 ) on Tuesday June 27, 2000 @08:35AM (#972950) Homepage
    If you read down through the rest of the article, it mentions that work is already underway to produce Apache support for BXXP. They estimated that the version of Apache due out this fall would include support for BXXP.

    "Sweet creeping zombie Jesus!"
  • Check out a draft of bxxp at:
    http://xml.resource.org/profiles/BXXP/bxxp.html [resource.org]
  • by Cardinal ( 311 ) on Tuesday June 27, 2000 @08:35AM (#972952)
    The difference between FTP/SMTP and HTTP is that FTP/SMTP are still, to this day, used for exactly what they were spec'd out to be used for: file transfers and mail handling. So even if they are older than HTTP, they haven't aged in the sense that very little is being asked of them beyond what they provide.

    HTTP, on the other hand, has been stretched far beyond what it was intended for by today's web. It's stateless and simplistic, yet those of us who write web applications need it to be more than that. This gives it the sense of being an "aging" protocol. Session management of any form with HTTP is a dirty hack at best (a sketch of the usual workaround follows), and web applications need a better answer if they are to expand beyond what they are now. If BXXP can speak to those shortcomings of HTTP, then all the better.
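    The "dirty hack" in question is usually a session cookie. A bare sketch of the idea (the header handling and session-id scheme are simplified for illustration):

        import uuid

        SESSIONS = {}   # server-side state that HTTP itself never carries

        def handle_request(headers):
            """Recover state from a cookie, or mint a new session and ask the client to remember it."""
            cookie = headers.get("Cookie", "")
            session_id = cookie[len("sid="):] if cookie.startswith("sid=") else None
            if session_id not in SESSIONS:
                session_id = uuid.uuid4().hex
                SESSIONS[session_id] = {"cart": []}
                # The only way to "remember" the client is to make it echo this back every time.
                return {"Set-Cookie": "sid=" + session_id}, SESSIONS[session_id]
            return {}, SESSIONS[session_id]

        # First request: no Cookie header, so a Set-Cookie goes out.
        # Every later request must carry "Cookie: sid=..." or the state is lost.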
  • Either you didn't read my post and figure out that this IS what I was talking about, or your IQ is smaller than your waistline.
  • If I want to be paranoid, I have the ability to pretty much move about undetected.

    Depends on how paranoid is paranoid. You're not really anonymous anymore. There are things like the anonymizer, remailers, and so on, but due to abuse, I bet they keep bitchin' logs.

    Spoofing used to be an issue, but AFAIK (and I haven't even thought about it in quite a while) it's not really possible anymore due to BIND updates. Everywhere you turn, you're being logged. Doesn't matter if it's an HTTP server, the banner ads on that server, downloading a file through "anonymous" FTP (yeah, right) or logging into your own box. I don't see much anonymity at all on the web, since your IP is scattered all over the universe whenever you so much as connect to another server. If anybody knows ways to get around that, please let me know.

    You can be anonymous in the sense that the server only knows that some loser on an @Home cable modem is the one who's looking up this goat pr0n or reading about ralph nader, but when it really comes down to it, you're not.

    I've always wondered if anybody will ever implement some type of reverse lookup system through ISPs. I know it wouldn't be easy, but imagine something like this - you dial up, and connect to goatpr0n.com. Since they want to market to you, they send a request to your ISP's server invader.myisp.com asking which customer is connected to ISP IP hostname foo.bar.baz.dialup.myisp.com. At that point, myisp.com sends back some "relevant" information to the "client".

    Or even completely different servers. I bet pepsi.com would love to have the identities of coke.com visitors for counter-marketing. I bet microsoft would love to have information on non-IE users. I bet some company pitching DSL would love to have information on people who seem to be coming in on slower modems to pitch to.

    In a world where companies are getting busted for backdooring their own software, people are rioting against doubleclick abuses, and you're logged every time you take a shit, does privacy really still exist? The answer is yes, but only as long as you're doing something that nobody thinks they can make money off of.

  • by SONET ( 20808 ) on Tuesday June 27, 2000 @08:24AM (#972969) Homepage
    I think something that would be awesome to include in something like this would be a form of redundancy. I'm not sure how it could be done, but one drawback of HTML is that you can only provide a single location for the browser to grab an image from in an image tag, for example. If you could list several locations in the code for the same image and have it be transparent to the browser, that would be helpful if, for example, a web server holding your images went down. Or you could do the same with links, where there's only one place for the viewer to click, but the browser will try all of them in the list until it finds one that works (a client-side sketch is below). This could even be extended to randomly choose one of the servers in the list. This could be a form of HA for people who can't afford the Real Thing(tm), and it goes without saying that it could help at least meet some of the shortcomings of today's web.

    Another thought that I had was a [somewhat primitive] form of load balancing for servers that actually takes place on the client side (which would sort of crudely happen with the randomized link idea above). Before downloading everything off a site (images, video, etc.) the browser could go through some sort of ping function to see which server was closest to it in terms of latency or availability, or better yet it could be coded into the server to report its bandwidth/CPU load, or some combination of relevant data, to the browser and let the browser decide which server would be best to obtain the data from. This routine would be outlined at the beginning of the code for the web page. This could also be extended a great deal to do things like distinguish between data types to help choose servers accordingly, etc.

    If you think about it, HTML/web servers could be much more intelligent than they are now, and the best time to add intelligence would be when adopting a new format like this. While we're at it, we should also do a bit more to protect privacy. Intelligence combined with privacy... could it be possible? :P

    Something similar to this in terms of redundancy also needs to happen with DNS and is long overdue. Just had to add that.
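    A client-side sketch of the "try each listed source until one answers" idea (the URLs are placeholders):

        from urllib.request import urlopen
        from urllib.error import URLError

        def fetch_first_working(mirror_urls, timeout=5):
            """Try each mirror in turn and return the first response body that works."""
            for url in mirror_urls:
                try:
                    return urlopen(url, timeout=timeout).read()
                except (URLError, OSError):
                    continue                      # that mirror is down or slow: fall through
            raise IOError("no mirror answered")

        # image = fetch_first_working([
        #     "http://images1.example.com/logo.gif",
        #     "http://images2.example.com/logo.gif",
        # ])

    The ping-based selection idea would slot in the same place: sort mirror_urls by measured latency before trying them.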

    My two cents...
    --SONET
    "Open bombay doors!"
  • The beauty of HTTP is it is so damn simple. That is why it is so successful. Its simplicity and flexibility have allowed it to grow and adapt. XML may be simple, but not as simple as HTTP. HTTP may not be perfect, but it works. How are you going to have an efficient caching BXXP proxy? Yick! The proxy will have to parse 2 BXXP connections. That is way more complex than parsing a header and just passing the content through. My only gripe with HTTP is that you can't set a cookie after you've started sending the content. (Any lazy web programmer will agree.)
    Citrix
  • by FJ!! ( 88703 )

    Yeah, it'll take time - but all it takes is a few sites to start using BXXP, a few services, and a few web browsers to support that, and eventually it can come into its own as an Internet protocol.

    Only if it is compelling for the user experience. From the article, it doesn't seem like it will make current transactions better, but rather make it easier to create future ways of communicating. If it does not provide a compelling experience now, there is no impetus to adopt it alongside or in place of HTTP. It will be adopted when something new built on BXXP comes along, and that something new will have to be compelling.

    FJ!!
  • OK, calm down. RTFM already.

    HTTP will continue to be the protocol for 90% of the web; BXXP will only become useful for massive data interchange, such as B2B exchanges.

    Expect to see IE, Netscape, and Opera extensions to allow BXXP, but don't expect to see it anytime soon, at least until they start to fully implement XHTML and XML standards. I'd guess 2002 for reasonable code that works fairly well.

    If you're trying to crank out massive data exchanges, you should definitely get into this; if you're trying to do a web site, especially one with lots of text content, you may never use it.

  • by brad.hill ( 21936 ) on Tuesday June 27, 2000 @08:37AM (#972975)
    I constantly rue the fact that the "www.blahblah.com" convention won out over "web.blahblah.com". The amount of time I waste saying and listening to "Double-Yew Double-Yew Double-Yew" instead of the one syllable "Web" is enough to take an extra week of vacation every year. (thank the gods radio announcers have at least stopped saying "forward slash forward slash")

    While "X" is still technically one syllable, it doesn't trip off the tongue quite like "T". There are two tongue motions in that one syllable, "Eks". We need a more streamlined protocol name. Make it something catchy like Extensible Exchange Enabler and call it Tripoli. Easy to type, too.

    Forget the network performance tests! Any new protocol should have to undergo verbal performance tests.

    Also, if I see one more thing with an unnecessary mid-word capital X in it, I'm going to hurl.

  • by orpheus ( 14534 ) on Tuesday June 27, 2000 @08:28AM (#972976)
    Despite what the author of the linked article suggests, even the experts he interviews agree that while BXXP is intended to be a more flexible and capable alternative to HTTP, its major use will be as an alternative to completely custom (open or proprietary) protocols for new applications, not for serving ordinary HTML.

    "Think of BXXP as HTTP on steroids," says Kris Magnusson, director of developer relations at Invisible Worlds. "People are trying to jam all kinds of things into HTTP, and it has become over-extended. BXXP won't replace HTTP for everything, but it can be used when new applications protocols are developed."

    As such, it does not conflict with, or supplant, HTTP-NG or the many other standards being hammered out by the IETF. It's just another tool developers and info providers may choose, depending on their needs.
  • First you say:

    It is obvious we are quickly moving into a high-bandwidth network where consumers will routinely have access to multi-megabyte streams. The TCP protocol is, by design, limited to a mere 780kb/s. You cannot go faster due to network latency and the max size of the RWIN buffer. Therefore, it's obvious this protocol needs to be UDP.

    Then you say:

    we need a reliable file transfer protocol... [emphasis mine]

    Signal 11, are you insane? TCP was specifically designed for reliability. UDP was designed for unreliable connections. Non-guaranteed. No assurance. Unreliable.

    So maybe TCP needs an overhaul. Maybe another protocol would be better. But UDP? If you're going to "ditch TCP", don't build your new protocol on TCP's just-as-old cousin/brother/whatever.

  • Compared to the vastly superior: ...

    You meant to say vastly smaller. Whether it's superior or not depends on what you want to do. For example, your sample XML had 126 bytes more overhead than the equivalent HTTP, and 126 extra bytes is likely quite acceptable when sending documents that are usually several kilobytes in length, and those bytes buy you all sorts of XML goodies that the HTTP version doesn't have. On the other hand, if your document is on the order of a few bytes and you're sending a jillion of them, you clearly couldn't accept that much overhead.
    --
    -jacob
  • Okay. But look at it another way.
    What if Apache supports it, alongside HTTP? This sounds possible. And perhaps Mozilla would support it. That's all it would take, really...

    As for how long it's taking IPv6 to be adopted... the main reason nobody is adopting it is that there is currently no reason to use it!
  • by kevin lyda ( 4803 ) on Tuesday June 27, 2000 @10:16AM (#972989) Homepage
    smtp is being used for what it was designed for? someone in our marketing dept. emailed quicktimes of portugal's three goals against england.

    i sincerely doubt that was considered and mime is not pretty.
  • Let's face it, the internet is still by and large a pretty anonymous place. If I want to be paranoid, I have the ability to pretty much move about undetected.

    Well, that ain't so simple.

    The Internet has a lot of pseudo-privacy. This means that for Joe Q. Luser (and even for his sister Joanne D. Not-So-Luser) it is hard to find out who is the person behind a handle or a nick or a screen name or an e-mail address. However, that's not true for law enforcement. If you are not engaging in useful but inconvenient paranoia, it's fairly trivial for law enforcement (== gubmint) to trace you (usually courtesy of your IP) and produce a basic sequence of your activities (generally courtesy of your ISP's logs).

    Thus "normal" internet usage is opaque to public but can be made transparent (at high cost in time and effort) to law enforcement agencies.

    Of course, there are a bunch of tools that are capable, if skillfully used, of masking your identity. However, by my highly scientific guess, less than 0.001% of internet users actually use them.

    government (and corporations) are kicking themselves that they didn't approach Gore and have him build in monitoring into the protocols since they would LOVE to watch every little thing we do

    And what is it that you want to monitor? Most everything is unencrypted and freely readable by anybody with a sniffer or access to a router. If I want to know everything you do I can get a court warrant and attach a large hard drive to your ISP's gateway. I don't need anything from the protocols: the stream of IP packets is perfectly fine, thank you very much.

    Kaa
  • I think that the central idea behind the protocol is that you can provide additional layers of abstraction (for those not in the know, "libraries" and "objects") between the protocol and the programmer (hence the "extensible" part of it). This would make it easier to write programs that work in similar ways, since you could embed all of the protocol work into simple functions. This would make the prototyping of programs much quicker. It should always be remembered that more function calls mean more overhead, but if the overall functionality and efficiency are increased, this is well justified. Sounds like a great idea to me, as a person who likes it when everyone is on the same page (and this would make that a lot easier when designing web apps).

  • Your HTML ideas are very interesting, especially the redundant image sources.

    However, BXXP is a protocol (like HTTP) not an authoring language (like HTML). HTTP and HTML have nothing to do with each other, except that they are both very popular standards for online content. Thus, your post is off-topic.
  • Why multiple connections are a bad idea ...

    Because multiple TCP connections do not share congestion information, it is possible for a greedy user to get more than his "fair share" of a pipe by using many parallel connections.

    And any number of additional protocols, be they layered on top of IP (like TCP is) or TCP (like BXXP is) will not solve this problem.

    Each TCP socket takes up system (kernel) resources on the server.

    One way or the other, multiple connections are going to use more resources. 'Tis inescapable.

    Where those resources are allocated is entirely implementation dependent, and indeed, only makes sense in terms of specific implementations. Look at an MS-DOS system, where there is no kernel, or something like khttpd, which implements HTTP inside the Linux kernel.

    Layering additional protocols on top of existing protocols which do the same thing because some number of existing implementations don't handle resource allocation well is highly bogus.

    ... a server throwing lots of parallel connections at a compliant proxy will be sorely disappointed itself ...

    The thing is, any time you flood a system with more connections than it can support (whether those connections are at the TCP level or something higher), it is going to suck mud. BXXP isn't going to change that fact; it is simply going to relocate it. See above about bogus.

    Personally, BXXP looks to me to be redundant and needlessly repetitive. The core is basically TCP running on TCP, and then there is some higher level stuff that doesn't belong in channel-level code to begin with. It would be better to design some standard wrapper protocol to layer on a TCP connection to describe arbitrary data, thus providing something new without reinventing the wheel while we're at it.
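    A minimal sketch of the sort of wrapper the parent has in mind: a length- and type-prefixed frame over an ordinary TCP stream. The field layout is invented for illustration:

        import socket
        import struct

        # Minimal "wrapper" framing layer over one TCP stream: each frame is a
        # 4-byte big-endian length, a 1-byte content-type code, then the
        # payload. The layout is invented for illustration only.

        def send_frame(sock, content_type, payload):
            header = struct.pack("!IB", len(payload), content_type)
            sock.sendall(header + payload)

        def recv_exact(sock, n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed connection mid-frame")
                buf += chunk
            return buf

        def recv_frame(sock):
            length, content_type = struct.unpack("!IB", recv_exact(sock, 5))
            return content_type, recv_exact(sock, length)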
  • Comment removed based on user account deletion
  • Hmm...H-T-T-P is much easier and nicer to say than B-X-X-P. But if they just used the first letter instead of going for those X's, we could just say BEEP! Wouldn't that be cool?

    beep://slashdot.org

    I like that much better.

    -JD
  • by NOC_Monkey ( 73018 ) on Tuesday June 27, 2000 @08:28AM (#973004)
    Here [ietf.org] is the IETF working draft of the protocol. Lots of good info on the architecture of the protocol.
  • It would not be in place of TCP, it would be on top of it. But you knew that. My point is, TCP is still necessary, and this new protocol could extend the functionality and efficiency of TCP to deal with more modern applications.

    Sure, TCP is already multiplexed, but there is a lot of overhead inherent in this multiplexing scheme, as TCP cannot distinguish between 10 connections to 10 different machines and 10 connections to one machine. They are all completely separate connections, all with separate kernel-space buffers, all doing their own DNS lookups, all having separate windows and flow control and connection states, etc., etc. There is a lot of overhead that goes into making a TCP connection; it is very noticeable to the end user.

    This protocol, by allowing ONE TCP connection to describe ALL transactions between two machines, can reduce overhead on both machines, and reduce load times and processing consumption. See my reply to the post above yours.

    It does serve a purpose. And it is in no way a replacement for TCP. It does not provide the connection, reliability, flow control, etc., capabilities of TCP. It is a stream format, not a packet-level format; as such, it MUST be built on top of TCP, because TCP provides a stateful, stream-oriented connection on top of IP, which is a connectionless, unreliable, stateless datagram protocol.
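    As a sketch of the kind of user-space demultiplexing being described, here is one connection carrying several logical channels; the line-oriented JSON framing is invented for illustration and is not BXXP's actual format:

        import json
        import socket

        # Several logical channels sharing one TCP connection: one socket, one
        # set of kernel buffers, one slow-start, with the demultiplexing done
        # in user space.

        def send_on_channel(sock, channel, payload):
            line = json.dumps({"channel": channel, "payload": payload}) + "\n"
            sock.sendall(line.encode("utf-8"))

        def receive_loop(sock, handlers):
            for raw in sock.makefile("rb"):
                msg = json.loads(raw)
                handler = handlers.get(msg["channel"])
                if handler:
                    handler(msg["payload"])

        # e.g. handlers = {1: show_chat_message, 2: append_file_chunk}
        # (both handler functions are hypothetical)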
    ---

  • by artdodge ( 9053 ) on Tuesday June 27, 2000 @08:45AM (#973010) Homepage
    What BXXP is going to help get rid of, is using HTTP for more than what it was designed to do... which was transfer Hypertext documents.
    More accurately, HTTP was designed for moving documents around in a hypertext context. Which actually encompasses a whole lot of things if you don't artificially constrain your definition of "document".

    And BXXP won't get rid of all the different protocols; what it might do is provide a common framing mechanism for protocols. Which means you still have to do just as much work in protocol design determining the semantics of each extension and flag and whatnot else carried within the framework. (And HTTP has some very well-developed and real-world-motivated concepts and semantics, e.g. in the caching, proxying and transactional areas; don't expect those to disappear any time soon.) It could, I suppose, make it easier to build parsers, to which I say two things:

    1. Violating protocol syntax/grammar is commonplace. BXXP will not cause people to magically get it right.
    2. Many modern protocols have similar/identical parsing rules already. Consider RTSP's similarity to HTTP... you can basically parse them with the same code, just hook them up to different semantic backends.
  • by Alex Belits ( 437 ) on Tuesday June 27, 2000 @08:47AM (#973011) Homepage

    ...and all are incompatible with things that already exist, so I suspect that some intent other than improvement of the protocol is present. For example, XML demands the use of Unicode (to be exact, the standard was designed to make it really hard to do anything else unless everything is in ASCII). HTTP 1.1 is one huge spec with various wild demands on applications made across all levels of content, presentation and transmission, which can't be implemented separately from each other while remaining compliant with the spec. Java is both a language and a bunch of libraries that can't be ported or reimplemented in anything but Java itself. And now a more or less reasonable proposal for one more level of multiplexing (compared to multiple TCP connections -- though one may argue that it would be better to attack the problem at its root and make a congestion control mechanism that can work across connections) is tied to the use of XML instead of well-known, more efficient and more flexible MIME.

    Good ideas that can be implemented separately are used to push dubious-quality "standards" that provide no benefit other than promoting useless, semi-proprietary or simply harmful "inventions" by bundling them into the same spec.

  • XML is very slow. A typical transmission might be as verbose as this:

    But doesn't XML have abbreviated forms as well? Like:

    <stream-length/65535/

    et cetera. So it's a little bit of a savings. Plus, there's two other factors. First, if this is just control data, it's still going to be tiny compared to the actual content. Second, I wouldn't be at all surprised if these XML-based protocols use a gzip-encoded stream or something.
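    For what it's worth, the gzip guess is easy to test in principle, since repetitive XML framing compresses very well; a minimal sketch, with a made-up frame format, assuming Python's standard gzip module:

        import gzip

        # The sample frame below is invented for illustration; real framing
        # would differ, but the repetition is what matters for compression.

        frame = b"<frame channel='1' seqno='%d' size='12'>hello world!</frame>\n"
        stream = b"".join(frame % i for i in range(1000))

        compressed = gzip.compress(stream)
        print(f"raw:   {len(stream)} bytes")
        print(f"gzip:  {len(compressed)} bytes")
        print(f"ratio: {len(compressed) / len(stream):.2%}")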

  • I disagree. The most popular web server is Apache, which can easily be extended to support new protocols because it is a popular open source project. The question is not whether the capability to serve BXXP is available on the server side, but whether anybody takes advantage of it. Unless there is support on the client side (which is, of course, transparent to the end user), there will be no reason to expend the additional effort to code for it on the server side.

    The difficult part of phasing BXXP in will not be supporting it in popular web servers/browsers, but rather programming for the protocol itself. It only takes two companies/organizations--Apache and either Netscape or MS--to allow the protocol to be used on the Internet, on many machines. However, the development of useful content on the server side requires each web publisher to incur an additional expense. As such, there must already be a critical mass of supported clients available for it to be cost-effective for them to rewrite existing applications.
  • I'm not sure that http is inadequate for what it was designed for; what I do know, as someone else mentioned (and right on, if I might add), is that http is one of the few protocols that has been greatly expanded and stretched in the pursuit of new web apps. When you think about the fact that this protocol was designed to pump simple, marked-up text documents over the Internet, and that it now handles dynamically generated content, binaries, Java applets, and a whole passel of other technologies, it is a miracle that it has aged as well as it has.
    I would say that http does a good job of the basic work it was designed for, and I doubt that it will disappear from that role too soon. But as networked applications grow more and more complex, http is going to be harder and harder pressed to adapt. Hopefully, simple apps will keep using the simple http protocol, and newer, more complex technologies will have access to a protocol framework with the power and flexibility they need.

    "Sweet creeping zombie Jesus!"
  • I think SOAP mandates the use of HTTP, though.

    Other things (like ICE) would work well over BXXP, though.

  • by Proteus ( 1926 ) on Tuesday June 27, 2000 @08:07AM (#973024) Homepage Journal
    #- disclaimer: I don't know what the hell I'm talking about, this is a question.

    From what I can decipher, this seems to be a more extensible protocol, which would allow easy creation of network-aware applications. My question is, since there is an added layer of abstraction, wouldn't there be an overall performance hit?

    Besides, wouldn't multiple "layers" of data on the same connection open tons of potential security risks?

    Or am I off my rocker on both counts?

    --

  • by Mark F. Komarinski ( 97174 ) on Tuesday June 27, 2000 @08:07AM (#973028) Homepage
    HTTP is the youngest of the Internet Building Block protocols. Heck, all the other protocols are at least twice the age of HTTP.

    Inadequate, inefficient, and slow are adjectives I'd use for HTTP. Aging I wouldn't use; that implies that FTP and SMTP are both old and thus should be replaced. The age of a protocol doesn't matter. What matters is whether it does the job.
  • One special feature of a BXXP connection is it can carry multiple simultaneous exchanges of data - called channels - between users. For example, users can chat and transfer files at the same time from one application that employs a network connection.

    We have it right there - finally, a TCP/IP protocol that replaces TCP/IP! What'll they think of next? Maybe the next advance in computing technology will be emulating a 386 in my shiny Pentium III?

    Replacing HTTP is a stupid thing to do because any "replacement" isn't a replacement for HTTP at all - it's a replacement for everything. HTTP does what it needs to do (and then some):

    1. It sends documents of any type.
    2. It deals with mime-types of those documents.
    3. If you're using a proxy, it'll even get files over FTP.
    Why would we use a chat protocol (or one that can do chat) to do file transfers (which is what HTTP is all about)? Seems to me, it's trying to replace all of TCP/IP and all established protocols. We already have separate protocols (IRC, FTP, TFTP, HTTP, RA, etc.), so why do we need what amounts to an XML wrapper around them? Someone obviously is spending too much time trying to reinvent the wheel - all because they like the sound of the letters XML - the sound of the almighty Venture Capital and Media Buzz.
  • NFS used to run without UDP checksumming, meaning that it relied on the error correction of the underlying Ethernet. Nowadays, I suppose it is mostly run with checksumming.

    As for implementing "my own packet number in user code", what kind of argument is that? I could also implement some form of reliable byte streams based on UDP in user code fairly easily.

    But by standardizing these kinds of protocols, we get several advantages. For example, we get interoperability. People don't have to reinvent the wheel every time, and the problem can be solved well once and for all (even a simple strategy like "resend if no acknowledgement" has many parameters, and it isn't necessarily even the right thing over a WAN). And, with a standard protocol, routers and other infrastructure actually know more about the traffic and can help it along.

    You are right that reliable datagrams aren't hard to implement: that's an argument that they should be standardized and become part of the TCP/IP protocol suite. Even though they are simple to implement, standardizing them has many advantages.
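    A toy sketch of the "resend if no acknowledgement" strategy mentioned above, built on UDP in user code; the timeout and retry count are exactly the kind of parameters a real standard would have to pin down:

        import socket

        # Stop-and-wait reliable send over UDP: prefix each datagram with a
        # 4-byte sequence number and resend until the peer echoes it back.

        def send_reliable(sock, addr, seq, payload, timeout=0.5, retries=5):
            packet = seq.to_bytes(4, "big") + payload
            sock.settimeout(timeout)
            for _ in range(retries):
                sock.sendto(packet, addr)
                try:
                    ack, _ = sock.recvfrom(4)
                    if int.from_bytes(ack, "big") == seq:
                        return  # acknowledged
                except socket.timeout:
                    continue  # no ACK in time; resend
            raise TimeoutError(f"no acknowledgement for datagram {seq}")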

  • BXXP is no more a replacement for TCP/IP than an API is a replacement for the underlying system. As the article said, the purpose of BXXP is to serve as a toolkit for the development of higher-level protocols that can piggyback on the existing TCP/IP infrastructure. What it is there to do is give developers a way to create new protocols for complex applications without having to reinvent the wheel every time they want to build a network-aware app. It's a tool, so that every time you write network software you don't have to fiddle with low-level details of bitfields and error checking when you want to let multiple applications talk. I think the "replace HTTP" angle is being overemphasized; that doesn't seem to be the primary purpose of this tool. It has some uses that might be related to adding functionality over and above http, but it has many more flexible uses. Existing protocols are not appropriate for every application; if there is a good framework for building custom communication methods, why try and shoehorn a square into a circle?

    Man I love mixing metaphors. . .

    "Sweet creeping zombie Jesus!"
  • One thing I like about HTTP and POP3 (and some other protocols) is that I can just telnet into the appropriate port and type the commands if I want to. I actually check my mail this way fairly frequently.

    BXXP may complicate this.....
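    The same trick, scripted rather than typed by hand; the host and credentials below are placeholders, and the commands are the standard POP3 USER/PASS/STAT/QUIT sequence:

        import socket

        # Speak POP3 by hand over a raw socket, just like telnetting to
        # port 110. Hostname and credentials are placeholders.

        def pop3_message_count(host, user, password):
            with socket.create_connection((host, 110)) as sock:
                f = sock.makefile("rwb")

                def cmd(line):
                    f.write(line.encode("ascii") + b"\r\n")
                    f.flush()
                    return f.readline().decode("ascii").rstrip()

                f.readline()            # server greeting, e.g. "+OK POP3 ready"
                cmd(f"USER {user}")
                cmd(f"PASS {password}")
                status = cmd("STAT")    # e.g. "+OK 3 6142" -> 3 messages waiting
                cmd("QUIT")
                return status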



  • The protocol might be a good one. It's tough to tell from the article. (Personally, I'm skeptical of anything that jumps on the XML bandwagon lately.)

    However, it's not going to replace HTTP for the following reasons:
    • This appears to be designed for peer to peer communications where the connection is maintained, and multiple messages can be sent back and forth. HTTP on the other hand is like a remote procedure call - you pass in data and get a response. Do you think Yahoo or Slashdot wants to keep an open socket for each person connecting?
    • HTTP is already standardized and firmly in place. I don't think FTP would be done the same way today if they had to do it again, but it's a standard and it's staying. Same with HTTP.
    • HTTP is simple enough for novice programmers to implement on top of BSD socket calls. This new creature is going to require a third party library or a serious investment of time.
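    For illustration, roughly the whole client side of HTTP/1.0 really is just a few socket calls; the hostname and path below are placeholders:

        import socket

        # Open a socket, write one request, read until EOF.

        def http_get(host, path="/"):
            with socket.create_connection((host, 80)) as sock:
                request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
                sock.sendall(request.encode("ascii"))
                response = b""
                while chunk := sock.recv(4096):
                    response += chunk
                return response

        # print(http_get("www.example.com").decode("latin-1", "replace"))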


  • sigmentation fault?

    I have a bunch of freckles. Is that pigmentation fault?
  • by jetson123 ( 13128 ) on Tuesday June 27, 2000 @08:59AM (#973046)
    I think it would be preferable to have a reliable datagram service in the kernel, as opposed to yet another messaging library built on top of TCP.

    The need for reliable datagrams ("block exchanges") comes up frequently for file servers and similar systems (like the web), as well as distributed computing, and not having such a protocol as part of TCP/IP was a major omission. For example, NFS simply assumes that UDP is reliable over a LAN, Plan 9 built a reliable datagram service into the kernel, and HTTP/1.0 just treated TCP as a (slow) reliable datagram service (send request/receive response).

    An advantage of reliable datagrams is that they avoid most of the overhead of negotiating and establishing a TCP connection, so they can be almost as fast as UDP. Furthermore, reliable datagrams require fewer kernel resources to be maintained, and the send_to/receive paradigm is a better fit for many applications.

    BXXP looks like it requires quite a bit of stuff: a TCP/IP implementation, the protocol implementation, and an XML parser. That's going to eat up a lot of space on small devices, and it's going to be unpleasant to implement and debug. The connections on which BXXP runs will also eat up kernel resources (file descriptors, etc.), both a problem on small devices and on very large servers, and it requires even more code and modeling to figure out when to bring up and tear down connections ("is the user going to request another page from that site?").

    Furthermore, routers and other devices would have a much harder time understanding what is happening inside BXXP; that's important for things like multicasting and predicting bandwidth utilization. In comparison, reliable datagrams are a simple addition to a TCP/IP stack and require no protocols or parsing. And routers and other infrastructure can deal with them much more intelligently.

    Plan 9 has a reliable datagram service called IL, although it is only optimized for LANs. People have also been working on reliable datagram services as part of IPv6, and for Linux.

  • sounds like it would have a lot of overhead. on the other hand, http isn't all that lightweight either; a simple request to yahoo.com takes 289 bytes. so it has merit.

    it will be interesting to see some http vs. bxxp benchmarks whenever there is code behind it.
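    the headers below are only a guess at what a typical browser of the era sends, so the exact count will differ from the 289-byte figure, but it makes the point:

        # Measure the size of a plausible plain GET request. The header set is
        # a guess at a typical browser's, not a captured trace.

        request = (
            "GET / HTTP/1.0\r\n"
            "Host: www.yahoo.com\r\n"
            "User-Agent: Mozilla/4.7 [en] (X11; U; Linux 2.2.14 i686)\r\n"
            "Accept: image/gif, image/jpeg, */*\r\n"
            "Accept-Language: en\r\n"
            "Accept-Charset: iso-8859-1,*,utf-8\r\n"
            "\r\n"
        )
        print(len(request.encode("ascii")), "bytes before any response comes back")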

    --

  • I'm not surprised multicasting hasn't caught on, and I think the reasons for it are economic. With multicasting, no matter how many users your site serves, you only need a low-end server and a slow network connection; it would be difficult for the web hosting company to extract a lot of money that way. On the other hand, if you send a separate stream to each client, you need a high speed connection, and you need to pay for it. Thus, I think, the economic incentive for web hosting companies is not to install multicasting support.
  • by jd ( 1658 ) <imipak@[ ]oo.com ['yah' in gap]> on Tuesday June 27, 2000 @08:08AM (#973056) Homepage Journal
    There are more protocols than there are flakes of snow in Alaska over the course of a millennium.

    The problems with getting anyone to adopt a new protocol can be summarised as:

    • Does it offer the user more than they have?
    • Is it simpler to maintain? (XML is nasty!)
    • What's the learning curve?
    • What's the cost to switch? (Time & Cash)
    • Can a 5 yr old explain it to an adult?

    To be adopted requires more than functionality. It requires a market. No users, no use.

    Multicasting suffers a lot from this (partly because ISPs are die-hard Scrooges, minus any ghosts to tell them to behave), but mostly the lack of users, plus the high cost to ISPs of switching, will forever prevent this genuinely useful protocol from ever being used by Joe Q. Public. (Unless he really IS a Q, in which case he won't need an ISP anyway.)

  • Speaking of HTTP-NG, is it still in development? Is it planned to be deployed eventually? I remember reading about it more than three years ago! Anyway, maybe BXXP doesn't mean to supplant HTTP-NG, but it does look like it has the one key feature that HTTP-NG is/was supposed to bring to HTTP: multiple channels.

    I just hope they don't make BXXP a binary protocol. All the major app-level internet protocols (HTTP, SMTP, FTP, POP, IMAP, IRC) are text-based, and it's one of those things that makes life much easier for developers.
