Will BXXP Replace HTTP?
Stilgar writes: "Seems like one of the Internet architects, Marshall Rose, is at it again. This time he's invented the Blocks Extensible Exchange Protocol (BXXP), which seems a *much* nicer alternative to the aging HTTP (if the IETF will give it a chance). Check out the story at NetworkWorldFusion news." From the article: "One special feature of a BXXP connection is that it can carry multiple simultaneous exchanges of data - called channels - between users. For example, users can chat and transfer files at the same time from one application that employs a network connection. BXXP uses XML to frame the information it carries, but the information can be in any form, including images, data or text."
changing a standard (Score:1)
kick some CAD [cadfu.com]
KISS (Score:1)
If BXXP is significantly more complicated than HTTP, I don't see it replacing HTTP. HTTP (and HTML) became widely popular because they are very simple to write code for. If I have a quick program to throw together that makes a socket connection to a remote server to retrieve data from a custom CGI query, I'm going to use HTTP, because it's a simple process of sending a URL as a request, then reading the results of that request until EOF. If BXXP requires abstractions such as XML and the overhead of writing multithreaded code just to pull down a single document, then I'll stick to HTTP, thank you.
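Here's roughly what I mean - the whole process in a few lines of Python (a sketch; hostname and path are placeholders):

import socket

# Plain HTTP/1.0 fetch: send one request line, then read until EOF.
def http_get(host, path="/", port=80):
    s = socket.create_connection((host, port))
    try:
        s.sendall("GET {} HTTP/1.0\r\nHost: {}\r\n\r\n".format(path, host).encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break  # server closed the connection - that's the EOF
            chunks.append(data)
        return b"".join(chunks)
    finally:
        s.close()

print(http_get("example.com")[:200])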
BXXP may find a niche, but I doubt it will replace HTTP.
BEEP - the correct acronym (Score:1)
Blocks
Extensible
Exchange
Protocol.
That looks like BEEP to me. It's easier to pronounce than "ay she cheat chee peep" or however you end up mispronouncing "HTTP".
And as for "beex"... "bee ix bix"... "beep ex pexip"... ah, screw it, I'm calling it beep anyways! You can't make me call it "bxxp"!
(Hell, you can't even help me call it that... I'd rather dictate a thesis on the merits of Peter Piper picking pickled peppers.)
What stops a standard? (Score:1)
The same thing goes for DNS entries. The only thing stopping people up to now has been the fact that no one wants to foot the bill for alternative root servers. It makes you wonder if something like the distributed gnutella mentality would work for DNS lookups.
I guess I'm saying: ditch this 'they' shit. Do it, and if it's a good idea, or if a ton of people start to use it, you can bet someone's going to try to capitalize on it.
If you build it, they will come.
Re:multicasting (Score:1)
Multicasting isn't easy. It is not something that just needs to be implemented; it is still a subject of research, trying to find efficient ways to handle millions of endpoints on the Internet. The current testbed (MBone) implements some experimental multicast algorithms - some of them involve flooding to find out who wants something.
The question is who should manage receiver lists - the server, ISPs' routers, or multiple levels of routers - and how to do that without creating megabyte-sized lists that routers can't handle without slowing down immensely. Another question is how the start/stop of a stream receive should be handled - periodically initiated by the server/router, or by the client.
It's not for economic reasons that multicast hasn't caught on (hell, there'd be lots of economic reasons to push it on everyone!); it's that there really isn't any widely usable multicast mechanism ready yet.
Re:OO Shithole :) (Score:1)
OOP and functional programming are not contradictory.
Of course I wouldn't call this programming at all, or a protocol. I would call BXXP a meta-protocol.
- Steeltoe
what's the point? (Score:1)
As for 'multiple channels', that can be done with multiple HTTP connections over TCP. BXXP may be a little better, but it's got to be a lot better for people to want to use it over HTTP.
Re:Sounds great...but why? (Score:2)
I'm sure there are good reasons to re-implement these in the application layer. But what are they?
New Protocol (Score:2)
Currently, if I want to design a "fully" interactive site that behaves the same way an application would, I'd have to write some kind of Java GUI, for portability reasons, that communicates back and forth with some central server running a specialized program that listens on some unused port and processes my cryptic and hastily thrown together messages. However, this is a lot of work, and with my message structure changing every few days it's extremely difficult to manage, since you have to change both the server and client code.
What I'd like to see is a new protocol with its own server: an application server vs. the current web server. I envision something like an asynchronous connection where both ends pass standard messages back and forth in real time, something similar to the messages passed between an OS and an application. It would also have to have what everybody is screaming for: one persistent connection, encryption, user validation, and a whole slew of other things I can't think of now.
The main problem is agreeing on a standard messaging system that isn't OS specific, allows users to format data however they wish, and still provides a means for expansion without dozens of revisions before a stable and workable model is created.
Along with this there would need to be an update to web programming languages - yet another Java package, or maybe even a whole new net language. This would turn the current web browser into an application browser, where one program could act as anything you wanted it to be. I suspect this is what MS is doing or intends to do; however, I doubt they'll open the specifics to everyone, at least not without a signed disclosure agreement.
Well, with all that out of the way, all we need to do is come up with some acronym. I recommend nssmp (not so simple messaging protocol), or how about wrcwicp (who really cares what it's called protocol)?
BXXP is not a protocol (Score:5)
HTTP was designed as a single-use protocol. Because it's understood by firewalls, etc, it gets used for just about everything, even if it's not really appropriate.
BXXP aims to provide a well thought out common building block for a whole class of new network protocols. If HTTP was being designed after BXXP, then the obvious thing would have been to layer HTTP on top of BXXP.
So, really BXXP isn't intended to replace anything. It's intended to make it easier to deploy new protocols in the future.
-Fzz
Re:Oh no (Score:2)
<bxxp>
Content-Type: text/html
Content-Length: whatever
Other-Headers: Other-Info
<HTML>
my HTML document here
</HTML>
</bxxp>
There's no reason you need any more of a wrapper than one tag. All these assumptions are totally baseless. It _could_ be bloated, or it _could_ take no more than 13 extra bytes per request.
---
Tim Wilde
Gimme 42 daemons!
Re: (Score:2)
Re:Aging?? (Score:2)
Firewall friendly != goes through firewalls... (Score:2)
Firewall-hostile is a better term for protocols that carry many different types of data.
As a QoS weenie (see http://www.qosforum.com/) I also don't like the way that HTTP can carry many different types of data requiring different QoS (performance) levels, e.g.:
- DCOM over HTTP: needs fairly low latency
- CGI transactions: ditto
- RealAudio over HTTP: needs low loss, consistent bandwidth
- static pages and images: no particular QoS needed
The only way to handle this is to classify every type of HTTP interaction separately, using URLs to map packets into Gold, Silver, and Bronze QoS levels. This is feasible but fragile - as soon as the web-based app or website is updated, you may have to re-write these rules.
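To make the fragility concrete, here's a toy classifier in Python - the URL patterns and class names are invented for illustration, and real classification happens in routers and policy servers, not application code:

import re

# Hypothetical URL-to-QoS-class rules of the kind described above.
RULES = [
    (re.compile(r"/cgi-bin/"), "Gold"),              # transactions: low latency
    (re.compile(r"\.(ram|rm)$"), "Silver"),          # streaming: steady bandwidth
    (re.compile(r"\.(gif|jpe?g|html?)$"), "Bronze"), # static content: best effort
]

def classify(url):
    for pattern, qos_class in RULES:
        if pattern.search(url):
            return qos_class
    return "Bronze"  # default: best effort

print(classify("/cgi-bin/order.cgi"))  # Gold - until the site reorganizes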
Even worse, HTTP/1.1 puts all interactions with a given host into a single TCP session (a good idea in many ways), which makes it a waste of time to prioritise some packets ahead of others - the receiving TCP implementation simply waits until the out of order packets arrive, sending a duplicate ACK in the meantime. Severe packet re-ordering may even lead to slow start in the sending TCP (three duplicate ACKs means slow start from a single packet window size).
Similar issues apply to security - you might want to encrypt only the transaction data and not bother encrypting images, for example, or route some data via a more secure network. SSL does solve some of these problems but requires application support.
Oh well, enough ranting - HTTP is a great protocol in many ways, but there are costs as well as benefits to abusing it to do things it was never meant to do...
Re:reliable datagrams (Score:2)
There are many reliable multicast protocols around, some of which at least should work for reliable unicast as well, e.g. RAMP (http://www.metavr.com/rampDIS.html). See http://www.faqs.org/rfcs/rfc2357.html for criteria for reliable multicast protocols; it has some good references.
There is also T/TCP, a variant of TCP which is intended for request/reply transactions - it lets you send data with the initial SYN request and get data back with the ACK, so it's not much less efficient than UDP-based protocols. It would be ideal for HTTP/1.0 but has not really been deployed much.
RADIUS uses its own reliable unicast approach over UDP, and is widely deployed, but it's not a separate re-usable protocol. See www.faqs.org for details.
Some problems with reliable unicast are:
- congestion control - there's no congestion window so it's very easy to overload router buffers and/or receiving host buffers, leading to excessive packet loss for the sender and for other network users
- spoofing - it's even easier to spoof these protocols than with TCP (I think this is why T/TCP never took off)
As for BXXP - I agree about the difficulty of understanding what's going on. QoS-aware networks and firewalls all prefer one application mapping to one (static) port.
Re:multicasting (Score:2)
http://www.winsock2.com/multicast/whitepapers/m
There's some interesting work on fixing this, though - I forget the details but a recent IEEE Networking magazine had a special issue on multicast including info on why it's not been deployed yet.
Also, for small groups, there is a new protocol called SGM that runs over unicast UDP and unicast routing - the idea is that if you want to multicast to a group of 5-10 or so people, and there's a large number of such groups, you're better off forgetting about IGMP and multicast routing, and just including the list of recipients in the packet. Very neat, though it still requires router support. Some URLs on this:
http://www.computer.org/internet/v4n3/w3onwire-
http://www.ietf.org/internet-drafts/draft-boivi
http://icairsvr.nwu.icair.org/sgm/
Re:HTTP (Score:2)
Except that the difference between rolling out HDTV and a replacement for http is where the changes take place. With HDTV you have to upgrade the equipment at the broadcast and receiver end. You also have to deal with limited bandwidth for broadcasting that makes it difficult to serve both technologies at the same time. With a protocol change, you can transition more easily by including both standards in your browser and on the websites. How many sites already keep multiple formats on hand? Do you want frames or no frames? Do you want shockwave or no shockwave? Would you like to view the video in RealPlayer or Quicktime? I can update my browser for free. How much does that new HDTV box cost?
carlos
Re:Aging?? (Score:3)
of course, look at how fast http superseded gopher.
of slightly more interest to me is the security implications of bxxp. since it's two-way, it could be difficult to filter, and opens up all sorts of interesting possibilities for propagation of virii and spam.
--
Why HDTV? (Score:2)
When you ask what's wrong with TV now, you find "content", "it's boring",...
The problems are with the content (that's why you are reading Slashdot instead of watching TV), not the image quality.
1050 or 1250 horizontal lines are not enough to justify spending the money at both the transmitting and the receiving ends.
Of course, prices would go down after mass adoption, but there is not enough initial adoption.
Digital TV, on the other hand, promises more channels on the same medium. The content will be as bad or worse, but there will be more of it. So digital TV has more of a chance to replace PAL, SECAM & NTSC.
__
Comment removed (Score:4)
URNs or URIs, Freenet, WBI (Score:2)
Hacking HTML for multiple destinations is at most a kludge.
A promising approach is Freenet's: you specify what document you want and say nothing about its location.
About load balancing: I remember that the WBI web proxy (somewhere at IBM's AlphaWorks) had an option to ping every link in a page and insert an icon marking the slow ones versus the fast ones. I found it interesting until I realized that the final layout of the page had to wait for the slowest ping.
__
NFS does not assume UDP is reliable (Score:2)
NFS is stateless and works by sending the same datagram repeatedly until it receives an acknowledgement. That is, IMHO, a terrific use of UDP, and it shows the lack of need for a so-called reliable datagram protocol. For a datagram protocol to be reliable, it has to send (and cache) some sort of packet number and then get a response back. You could already do the same thing by doing your own packet-number caching and acknowledgement with UDP.
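A bare-bones sketch of that resend-until-acknowledged pattern in Python (the address, sequence number, timeout, and retry count are all placeholders):

import socket

# Resend the same datagram until the peer acknowledges it, NFS-style.
def send_reliably(payload, addr, retries=5, timeout=1.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    seq = 42  # a real implementation tracks sequence numbers per peer
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(retries):
        s.sendto(packet, addr)
        try:
            ack, _ = s.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True  # acknowledged
        except socket.timeout:
            continue  # no ACK - send the same datagram again
    return False  # peer never answered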
Re:A modest proposal (Score:2)
The reason this is an issue is that a link that can transfer a huge amount of data but with high latency ends up being limited by having to wait for ACKs before advancing the window. Of course, you have more ACKs to wait for if you have more, smaller packets. The easy fix is to increase the packet size. You have to balance this against the reliability of the link, though, or you end up constantly resending huge amounts of data.
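The arithmetic, spelled out - throughput tops out at one window per round trip:

# Window-limited throughput: at most one window of unacknowledged
# data can be in flight per round trip.
def max_throughput(window_bytes, rtt_seconds):
    return window_bytes / rtt_seconds  # bytes per second

print(max_throughput(64 * 1024, 0.005))  # 13107200.0 - about 13 MB/s at 5ms RTT
print(max_throughput(64 * 1024, 0.100))  # 655360.0 - about 640 KB/s at 100ms RTT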
Re:Troll Alert (Score:2)
I'm going to skip over you calling me a troll. I may be misinformed. I may be wrong. But I'm not a troll.
Now, first, you got it backwards - TCP is not rate limited per se; however, if you consider that your max RWIN is typically 64k, and that it'll take 5ms to ack it, this means 64 * (1000/5) = 12800, or a little over 12MB/s. That's a good LAN connection! However, if you increase it to about 100ms, which is my typical ping for my cable modem, my maximum bandwidth is a mere 640KB/s. Ow. Biiig difference. So, there's where my numbers come from. Now, about the persistence..
Persistence does you no good as you still need to request the data. Now, let's assume you have a 50kb webpage to download. We'll say the html is 5k, and there are 3 graphics on the page, 15k each.
The TCP handshake will take about 150ms: 50ms to get there, 50ms to get back, and another 50ms for the final ACK. Now for the first request, another 50ms to send it. We're at 200ms. The server gets the HTML page; we'll say it takes 5ms to do that and pipe it back out, all at once. 255ms later, we have the page. Now we'll open 3 new connections for the images, in parallel - 150ms each. Now we're at 405ms. The images each take another 5ms to grab, 50ms to come back. 460ms. Rendering begins in your browser. Now we want to close the connection - the server already closed its remote end, probably with an RST pkt on the last piece of data. So another 50ms to send your RST pkt, 50ms to ack that, and now we're done. Grand total: 560ms.
Now let's assume we could have done this via UDP, all at once... again: 50ms to send the query, 5ms to retrieve all 4 parts, another 50ms to send it back. At 105ms, your system begins rendering. As this is a stateless connection, there's no need to close it - if it failed, we'll retry. Congrats, you're done at a smoking 105ms vs. 560ms.
Now then, about me not being informative...
Re:Aging compared to how it's being used (Score:3)
And actually, MIME extensions allow for multipart email, where each part can be encoded differently. I think that works pretty well, too: you can send a bunch of stuff, all of it gets bundled into a single file, which is transferred to the recipient using transport protocols, and the recipient is then free to do whatever he wants with the bundle - usually opening it with a program that knows how to handle such bundles (a mail user agent) is a reasonable option. Using software that tries to run every file it gets its hands on is another matter, unrelated to this.
Re:Troll Alert (Score:2)
However, if you increase it to about 100ms, which is my typical ping for my cable modem, my maximum bandwidth is a mere 640KB/s. Ow. Biiig difference. So, there's where my numbers come from.
Well, that would be true if morons designed TCP/IP, but fortunately that sort of protocol hasn't been used since XMODEM. TCP/IP will continue to transmit packets without waiting for an ACK of the previous one. This is referred to as a "sliding window" protocol. Of course, it will transmit only so many packets before it has to wait, which is the "window size".
Look up "sliding window" in your TCP/IP book ("Internetworking with TCP/IP by Comer is the usual recommendation) or you might even try a web search.
--
And bXXXp (Score:4)
Headline news ain't. (Score:2)
As I see it, replacing HTTP is probably not going to be the first application of the BXXP protocol. To see the beauty of BXXP, you must consider the plethora of existing protocols (SMTP, HTTP, FTP, AOL-IM, ICQ...), none of which would be seriously hurt by a minor increase in overhead. Using a common language means you don't have to write an RFC822 (or other RFC) parser for every application that wants to send mail, or request a file, or send an informational message/status update. You can parse the XML and get the relevant info with much less effort using common code. You could share common headers between email and instant messenger clients - they're similar enough to speak IM to IM, or IM to mail server... Shared libraries == less programming. Shared libraries == fewer bugs (theoretically).
I speak from experience. I'm working on a client/server framework for a project that I've been doing for too long, and I've reached the end of my original communication protocol's useful life. I've switched over to an XML-based format, and I'm happy with it. If I'd had BXXP to start with, I could have avoided writing two different communications protocols and spent that time working on better things.
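For instance, once a generic parser handles the framing, pulling fields out is a couple of lines (this <message> format is something I made up, not BXXP):

import xml.etree.ElementTree as ET

# One generic parse serves every message type in the protocol.
msg = """<message type="status">
  <from>server-3</from>
  <body>rebuild complete</body>
</message>"""

root = ET.fromstring(msg)
print(root.get("type"), root.findtext("from"), root.findtext("body"))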
What problem is this solving? (Score:3)
I wish all these brilliant minds would work on providing unique and interesting Internet content, which is the part that is sorely needed now.
BEEP? (Score:5)
HTTP (Score:3)
I like HTTP, I think I would miss it. (Score:2)
I doubt it would be practical to talk directly to bxxp. Plus, it seems to be jumping on the XML bandwagon.
Re:HTTP (Score:5)
And by Apache, and in Java... Don't forget the server end. The radio in my car can pick up many different frequencies, but unless someone is actually broadcasting on a given frequency, I'll be getting static.
Apache 2.0 including BXXP support would go a long way towards its being used, as over half the websites in the world are run on Apache. Support in the Java java.net package for BXXP URL connections would also help enable BXXP in a wide variety of applications.
Support for new technologies client-side is nice - but unless there's also support server-side, the technology can't be used.
HDTV is taking a long time to be adopted simply because of the expense of purchasing a set - at $1000 a set, it's not surprising people aren't rushing to get one. Yeah, it'll take time - but all it takes is a few sites to start using BXXP, a few services, and a few web browsers to support it, and eventually it can come into its own as an Internet protocol. But it may be coming sooner than you think.
Internet Scam!! (Score:2)
Seriously, this seems like good-looking technology, but I don't think file transfer is a good application of it - with file transfer, you want packets to be as bare as possible, with as much data as possible. No XML wrapping.
We're in the $$ (Score:2)
It will be interesting to see if all these highly intelligent people can get together and make money.
__________________________
FTP (Score:2)
Telnet is a very handy way of diagnosing FTP problems. If you're not happy with setting up listeners for the data connections, use PASV instead of PORT.
I'm all for protocols with which you can do this -- and FTP is one of them!
Unfortunately, the need for secure protocols is going to make this more difficult as time goes on. Don't expect something like OCSP to be human-readable.
--
Re: (Score:2)
existing protocols (Score:2)
You can run both VNC and X11 inside a web browser using Java these days - not that I see a need to do that.
Re:Unpronounceable -- better idea (Score:2)
(Just couldn't resist :)
Agreed... Also consider IIOP, LDO? (Score:2)
It sounds to me like they're recreating what IIOP provides, and with the added cost that you need to encode data in the rather wasteful XML format.
I half figure that someone will eventually build an "XML over IIOP" scheme that will allow compressing it.
The Casbah [casbah.org] project's LDO [casbah.org] provides another alternative.
Re:BEEP? (Score:2)
I'd enjoy it more than "bee-eks-eks-pee" -- "aych-tee-tee-pee" is hard enough to say four times fast.
Of course, we all know geeks would _never_ go with "inside jokes", right?
Re:Ne[x]t geenration (Score:2)
Yeah, no kidding. Anyone else ever think it's funny how programs in the cyberpunk genre have complex GUIs for their security holes? I mean slashing through ICE with claws and blades? What the hell does THAT actually accomplish code-wise, especially when it's all done with generic interoperable third-party tools?
--
I wish I could turn off the auto +1 w/o enabling cookies and logging in...
Re:HTTP (Score:2)
Re:We're in the $$ (Score:3)
They already have, to some degree. I know Rose worked with Adams in forming PSI; O'Reilly, Rose, and Malamud (I think) were involved in the ahead-of-its-time GNN, arguably the world's first Web "destination" (although they didn't manage to make the transition to portal with the arrival of the search engines); and Rose and Malamud have worked together on a number of projects, from Malamud's Internet TownHall and radio.com (both now defunct) to the free tpc.int fax bypass network, which failed to generate much interest, although I think it soldiers on in a few locales.
In any case, Rose is one of the best protocol jocks in the world, so in general his suggestions should be taken seriously. One of the most enjoyable classes I ever took was his on Internet mail protocols back at the "old" Interop years ago, back when it was a get-together for the people actually building the Internet rather than a slick merchandise mart with suits hawking the latest lock-in.
Re:HTTP (Score:2)
So yes users have a very good chance of seeing BXXP in action, however, I think you are correct in that it won't replace HTTP (yet).
---
Please don't tell me... (Score:2)
I mean, for people who just surf around, I'm sure IE and Netscape will quickly adapt to be able to fill that in themselves, and that just typing "Yahoo" will be enough to find either bxxp://www.yahoo.org or http://yahoo.com depending on how the defaults are set.
But for those of us writing CGIs, this kind of sucks. Sometimes the "http" has to be very explicit. It's a simple matter to know when to use http://, ftp://, telnet://, etc., because the protocols are so unrelated, but with these related ones, it will be a headache.
Of course, if bxxp can handle http, I guess this problem shouldn't exist at all. It still will.
Re:Why multiplex over one TCP connection? (Score:2)
For example, a webpage may contain 4k of text, and say, 20 2k images. Why make 21 TCP connections, when you can make one, and send the whole page in one XML stream? That is a big overhead savings, and the page will load MUCH faster. Even for something like slashdot comments, where there's about 200k of text and maybe 10 images, that's still a savings, although not as much.
Bandwidth-wise alone (assuming 0 RTT), a TCP/IP segment header is 40 bytes. A connection requires 3 packets to set up + 4 to disconnect = 280 bytes per connection. A sequence of XML tags would probably be smaller, in addition to reducing wait time and processor load.
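Spelling that out with the same figures (40-byte headers, 3 + 4 packets of setup and teardown per connection):

# Per-connection cost from the figures above.
HEADER_BYTES = 40
SETUP_TEARDOWN_PACKETS = 3 + 4

def overhead(connections):
    return connections * SETUP_TEARDOWN_PACKETS * HEADER_BYTES

print(overhead(21))  # 5880 bytes for the 21-connection page above
print(overhead(1))   # 280 bytes for one multiplexed connection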
---
Re:Noo!! Not another scourge of hard to say letter (Score:2)
A lot of people say it "dubya-dubya-dubya"...
Wait...Oh no!
George W. Bush has stolen Al Gore's thunder! Gore may have invented the internet, but the World Wide Web is George W. Bush's middle name!
---
Zardoz has spoken!
Re:What problem is this solving? (Score:3)
Actually, content was the whole point behind the protocol. We were trying to solve a class of problems, all driven by content requirements. Examples are the SEC's EDGAR database [invisible.net], a variety of other "deep wells", and a class of problems ranging from mapping network topology to creating personalized "maps" (views) of the Internet. See here [mundi.net] for more on the philosophy behind the content requirements.
The protocol emerged from long discussions about how to solve these content problems. We tried as hard as possible to reuse existing protocol infrastructure, but quickly found that there were no protocols that handled the metadata problems we were trying to attack.
The (IMHO) brilliant thing Marshall did was to build two levels into the solution. BXXP is the general-purpose framework [mundi.net] that was used for the Simple Exchange Profile [mundi.net] application we were going for in the first place. The nice thing was that BXXP works for a broad range of other applications, such as asynchronous messaging. [ietf.org]
The bottom line is why reinvent the wheel more than once?
Carl
Defective user (Score:2)
It's a hack solution for a rare (but obnoxious) problem.
[Oh ok I'll e-mail you an mp3 of the audio recorded at the last meeting]
The idea of normal e-mail is to send text. Not HTML, and certainly not MS Word.
Windows helps promote this problem (not Microsoft - just an example of "one world, one OS" being bad; Microsoft is today's example, Linux may be tomorrow's bad guy in this respect, and maybe some day Apple).
Basically, a user sees that a file format is native to his operating system and mistakenly believes it's "normal", so he sends that kind of file in e-mail.
If he has less than a 50-50 chance of reaching someone who can actually use that file, then he'll get a clue and stop. But if he has a greater than 60% chance of reaching a user who supports it, then he'll just assume the others are losers.
For the most part it's annoying. It's not e-mail, and it doesn't do the job.
Now, with e-mail viruses, I'd hope that even Windows users would say NO! to Windows files, simply because this shouldn't be common practice in the first place. True, Windows makes it EASY, but that is still with the idea of two executives using common (or compatible) software with an agreed-on format and an agreed-on goal (such as e-mailing the database of a daily budget, or an audio file of an interview in a format both sides support [such as mp3]).
If both users use Windows and Microsoft Office, then hey, that's great - go for it. But if one side uses Linux with KDE Office and the other a Mac with AppleWorks, then it's a matter of finding a common format between AppleWorks and KDE Office.
Anyway... sending MS Word files as a way to send text is a bad thing. But when there is a greater than 60% chance that the guy on the other end can read MS Word files, it doesn't occur to the sender that actual text would work 100% of the time.
PS: Yes, I throw away ALL MS Word files unread.
DNS and Fancier Load Balancers Already Do This (Score:2)
Another approach that some fancy load-balancers use is to always give the same IP address, but fake out the packet requests using NAT or other routing tricks, or having the web servers themselves change their IP addresses dynamically. It's a bit uglier, but browsers and DNS resolvers often cache their results, so a web server might die or get overloaded but users are still requesting pages from it because they've cached its IP address - this way you spread out the load more evenly at the cost of more dynamic address-shuffling translation work by the load balancing server.
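The plain round-robin DNS variant is visible from any client - one name, several A records (the hostname is a placeholder):

import socket

# One hostname may resolve to several A records; resolvers rotate
# among them, spreading the load.
addrs = {info[4][0]
         for info in socket.getaddrinfo("example.com", 80,
                                        proto=socket.IPPROTO_TCP)}
print(addrs)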
Hah! (Score:2)
Or alternatively, can it make us more money by screwing our competitors out of marketshare?
Is it simpler to maintain? (XML is nasty!)
It can be if you want it to be. It doesn't have to be. It can be quite elegant, really.
What's the learning curve?
XML? If you know HTML, I can teach you XML in about 5 minutes really. For protocols, who really cares what the learning curve is? PHB says to developer, "You will support this", and once it's supported, it's completely transparent to the user. Only the developer has to bother to learn the protocol. And if they built it around XML, it probably just ain't that hard.
What's the cost to switch? (Time & Cash)
Potentially huge. Potentially nothing. Depends on who you are. For some people, it will require downloading a new version of a browser. For others, millions on new software licenses for their crappy proprietary web servers, plus building support for this in.
Can a 5 yr old explain it to an adult?
Can a 5 year old explain the latest FPS to an adult? That didn't stop their acceptance and humongous sales.
OO Shithole :) (Score:2)
Whatever happened to functional programming? Why is earth going into an OOP shithole ever since java showed up?
Re:KISS (Score:2)
You're griping about "HTTP", which is a *protocol*, regarding menus and DHTML, which is a *FORMAT*.
And you top it all off with "XML is a [...] truly robust protocol". Except, again, XML is a FORMAT, not a protocol!
I really, really hope people aren't paying you money to design websites if you can't tell the difference between a protocol and a format.
Re: (Score:2)
A modest proposal (Score:2)
What we really need is a protocol that can, upon receipt of a single authenticated request, determine the speed that the remote end is running at, and then rapidly chunk out an entire page in a single request - instead of a few images here, a few javascript files there, and don't forget the stylesheet in the LINK tag!
It is obvious we are quickly moving into a high-bandwidth network where consumers will routinely have access to multi-megabyte streams. The TCP protocol is, by design, limited to a mere 780kb/s. You cannot go faster due to network latency and the max size of the RWIN buffer. Therefore, it's obvious this protocol needs to be UDP.
Security is also a concern - we need some way to authenticate that a packet came from where it says it did before we blast out a chunk of data, since there likely won't be time for the remote host to respond before we've sent most of the response back to it. So some form of secure handshake is needed. If you could make this handshake computationally expensive for the client but not the server, so much the better for defeating any DDoS attacks.
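One known way to get that asymmetry is a hashcash-style client puzzle: the client burns CPU finding a nonce, and the server verifies it with a single hash. A sketch, with made-up parameters:

import hashlib, itertools, os

BITS = 20  # difficulty: solving takes ~2^20 hashes, verifying takes one
TARGET = 1 << (256 - BITS)

def solve(challenge):
    # Costly for the client: grind nonces until the hash is small enough.
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < TARGET:
            return nonce

def verify(challenge, nonce):
    # Cheap for the server: one hash.
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < TARGET

challenge = os.urandom(16)      # issued by the server per request
print(verify(challenge, solve(challenge)))  # True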
But really.. we need a reliable file transfer protocol that supports serialized queuing and higher speeds than we do now.. and that means either ditching TCP, or setting up parallel streams between each.. with all the overhead that goes with that solution.
BXXP doesn't do that, unfortunately.. and if we're going to define a new protocol, let's plan ahead?
Re:Why multiplex over one TCP connection? (Score:5)
Re:A modest proposal (Score:2)
WHAT? I've moved over 10Mbytes/sec over TCP! The receive window in certain alleged operating systems *cough|windows|cough* may be too small for high speed over the 'Net, but any real OS can automatically adjust the TCP windows to be big enough to fill the pipe.
Apache Support (Score:3)
"Sweet creeping zombie Jesus!"
draft url (Score:2)
http://xml.resource.org/profiles/BXXP/bxxp.html [resource.org]
Aging compared to how it's being used (Score:5)
HTTP, on the other hand, has been stretched far beyond what it was intended for by today's web. It's stateless and simplistic, yet those of us who write web applications need it to be more than that. This gives it the sense of being an "aging" protocol. Session management of any form with HTTP is a dirty hack at best, and web applications need a better answer if they are to expand beyond what they are now. If BXXP can speak to those shortcomings of HTTP, then all the better.
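The hack in question, for reference - state gets bolted on through cookie headers (a schematic exchange, not a trace of any particular server):

HTTP/1.0 200 OK
Set-Cookie: session=abc123

...and every subsequent request has to carry the state back by hand:

GET /next-page HTTP/1.0
Cookie: session=abc123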
Re:Troll Alert (Score:2)
Paranoia! (Score:2)
Depends on how paranoid is paranoid. You're not really anonymous anymore. There are things like the anonymizer, remailers, and so on, but due to abuse, I bet they keep bitchin' logs.
Spoofing used to be an issue, but AFAIK (and I haven't even thought about it in quite a while) it's not really possible anymore due to BIND updates. Everywhere you turn, you're being logged. It doesn't matter if it's an HTTP server, the banner ads on that server, downloading a file through "anonymous" FTP (yeah right), or logging into your own box. I don't see much anonymity at all on the web, since your IP is scattered all over the universe whenever you so much as connect to another server. If anybody knows ways to get around that, please let me know.
You can be anonymous in the sense that the server only knows that some loser on an @Home cable modem is the one who's looking up this goat pr0n or reading about ralph nader, but when it really comes down to it, you're not.
I've always wondered if anybody will ever implement some type of reverse lookup system through ISPs. I know it wouldn't be easy, but imagine something like this - you dial up, and connect to goatpr0n.com. Since they want to market to you, they send a request to your ISP's server invader.myisp.com asking which customer is connected to ISP IP hostname foo.bar.baz.dialup.myisp.com. At that point, myisp.com sends back some "relevant" information to the "client".
Or even completely different servers. I bet pepsi
In a world where companies are getting busted for backdooring their own software, people are rioting against doubleclick abuses, and you're logged every time you take a shit, does privacy really still exist? The answer is yes, but only as long as you're doing something that nobody thinks they can make money off of.
What about redundancy / load balancing? (Score:4)
Another thought I had was a [somewhat primitive] form of load balancing for servers that actually takes place on the client side (which would sort of crudely happen with the randomized link idea above). Before downloading everything off a site (images, video, etc.), the browser could go through some sort of ping routine to see which server was closest to it in terms of latency or availability; better yet, the server could be coded to report its bandwidth/CPU load, or some combination of relevant data, to the browser, and let the browser decide which server would be best to obtain the data from. This routine would be outlined at the beginning of the code for the web page. It could also be extended a great deal, even to things like distinguishing between data types to help choose servers accordingly, etc.
If you think about it, HTML/web servers could be much more intelligent than they are now, and the best time to add intelligence would be when adopting a new format like this. While we're at it, we should also do a bit more to protect privacy. Intelligence combined with privacy... could it be possible?
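A crude sketch of that probe-and-pick idea (the hostnames are placeholders, and a timed TCP connect stands in for a real ping):

import socket, time

# Time a TCP connect to each candidate server and pick the fastest.
def fastest(hosts, port=80):
    best, best_rtt = None, float("inf")
    for host in hosts:
        start = time.monotonic()
        try:
            socket.create_connection((host, port), timeout=2).close()
        except OSError:
            continue  # unreachable mirror - skip it
        rtt = time.monotonic() - start
        if rtt < best_rtt:
            best, best_rtt = host, rtt
    return best

print(fastest(["mirror1.example.com", "mirror2.example.com"]))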
Something similar to this in terms of redundancy also needs to happen with DNS and is long overdue. Just had to add that.
My two cents...
--SONET
"Open bombay doors!"
Simplicity is the key (Score:2)
Citrix
Re:HTTP (Score:2)
Yeah, it'll take time - but all it takes is a few sites to start using BXXP, a few services, and a few web browsers to support it, and eventually it can come into its own as an Internet protocol.
Only if it is compelling for the user experience. From the article, it doesn't seem like it will make current transactions better, but rather make it easier to create future ways of communicating. If it does not provide a compelling experience now, there is no impetus to adopt it alongside or in place of HTTP. It will be adopted when something new built on BXXP comes along, and that something new will have to be compelling.
FJ!!HTTP will win, BXXP will be an XML wrapper (Score:2)
HTTP will continue to be the protocol for 90% of the web; BXXP will only become useful for massive data interchange, such as B2B exchanges.
Expect to see IE, Netscape, and Opera extensions to allow BXXP, but don't expect to see it anytime soon, at least until they start to fully implement XHTML and XML standards. I'd guess 2002 for reasonable code that works fairly well.
If you're trying to crank out massive data exchanges, you should definitely get into this; if you're trying to do a web site, especially one with lots of text content, you may never use it.
Noo!! Not another scourge of hard to say letters!! (Score:3)
While "X" is still technically one syllable, it doesn't trip off the tounge quite like "T". There are two tounge motions in that one syllable, "Eks". We need a more streamlined protocol name. Make it something catchy like Extensible Exchange Enabler and call it Tripoli. Easy to type, too.
Forget the network performance tests! Any new protocol should have to undergo verbal performance tests.
Also, if I see one more thing with an unnecessary mid-word capital X in it, I'm going to hurl.
*AHEM* - BXXP is not intended to replace HTTP (Score:5)
As such, it does not conflict with, or supplant HTTP-NG, or many other standards being hammered out by the IETF. It's just another tool developers and info provider may choose, depending on their needs.
Re:A modest proposal (Score:2)
Then you say:
Signal 11, are you insane? TCP was specifically designed for reliability. UDP was designed for unreliable connections. Non-guaranteed. No assurance. Unreliable.
So maybe TCP needs an overhaul. Maybe another protocol would be better. But UDP? If you're going to "ditch TCP", don't build your new protocol on TCP's just-as-old cousin/brother/whatever.
Re:XML for framing ? (Score:2)
You meant to say vastly smaller. Whether it's superior or not depends on what you want to do. For example, your sample XML had 126 bytes more overhead than the equivalent HTTP, and 126 extra bytes is likely quite acceptable when sending documents that are usually several kilobytes in length, and those bytes buy you all sorts of XML goodies that the HTTP version doesn't have. On the other hand, if your document is on the order of a few bytes and you're sending a jillion of them, you clearly couldn't accept that much overhead.
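To put rough numbers on that, here are two invented framings of the same 5-byte payload - neither is real BXXP, they just show the overhead is a fixed cost per message:

# Two made-up framings of the same payload.
http_style = b"Content-Type: text/plain\r\nContent-Length: 5\r\n\r\nhello"
xml_style = (b"<frame><content type='text/plain' length='5'>"
             b"hello</content></frame>")

# The byte gap stays constant as the payload grows, so it vanishes
# into kilobyte-sized documents and dominates one-byte ones.
print(len(http_style), len(xml_style))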
--
-jacob
Re:HTTP (Score:2)
What if Apache supports it, alongside HTTP? This sounds possible. And perhaps Mozilla would support it. That's all it would take, really...
As for how long it's taking IPv6 to be adopted... the main reason nobody is adopting it is that there is currently no reason to use it!
Re:Aging compared to how it's being used (Score:3)
I sincerely doubt that was considered, and MIME is not pretty.
Re:Security and Privacy (somewhat OT) (Score:2)
Well, that ain't so simple.
The Internet has a lot of pseudo-privacy. This means that for Joe Q. Luser (and even for his sister Joanne D. Not-So-Luser) it is hard to find out who the person behind a handle or a nick or a screen name or an e-mail address is. However, that's not true for law enforcement. If you are not engaging in useful but inconvenient paranoia, it's fairly trivial for law enforcement (== gubmint) to trace you (usually courtesy of your IP) and produce a basic sequence of your activities (generally courtesy of your ISP's logs).
Thus "normal" internet usage is opaque to public but can be made transparent (at high cost in time and effort) to law enforcement agencies.
Of course, there are a bunch of tools that are capable, if skillfully used, of masking your identity. However, by my highly scientific guess, less than 0.001% of Internet users actually use them.
government (and corporations) are kicking themselves that they didn't approach Gore and have him build in monitoring into the protocols since they would LOVE to watch every little thing we do
And what is it that you want to monitor? Most everything is unencrypted and freely readable by anybody with a sniffer or access to a router. If I want to know everything you do I can get a court warrant and attach a large hard drive to your ISP's gateway. I don't need anything from the protocols: the stream of IP packets is perfectly fine, thank you very much.
Kaa
Abstraction (Score:2)
Sorry, wrong standard (Score:2)
However, BXXP is a protocol (like HTTP) not an authoring language (like HTML). HTTP and HTML have nothing to do with each other, except that they are both very popular standards for online content. Thus, your post is off-topic.
More connections == More overhead (Score:2)
Because multiple TCP connections do not share congestion information, it is possible for a greedy user to get more than his "fair share" of a pipe by using many parallel connections.
And any number of additional protocols, be they layered on top of IP (like TCP is) or TCP (like BXXP is), will not solve this problem.
Each TCP socket takes up system (kernel) resources on the server.
One way or the other, multiple connections are going to use more resources. 'Tis inescapable.
Where those resources are allocated is entirely implementation dependent, and indeed, only makes sense in terms of specific implementations. Look at an MS-DOS system, where there is no kernel, or something like khttpd, which implements HTTP inside the Linux kernel.
Layering additional protocols on top of existing protocols which do the same thing because some number of existing implementations don't handle resource allocation well is highly bogus.
The thing is, any time you flood a system with more connections than it can support (whether those connections are at the TCP level or something higher), it is going to suck mud. BXXP isn't going to change that fact; it is simply going to relocate it. See above about bogus.
Personally, BXXP looks to me to be redundant and needlessly repetitive. The core is basically TCP running on TCP, and then there is some higher level stuff that doesn't belong in channel-level code to begin with. It would be better to design some standard wrapper protocol to layer on a TCP connection to describe arbitrary data, thus providing something new without reinventing the wheel while we're at it.
Re: (Score:2)
Unpronounceable -- better idea (Score:2)
beep://slashdot.org
I like that much better.
-JD
The IETF draft for BXXP (Score:5)
Re:Rewriting the wheel, and the axle, and the car.. (Score:2)
Sure, TCP is already multiplexed, but there is a lot of overhead inherent in this multiplexing scheme, as TCP cannot distinguish between 10 connections to 10 different machines and 10 connections to one machine. They are all completely separate connections, all with separate kernel-space buffers, all doing their own DNS lookups, all having separate windows and flow control and connection states, etc, etc. There is a lot of overhead that goes into making a TCP connection; it is very noticeable to the end user.
This protocol, by allowing ONE TCP connection to describe ALL transactions between two machines, can reduce overhead on both machines, and reduce load times and processing consumption. See my reply to the post above yours.
It does serve a purpose. And it is in no way a replacement for TCP. It does not provide the connection, reliability, flow control, etc., capabilities of TCP. It is a stream format, not a packet-level format; as such, it MUST be built on top of TCP, since TCP provides a stateful, stream-oriented connection on top of IP, a connectionless, unreliable, stateless datagram protocol.
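To be concrete about "one TCP connection describing all transactions", here's a toy framing layer - the 8-byte header (channel number plus payload length) is my invention, not BXXP's actual framing:

import struct

def frame(channel, payload):
    # 4-byte channel number + 4-byte length, then the payload itself.
    return struct.pack("!II", channel, len(payload)) + payload

def deframe(stream):
    # Split a byte stream back into (channel, payload) pairs.
    while len(stream) >= 8:
        channel, length = struct.unpack("!II", stream[:8])
        yield channel, stream[8:8 + length]
        stream = stream[8 + length:]

wire = frame(1, b"chat: hello") + frame(2, b"...file bytes...")
for channel, payload in deframe(wire):
    print(channel, payload)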
---
Re:Come on people, read the article. (Score:3)
And BXXP won't get rid of all the different protocols; what it might do is provide a common framing mechanism for protocols. That means you still have to do just as much work in protocol design, determining the semantics of each extension and flag and whatnot else carried within the framework. (And HTTP has some very well-developed and real-world-motivated concepts and semantics, e.g. in the caching, proxying, and transactional areas; don't expect those to disappear any time soon.) It could, I suppose, make it easier to build parsers, to which I say two things:
When I see proposal with too many layers... (Score:4)
...and all of them are incompatible with things that already exist, I suspect that some intent other than improving the protocol is present. For example, XML demands the use of Unicode (to be exact, the standard was designed to make it really hard to do anything else unless everything is in ASCII). HTTP/1.1 is one huge spec, with various wild demands made of applications across all levels of content, presentation, and transmission, which can't be implemented separately from each other while remaining compliant with the spec. Java is both a language and a bunch of libraries that can't be ported or reimplemented in anything but Java itself. And now a more or less reasonable proposal for one more level of multiplexing (compared to multiple TCP connections -- though one may argue it would be better to attack the problem at its root and build a congestion control mechanism that works across connections) is tied to the use of XML instead of the well-known, more efficient, and more flexible MIME.
Good ideas that can be implemented separately are used to push dubious-quality "standards" that provide no benefit beyond pushing useless, semi-proprietary, or simply harmful "inventions" by bundling them into the same spec.
Re:XML for framing ? (Score:2)
But doesn't XML have abbreviated forms as well? Like:
<stream-length/65535/
et cetera. So it's a little bit of a savings. Plus, there's two other factors. First, if this is just control data, it's still going to be tiny compared to the actual content. Second, I wouldn't be at all surprised if these XML-based protocols use a gzip-encoded stream or something.
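The gzip point is easy to check - repetitive control tags compress extremely well (the tag format here is invented):

import zlib

frames = b"".join(b"<frame channel='1' seq='%d'>ok</frame>" % i
                  for i in range(100))
print(len(frames), len(zlib.compress(frames)))  # compressed is far smaller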
Re:HTTP (Score:2)
The difficult part of phasing BXXP in will not be supporting it in popular web servers/browsers, but rather programming for the protocol itself. It only takes two companies/organizations - Apache and either Netscape or MS - to allow the protocol to be used on the Internet, on many machines. However, the development of useful content on the server side requires each web publisher to incur an additional expense. As such, there must already be a critical mass of supported clients available for it to be cost-effective for them to rewrite existing applications.
Re:Aging?? (Score:2)
I would say that HTTP does a good job of the basic work it was designed for, and I doubt it will disappear from that too soon. But as networked applications grow more and more complex, HTTP is going to be harder and harder pressed to adapt. Hopefully, simple apps will keep using the simple HTTP protocol, and newer, more complex technologies will have access to a protocol framework with the power and flexibility they need.
"Sweet creeping zombie Jesus!"
SOAP over BXXP would be great (Score:2)
I think SOAP mandates the use of HTTP, though.
Other things (like ICE) would work well over BXXP, though.
Performance Hit? (Score:3)
From what I can decipher, this seems to be a more extensible protocol, which would allow easy creation of network-aware applications. My question is, since there is an added layer of abstraction, wouldn't there be an overall performance hit?
Besides, wouldn't multiple "layers" of data on the same connection open tons of potential security risks?
Or am I off my rocker on both counts?
--
Aging?? (Score:5)
Inadequate, inefficient, and slow are adjectives I'd use for HTTP. Aging I wouldn't use. That implies that FTP and SMTP are both old and thus should be replaced. The age of a protocol doesn't matter. What matters is whether it does the job.
Rewriting the wheel, and the axle, and the car... (Score:2)
We have it right there - finally, a TCP/IP protocol that replaces TCP/IP! What'll they think of next? Maybe the next advance in computing technology will be emulating a 386 in my shiny Pentium III?
Replacing HTTP is a stupid thing to do because any "replacement" isn't a replacement for HTTP at all - it's a replacement for everything. HTTP does what it needs to do (and then some):
Re:NFS does not assume UDP is reliable (Score:2)
As for implementing "my own packet number in user code", what kind of argument is that? I could also implement some form of reliable byte streams based on UDP in user code fairly easily.
But by standardizing these kinds of protocols, we get several advantages. For example, we get interoperability. People don't have to reinvent the wheel every time, and the problem can be solved well once and for all (even a simple strategy like "resend if no acknowledgement" has many parameters, and it isn't necessarily even the right thing over a WAN). And with a standard protocol, routers and other infrastructure actually know more about the traffic and can help it along.
You are right that reliable datagrams aren't hard to implement: that's an argument that they should be standardized and become part of the TCP/IP protocol suite. Even though they are simple to implement, standardizing them has many advantages.
Re:Rewriting the wheel, and the axle, and the car.. (Score:2)
Man I love mixing metaphors. .
"Sweet creeping zombie Jesus!"
telnet host 80 (Score:2)
The nice thing about HTTP (and many other protocols) is that I can just telnet into the appropriate port and type the commands if I want to. I actually check my mail this way fairly frequently.
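For example, checking a web server by hand looks something like this (the hostname is a placeholder):

$ telnet www.example.com 80
GET / HTTP/1.0
Host: www.example.com

HTTP/1.0 200 OK
Content-Type: text/html
...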
BXXP may complicate this.....
Not going to replace HTTP (Score:2)
The protocol might be a good one. It's tough to tell from the article. (personally I'm skeptical of anything that jumps on the XML band wagon lately)
However, it's not going to replace HTTP for the following reasons:
Re:Have they checked M$? (Score:2)
I have a bunch of freckles. Is that a pigmentation fault?
reliable datagrams (Score:5)
The need for reliable datagrams ("block exchanges") comes up frequently for file servers and similar systems (like the web), as well as distributed computing, and not having such a protocol as part of TCP/IP was a major omission. For example, NFS simply assumes that UDP is reliable over a LAN, Plan 9 built a reliable datagram server into the kernel, and HTTP/1.0 just treated TCP as a (slow) reliable datagram service (send request/receive response).
An advantage of reliable datagrams is that they avoid most of the overhead of negotiating and establishing a TCP connection, so they can be almost as fast as UDP. Furthermore, reliable datagrams require fewer kernel resources to be maintained, and the send_to/receive paradigm is a better fit for many applications.
BXXP looks like it requires quite a bit of stuff: a TCP/IP implementation, the protocol implementation, and an XML parser. That's going to eat up a lot of space on small devices, and it's going to be unpleasant to implement and debug. The connections on which BXXP runs will also eat up kernel resources (file descriptors, etc.), both a problem on small devices and on very large servers, and it requires even more code and modeling to figure out when to bring up and tear down connections ("is the user going to request another page from that site?").
Furthermore, routers and other devices would have a much harder time understanding what is happening inside BXXP; that's important for things like multicasting and predicting bandwidth utilization. In comparison, reliable datagrams are a simple addition to a TCP/IP stack and require no protocols or parsing. And routers and other infrastructure can deal with them much more intelligently.
Plan 9 has a reliable datagram service called IL, although it is only optimized for LANs. People have also been working on reliable datagram services as part of IPv6, and for Linux.
overhead? (Score:2)
it will be interesting to see some http vs. bxxp benchmarks whenever there is code behind it.
--
multicasting (Score:2)
New protocols (Score:4)
The problems with getting anyone to adopt a new protocol can be summarised as:
To be adopted requires more than functionality. It requires a market. No users, no use.
Multicasting suffers a lot from this: partly because ISPs are die-hard Scrooges (minus any ghosts to tell them to behave), but mostly because the lack of users, plus the high cost of ISPs switching, will forever prevent this genuinely useful protocol from being used by Joe Q. Public. (Unless he really IS a Q, in which case he won't need an ISP anyway.)
Re:*AHEM* - BXXP is not intended to replace HTTP (Score:2)
I just hope they don't make BXXP a binary protocol. All the major app-level Internet protocols (HTTP, SMTP, FTP, POP, IMAP, IRC) are text-based, and it's one of those things that makes life much easier on developers.