HTTP's Days Numbered
dlek writes: "ZDNet is running an article in which a Microsoft .Net engineer declares HTTP's days are numbered. (For those of you just tuning in, HTTP is the primary protocol for the World Wide Web.) Among the tidbits in this manifesto is the implication that HTTP is problematic primarily because it's asymmetric--it's not peer-to-peer, therefore it's obsolete. Hey everybody, P2P was around long before Napster, and was rejected when client-server architecture was more appropriate!"
Yeah, but (Score:2, Offtopic)
-Peter
Re:Yeah, but (Score:2, Interesting)
Re:Yeah, but (Score:2, Informative)
What nonsense (Score:5, Insightful)
Yeah, I know: with the Linux hype over, some people feel very "objective" when they treat Microsoft's marketing schemes like the word of God.
However:
Will it become the standard on Windows? Sure, just like the Win32 API - just like any API Microsoft pushes.
Will it harm or endanger other operating systems? No. Worst case is that everything stays the same and Linux can't run Windows programs. Best case is that Mono allows Windows compatibility, which would benefit Linux greatly.
Re:What nonsense (Score:3, Insightful)
The important thing is the API, which will be incompatible with everything else. If it weren't, it would be very bad for Microsoft.
I suspect that .NET is incompatible with everything else, but projects like Mono could endanger this incompatibility and become a big problem for Microsoft.
Linux can only win. Worst case is that everything stays as incompatible as it is now; best case is that .NET becomes multi-platform, which would kill Windows in a few years.
The reality will probably be something in between: compatibility will be better than current Wine/Win32 compatibility, but not 100% - yet Linux could make big inroads on desktops too if Windows compatibility is good enough.
The way I see it is that Bill Gates and Steve Ballmer fell victim to a big delusion: that Windows could survive on a level playing field.
Re:What nonsense (Score:3, Interesting)
No, you don't. You are much more comfortable believing that .NET is a grand master plan invented by the evil geniuses at Microsoft to take over the world.
The reality is that Microsoft is just an ordinary company that makes a lot of mistakes and screws up a lot. Probably even more than other companies. Of course they make tons of cash, because IBM gave them the x86 OS monopoly. But *anybody*, no matter how incompetent, can make tons of cash with that.
Some things that might change your mind:
- Windows was not an evil plot to take over everything. Windows was essentially abandoned after v2.0, and Microsoft's strategy was to go with OS/2. However, a Microsoft employee played with it in his free time and laid the foundation for Windows 3.0, which was picked up after relations with IBM soured and OS/2 was delayed and delayed. So Windows was essentially an accident; if that employee hadn't played with it, OS/2 and IBM would probably dominate today! (Which would be much worse, BTW, because that's hardware and software.)
Most Microsoft projects were big failures: Windows/MIPS, Windows/PowerPC, Windows/Alpha, the "Homer" project, Modular Windows, the "Otto" project, MMOSA (a set-top-box operating system), WebTV, Blackbird/Internet Studio (1995), the proprietary MSN that was supposed to replace the Internet, COOL (C++ Object Oriented Language), PenWindows, Microsoft Bob.
In 12 months we can also add "XBox" to the list. Just look at the Japanese sales, which were so low that retailers cancelled orders IN THE FIRST WEEK of the launch!!
And I'm also very excited to see the European launch, where the XBox will cost nearly twice as much as the PS2.
(And unlike those other projects, the XBox is very visible and will dent Microsoft's reputation of being invincible.)
but I think that as you point out, reality does not include Windows being killed off "in a few years."
What's the point in running Windows if you can get all the apps on another platform, too? I don't think PC makers would keep sacrificing the 20% (and increasing) share of their revenue that goes directly to Microsoft if there were another OS that could run all the apps reliably.
And most people will use whatever comes on the PC. (as long as it runs their apps)
Nor do I understand why it necessarily has to die?
Did I say that? I said that it would die, not that it has to. I personally wouldn't care if Windows lived forever, as long as I could run all my apps on Linux. But on a level playing field, Windows simply doesn't have much of a chance. On a level playing field Windows will die, not because I want it to, but because of basic market forces. It's not like there is a big community around Windows that writes drivers. If the hardware maker doesn't do a driver, there will be no driver. Microsoft couldn't write all the drivers, even if they wanted to.
Just look at the server and embedded-system markets. There, almost all apps are available on Linux, so Windows is starting to fade, because there is no added value for the money. Of course a lot of companies use Windows on servers because they are used to it, but show me any startup company that uses Windows on servers.
Companies grow and shrink, come and go. Without startups, a platform is doomed to fail.
The same is happening in embedded systems. Even in the PDA sector (which is a tiny part of embedded), where WinCE should be strong, all new PDA designs are Linux-based. Show me any company that has *started* to produce PDAs in the last 2 years and uses WinCE - there are none.
In non-PDA embedded areas, WinCE's situation is even worse.
Re:Yeah, but (Score:4, Insightful)
Er, why? Am I not being advertised to in the most efficient, flashy manner?
Fuck, the majority of what I use the web for could be handled by Gopher, let alone this fancy pants HTTP protocol.
--saint
Re:Yeah, but (Score:3, Interesting)
Most of them are still on Windows 98 -- and don't see any convincing reason to change. They've seen four "new" versions of Windows, and many have tried them and gone back to 98. Microsoft's delivery mechanism has been Windows Update (which most people without broadband, or with a healthy sense of paranoia, disable) and Internet Explorer. With IE 5.5 now fairly usable and standardized, they'll need a new app to get
Microsoft's only vehicle now seems to be new systems from OEMs. Unfortunately, hardware is no longer the limiting factor for most users.
Hopefully... (Score:2, Insightful)
Re:Hopefully... (Score:2, Interesting)
Why do we still have TCP? Because of its ubiquity... A company would have to have even more arrogance than M$ to try to push a non-TCP-friendly network product on the market.
So why do we have HTTP, along with a million higher layer protocols riding on top of it, many of which might work "better" without relying on HTTP? Because every machine has either a web server or a web browser. Almost 100% supported, in some form or another. Even cell phones have a web browser.
Throwing away that instant compatibility would seem like insanity to anyone trying to push a new product.
Or, to put it another way, when did you last write an app that works directly on top of IP? Or even lower? Why not? All that "overhead" of TCP, or the unreliability of UDP, *certainly* leaves room for everyone's personal view of a "good" compromise...
(Of course, in this forum, I see a good chance of someone responding that they *have* done just that...)
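(For the record, here's roughly what "writing directly on top of IP" looks like - a minimal Python sketch using a raw socket. This is an illustration only: it needs root, the addresses are documentation placeholders, and protocol 253 is the experimental protocol number, so don't expect anyone to answer.)

import socket
import struct

# A raw IP socket: no TCP, no UDP - we build the 20-byte IP header ourselves.
# On Linux, IPPROTO_RAW implies IP_HDRINCL; the kernel fills in the checksum
# and source address if they're left as zero.
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)

src = socket.inet_aton("192.0.2.1")   # placeholder/documentation addresses
dst = socket.inet_aton("192.0.2.2")
payload = b"hello, bare IP"

# version/IHL, TOS, total length, id, fragment flags/offset, TTL,
# protocol 253 (reserved for experimentation), checksum (0 = kernel fills in),
# source address, destination address
header = struct.pack("!BBHHHBBH4s4s",
                     (4 << 4) | 5, 0, 20 + len(payload),
                     0, 0, 64, 253, 0, src, dst)

s.sendto(header + payload, ("192.0.2.2", 0))

Everything TCP gives you - ports, ordering, retransmission, flow control - you'd be reimplementing from here up. Which is the parent's point.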
As an author of HTTP... (Score:5, Informative)
Nah, the problem with HTTP is that it escaped from the lab a little early. As far as optimised hypertext transport goes we could do a heck of a lot better.
Don Box is one of the main authors of SOAP. Another author of SOAP is Henryk Frystyk Nielsen who was also a principal contributor to HTTP and spent several years working on HTTP-NG with Jim Gettys.
The problem HTTP-NG faced is that HTTP did the job too well, there was simply not enough incentive to change. And the folk at Netscape had zero interest in any idea that was NIH, in fact they would work to kill them because they saw anything that implied that others contributed to the Web as a personal threat.
But in any case there was a deployment problem, how does a browser know to upgrade to NG when it makes the request? The principal optimizations in NG were in compressing the initial request to one packet.
Web services answer this problem. All services are going to be supported on HTTP; the legacy infrastructure is too great. Some services will also offer alternative transports, possibly BEEP, possibly something like NG.
What I don't agree with at all is the 'peer to peer' confusion. At the protocol level all protocols have an initiator and a responder (possibly multiple responders in multicast). There can only be one first mover however. All peer to peer means is that any device may act as an initiator or a responder. That was actually the original HTTP model, everyone running a Web Browser/Editor would also have a Web server. The protocol describes only one half of the loop, but that does not mean it cannot be there.
HTTP will always be a client/server protocol but that does not mean that it can only serve the serf/master business model. We designed HTTP to democratise publishing, anyone could publish. In the early days of the Web we had more people publishing on the Web than using it to access stuff. P2P is intrinsic to the Web philosophy, it is not intrinsic to the protocols because that makes no sense. You can only have one initiator in a transaction, at the time we wrote HTTP we used to call the initiator a 'client'. Since then the nomenclature has shifted and SOAP is written somewhat differently, it was not possible to use the term initiator in 1992 even though we understood the issue.
Re:Could you explain your less know acronyms? (Score:5, Insightful)
Hey kid - there's these thingies called "search engines", you go type in words you don't understand and up will pop references to them and also typically definitions. Kinda like a self-help /. but where you don't bother the adults by asking them to explain every other word. There are even online indexes containing technology-industry standard acronyms and what they mean in nice little small words for folks like you.
Or is this all too complicated for you and I should repost with every noun hyperlinked?
In other news... (Score:2, Insightful)
Sigh... why do we bother listening to this kind of prediction, particularly from a source that is trying to control everything...
Read the article? (Score:4, Informative)
Re:Read the article? (Score:2, Redundant)
Re:Read the article? (Score:2)
I am getting a little confused.
Re:Read the article? (Score:3, Insightful)
Of course there are lots of other (bad) arguments for SOAP. Like the idea that XML is somehow superior to XDR, even though XDR is significantly more efficient, just as easy to write for, and already a widespread standard. (Hey kids... it even has an 'X' in it.. you know you like X's.)
There is the argument that SOAP is stateless and statelessness is better than stateful when it comes to RPC requests. Of course, statefulness will have to creep into SOAP sooner or later when someone decides they need ACID transactions.
There is of course the argument that it's easier to debug text protocols. This one I particularly love, as writing a binary-protocol-to-text converter isn't exactly the most difficult thing in the world to do. In fact, IIOP and RPC protocol debuggers already exist.. and they aren't all that big. Plus there is the fact that the amount of time spent developing the protocol is insignificant compared to the amount of time the protocol is used... therefore it makes perfect sense to put a little more effort into making it efficient.
Of course, there are the non-RPC-related uses of XML as a protocol too. Jabber seems to use it in a sort of odd, maligned way. Maybe the idea of describing your protocol in a description language (XML, IDL in the case of CORBA, UML, etc.) and generating code to do the protocol marshalling never occurred to people. It's not like extensible binary formats don't exist (the simplest being key/value pairs, which you can use to represent nearly anything - see the sketch below).
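(To make that last point concrete, here's a minimal sketch of an extensible binary format done as length-prefixed key/value pairs - the encoding and the sample pairs are invented for illustration. A reader can skip keys it doesn't recognize, which is all "extensible" really requires.)

import struct

# Each pair on the wire: 2-byte key length, 4-byte value length, key, value.
def encode(pairs):
    out = b""
    for key, value in pairs.items():
        out += struct.pack("!HI", len(key), len(value)) + key + value
    return out

def decode(data):
    pairs, i = {}, 0
    while i < len(data):
        klen, vlen = struct.unpack_from("!HI", data, i)
        i += 6
        pairs[data[i:i + klen]] = data[i + klen:i + klen + vlen]
        i += klen + vlen
    return pairs

wire = encode({b"method": b"getQuote", b"symbol": b"MSFT"})
assert decode(wire) == {b"method": b"getQuote", b"symbol": b"MSFT"}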
History just repeats itself over and over. The web is not the Internet. HTTP sits on top of general-purpose protocols. XML is a document-definition language. Java is not as powerful and flexible as C++, and when it finally becomes as powerful and flexible, it will be just as complex. Not everyone has the newest machine; you are not free to waste resources just because it makes your life a bit easier.
Of course, that's just my opinion, I could be wrong.
Re:Read the article? (Score:2, Informative)
"SOAP is good because it goes right through firewalls"
You're exactly right; MS is banking on the fact that most FWs have port 80 open. There is a good reason TCP was designed with so many ports - why not use them? Now admins are just going to have to spend more money on FWs that can tell the difference between normal web traffic and SOAP.
Well Duh... (Score:4, Insightful)
Before you rush to say Mickeysoft is destroying the web, please realize that he's referring to web services, not your personal home page (although I'd imagine they'd like to make that proprietary too).
not so sure about that... (Score:5, Insightful)
ever wonder why 99% of ANY urls you see start with an http? ever wonder why flash webpages don't start with something like mmfttp and shoutcast streams don't start with plsttp?
wonder.
Re:not so sure about that... (Score:4, Insightful)
I mean, let's take a connection-oriented protocol like TCP and add a text-based stateless protocol on top of it. OK, that makes sense so far... but wait, we want to be able to maintain state, so let's introduce this new concept called "cookies" and we'll use ASCII strings to identify things. And it would be nice to be able to make multiple requests per TCP session, so let's put together some keep-alive mechanism. Ooh, and I want to be able to talk to multiple servers on a given IP, so let's add a Host header field... But wait, all of this is transmitted in clear text! Let's engineer a set of encryption protocols to stick between our HTTP layer and our TCP layer. Here we'll solve some of the same engineering problems, like adding an SSL session ID to maintain state. Now, instead of requesting simple documents, how about we design an extensible markup spec to request "web services?" Yeah, that should work.
It is a testament to the design of the protocol that it's still ticking with all these enhancements (aka hacks.) But, all the layers add bits of overhead that could likely be engineered out if one had the luxury of starting from scratch.
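(To make the pile-up concrete, here's what one such request looks like on the wire - a Python sketch with a made-up host and cookie value. Each header line is one of the bolted-on layers from the rant above.)

import socket

request = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"        # virtual hosting: many servers, one IP
    b"Cookie: SESSIONID=abc123\r\n"     # state bolted onto a stateless protocol
    b"Connection: keep-alive\r\n"       # multiple requests per TCP session
    b"\r\n"
)

with socket.create_connection(("www.example.com", 80)) as s:
    s.sendall(request)
    print(s.recv(4096).decode("latin-1"))

(And the encryption layer doesn't even show up here - SSL wraps the whole exchange one level down.)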
Considerations for long duration (Score:3, Insightful)
Now, if you want to use HTTP to do P2P, then most likely you're only doing it so corporate users can flow out from behind their various firewalls without getting special permission from the IS department. Perhaps HTTP isn't right in those situations, but that doesn't mean something is wrong with HTTP. Put the blame where it's due.
What about things that P2P doesn't make sense for (Score:2, Insightful)
But... maybe I just answered my own question. Is this a thinly disguised attempt to revive the much-touted-a-few-years-back "push" technology for the web?
The 'holocaust' (Score:3, Funny)
http over used (Score:2, Interesting)
-jj-
Re:http over used (Score:2)
Of course, if they had a clue, they would know that you can encapsulate just about anything in HTTP.
HTTP needed replacement long ago (Score:3, Interesting)
There are obvious reasons to replace HTTP - the most obvious being the creation of true stateful transactions. That said, there will be support for HTTP until 2025 at least, and ultimately legacy support for HTTP will be painful and necessary for coders.
He's got a point (Score:2, Insightful)
By design (Score:2)
Re:He's got a point (Score:3, Insightful)
I said (Score:2)
What the title _should_ read: (Score:5, Insightful)
HTTP works great for a large number of purposes. It will continue to work great for a large number of purposes. However, it is not so great when you are trying to build powerful RPC mechanisms like SOAP on top of it. It's there that HTTP will slowly lose favor.
Your web browser will still be making HTTP requests for HTML documents many years into the future...
microsoft on interoperability? (Score:2, Funny)
Re:microsoft on interoperability? (Score:2)
the declaration has been made (Score:2, Insightful)
This is another example of M$'s ego. They keep trying to change the direction of personal computing and the internet, and seem to never have any thoughts about "well, what if this doesn't catch on..." What if they move all of their applications over to
NAT & Firewalls (Score:5, Insightful)
The Right Thing would be to get IPv6 out, make local client firewalls and sandboxing standard, and ditch NAT and central firewalls.
Yeah, right.
Instead we have SOAP, an RPC-over-HTTP kludge. We may as well run PPP-over-HTTP and have done with it...
No central firewalls? (Score:3, Informative)
A good centrally administered firewall won't stop you from doing what you need to do. If a protocol is well thought out NAT will not cause a problem.
Re:NAT & Firewalls (Score:2)
To secure a machine you shut down services. This closes open ports, and prevents untrusted user input. They hit a closed port, and they get dropped by the OS proper.
Client firewalls do the same thing, only they are not part of the OS proper, and they usually do packet inspection as well - spending much more CPU power to do something simple like dropping packets. What would I, as an attacker, do then? Right. I'd send odd packets at the firewall, trying to peg your CPU.
I may not control your machine, but I can probably make sure you can't use it.
Re:NAT & Firewalls (Score:3, Interesting)
Sadly, pretty much everyone agrees that HTTP isn't suited for the things it's being used for today, but everything gets built on top of it because port 80 is pretty much the only thing that's guaranteed to get through firewalls, most of which are stupidly configured and require the corporate equivalent of an act of Congress to get opened up.
By the way, Marshall Rose is relevant here for another reason, too: his proposed BEEP protocol is (IMO) a far better way to provide a multipurpose transport suitable for a wide variety of things. There's a good BEEP Q&A document [beepcore.org] by Rose on the beepcore.org site. We should be using things like BEEP to avoid having the same arguments and reinventing the same wheels every few years/months/weeks. BEEP seems to be gaining traction: the IESG recently approved APEX, the application datagram protocol over BEEP, as a proposed standard, and SOAP over BEEP was similarly approved last summer. Let's hope these get through the grinder in time to do some good, and that the true Internet standard of "Rough Consensus and Running Code" prevails over corporate landgrabs by Microsoft, et al.
Re:NAT & Firewalls (Score:2)
Re:NAT & Firewalls (Score:5, Insightful)
If you are behind a firewall that only allows HTTP, no. If it only allows outgoing connections, not really. All you can do is what HTTP can do, which is much less than IP can.
It would be really sad if the net were reduced to the web. "You say you need IP connectivity? Are you some kind of hacker?"
Re:NAT & Firewalls (Score:5, Insightful)
[Sheesh] Security and convenience will ALWAYS play off against each other. If you have locks on your house, you're not really getting the full benefit of a house!? Sure, locks make life more "inconvenient." But you trade some convenience for security. I close off all those ports because I don't know what might be used to exploit the openings.
Now, we'll get to arguing about packet filtering vs. proxy filtering and how proxies are better...blah blah blah.
In short, I want a BALANCE of convenience and security. Blocking some content (ports/hosts) is a way to do that. That's a good thing in a system that's set up right. Does your company let anyone into the building who wants to get in, and only disallow those it actively sees doing mischief? No (at least for your sake I hope) it doesn't. It asks: do you have some purpose here? Are you explicitly allowed? Then you get in.
Frankly, you can argue about NAT and unblocked connections all you want. What my clients want is functionality. The functionality of their network is compromised by security that is too open (too much functionality) on the Internet. They want the machines to work, the data to get processed, and to spend as little money as possible fighting battles. The solution is to open only that which needs to be open to accomplish the business objectives.
Cheers!
HTTP workarounds (Score:2, Insightful)
i thought basically all web development fell into this category
But seriously, i've been involved in projects that required using HTTP for purposes it was not well suited to - workaround is the name of the game. Old problems, lots of old solutions.
So well this looks like more
What's left out of the ZDNet article is ... (Score:2, Insightful)
However, the web-based UI could easily be implemented such that the actual communication between web services is done through any IP-based protocol. Right now HTTP is the one that jumps to most developers' minds, but it is by no means the one expected to be used for longer-running services. Personally, I would expect the web-based UI to interact with some running process that dispatches and receives Web Service data through a message-queueing system that provides some form of transactional validity and security. If it's a really long-running service, then this intermediary process could exist much like a state machine, and the web UI could get status updates by hitting that state machine and getting the appropriate response (i.e., "Still waiting to hear back from Microsoft's UDDI server!" or "Still waiting for that order to go through!"). A rough sketch of that arrangement follows.
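(A minimal sketch of that intermediary, with invented names and status strings: the web UI submits a job, gets a ticket back, and polls the state machine for status while a worker drains the queue.)

import queue
import threading
import time
import uuid

jobs = {}             # job id -> current status: the "state machine"
work = queue.Queue()  # stands in for a real message-queueing system

def slow_web_service_call(request):   # hypothetical long-running back end
    time.sleep(5)

def dispatcher():
    while True:
        job_id, request = work.get()
        jobs[job_id] = "Still waiting for that order to go through!"
        slow_web_service_call(request)
        jobs[job_id] = "Done."

def submit(request):
    job_id = uuid.uuid4().hex
    jobs[job_id] = "Queued."
    work.put((job_id, request))
    return job_id          # the web UI holds on to this ticket

def status(job_id):
    return jobs[job_id]    # what the web UI hits for its status updates

threading.Thread(target=dispatcher, daemon=True).start()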
So this will help from Mars? (Score:3, Funny)
This guy is baked (Score:2, Interesting)
Why does it need to be done at a protocol layer? If I need to submit something to a server that is going to take 5 days to get back to me, I should probably have an account with that server, and when I log back in, can get that information.
It sounds to me like he's fishing for an excuse to design a new protocol for a need no one really has.
Not obsolete - inappropriate for web services (Score:2, Interesting)
a) The way most of the Internet's IP infrastructure treats port 80 traffic will not allow long-duration web service transactions to work reliably (presumably because things like NAT mapping tables will get cleaned up before the transaction finishes)
b) Because the server can't initiate a connection to a client.
It's NOT talking about HTTP being unsuitable for pushing web server content around the Internet.
In related news... (Score:4, Funny)
Oh yeah, and all operating systems besides Windows XP are obsolete.
ROFL.
By the way, the funniest quote in the article was:
Microsoft has some ideas (on how to break the independence on HTTP)
Now that was a Freudian slip... ;-)
299,792,458 m/s...not just a good idea, its the law!
eXtensible Application Transport Protocol (XATP) (Score:5, Interesting)
I've seen other proposals for HTTP replacements and have been less-than-pleased by their complexity and design. Based on what I've learned from Jabber, and great feedback from many in the open source and standards communities, XATP was born:
http://xatp.org/ [xatp.org]
XATP, the eXtensible Application Transport Protocol, is very simple and geared to operate at a layer below content, identity, framing, and other application-level issues. Check it out and offer feedback or participate if you're interested.
Jer
Hmm, this sounds familiar (Score:4, Interesting)
Didn't Cringely claim several months ago that they were going to try to do this? Well, not quite, but back in August he wrote [pbs.org]:
So they decided to go up one level of abstraction. Hell why not, that way they break even more competing products.
Actually, Cringely is relying on a web of trust .. (Score:3, Interesting)
Just because I get an email from some machine, doesn't mean that it really originated there or that it wasn't maliciously crafted or altered by some sleeper virus.
You know why M$ wants to get rid of the TCP/IP stack, don't you? They didn't write it, and it works. It replaced their own, which didn't.
They want to stamp out any trace of non M$ code in their OS.
Maybe Bellcore or the Multics organization, or even IBM, should sue them for copyright or patent violation on the use of recursive structures like subdirectories.
If they rip out the stack, I predict a wave of new virus exploits the likes of which hasn't been seen yet.
Good points (Score:2, Insightful)
Seemed to me like Mr. Box raised some good points. Unfortunately, he works for Microsoft, which means that your first impression is "Oh my gosh, Microsoft wants to stamp out HTTP and replace it with some evil, proprietary protocol" (it was my first impression, anyway). Looks like it just means that we'll also be making requests like "newtp://blah.blah.blah" someday.
I'm in the middle of a project where the one-way nature of HTTP is a bit inconvenient at times so I can see where he's coming from.
Web services w/out HTTP? (Score:2)
Yet this is the whole idea behind Web services, the banner MS insists it waves and waved first. I should be able to issue a request into the ether, and there should be a server sitting there waiting to handle my request. I realize that he's talking about P2P apps, but is one arm of Microsoft not paying attention to what the other is doing? I agree that HTTP probably isn't the best way to send P2P messages, but it's not going away, at least if the Web Services division of MS has anything to say about it.
psxndc
A new FUD campaign, I swear (Score:3, Insightful)
Gee, I wonder WHAT shape that holocaust will take. Maybe it'll be a killer protocol that pursues and assassinates other protocols? Damn, Mr. Box, use the proper words, will you?
This works for small transactions asking for Web pages, but when Web services start running transactions that take some time to complete over the protocol, the model fails. "If it takes three minutes for a response, it is not really HTTP any more," Box said.
Well, of course it isn't. Is it, then, HTTP's fault that it doesn't work perfectly when used for stuff it wasn't designed to do? Hell, I'd love to see telnet-over-HTTP while we're at it.
"We have to do something to make it (HTTP) less important," said Box. "If we rely on HTTP we will melt the Internet. We at least have to raise the level of abstraction, so that we have an industry-wide way to do long-running requests--I need a way to send a request to a server and not get the result for five days."
Maybe if we got back to using the proper protocols (say, why don't we rely on ftp for transferring files, for example?), we wouldn't have the current "problem".
Another problem with HTTP, said Box, is that it is asymmetric. "Only one entity can initiate an exchange over HTTP, the other entity is passive, and can only respond. For peer-to-peer applications this is not really suitable," he said.
Of course it isn't, HTTP is designed with a client-server model in mind.
In my humble opinion, this is just the first step of a new Microsoft FUD campaign against HTTP: "First, we show everyone how HTTP isn't any good, then we roll out our brand new protocol that supports all of HTTP's capabilities and lacks its limitations. Buy it from us, your beloved Microsoft!"
"Microsoft has some ideas (on how to break the independence on HTTP), IBM has some ideas, and others have ideas. We'll see," he said. But, he added, "if one vendor does it on their own, it will simply not be worth the trouble."
This, of course, implies that Microsoft won't control the new protocol on its own... not at first. They'll just "embrace and extend" it later.
I think you're missing the point (Score:5, Informative)
Simply because this guy now works at Microsoft does not mean he has an agenda for evil. As a matter of fact, before working for Microsoft Mr. Box started a little company called DevelopMentor [develop.com]. He's also written a few books [amazon.com], one of which is considered *the* book on COM, Essential COM [amazon.com] - ask any COM developer worth their salt if they own a copy; they do.
I've known of Mr. Box for years now and truly respect him as a technical writer and developer, and I honestly don't think he would shill for Microsoft.
-Jon
I'm Glad to Hear this from Microsoft (Score:2, Insightful)
We need a wide range of new protocols for web services with security and scalability in mind while they are being developed. We don't want to use HTTP for more than HTML. We want to be able to control who does what, where and to whom.
I hope the
Good points (Score:2)
Client: Hey! I want X!
Server: ok...
[time passes]
Server: (X)
Client: got it!
In TCP, the "ok..." and "got it!" phases are implicit, in that TCP will tell you your message got through. That's a lot of overhead the protocol doesn't really need, though. In my Networks class we hear the End-to-End argument: end to end, the protocol should be designed to exhibit only the state and information transfer it actually needs. Using TCP is a shortcut, and lazy. Good for getting things working fast, not optimal in the long run. Just like the STL, but that's another rant.
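(For what it's worth, here's a minimal sketch of the client side of that exchange over UDP, with the "ok..." and "got it!" phases made explicit at the application level rather than inherited from TCP - the port and message formats are made up.)

import socket

def fetch_x(server=("127.0.0.1", 9999)):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5.0)
    s.sendto(b"GET X", server)    # "Hey! I want X!"
    ack, _ = s.recvfrom(1024)     # "ok..." - the request was heard
    data, _ = s.recvfrom(1024)    # [time passes] ... "(X)"
    s.sendto(b"GOT IT", server)   # "got it!" - explicit final ack
    return data

Only the state the exchange actually needs - and also a demonstration of how quickly you start rebuilding TCP (timeouts, retransmits) by hand, as the reply below points out.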
doh! (Score:3, Insightful)
Until you end up reimplementing half of it on top of UDP. Badly. And yes, I've seen this multiple times.
Enough with the NIH, please? There are many years of effort in the common TCP stacks, and many subtle things they do right that you'll miss the first dozen implementations.
For the love of god, if you need a substantial subset of TCP's features, and can live with the overhead, use TCP!
bxxp, I mean beep (Score:2, Informative)
www.beepcore.org [beepcore.org]
UDP and Parchive (Score:2)
While I'm certain a lot of this is about positioning Microsoft to control the 'next' major form of web transport, the engineer in question is right about one thing... HTTP is overused.
A lot of P2P stuff could be a lot more efficient and resource-considerate if it used UDP-style transmission, like email and some online games do, rather than 'virtual circuit'-style TCP connections. Another sweetener to add to the pot is to use Parchive (PAR)-style error correction on your datagram packets in order to be more tolerant of faults, etc...
sender transmits udp0-7; udp7 is lost by the receiver; receiver requests par(0,7); sender transmits it; receiver regenerates udp7 from the parity; sender transmits udp8-999 with no further PAR requests, without ever trying to figure out whether the receiver got all those packets.
It's async. It's resource-considerate, and it could do a great deal to ease downloads over a p2p architecture.
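(A sketch of the recovery step described above, using the simplest possible PAR-style scheme: one XOR parity datagram per block of equal-sized packets, from which any single lost packet can be rebuilt. Packet sizes and contents are invented.)

from functools import reduce

def parity(packets):
    # XOR all packets in the block together, byte by byte
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

block = [bytes([i]) * 8 for i in range(8)]   # udp0..udp7, 8 bytes each
par = parity(block)                          # par(0,7), sent on request

survivors = block[:7]                        # udp7 was lost in transit
recovered = parity(survivors + [par])        # XOR of survivors and parity
assert recovered == block[7]                 # udp7 regenerated, no resend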
Re:UDP and Parchive (Score:2)
Huh? SMTP, POP, and IMAP all use TCP.
Embrace and Extend (Score:2)
Or maybe not - I suppose one way to look at it is if big biz, sucking from the MS teat, all herd off onto MS's "better" protocol, the rest of us can continue to use HTTP without the control freaks (read: corporations) trying to own it.
"Guru" my ass (Score:2, Interesting)
It was designed to --- get this, kids --- deliver WEB PAGES!!! I once heard rumors of this other evil, called ftp, that was once used as a "file transfer protocol!!" The nerve of some of those early networking types!
Really, though... there are just about as many potential protocols as there are potential uses. So http doesn't lend itself to your insecure, privacy-invading, microsoft-enriching wet dream of global domination. GET OVER IT or LEARN TO USE AN APPROPRIATE PROTOCOL. Really!
I don't see my linux box making http requests every time I want to mount an NFS share, so why should microsoft's next weapon use it to rob me of my money? HAH! Guru indeed...
I don't think so (Score:2)
Usual MS FUD (Score:5, Interesting)
And of course I am obsolete, since I refuse to view MS products as anything other than toys. Admittedly, by now, toys that actually have some level of stability and can be used for some (limited) tasks without too much hassle. But as long as they insist on sitting on their island (admittedly a large one, but unstable and plagued by document-rot), I will not consider their products "professional" in any sense.
Incidentally, the only argument in the article (aside from the "argument" that P2P is better than client-server, given as dogma) is that there are problems with transactions that have connection times of several minutes. I am sorry, but I don't see how that makes http obsolete. First, these long transactions are not that common, and second, they work fine. Or are we heading towards an Internet where a telnet/ssh connection will be terminated after 3 minutes because the backbone cannot cope?
Pure FUD, as far as I can tell.
Re:Usual MS FUD (Score:2)
I still don't quite get the argument. Of course HTTP is not really suitable for RPC. That much can be deduced from its name. Is anybody except MS using it for RPC?
And I still don't see the problem with the long connections.
Editor trained at Weekly World News? (Score:2)
Inflammatory headlines and spinning of the stories really does no one any good, and we have to slog through dozens of responses by people frothing at the mouth because they only read the intro paragraph.
HTTP Info (Score:2, Insightful)
Was designed to transfer hypertext, not be the end-all-be-all RPC transport of the Internet.
Microsoft and MANY others made the big mistake of using it as their protocol of choice for everything Internet-related.
Using HTTP as a catch-all protocol defeats the whole purpose of having different ports if everything is on 80. It makes administration a headache, and it lulls people into a false sense of security.
(Oh, it's only HTTP, we can leave that open...what did you say about a SQL Server HTTP interface? And the SA password is blank on your local development system?)
HTTP, The HyperText Transfer Protocol; use it for what it was designed for.
HTTP is good if used for the correct purpose. (Score:2)
Time for new thinking.. (Score:2)
I guess Microsoft needs to think outside the box.
Proper use of the word "hack" (Score:2, Funny)
The "Cockroach" (Score:2, Insightful)
Of course, that is as it should be. Even bad standards have a tendency to live much longer than anticipated and good standards are rarer than hen's teeth. As a good standard, HTTP rightly deserves a long and fruitful life.
The nefarious implication is that Microsoft is pushing their own proprietary replacement for HTTP in order to lay down their infamous hammerlock on the 'net, just as they have on so many other sectors of the industry.
While the engineer raises some fairly valid points regarding the applicability of HTTP to alternative networking models such as P2P, I'm sure that most people will read these comments as a thinly veiled plot to extend Redmond's Global Dominance (TM) - and I'm not sure that they would be mistaken.
Certainly, the issues mentioned regarding high-latency network operations smack of the distributed applications model of
While few would (should) argue that HTTP has room to grow, and may ultimately be supplemented or even supplanted by other standards, I am very leery of such spin coming from such a notoriously anti-standards organization.
Be afraid. Be very afraid.
Stateful vs. stateless (Score:5, Insightful)
The problem with HTTP, as with any stateless protocol, is that there often are (or should be) relationships between requests. Ordering relationships are common, for example, as are authentication states. Stateless protocols are easier to implement, and thus should be preferred when such "implicit state" is not an issue, but in many other situations a protocol that knew something about state could be more efficient. All of this session-related cookie and URL-munging BS could just go away if the RPC-like parts of HTTP were changed to run on top of a generic session protocol.
Another error embodied in HTTP - and it's one of my pet peeves - is that it fails to separate heartbeat/liveness checking from the operational aspects of the protocol. Failure detection and recovery gets so much easier when any two communicating nodes track their connectedness using one protocol and every other protocol can adopt a simple approach of "just keep trying until we're notified [from the liveness protocol] that our peer has died". This is especially true when there are multiple top-level protocols each concerned with peer liveness, or when a request gets forwarded through multiple proxies. As before, having the RPC-like parts of HTTP run on top of a generic failure detection/recovery layer would give us a web that's much more robust and also (icing on the cake) easier to program for.
I don't know if any of this is what Don Box was getting at, but in very abstract terms he's right about HTTP being a lame protocol.
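(A minimal sketch of that separation, with invented port numbers and timeouts: one thread owns the question "is the peer alive?" via a trivial PING/PONG protocol, and everything else just retries until that layer says the peer is dead.)

import socket
import threading
import time

peer_alive = threading.Event()
peer_alive.set()

def liveness(peer=("peer.example.com", 7001), interval=5.0, max_misses=3):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(interval)
    misses = 0
    while misses < max_misses:
        try:
            s.sendto(b"PING", peer)
            s.recvfrom(16)          # expect b"PONG"
            misses = 0
        except socket.timeout:
            misses += 1
        time.sleep(interval)
    peer_alive.clear()              # every other protocol just consults this

def call_with_retry(do_request):
    while peer_alive.is_set():      # keep trying until liveness says dead
        try:
            return do_request()
        except OSError:
            time.sleep(1.0)
    raise ConnectionError("peer declared dead by liveness layer")

threading.Thread(target=liveness, daemon=True).start()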
Mis-information (Score:2)
The article does not say that HTTP's days are numbered. It simply says that HTTP does not work for RPC, and it is completely correct. If I may quote from Don Box, the
"However, there is nothing wrong with HTTP per se, as its ubiquity and high dependability means it is the only way to get a reliable end-to-end connection over the Internet"
That doesn't sound to me like he's trying to get rid of HTTP, or that it's going away.
HTTP is perfect for what it is used for, but when you get into things where you need real-time processing of data both ways, HTTP simply does not work well. This is all that Don Box is saying.
Remember folks, just because it's Microsoft does not mean they are always wrong or evil.
problems with microsoft + HTTP (Score:2, Insightful)
In a very abstract view, HTTP could be an RPC protocol, but it isn't the same kind of RPC that Sun RPC or even Java's RMI (Remote Method Invocation) covers. Sure, you can send data back and forth and even cause the server to perform some action, but that isn't the design of the protocol. Unlike RPC, HTTP provides no inherent mechanism for passing arbitrary objects -- only text. There is no marshaling of data types at the protocol level. The protocol isn't designed to be used by an application to do anything but retrieve data.
With XML there is some standard mechanism for packaging arbitrary data types to be sent over HTTP, but this isn't an inherent part of the protocol. The unpacking and reconstruction of these is still at the application level (at best the interpreter of the call will do it so the programmer doesn't have to think about it), and the web server's primary purpose won't be marshaling of data types -- just executing the requested file (assuming it's a CGI-type object) or returning the contents of the file for a normal web page.
There's more to RPC than just a request and a reply -- generally more than just a few functions are made available, while HTTP only really has GET, POST, HEAD, and maybe CONNECT for proxy servers. How these are handled is up to the server author -- in the case of Microsoft, they want to think of it as RPC; are we surprised that they have so many security flaws in IIS?
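(For reference, this is what that "RPC" looks like on the wire: an XML envelope in an ordinary POST body, with all marshalling happening above the protocol. The host, namespace, and method here are made up - a sketch, not anyone's real service.)

import http.client

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getQuote xmlns="urn:example:quotes">
      <symbol>MSFT</symbol>
    </getQuote>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection("soap.example.com", 80)
conn.request("POST", "/rpc", body=envelope, headers={
    "Content-Type": "text/xml; charset=utf-8",
    "SOAPAction": "urn:example:quotes#getQuote",
})
print(conn.getresponse().read().decode())   # the app, not HTTP, unpacks this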
NEWS: Single Protocol Not Good For Everything (Score:2)
I guess you need to be a Microsoft employee to have an article written about stating the obvious. It's like saying Radio is not good for sending Television broadcasts.
Hmm, not exactly... (Score:5, Insightful)
It is symmetric, though. If you stick a server on both end points, then you just send a request down one side and get a response down the other. I think the *real* thing people are complaining about is that it is not *stateful*. If I send you 10 requests, once those connections are closed there is no way to determine in what order or when the responses will come back, and there is no inherent state tracking in the HTTP protocol...
BUT, to the half-clueful application developer, there IS extensibility in the protocol, which means you could just use your own header, "X-REQUESTID: 214132dbbcdee43221c", or whatever, and track them that way (quick sketch below).
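(In practice that's one header line - a sketch with a made-up URL, reusing the example ID above. Any HTTP server will carry the header through untouched, since unknown headers are simply ignored.)

import urllib.request

req = urllib.request.Request(
    "http://www.example.com/query",
    headers={"X-REQUESTID": "214132dbbcdee43221c"},  # correlate this request
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:80])             # match reply to request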
That being said, however, I think this article is not so much about application developers using HTTP as about people with some other agenda attempting to pull the wool over the eyes of the reporter. Quotes like "If people can't search the Web they call the IT department, so the IT department makes sure HTTP is always working." just sound stupid. People don't take calls because their protocol has gone down; they take calls because their servers have gone down.
Right after that comes the quote, "We have engineered the hell out of it". WTF? HTTP is one of the most simplistic protocols I have ever seen. You could implement the entire RFC in an afternoon. (Maybe, however, MICROSOFT has spent a lot of time "engineering the hell out of it" in its attempts to twist it into something less usable.)
What this article looks like, to me, is just a way of explaining to the non-programmer that M$ is smart and forward-thinking because they have foreseen the "melting of the internet" by the backward and old-tech protocols. Any *real* application programmer knows that if you want to use HTTP as a transport, you don't just sit there and leave the connection open for a week while the server finishes processing whatever you were doing; instead you send a message that says "do X", the server responds *instantly* with "okey dokey, I will", and then it calls you back later with the results. This is just common sense.
It has begun (Score:3, Insightful)
Why bother with HTTP, FTP, SMTP, POP, IMAP, etc when they control most of the clients and almost half the servers on the Internet. They could replace all those with their own set of protocols or, more likely, a single MS-specific protocol. They say they're already working on some new RPC solution right here in this article. It isn't too hard to imagine them introducing this WindowsProtocol on the server and in some beta of MSIE. Then MSIE starts to try to use WindowsProtocol for any network communications before falling back to the standard protocols. In 3-5 years when they're up to 60% or 70% of the server market, server side Windows has an option that is default "on" that disables non-WindowsProtocol connections and client-side Windows starts asking the user if they want to enable connections to "legacy" services, while warning them that it isn't Microsoft so it can't be good. After that, who would run a server that can't accept connections from 90% of consumer computers?
Of course I don't want this to happen, but what's to stop them? I doubt the <5% of us who realize it's wrong will be able to.
Microsoft's days are numbered! (Score:3)
Maybe I'm just getting a little George Carlin-grumpy lately, maybe it's because I'm writing a eulogy for a friend's funeral, maybe it's because I'm sick of people at MS attempting to form competent sentences (please, stick with those inspired dance routines!), but please tell me: whose days aren't numbered?
HTTP has its issues, but referring to it as "the cockroach of the internet," saying its days are numbered, and then saying that MS has a P2P solution(!) just goes to show that not only are they power-hungry in Redmond but seriously power-tripping...
Arrgghhhh....
Reporter's line of logic. (Score:5, Funny)
2. Since the web runs using HTTP, http runs the Internet.
3. HTTP can't do everything the Internet can offer.
4. While there are other protocols out there (like ftp, p2p, telnet), only hackers and pirates use them, so they must be insecure.
5. Therefore, we must change http or the Internet is doomed.
duh (Score:5, Funny)
Try sending an email to MS customer support
HTTP is being replaced with... (Score:5, Funny)
Microsoft will be announcing the Microsoft Transfer Protocol
exaggeration (Score:3, Interesting)
"If we rely on HTTP we will melt the Internet. We at least have to raise the level of abstraction, so that we have an industry-wide way to do long-running requests--I need a way to send a request to a server and not the get result for five days."
If I am reading his statement correctly, Box feels HTTP is not suitable for processes that take long period to get a response. Even if you remove HTTP layer from SOAP, you would still have a problem. Say some one decides to by pass HTTP, use raw sockets and establish persistent connections. This means a stateful application has to be built on top of SOAP. I'm just guessing, but if Box is saying RPC has to have sessions and be stateful, that isn't a full solution. If a process like "place a stock order for MSFT when the price is less than 50.00 buy," a stateful application may not be the best solution. It might take 1 day or 2 months for the price to drop below $50.00.
Microsoft is a supporter of XLang [coverpages.org] which tries to address the problem of stateful transactions. One of the problems of this approach that I can see is it is limited in scalability and timeout. Once you say all transactions need to be stateful, what is an acceptable timeout? Do all transaction require the same timeout period? What are the rules governming timeout, persistence, and garbage collection of invalid/expired states?
Why not use event based protocol with rules to provide a higher level of abstraction than XLang. The way XLang treats transaction is with case statements. On the surface that sounds fine, until you realize for every company you do business with, you will have to add cases to handle new situations, which rapidly makes the system harder to maintain. EBXml in my mind uses a better approach, which divides business and functional logic and suggests rules as a possible mechanism. HTTP isn't really the problem for long processes (as in weeks and months). A better solution is event based protocol, so using HTTP isn't a big deal. This doesn't mean there are cases where HTTP is really bad for transactions. Cases where response time is a huge factor in processing a transaction, a persistent connection would be more suitable. Things like day trading applications where response time affects the price, you would be better off using persistent connections for RPC. It would suck for a day trading application to loose a buy order because there was a sudden spike in requests and the system couldn't reconnect to send confirmation. Having a persistent connection in this case is the best solution, because response time has to be rapid.
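(The event-based idea in miniature, with an invented order and price feed: instead of holding a transaction open for days, you register a standing rule and let incoming events fire it - no session, no timeout bookkeeping.)

rules = []

def standing_order(symbol, limit):
    # the rule *is* the persistent state; nothing is held open anywhere
    rules.append((symbol, limit))

def on_price_event(symbol, price):
    for rule in list(rules):
        rule_symbol, limit = rule
        if symbol == rule_symbol and price < limit:
            print("BUY %s @ %.2f" % (symbol, price))
            rules.remove(rule)        # one-shot order, consumed when it fires

standing_order("MSFT", 50.00)
on_price_event("MSFT", 52.10)         # nothing happens
on_price_event("MSFT", 49.95)         # fires: BUY MSFT @ 49.95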
http == qwerty (Score:3, Insightful)
Read between the lines (Score:4, Funny)
"We have to do something to make it (HTTP) less important," said Box. "If we rely on HTTP we will [Never Own The Internet Right Down To The Roots]. We at least have to raise the level of abstraction, so that we have an industry-wide [Monopoly] way to do long-running requests--I need a way to [Make Money Writing Books On How To Use Our Protocol].
Eh? (Score:3, Funny)
No doubt so the server can be rebooted three or four times...
solution (Score:3, Funny)
they can start implementing it in IE and Windows first, then over time completely remove support for things like HTTP.
i think i just threw up in my mouth.
Protocols never die (Score:4, Insightful)
5-day-long requests are an expensive order (Score:3)
From the article:
I need a way to send a request to a server and not get the result for five days.
How about email?
As so many people have said, the whole problem comes from an over-reliance on HTTP. If you need the result in 5 days, you probably need some other kind of service.
However, his complaint about the time frame of HTTP requests has deeper implications than he perhaps realizes. For example, if your request takes 5 days, you'd better be ready to compensate your content provider for machine usage, because it must be extremely resource-intensive. (Maybe MS Passport could help out.)
If I'm requesting a reply over a 5-day time frame, ideally I would not need to have my machine powered up to receive the reply, as most machines are turned off daily. So some kind of asynchronous protocol with intermediate storage -- like email -- would be required.
So, we need a service that checks for the latest server responses whenever you start it up, and automatically keeps track of how much you should be charged for each transaction. Actually, I think an HTTP/SMTP implementation would not be badly suited, at least with a Free Software application server doing the heavy lifting. (See another posting on MS and intellectual "property" sharing.)
A new PHP function:
function do_5_day_request_and_charge_for_it($user, $args)
{
do_lots_of_stuff($args);
charge_lots_of_money($user);
}
I'll write that function if you promise royalties off each function call
Of course, if you wanted a seriously secure system, you would either require credit card info beforehand or require payment before issuing the response (at least for new users) to discourage fraud.
Horse before the cart... (Score:3, Interesting)
What I want to know is this: what is going to replace http? The article really doesn't say, other than alluding to p2p as the way of the future.
Now, I may agree that p2p will be way cool as its uses are just barely beginning to be explored but I don't think we will see http disappear any time soon. I wouldn't be surprised if five years from now things are essentially the same as they are now in this respect and http is still a staple of many things web related.
And this:
Why? If you make a request and it takes that long to get a response... your clients will search for a different source for the data. How many customers do websites lose just for loading slowly in the first place? You might as well use snailmail for that kind of stuff.
So... mod me down if you must but somebody please explain what Box is talking about here.
Pfeh (Score:3, Interesting)
Old standards have a way of hanging on, even when there are superior replacements. Look at all the strange, vestigial crap in PC hardware. Look at NTSC video, or 8.3 filenames. You'd be amazed how many large companies keep important data in VSAM files instead of real databases. People might start using new standards for new applications, but the old standards will still cling on in the old applications, or even in new applications that must interact with old ones.
Are there exceptions, where old technology was phased out quickly? Sure. But the cost of change can be high, and business people generally want technology that is Good Enough rather than following the latest and greatest trends just for their own sake.
You should see the old tech that is still used in the financial sector. In those parts, the rule of thumb is "If it ain't broke, don't fix it." When you are dealing with people's money (and government regulations thereof), the cost of botching a change is very high.
--mkb
Careful what you wish for (Score:4, Funny)
*chuckle* (Score:5, Interesting)
Thanks for the laugh. It's always good to be reminded just how out of touch
Re:zdnet.com.com? (Score:2)
The link isn't wrong. Com.com is the CNET Networks portal, so it's essentially C|Net, which in turn owns ZDNet.
Re:zdnet.com.com? (Score:2)
Indeed. "http://www.com.com" is owned by CNET [netsol.com]. A pretty dumb way to name your site, but
It doesn't look like it's used directly, though; I get 'connection refused'.
Re:Well (Score:4, Interesting)
However, we don't necessarily use MS tools, which would require us to obtain an MS license for our development, our server, and all the clients we wish to serve...