HTTP 1.1 approved by W3C and IETF

fabbe writes "The HTTP 1.1 protocol has been approved by the W3C and IETF. CNET article is here. " Both bodies apparently showed ruthless efficiency getting these standards out there... speeds that make even glaciers jealous.
  • by Anonymous Coward
    Gee, I dunno...

    You might have to wait until a couple of years ago.

  • by Anonymous Coward on Thursday July 08, 1999 @09:56AM (#1812892)
    I was watching a show called "Flightpath" on the Discovery Channel last night and they were showing all sorts of nifty things about the new YF-22 fighter aircraft. This is the new hybrid stealth fighter that is going to replace the F-15 for general fighter missions.

    This plane has been under development more or less for the past 19 or 20 years. Although there are 9 or so flight ready versions of this plane which exist TODAY, there will not be any in actual military service for about 5 more years. Some people have described these planes as the most complex machines ever built. They HAVE to work perfectly because if they don't, the pilots will probably be killed and their strategic value will be lessened.

    When much of the software being used today is labelled "mission critical", don't you think it should be well thought out and done PROPERLY as well? Perhaps not 25 years, but don't you find it amazing how stupid it is to spend so much money on software that only partially works?

    The notion of "internet time" is complete and utter BS, invented solely to give software companies excuses for shoddy software and to make "lazy programmers" seem more productive.
  • Actually, the requests themselves still look like

    GET /foo.html HTTP/1.1

    But now among the headers that the browser sends in the request, you'll find one like:

    Host: www.foo.com

    That's what the web server uses to determine which virtual host to use. Apache has supported this for a long time; have a look at the NameVirtualHost directive.

  • When a browser makes an HTTP/1.1 request, it sends along a Host: header which specifies the domain name it wants. The server sees this and responds accordingly.

    In the case you're describing, the header would say Host: 123.123.123.123. If the server administrator had anticipated this and set up a virtual host for the hostname 123.123.123.123, that's what the user would see. Otherwise, I guess it's undefined. Apache just directs it to whichever virtual host is defined first in the configuration file.
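A rough sketch of the dispatch both comments describe, with made-up hostnames and document roots (Apache's actual matching is more involved):

```python
# Minimal sketch of name-based virtual hosting: pick a document root
# based on the Host: header of an HTTP/1.1 request. The hostnames and
# paths here are invented examples, not any real server's config.

def parse_host(request):
    """Extract the Host: header value from a raw HTTP request."""
    for line in request.split("\r\n")[1:]:
        if line.lower().startswith("host:"):
            return line.split(":", 1)[1].strip()
    return ""

# The first entry doubles as the default, mirroring Apache's behaviour
# of falling back to the first vhost when the name is unknown.
VHOSTS = {
    "www.foo.com": "/home/foo/htdocs",
    "www.bar.com": "/home/bar/htdocs",
}

def docroot_for(request):
    host = parse_host(request)
    return VHOSTS.get(host, next(iter(VHOSTS.values())))

request = "GET /index.html HTTP/1.1\r\nHost: www.bar.com\r\n\r\n"
```

Requesting by raw IP simply means the Host: value matches no name, so the default (first) vhost answers, exactly as described above.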

  • One thing I do like about it is the ability to use multiple names per IP address. But this sort of kills the elegance of design of domains going from TLD, First level domain, and so on.
    That's up to the administrator :-). You could also use this to host www.foo.domain.org, www.bar.domain.org, and www.baz.domain.org from the same physical machine.

    I wonder what the speed difference is with the fact of concatenating packets into streams rather than placing 1 packet per 1 stream. I'd guess that for small servers it would be trivial but for large ones the change would be enormous.
    By "packet", I think they're referring to individual files. If that's the case, then it may provide a noticeable improvement for people with higher-speed connections. It tends to be a lot faster to do one big transfer that can build up some speed than a bunch of little transfers, even for the same number of bytes.

  • HTTP 1.1 persistent connections allow "pipelining", so new requests can be sent while previous ones aren't answered yet. The HTTP 1.0 Keep-Alive extension doesn't allow that, and this is why it doesn't allow the nasty DoS attack that is possible with HTTP 1.1 -- sending a request that will definitely take a long time to process, then sending a large number of requests for large responses (possibly large by themselves), expecting the HTTP server to grow either its input or output buffers to a ridiculous size.

    Since there is no flow control in HTTP 1.1 other than TCP's, and TCP's can't be used because requests should be received without blocking the client, and the server can't send responses to later requests before the response to an earlier one, the server can either buffer everything in the input and delay processing, or process the requests simultaneously and buffer the output -- either way the buffers will grow. The alternative is to discard requests, but just as with other DoS cases there are no "definitely safe" conditions for doing that.

    HTTP 1.0 with Keep-Alive requires as many TCP connections as the number of requests in progress simultaneously (so the routers along the way and the kernel buffers in the sender may not like it), but the end result is healthier for the server -- until the server has said something, the client won't try to reuse the same TCP connection and leave the server wondering what to do with the new request.

    I don't know which servers actually support "pipelining" in HTTP 1.1, but this DoS was the main reason why I was unable to add HTTP 1.1 support to my fhttpd -- while I have found what I consider to be reasonable solutions to other DoS problems in HTTP, for this one every cure that comes to my mind looks worse than the disease.

    HTTP 1.1 could solve this by introducing higher-level flow control or by multiplexing simultaneous replies (the first would create reasonable safeguards against overloading, the second would eliminate the source of the problem), but the HTTP 1.1 developers in their infinite wisdom did neither.

    And this is not the only flaw in HTTP 1.1.
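Pipelining itself is just writing several requests back-to-back before reading any response. A toy sketch (the host and paths are invented, and no real I/O is done):

```python
# Toy illustration of HTTP/1.1 pipelining: several requests are
# serialized back-to-back into one byte stream, and the server must
# answer them in request order. Host and paths are made-up examples.

def build_pipeline(host, paths):
    out = []
    for path in paths:
        out.append(
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "\r\n" % (path, host)
        )
    return "".join(out).encode("ascii")

batch = build_pipeline("www.example.com", ["/slow.cgi", "/big1", "/big2"])
# The DoS described above: if /slow.cgi stalls, the later requests pile
# up in the server's buffers, because responses must come back in order.
```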

  • HTTP is a transfer protocol; it's mostly orthogonal to HTML or any other document format. Compare the differences in results from the number of ways one can use to retrieve email (unix mail files, mh mailboxes, every possible mailbox format up to Hudson and Squish, POP, IMAP, M$ Exchange) with the result of differences in standards in email message format (RFC822 with plain text, MIME in all of its incarnations, widely accepted deviations and possible encodings, HTML in email, windoze-specific file formats in email attachments, etc.).
  • Well it's actually pretty close.

    For each packet of data the web browser itself sent, a new stream of data had to be opened.

    Since the web browser itself doesn't do ACKs on packets, all the data the web browser sent was requests.

    Of course, that doesn't change the fact that they did use the wrong terminology throughout the article :)

    --
  • All major web servers implemented HTTP 1.1 looong ago. It's only now, however, that the bodies have actually approved of them as standards.
    Which raises the question...
    What use are standards when everyone's using a newer version the standards body hasn't yet ratified?
  • If this is, as you describe, a standard challenge-response protocol, then it isn't all that secure.

    First, the server has to store your password so that it can work out the "correct" answer to your challenge. Compare this to Unix passwords: an attacker can read everything on your hard drive and still not know what to present in order to log in.

    Worse, if I can intercept just one exchange with the server I can start trying to guess your passphrase. Passphrases tend not to be very well chosen, so guessing attacks are rather too effective and it's important to make them as difficult as possible.

    Anyone who needs to use networked passwords should implement both of the following techniques if at all possible:

    Key stretching: http://www.counterpane.com/low-entropy.html
    SRP:
    http://srp.stanford.edu/srp/index.html

    These papers make clear the problems that arise if you *don't* use these techniques...

    --
    Employ me! Unix,Linux,crypto/security,Perl,C/C++,distance work. Edinburgh UK.
  • "Version 1.1 also allows for the secure transfer of passwords"

    hows uncle sam going to take to this? is it a
    munition? crypto laws suck...
  • by pb ( 1020 )
    What's wrong with HTTP 0.9? All I really *need* is "GET /" anyhow... :)

  • Commitment to open standards is all very well, but the standards bodies themselves need to commit to ratifying those standards while there is still a business case to use them.

    Products and markets won't wait if a 'proprietary' solution gets the job done now - budgets need to be spent.
  • If that's the case, then it may provide a noticeable improvement for people with higher-speed connections.

    It can help slow connections even more by saving the latency of the three way handshake and by not having to do slow start (with more latency) for each file transfer.

  • HTTP 1.0 required a new stream for each packet of data sent. But HTTP 1.1 can send multiple packets along the same stream, speeding the flow of information on the Web.

    A new TCP stream for each packet? No wonder the world wide web is so slow! :P

    I wonder if it's hopeless to think the media in general can get terminologies correct.

  • I'm confused - I thought HTTP 1.1 was old news. Don't a bunch of servers and web browsers already support this? I know IE 4.0 does - under the advanced options, you can select whether it uses HTTP 1.1.
    Timur Tabi
    Remove "nospam_" from email address
  • They comment on how this new standard will speed up transfers, but does anyone have an idea of how much? Considering many consumers are still limited by bandwidth on their end, it generally won't get faster for them, but mostly more efficient transfers before it ends up with them. Correct? Or am I just entirely missing the point. =]
    Yes :) It's not about bandwidth, it's about latency (mostly; let me explain). Each connection needs to be initialized (the TCP "3-way handshake" is time consuming), then "TCP slow start" gets in the way, and in the end small objects (pages or small gfx) never reach the full speed of your connection.
    Browsers try to hide the problem by opening many connections at a time, so that connection establishment can be overlapped with transfers and transfer speeds sum up.
    The point of HTTP/1.1 is to transfer many objects back-to-back using a single TCP connection, so you pay connection time and slow start only once for all those objects. Browsers will still be allowed to open many connections at once, but only up to two per server, to be nice to others :)

    Also, does anyone know how it's going to allow multi-domains on single IPs? Almost sounds like a redirect of some complex (or lack of complexity) sort. Mayhaps the daemon will take the domain requested, and decide from there? The domain of the URL you requested will be sent in the HTTP request. It is already done right now (with HTTP/1.0 ?), but will be _required_ by HTTP/1.1.

    What if you just typed in the IP address? Will it default to some domain? I find this pretty confusing, but I'm no expert.
    That's up to the server.

    But since connections are made to IPs, not really domains (Or so I thought), I'm just slightly lost on this one.
    You're correct, and that's why it needs to be specified in the HTTP request headers.

    Btw, how new is all this ? Last time I checked, Apache 1.3.x is already HTTP/1.1 compliant, no ?
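The round-trip arithmetic behind the latency argument can be sketched numerically (the RTT and object count are made-up figures, and slow start is ignored for simplicity):

```python
# Back-of-envelope latency comparison: N small objects fetched over
# N separate connections vs. one persistent HTTP/1.1 connection.
# Figures are illustrative only; TCP slow start is ignored.

RTT = 0.1          # seconds per round trip (assumed)
N_OBJECTS = 10     # objects on the page (assumed)

# HTTP/1.0, one connection per object: each fetch pays roughly
# 1 RTT for the TCP handshake plus 1 RTT for request/response.
http10_time = N_OBJECTS * (RTT + RTT)

# HTTP/1.1, one persistent connection: one handshake,
# then 1 RTT per request/response.
http11_time = RTT + N_OBJECTS * RTT

saved = http10_time - http11_time   # what persistence buys you
```

Even in this crude model the page loads almost twice as fast, and the gap widens as objects shrink, which is why the win is about latency rather than bandwidth.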

  • It's not strong crypto. It's an MD5-based thingy. It DOES beat the hell out of the piece of crap base64 wannabe encoding used right now. I wrote a server that supported it a while back ... but for some reason cannot remember how it works right now. Bummer. Go get the RFC.

    /dev
  • It takes a few roundtrips to set up a socket, plus some more roundtrips to get the window size > 1, so the server can send more than one packet before waiting for an ack. Until that happens, you're essentially in half-duplex mode.
  • But under HTTP/1.1, it looks more like

    GET / HTTP/1.1
    HOST www.example.com

    That should actually be "Host: www.example.com"

    With HTTP/1.1 we get MD5 encoding.

    Are you thinking of Digest authentication? Unfortunately I have yet to see a browser that implements it. I have implemented it in my own client that uses the PUT command, but I cannot yet switch the server to use it because no other client knows how to do it.

  • (Not sure this post really merits a response, but:)

    These are the people that invented HTTP/1.1 in the first place. It has been in development for years, and software developers have implemented what's been ready so far, but all the details hadn't been finalized. Those years were spent ironing out every last little contradiction, ambiguity, and other problem that could be found in the spec, so that we'll be able to use it as long as possible into the future. Your "market" has been looking to these people to make the decisions, because anyone who writes network software knows we need standards, and anyone who's developed much software at all knows that underlying systems must be planned carefully, or they'll fall apart sooner rather than later. The people who made HTTP/1.1 include connections with all the major players in the "market".

    By the way, if you think the W3C and IETF are irrelevant, you don't know much about the Internet. And random, vague political bashing doesn't score much in the brains department.

  • 'IPv5' was called ST2 - a connection-oriented protocol that supported QoS. However I don't think anyone really uses it now with a few specialised non-Internet exceptions.
  • It's a kinda neat trick using MD5 hashes. The client has to find the MD5 fingerprint of a string that includes the username, password, and an arbitrary string the server provides. The result? Passwords never pass in cleartext, replay attacks are short-lived (since the server changes that arbitrary string regularly), and no strong crypto is needed since all you're using is a strong hash function which is only useful for authentication anyway (hence Uncle Sam doesn't try to treat it like a munition). See RFC2617 for details.
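The RFC 2617 computation that comment describes looks roughly like this (the username, realm, password, and nonces are invented example values):

```python
# Sketch of the RFC 2617 Digest response: the password never crosses
# the wire, only an MD5 hash bound to a server-chosen nonce.
# All credential values below are made up for illustration.
from hashlib import md5

def h(s):
    return md5(s.encode("utf-8")).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    ha1 = h("%s:%s:%s" % (user, realm, password))   # the secret half
    ha2 = h("%s:%s" % (method, uri))                # the request half
    return h("%s:%s:%s" % (ha1, nonce, ha2))        # what goes on the wire

r1 = digest_response("mufasa", "testrealm", "secret", "GET", "/dir/x", "abc123")
r2 = digest_response("mufasa", "testrealm", "secret", "GET", "/dir/x", "def456")
# A fresh nonce yields a different response, which is what limits replays.
```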
  • The problem is there are 8 versions of HTTP/1.1, including RFC2068, six IETF drafts, and RFC2616. Lots of implementations are against RFC2068, and there are a LOT of problems with that version that are addressed in the later ones (the Expect and TE request headers, even MORE excruciating cache-control detail, etc.)

    Also, since HTTP/1.1 isn't being exercised very well by most current clients, the 1.1 support code in most servers is only lightly exercised and often is buggy/incorrect/broken.

  • One thing I do like about it is the ability to use multiple names per IP address. But this sort of kills the elegance of design of domains going from TLD, first-level domain, and so on. But if I'm interpreting it correctly, it should kill a lot of the refer jumps. I wonder what the speed difference is with the fact of concatenating packets into streams rather than placing 1 packet per 1 stream. I'd guess that for small servers it would be trivial but for large ones the change would be enormous.
  • Incidentally, I haven't seen too many servers that support this method of authentication, but the http kioslave supports it.
  • http://www.gcn.com/gcn/1998/July13/cov2.htm
  • Apache just directs it to whichever virtual host is defined first in the configuration file.
    Actually, you can also select a "Default" NameVirtualHost in the apache.conf file (and it can be something "completely different" from any other NameVirtualHost if you want it to be)
  • Answer to: "allow multi-domains on single IPs?" Currently, under HTTP 1.0, an HTTP request does not include the hostname as part of the request. The requests look like:

    'GET /foo.html HTTP/1.0'

    Because the hostname is not included, the web server that responded to the socket request on that port/IP combination would have to serve pages from its default htdocs root directory. With HTTP 1.1, the requests are going to include the hostname. Don't quote me on the syntax, but they might look something like this:

    'GET http://www.foo.com/foo.html HTTP/1.1'

    With this format, the web server knows the request was for a website named 'www.foo.com', and can look into the appropriate htdocs root directory. And all of this can be done using a single port/IP combination.

    -jason
  • by jg ( 16880 ) on Thursday July 08, 1999 @09:44AM (#1812921) Homepage
    It wasn't quite as glacial as one might think. The draft standard was approved in March; the RFC was issued recently when the RFC editor caught up on backlog. The internet drafts have not had a significant change for nearly a year. Most vendors have been working to the IDs for a long time.

    HTTP/1.1 has already been pretty widely deployed: this was the approval of the draft standard, rather than the proposed standard.

    As to performance stuff, see:
    http://www.w3.org/Protocols/HTTP/Performance/

    As to recovering IP addresses, most clients have been sending the host name as part of the request using the Host header for a long while. This means you can distinguish different web sites without depending on the IP address to distinguish them.
    - Jim Gettys
    HTTP/1.1 editor.



  • The previous draft versions of HTTP 1.1 are flawed and were never fully standardized. Implementations of draft HTTP 1.1 are inconsistent and partial. What has just been standardized is the actual full, final version of HTTP 1.1. It addresses many issues ignored by RFC 2068 (draft HTTP 1.1).
  • The people behind IETF & W3C amaze me. Seriously, can you imagine how hard it is to get a world-wide collection of people, some of which don't even speak the same language, to actually agree on technical issues? It has to be damn difficult. Thus the slow painful process of getting the standards passed through.


    I wonder if those organizations are built using the same ideas of the original routing protocols :) Route around trouble, with no built-in solution to thrashing.


    Seriously, can such an organization keep up with the explosive growth of the Internet? Will IPv6 get out before I need my toaster to have an IP address? And does anyone know where IPv5 went?


    So many questions.... --Mid

  • I assume this means that Netscape 4.6 supports HTTP 1.1, if everybody has had it, and this is just an official pat on the head. Am I right?

    Just curious,

    Joe

    ...Software Programmers are constantly trying to come up with the next best idiot proof software. The Universe, in the meantime, is trying to come up with the best idiot.
    So far the Universe is winning.
  • It really is all about latency. This matters even more if you are behind a firewall like I am all day.
  • by Chronoforge ( 21594 ) on Thursday July 08, 1999 @10:29AM (#1812926)
    I'll try to answer a couple of the big questions I've seen here from RFC 2616 (HTTP/1.1) and the Apache docs.

    Virtual hosts:
    Currently, a request looks something like
    GET / HTTP/1.0

    But under HTTP/1.1, it looks more like
    GET / HTTP/1.1
    HOST www.example.com

    This way, the webserver knows what domain to serve the request from.

    Now, this assumes that you referred to the webserver by name. If you refer to it by IP, the request looks like:
    GET / HTTP/1.1
    HOST 192.168.12.27

    Or, if you're using an HTTP/1.0 browser, the request would be
    GET / HTTP/1.0

    In either case, Apache (I don't know about other servers cause I don't use them) will serve the request from the first VHost that matches that IP -- see http://www.apache.org/docs/vhosts/name-based.html

    Thus, if you're running a children's educational site and a pr0n site on the same IP, not only are you an idiot, you should have the server direct older browsers and non-DNS users to a page that says 'Oops' (and possibly a list of the sites you serve).

    Authentication:
    Under HTTP/1.0, the only supported authentication mode was basic. The username and password were base64 encoded for transmission, but not encrypted.

    With HTTP/1.1 we get MD5 encoding.

    This whole message will make a lot more sense if you read the Apache docs, RFC 1945 (HTTP/1.0, it's shorter than 2616[HTTP/1.1], and good for the basics), and RFC 2617 (Basic and Digest HTTP Authentication).

    --
    Dave Richardson
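To see how little Basic buys you, the encoding is trivially reversible. A sketch using the well-known example credentials from RFC 2617:

```python
# Basic authentication is base64, not encryption: anyone who can see
# the header can decode the password. The credentials here are the
# example pair from RFC 2617, not real ones.
import base64

creds = "Aladdin:open sesame"
header = "Basic " + base64.b64encode(creds.encode("ascii")).decode("ascii")
# header == "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="

# Decoding is just as easy, which is the whole problem:
decoded = base64.b64decode(header.split(" ", 1)[1]).decode("ascii")
```

Digest fixes this by sending only an MD5 hash of the credentials plus a server nonce, so a snooper never sees anything directly reversible.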
  • HTTP 1.1 supports request pipelining, so that the browser could (conceivably) make one big request for everything on a page, and the server would then start sending data as soon as it can. This is similar (possibly the same?) to "keepalive"

    I think it takes 3 packets to set up a TCP connection (correct me if I'm wrong), and since you don't have to open a new socket for each request, you can save quite a few packets.

    Also, I'm not sure, but I think that with pipelining, the browser only needs to send the request headers once for all the requests in a given pipeline. If this is true, it could significantly cut down on traffic.

    It's nice to see that "governing bodies" in cyberspace suffer from the same problems as governing bodies in meatspace... particularly insane slowness.

    -nate
  • It doesn't necessarily. I think in most server setups you choose what the default host is, and that is the one that the IP will map to. In practice it's not a big problem, since most people don't go typing or linking IP addy's in their web pages.

    Oh.

    Except for the various piracy scenes.

  • There's a reason they've issued some 6-7 draft revisions of HTTP 1.1, so I'm glad they took their time and got it right.

    Everyone supports it already anyway, so what's it to you.

  • That's all fine and dandy...
    What happens if someone requests http://123.123.123.123 and there are 4 domains registered to it? I don't understand how that would work.
    If there's a kid's cartoon and a porn site on the same IP, how does the server send the proper page back to the browser?
    Anyone willing to offer an explanation?
  • For anyone wondering how this works, basically the browser has to request the entire URL on a GET, e.g. -
    GET http://www.yourdomain.com/
    as opposed to just GET /.


    That's not how it works. The browser sends a separate "Host" header:

    GET /foobar
    Host: www.somedomain.com

    The "GET http://www.somedomain.com/foo" form is used only if talking to a proxy server.
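That distinction is easy to sketch: an origin server gets a path plus a Host: header, while a proxy gets the absolute URL in the request line. A toy parser (hostnames and paths invented):

```python
# Recover (host, path) from either HTTP request form. An origin server
# sees "GET /foobar" plus a Host: header; a proxy sees the absolute URL
# in the request line itself. Example names are made up.
from urllib.parse import urlsplit

def host_and_path(request_line, host_header=""):
    method, target, version = request_line.split()
    if target.startswith("http://"):
        parts = urlsplit(target)            # proxy form: absolute URL
        return parts.netloc, parts.path or "/"
    return host_header, target              # origin form: path + Host: header

origin = host_and_path("GET /foobar HTTP/1.1", "www.somedomain.com")
proxied = host_and_path("GET http://www.somedomain.com/foo HTTP/1.1")
```

This is also why a hand-rolled proxy "works" by parsing full URLs out of requests, as another commenter discovered: that form is exactly what browsers send to proxies.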

  • "That caused the database to overflow and crash all LAN consoles and miniature remote terminal units, the memo said." (from the article at http://www.gcn.com/gcn/1998/July13/cov2.htm)

    Both the ship control app and NT had a problem :

    The control app had the problem with a division by 0, which is quite dumb IMHO.

    NT had a problem with instability, otherwise the consoles wouldn't have crashed, would they?

    And further in the article you can read:

    "But according to DiGiorgio, who in an interview said he has serviced automated control systems on Navy ships for the past 26 years, the NT operating system is the source of the Yorktown's computer problems."

    So is this an Urban Legend? We may have a response in a century ;)
  • right and i think that w3c/ietf etc. need to address this. while they are in quorum arguing, standards get set by vendors, including standards which may be designed to benefit CERTAIN platforms. the standards bodies need to streamline and speed up the process - look at what's happened to xml: multiple competing standards, no clear direction, and vendors creating weirdo solutions such as wddx to work around the snail's pace of the w3c
  • I seem to recall hearing something a while back about NT-based systems on a Navy missile boat crashing and the ship being basically defenseless as a result. I'd love to hear the captain calling MS tech support and being told to download a patch. Heh.
  • by coyote-san ( 38515 ) on Thursday July 08, 1999 @11:23AM (#1812935)
    The long ratification times can be annoying, but it's better than the alternatives. We're all frustrated at the Netscape and MSIE "extensions" to HTML. Imagine now that HTTP was also changing as frequently.

    How much time do shops waste getting "compliant" MSIE browsers and "compliant" Netscape browsers to render the same source document into reasonably close facsimiles? How many shops give up and have separate MSIE and Netscape trees?

    Now multiply that by the HTTP 1.0 server, the 1.1 server, the 1.1b server, the 1.2 server, the 1.3 server, and the 2.0 server. You would either see development slow to a crawl, or a lot of shops simply announcing that they would support a single server/client pair. The one that is bundled with every PC sale due to its unquestionable (*not* 'unquestioned'!) technical excellence. *cough*

    Gee, maybe Microsoft is right and having a strong Imperial hand *does* help competition. King Bill could have simply announced that everyone shall use HTTP 1.1 (after paying another $200 for the privilege of serving his liege, of course) years ago, and by now we would be running HTTP 1.4 complete with 'Microsoft' 'innovations' such as push technology, dedicated "channels", and Lord Bill knows what else.
  • Can somebody confirm this? I sort of assumed they really meant every file. Or am I confusing stream and connection?
  • Sure, they're doing quite well - because everybody else is paying the cost of lost productivity from MSWord macro viruses, Win95 crashes, et cetera - and that's for non-critical applications (no one with any sense should be using MSWord or Win95 for critical applications.)

    The net economic value of Micro$oft's crappy software is certainly negative. Unfortunately, PHB's buy into the marketing.

  • HTTP/1.1 must have been around FOREVER before becoming a standard now, because without reading any RFCs, I've been building a proxy which is based on parsing

    GET http://some.domain.or.ip:port/

    from the HTTP request (otherwise, how the heck does a proxy know who to connect to to "get" stuff??)

    At least Netscape always formats requests like this...
  • Although I don't disagree with your statement that software companies should be focusing on bulletproof software for mission critical applications...when is the last time a mission-critical piece of software cost 100s of millions of dollars (not to mention 20 years of R&D funding), had the capability of defending/attacking a nation, and whose bugs couldn't be fixed with a downloadable patch? There's sort of a difference. Although, as with Moore's Law, just because they CAN get away with it, doesn't mean they SHOULD.
  • Well according to my big book Unix Network Programming, IPv5 is the Internet Stream protocol.
    I have no bloody idea what that is :)
  • Here are a few more links for more information about HTTP and some neat things that are being done with it...

    • Get the latest dirt from the World Wide Web Consortium [w3.org].
    • RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1 ( text [rfc-editor.org], PostScript [isi.edu], PDF [w3.org])
    • Berkeley's TranSend [berkeley.edu] service is a cluster of workstations working together to act as a massive HTTP proxy. This proxy "transforms" Web pages based on the client's settings. Was the basis of the ( now-commercial [proxiweb.net]) Top Gun Wingman [berkeley.edu] Web browser for the PalmPilot.
    • The Anonymizer [anonymizer.com] acts as a proxy that strips out all the unwanted/unneeded header lines that your Web browser sends.

    I had started hacking together an HTTP/1.1-compliant proxy in perl [perl.org] that did on-the-fly compression if the client supported it, but I never got around to completing it. Initial results were impressive, especially when it was paired with a caching proxy like Squid [nlanr.net] or a CacheFlow [cacheflow.com] box. Of course, with DSL and cable modems getting more widespread use, people like myself that are still pinned to a 33.6k connection are being left behind.

    Caching/compressing/proxying is still in widespread usage outside North America (most notably Australia and European countries). Their problem was (is!) outrageous access prices and relatively slow overseas connections, so they've been using caching for a long time to help solve it. The US and Canada have solved their "problem" of Web pages not instantaneously loading by throwing more bandwidth at it...

  • One thing I do like about it is the ability to use multiple names per IP address.

    Apache already has support for this, and a lot of the content providers out there are using it to save on IP addresses. For anyone wondering how this works, basically the browser has to request the entire URL on a GET, e.g. -
    GET http://www.yourdomain.com/
    as opposed to just GET /.

    I wonder what the speed difference is with the fact of concatenating packets into streams rather than placing 1 packet per 1 stream. I'd guess that for small servers it would be trivial but for large ones the change would be enormous.

    Just in case there's still any confusion.. it's one file per TCP connection in HTTP 1.0. HTTP 1.1 adds a "Keep Alive" feature that can be sent to keep the connection open.

    - coug

  • They comment on how this new standard will speed up transfers, but does anyone have an idea of how much? Considering many consumers are still limited by bandwidth on their end, it generally won't get faster for them, but mostly more efficient transfers before it ends up with them. Correct? Or am I just entirely missing the point. =]
    Also, does anyone know how it's going to allow multi-domains on single IPs? Almost sounds like a redirect of some complex (or lack of complexity) sort. Mayhaps the daemon will take the domain requested, and decide from there? What if you just typed in the IP address? Will it default to some domain? I find this pretty confusing, but I'm no expert. But since connections are made to IPs, not really domains (Or so I thought), I'm just slightly lost on this one.

  • Enter HTTP/1.1. Because you can re-use the connection for multiple objects, you only need to open _one_ TCP connection to the server to download everything. Less overhead means faster downloads, period.


    Actually less overhead will mean webservers will become more scalable for high trafficked websites.
  • HTTP 1.1 requires the browser to transmit the domain name it has requested information from in the GET header detail. Therefore, with every connection from a browser the browser is actually telling the webserver what domain it should be serving. This is distinct from HTTP 1.0 which did not have this requirement.

    I believe that Apache has had the capability to use this principle for some time.

    If you just type in the IP address, then that is what your browser will report to the webserver which can then act accordingly.

    That's my two penneth anyway!
  • Wasn't IPv5 a strictly experimental streaming protocol?

    I seem to remember reading an article about it in the dim & distant past.

    j.
  • Most servers do, but most clients don't. IE40 says that it does in the advanced settings, but it just seems to be another microsoft interface element that wasn't wired up to any actual code. I've tested IE40 with 1.1 on, and my server logs the requests as 1.0.

    IE5 does it properly.
