ARIN: No More IP's For IP-Based Virtual Hosts 249

Mike writes: "ARIN (the guys who hand out IP addresses) has a policy change where they will no longer allocate IP addresses for IP-based virtual hosting. They are expecting everyone to move to name-based hosting now. ARIN is soliciting comments on their public policy mailing list: ppml@arin.net. What do you guys think? Is name-based virtual hosting ready for prime time?"
  • by posilipo ( 123198 ) on Tuesday August 29, 2000 @10:29PM (#817390) Homepage
    APNIC (Asia Pacific NIC) has had a "move to host header" policy for a while now, and when we ask for more addresses (we presently have a request for a large block in with them), they want to see your network address plan, and they want to see how many host header boxes versus how many IP'd webservers you have.

    Host header, as dirty a word as it is, seems to work fine (we use Micro$oft IIS, ugh) - oh, there's one sticking point: you can't bundle per-virtual-server anonymous FTP access on the domain name for clients. This minor problem aside, I think it's a good thing. The number of boring web sites we have wasting IP addresses haunts me every time I open that address database...

  • by Matt Ownby ( 158633 ) on Tuesday August 29, 2000 @10:32PM (#817391) Homepage Journal
    They wouldn't have any problem with IP-based virtual hosting if there were more IPs floating around than people knew what to do with.
    I predict IPv6 will see a return to IP-based virtual hosting.

    Name based hosting isn't a bad idea though, since most people use a browser that supports it nowadays.
  • by posilipo ( 123198 ) on Tuesday August 29, 2000 @10:34PM (#817392) Homepage
    http://www.stud.ifi.uio.no/~lmariusg/download/artikler/HTTP_tut.html [ifi.uio.no] read that, it explains the HTTP protocol. Basically, host header webservers host multiple sites (different domain names, e.g. "http://www.example.com" and "http://www.fred.com") on the same IP address. They decide which site to send to the client based on the HTTP request itself, rather than purely on the DNS lookup.
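    To make that concrete: the only difference between requests for the two sites above is the Host: line (hostnames are the example ones; the server never sees which DNS name you actually resolved):

```http
GET /index.html HTTP/1.1
Host: www.example.com

GET /index.html HTTP/1.1
Host: www.fred.com
```

    Both requests arrive on the same IP and port; the server picks the site purely from that header.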
  • by Skorpion ( 88485 ) on Tuesday August 29, 2000 @10:35PM (#817393)
    For normal (http) virtual web sites, hostname based virtuality is OK. But it isn't OK for https (SSL secured) web servers. A web server certificate is issued for name and IP and you can't have two of those on one IP.

    I think moving to name-based virtual servers is a good idea in general, but the https problem needs to be resolved first.

    Alex

  • Name based virtual hosts won't work for FTP or anything that relies on the reverse DNS name of the host, e.g. IRC. But for HTTP sites it is great. Actually, IP based virtual hosts for pure HTTP sites should be banned. (But I guess that is what they are doing ;))

  • by listuser ( 204477 ) on Tuesday August 29, 2000 @10:37PM (#817395) Homepage

    My letter to Arin:

    Sure you can do web hosting with named virtual hosts, several hundred sites per IP, and it works fine. But what happens when sites start hosting more and more SSL secured websites (i.e. https://store.example.org/)? Since SSL works at the transport layer, you cannot host multiple domains off of one IP address. Will an exemption be made for this (i.e. I need a CIDR because I want to host a lot of secure websites)? Making it harder for people to implement SSL secured websites will only hurt the Internet, making it a much less secure place to do business, and ultimately stifling growth (well, a little bit anyways). Thank you for your consideration.

    Kurt Seifried, Senior Analyst
    SecurityPortal, your focal point for security on the net
    http://www.securityportal.com/ [securityportal.com]

  • by Markonen ( 56381 ) <marko.karppinen@fi> on Tuesday August 29, 2000 @10:37PM (#817396)
    Plain name-based virtual hosting is acceptable for "bulk" or low-end hosting, but there's still plenty of situations where you run into trouble without using separate IPs.

    For example, the hosting provider I work for sets up dedicated Apache installations for each customer -- and this policy gets hailed as heavenlike by our customers, since they're free to install any extensions they could possibly need (or even completely switch servers). With current technology, it's tricky at best to implement something like this with name-based virtual hosts. We would need to run our private address space internally and then have a HTTP-level metaserver to distribute the HTTP/1.1 name-based queries to the right servers.

    Also gone are access lists on the router level. Dedicated ftp/smtp servers listening on the same IP as the site. I could go on forever.

    To the credit of both ARIN and RIPE (ARIN's equivalent in Europe), they seem to be on top of this. If a company DOES use a single Apache for a thousand sites, I think it's justified to ask them to use less than a thousand IP numbers. However, this is a grey issue, and the organizations have been understanding in situations where there really is a need for IP-based virtual hosting.

    IP numbers are not assigned for administrative ease, and that's ok. But the issue of name-based or IP-based virtual hosting isn't about convenience yet. It's still about functionality.
  • Yep, address shortages are the reason; due to the numbers of hoarded addresses, many NICs have nothing to do but hoard more addresses. That's not to say there aren't a lot of un-allocated CIDR blocks sitting out there, however...
  • by kris ( 824 )

    As far as I understand SSL, you must use virtual interfaces to host SSL web servers. How does the policy change affect these servers?

    Also, TLS is supposed to fix that. Which browsers implement TLS correctly?


    © Copyright 2000 Kristian Köhntopp
  • I have to say that I don't quite understand how this works, because AFAIK when you make a GET, you just request a file, but don't tell the server the whole URL (or the hostname, for that matter). So how does it know which of the virtual hosts you refer to? Is this a feature of HTTP 1.1? And even if not, could this be a problem with old browsers that assume that there is just one website per IP address and port?

  • AFAIK certificates are issued for hostnames, not IPs, so there really isn't a problem here.

  • by kinkie ( 15482 ) on Tuesday August 29, 2000 @10:40PM (#817401) Homepage
    Secure sites can't move to name-based virtual hosting, as site and key selection takes place before a single HTTP header line is sent.
    In other words, a secure site requires a unique IP address.
    So as a general policy it's pretty dumb, unless exceptions are made for secure sites, and from the announcement it doesn't seem so.
  • As far as I can tell, it's only due to the limitation of A/PTR resource record mismatches that SSL doesn't work on host-header. The SSL key is actually registered under a domain name, not an IP address.
  • by sparks ( 7204 ) <acrawford@laetabili s . com> on Tuesday August 29, 2000 @10:41PM (#817403) Homepage
    RIPE (The European allocation authority) has had this policy for a few years now. You *can* get space assigned for IP virtual hosts, but there's a "special application procedure" in place, meaning you have to justify each assignment and get approval from RIPE staff.

    The fact is that the Host: header has been a part of HTTP for a very long time now, and the number of HTTP clients which don't support it is trivially small - certainly not enough to justify the vast acreages of IP space it eats up. IP virtual hosting is an idea whose time has gone.

  • It does not really seem that they will stop giving out IP addresses for IP-based virtual hosting altogether. At ARIN, just like at RIPE [ripe.net], they are just _strongly_ discouraging IP-based virtual hosting, in favour of name-based VH. You can see here [ripe.net] for a discussion about IP-based VH at RIPE.
  • I just came upon that problem today, as I was debugging an issue with 2 of our co-branded services on our website. Every time I'd enter the secure area I'd get the default certificate, and therefore the dreaded "certificate error" box popping up.

    I traced it to the fact that we are using name-based virtual hosting: In our Intel 7110 SSL accelerators (nice hardware, but one nasty lingering bug) one cannot assign 2 SSL certificates to one IP address in the mapping table. I expect the same to happen when using software SSL under Apache/mod_ssl or anything else.

    So now I'll have to somehow pull those 2 cobranded sites out of name-based hosting and into IP-based, add to the DNS tables, modify the Cisco LocalDirector tables, and fix the Intel 7110 mappings. That's going to be ugly.

    IMHO for large ISPs that use a lot of SSL, name-based hosting is not an option. Also consider the fact that Verisign has finally simplified somewhat the process by which one can request massive numbers of SSL certificates, and with the new SSL hardware accelerators that came onto the market it becomes rather easy to aggregate all those certificates and manage them from one central point.
    They should reconsider their expectations that everyone move to name-based hosting. For some it's not an option.
  • by Ed Random ( 27877 ) on Tuesday August 29, 2000 @10:49PM (#817406) Homepage Journal
    In the HTTP/1.0 spec, sending a "Host:" header with your GET request was optional. In HTTP/1.1, it became mandatory.

    This means that all requests from your browser to websites will look something like this:

    GET /index.html HTTP/1.1
    Host: mydomain.dom
    <blank line to terminate request>

    This is kind of similar to using a proxy: when you tell your browser to use a proxy, it sends 'absolute URLs' instead of the 'relative URLs' in my example above. That way, the proxy knows which server you are really trying to reach.

    I think that name-based virtual hosting is a great thing (I run 3 domains off my single IP).

    Unfortunately, I can only run 1 SSL-capable secure website on that same IP address since the SSL handshake needs to complete before the request is interpreted at the HTTP level.

    And I have another issue: I want to run a "reverse proxy" (multiple physical webservers, possibly running different OS's) with name-based virtual hosting. I haven't found a way of doing that [with Apache] yet.

    --
    Greetings,
    Ed.
  • The problem is that the IPv4 addresses are running out; in other parts of the world we have had this policy for years, since IP addresses are even harder to get here than in the US. I guess it's about time to start using IPv6...
  • by mariab ( 198250 ) on Tuesday August 29, 2000 @10:52PM (#817408)
    they may be issued on a name basis, but the problem here is that SSL is a transport: you have to negotiate the SSL link, complete with the certificate, before you get to talk to the actual web server... at this point the server doesn't know which web site you are looking for, and therefore has no way to know which certificate it should send.

    Where I work right now, SSL is one of the biggest problems, we have 5 servers here running host-header based virtual hosting, but we have had to set aside relatively large chunks of our IP space to cater for the customers who want SSL.

    To top this off, the SSL-hosting IPs can only do one thing each, and cannot be accelerated by our caching system ... a single SSL site on one server generates 3 times as much traffic as the whole of the other sites on that server, because the normal sites can be accelerated, SSL can't.

    So ... how do we fix the SSL https issue?

    I would love to do name based SSL hosting .. but I can't see how

  • Personally, I think ARIN needs to go, and NetworkSolutions with it. They've become monstrosities that don't belong on the Internet. They're plagued with bureaucratic crap, slowness, and idiocy that all harm the public Internet of today.

    Anyway. This policy isn't nearly as bad as it seems. True, SSL websites require their own IP address, since SSL certificates are bound to both name and IP, and the SSL handshake verifies the certificate before it exchanges hostname data. But the majority of websites out there are name-based. I host 5 websites on my one machine. My roommate hosts 7 on his. I know of companies with -thousands- of sites on one machine, one IP. HTTPS gets moved to a separate box (which it would be anyway, for security reasons), with IP aliases. So this doesn't affect daily operations nearly as much as people think it does.

    Of course, FTP is also affected. But it isn't something that can't be overcome by coders. I mean hell, it should be as simple as it was to introduce name based virtual hosting for webservers. Or, just move your ftp files into HTTP, since most people just click links inside IE/Netscape/etc.

    As someone who has a request for a /19 in *pray* *hope*, I can understand this policy. Now, if I can just make sure my announcements won't get filtered.....
  • ... except that SSL has to know which hostname the client is connecting to, so it can fetch the right host key - and the only way it has to do that is a reverse lookup on the IP address. All this happens before the client sends any data, so there's no Host: header to look at.

    By the way, is this really a problem? Wouldn't you want a dedicated server anyway to host your site if it has to handle confidential data? I don't like the idea of a web-shop running on a 100+ site web-hotel storing my credit card number - SSL or no SSL

  • by CynTHESis ( 196082 ) on Tuesday August 29, 2000 @10:57PM (#817411) Homepage
    While some organizations use IP-based webhosting to, in part, justify their requests for IP space, ARIN will no longer accept IP-based hosting as justification for an allocation unless an exception is warranted

    Virtual hosting may be OK for general public web-pages, a.k.a. a step up from geocities. But for people who provide web servicing to many different entities which all wish to have SSH/FTP access to the web servers and SSL services, this poses a problem. I currently provide services to only a few people, but I plan to get a larger subnet within a year; the people I provide services for wish to have these services and, in most cases, the ability to do reverse lookups for security reasons. Being denied additional IP space for a reason such as web-hosting methods seems slightly ludicrous.
  • What do you guys think? Is name based virtual hosting ready for prime time?"

    Sure. We're running almost all of our web hosting off name-based virtual servers now.

    The only thing where you don't have a clean solution is secure servers, as the SSL authentication comes before the server is told which virtual host the client wanted to reach.

    It's really time that ARIN catches up with RIPE on its IP address preservation policy.

    /ol

  • I have to say that I don't quite understand how this works, because AFAIK when you make a GET, you just request a file, but don't tell the server the whole URL (or the hostname, for that matter).

    It's fairly simple - the browser makes a GET request, then passes a series of headers:

    GET / HTTP/1.0
    Host: hostname.domain.tld
    User-agent: Mozilla
    <blank line to terminate request>

    Then it expects the response from the server.

    So, when you run off to http://slashdot.org/comments.pl, it's performing:

    GET /comments.pl HTTP/1.0
    Host: slashdot.org
    User-agent: Mozilla
    <blank line to terminate request>

    Both HTTP 1.0 and 1.1 implement this; read RFC 1945 [ietf.org] and RFC 2068 [ietf.org] for information on HTTP 1.0 and 1.1 respectively.

  • The client must add a Host: line to the request header to get the right virtual host; a simple GET is not enough. And yes, it is a problem if your browser is VERY old (for example Netscape 1.0). I also remember having trouble with a Python HTTP library that did not support virtual hosts one or two years ago...
  • I meant to include that they will assign for IP-based hosting on exceptions (which I would assume would include SSL). You can do FTP, POP, IMAP, etc... on a virtual host basis (even telnet), but it starts getting trickier (and more limiting).

    The link to the article explains the policy changes in better detail.

    -Mike
  • We migrated to name based hosting about 6 months ago - although it's true to say that you still need the IPs for SSL, we still saved around 400 IP addresses, and we're only a small ISP. IPv6 is still far enough away to make it worth the effort to conserve as many IPs as possible, and name based hosting helps a great deal. We haven't had any problems or complaints; anyone still using a non HTTP 1.1 compliant browser probably writes with a big slab of rock and a chisel.
  • by bero-rh ( 98815 )
    For www, it's ok by now - all the important browsers (including "telnet server 80" ;) ) support it. (https just requires a policy change in the organizations issuing certificates).

    However, the command for name-based virtual hosting in FTP is not even in an RFC yet (it is included in the latest drafts for a new RFC though, along with MLST).

    However, even when that RFC is passed, it will take a while until it is implemented in servers and clients (and a change IS required on both sides).

    Patches for wu-ftp and the netkit ftp client exist (sample implementations...) - but I can't see a certain large company that refuses to open-source any of their products implementing something just because it's an RFC or because it would make sense... And while that company controls one of the most commonly used ftp clients... :(
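    For what it's worth, a session using the draft's HOST command might look roughly like this on the wire (a sketch based on the draft - the reply text here is invented, and an unpatched server would simply reject HOST as an unknown command):

```
220 FTP server ready
HOST ftp.example.com
220 Okay, serving virtual host ftp.example.com
USER anonymous
331 Guest login ok, send your email address as password
```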
  • The actual nature of the problem is not "what SSL certificates are for" -- it's that SSL is done at a lower level than the HTTP headers.

    Verisign certificates are assigned to what they refer to as a Common Name. A Common Name is pretty much just an FQDN. (www.foo.bar)

    The SSL session is begun before the hostname is known. The problem then becomes that the webserver has to know what certificate to present before it ascertains the hostname request from the client. If the Common Name in the certificate presented differs from the portion of the URL between the // and /, the user's browser pops up an error, as it should.

    It can be done through either IP based virtual SSL hosts or name-based virtual SSL hosts on differing ports.

    -Nev
  • In that case should the certificate not be issued to the hosting site rather than be for the virtual hosted site?
  • > By the way, is this really a problem? Wouldn't
    > you want a dedicated server anyway to host your
    > site if it has to handle confidential data? I
    > don't like the idea of a web-shop running on a
    > 100+ site web-hotel, storing my credit card
    > number - SSL or no SSL

    well, you can either secure the system hosting this "web-hotel" completely and partition all the users, or you can arrange to store the confidential details in a specially designed and secured repository outside of that server... both of these are just theories of course...

    or, you can pay us a lot of money and we (and most of the other ISPs in the world) will build, install and host a dedicated server for you

    This is a good point, but you have to choose to do something that is feasible, sensible and cost effective. You have to make a decision about which trade-offs to accept. (just like anything else, really)

  • ...which I've always taken to mean HTTP. If you need to have multiple FTP (or whatever) servers, I don't see that this is going to be a problem.
  • by xonix7 ( 227592 ) on Tuesday August 29, 2000 @11:29PM (#817422) Homepage

    What you have to realize is that while IP-based virtual hosts are useful in some cases, they are, in fact, not secure. The case that springs to mind where IP-based virtual hosts would be useful is for DNS servers. Say Company X can only afford a single rackmount unit. They could configure their box with virtual interfaces (eth0:1 etc. under Linux, or equivalents under NT or other operating systems), and use one box for running 2 name daemons, each bound to a different "virtual" IP address. But for webhosting?

    For Webhosting, it actually makes sense to make use of Site proxying such as Apache provides. Typically, how this would be set up is this:

    You'd have a Firewall/proxy box sitting on a single legal (routable) IP address. You'd run Linux, BSD, or (insert any other operating system), and use that box to "NAT" (Network Address Translation) to separate boxes behind it - or even virtual interfaces on the same box - which would, undoubtedly, use non-routable addresses (illegal IPs). This way, you could have Apache proxying your site from 197.x.x.y (your legal IP), to the illegal IP running on your "internal" box.

    So when a user types in "www.foo.com", it hits 197.x.x.y, where Apache is running, and Apache, with the VirtualHost directive (VirtualHost 197.x.x.y), uses the "ProxyPass" Function to redirect the request to the site in question, running possibly on your internal box. So you could go to www.foo.com:80(default), which would really go to 192.168.2.10:8080, running a Zope Server, and www.foo2.com:80, would, possibly go to another box running Apache on 192.168.2.11:80 - whatever you want, literally.
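    A rough sketch of what that setup looks like in httpd.conf (hostnames and internal addresses are the example ones above; 197.0.0.1 stands in for the "197.x.x.y" placeholder, and mod_proxy is assumed to be compiled in):

```apache
# Front-end Apache on the single routable IP, proxying by Host: header
# to the internal (non-routable) boxes behind the NAT.
NameVirtualHost 197.0.0.1

<VirtualHost 197.0.0.1>
    ServerName www.foo.com
    ProxyPass        / http://192.168.2.10:8080/   # internal box running Zope
    ProxyPassReverse / http://192.168.2.10:8080/
</VirtualHost>

<VirtualHost 197.0.0.1>
    ServerName www.foo2.com
    ProxyPass        / http://192.168.2.11:80/     # internal box running Apache
    ProxyPassReverse / http://192.168.2.11:80/
</VirtualHost>
```

    ProxyPassReverse rewrites redirect headers coming back from the internal boxes so clients never see the illegal IPs.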

    I think this is where ARIN wants administrators to start going, and I've been doing it for ages. It works well, and for that, the authors of Apache, Linux, and the many open source utilities that support those applications must be commended. If you aren't doing this, try it. It's quite brilliant. The way it all fits together is an echo of the very thoughts that inhabit the minds of the thousands of individuals using - and not using (but perhaps subconsciously using, or wanting to use) - these systems. For the code itself is like a Christmas present. Yes, a year - two years. 10,000 years. In the blink of an eye, the coding time. Think about the implications of 10,000 years of coding time in one blink of an eye! Indeed, we live in strange times.

  • No, that isn't really a good idea. It is unprofessional, and it tells your client's customers that the hosting company is who they say they are - but nothing about the retailer.

    AussiePenguin
    Melbourne, Australia
    ICQ 19255837

  • If you're serving many entities who need SSL from one box, and allowing them shell access, then you're doing a disservice by letting any of them think their information is safe.

    -Nev
  • Oh yeah, I forgot to mention: the only other way to work it would be to use different ports for different sites. It works, but still has an unprofessional look.

    We really do need a standard to hack into SSL to say what hostname it wants.

    AussiePenguin
    Melbourne, Australia
    ICQ 19255837

  • by enneff ( 135842 ) on Tuesday August 29, 2000 @11:45PM (#817428) Homepage
    http://www.apache.org/docs/vhosts/name-based.html [apache.org]

    I just set it up for all my hosted pages, and it works beautifully. It took less than 10 minutes.
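    For reference, the minimal setup from those docs looks something like this (IP, names and paths here are placeholders, not the parent's actual config):

```apache
# All name-based vhosts share one IP; Apache picks the site by
# matching the request's Host: header against each ServerName.
NameVirtualHost 10.0.0.1

<VirtualHost 10.0.0.1>
    ServerName www.example.com
    DocumentRoot /home/www/example-com
</VirtualHost>

<VirtualHost 10.0.0.1>
    ServerName www.example.org
    DocumentRoot /home/www/example-org
</VirtualHost>
```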

  • It's funny that I see this thread today, since I had a discussion a few days ago with some important ISPs here...

    When I started publishing web pages here (I live in Belgium, EU), every vHost had its own IP.

    In the meantime, I moved my web pages to Jumpline [jumpline.net], who give an excellent service and an IP per domain name. It was a lot cheaper than the EU services at the time.

    A few days ago, I had a discussion with 3 of the most important ISPs in Belgium: for some reason, I wanted to vhost my pages in Belgium again (the price is now roughly the same as in the US). My idea didn't last: name-based hosting is the rule here, and they looked at me as if I were a martian when I told them I wanted my own IP per vHost.

    In a more general context, I'd really like to see a quick adoption of IPv6: more and more ISPs here rely on NAT (with all the problems it can give) and host hundreds of sites per IP.

    That's definitely not a Good Thing.

    Just my thoughts

    Stefano
    --
  • Without the certificate, it would be trivial for any (compromised) router on the path between client and server to mount a "man in the middle attack". Basically, the compromised router would "catch" traffic intended for the server (using ipchains -j REDIRECT for example), negotiate a key with the client (pretending to be the server), then open a second connection to the real server, negotiate a key with the server (pretending to be the client), and plug both of them together, and log the traffic.

    The only thing preventing this is the certificate: a compromised host on the path cannot pretend to be the server, as it wouldn't have the necessary certificate to do so.

  • And you don't need one IP for every workstation at your company either. Use NAT. Then you've got some sort of a "firewall" at the same time.

    NAT is evil. Kill it before it multiplies.

    NAT breaks end-to-end transparency and IPSEC. If you want a firewall, buy or build a real firewall.

  • Use NAT. Then you've got some sort of a "firewall" at the same time.

    NOOOOOOOOOOOOOO!!!!! NO! NO! Please don't use NAT. It sucks! It sucks! Aaaaaarggggggh.

    Sorry about that. Seriously, though: NAT is not a security measure. It can imply some firewall functionality because there are many things a NAT-box just can't do, but that doesn't make it any less "security by obscurity".

    NAT has been causing me so many problems this last year. It's nothing more than a clever but nasty hack to keep IPv4 up and running. So is name-based virtual hosting, really.

    We really need to move to IPv6 and be done with all this nonsense.

  • Actual statistics show a huge number of strange user agents like MSIE 2.x and 3.0 and other similar abnormalities. It is strange but true. There are lots of people out there with NT 4.0 with no service packs applied and IE not installed. There is also a lot of software, like junkbuster, that reports IE when asked for the browser. The actual percentage depends on the target site, but it can go as high as 20% of apparently non-Host-compliant browsers.

    Also, you are mixing RIPE general policy with host policy. RIPE is assuming this stance on all addresses. You have to justify anything above /31 by default with them. If you are a big ISP they may raise this so called assignment window but not by much. This is quite different compared to the US.

    And yes you can get IPs for virtual hosts from them. You just need to know how to write your requests.

  • or, who give every user their own /28 (well, not really... I pay extra for the extra 8; my ISP usually gives a /29)? Also, why do Hewlett Packard, DEC (defunct), Public Data Network, Apple, MIT, Ford Motor Company??, CSC and a company/group called Royal Signals and Radar Establishment (might be government, so it's understandable) get their very own Class A block of addresses? How about asking *them* to return some to the available pool?

    It's time for the Internet community at large to make a decision: are we going to keep applying these little fixes at the bottom end of the totem pole, such as requiring Name Based virtual hosts, or are we actually going to fix the problem? We can fix it short-term by reallocating some of these huge grants made long before the Internet became popular (which is politically sensitive to do), or we can fix it for a good long while by migrating to IPv6.

    My box talks IPv6, how about yours?

    ---

  • With depressing regularity, I see stories related to various virtual "shortages".

    Why is this sad?

    It's sad because there shouldn't be any virtual shortages.

    Whether this is a bureaucratic, technological, societal or other failure is debatable.

    In this particular case name-based hosting may be a perfectly valid workaround, but that's not the point.

    The point is that if we allow these shortages to continue, then the internet and related technologies will miss a lot of their potential. It will simply be another case where only a few can afford access to scarce (virtual) resources.

    And that really will be sad.
  • Although name-based hosting works fine for webserving, my virtual services include a number of protocols that have no way of stating the hostname. This includes: FTP, pop/imap, true virtual email (no internal relaying), virtualized telnet... the list goes on.

    To conserve IP space I use a l4 switch to shunt port traffic to different virtual servers, so all a domain's services may be on the same IP, but split over different boxes. So hosting virtual www IPbased is simply a side effect.

    --Dan

  • by JoostFaassen ( 139437 ) on Wednesday August 30, 2000 @01:20AM (#817448)
    You could run virtual hosts on different ports to allow multiple hosts with multiple certificates to serve on 1 IP address... It's 'a-pache' trick, I know, but you could do some tricks in hiding URLs like https://1.2.3.4:1234/securedstuff.html in a page on a non-SSL virtual host...
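    Sketched out under Apache/mod_ssl, the port trick looks something like this (all names, ports and certificate paths here are invented for illustration):

```apache
# Two SSL vhosts on one IP, told apart by port instead of hostname,
# so each can present its own certificate.
Listen 443
Listen 1234

<VirtualHost 1.2.3.4:443>
    ServerName secure.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/secure.example.com.crt
    SSLCertificateKeyFile /etc/ssl/secure.example.com.key
</VirtualHost>

<VirtualHost 1.2.3.4:1234>
    ServerName shop.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/shop.example.org.crt
    SSLCertificateKeyFile /etc/ssl/shop.example.org.key
</VirtualHost>
```

    The certificate is chosen by the IP:port pair the client connects to, so no Host: header is needed before the handshake - at the cost of ugly :port URLs.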
  • Just a small point: if you have a Web site that will be handling tons of traffic, and need multiple IP addresses just to handle the large number of TCP connections, how is the new rule going to affect that? I'm in particular thinking about sites that use multiple servers with traffic distributors.

    Or is this supposed to fall in the ill-defined "list of exceptions"?

  • And I have another issue: I want to run a "reverse proxy" (multiple physical webservers, possibly running different OS's) with name-based virtual hosting. I haven't found a way of doing that [with Apache] yet.

    Squid in accelerator mode should do this. You will have to tell it to use the host header though.
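    The relevant squid.conf directives would be roughly the following (a sketch for Squid 2.x; exact directive names may differ between versions):

```
# Squid as an HTTP accelerator in front of the real webservers;
# "virtual" + uses_host_header makes it route on the Host: header.
http_port 80
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_uses_host_header on
```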

  • by Swordfish ( 86310 ) on Wednesday August 30, 2000 @01:45AM (#817455) Homepage
    Try this:
    [akenning@dog]$ fgrep " HTTP/1.0" access_log | wc
    252233 2522331 24313937
    [akenning@dog]$ fgrep " HTTP/1.1" access_log | wc
    151023 1510233 14952893
    [akenning@dog]$ fgrep -v " HTTP/1.1" access_log | fgrep -v " HTTP/1.0" | wc
    188 1521 12028
    I think that means that about 63% of browsers are still using HTTP/1.0 (contradicting the opposite opinion expressed in the O'Reilly Apache book).

    And I think that this means that the net is not ready to abandon IP-based hosting.
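    Checking the arithmetic (a quick recomputation of the wc line counts above in modern Python, nothing more):

```python
# Line counts taken from the wc output above.
http10 = 252233  # requests logged as HTTP/1.0
http11 = 151023  # requests logged as HTTP/1.1
other = 188      # neither (malformed or HTTP/0.9)

total = http10 + http11 + other
print(f"HTTP/1.0 share: {http10 / total:.1%}")  # prints "HTTP/1.0 share: 62.5%"
```

    So the "about 63%" figure in the parent is right to within rounding.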

  • Been there, done that.
    Would you trust such a site? It's possible that some user seeing a ":portnum" part in the URL would consider (or at least fear, which is more than enough) the site to be a forgery and leave for safer harbours.
  • No longer will I be able to get a shell with its own IP for £54 a year ($80 US) - bummer... how will I ever IRC from i.graha.ms now :(

    Also this will probably come down hard on ISPs like Demon Internet [demon.net] who give static ips to dialup users. This was a bugger originally since they used to use smtp for mail delivery which wasn't easy on Macs and Windows, but still a very nice feature.
  • It's no problem. When I need an IP address, I'll start doing business as 123.45.67.89, trademark it, and sue the current holder of the address for trademark violation and petition the court to order the holder to turn the address over to me.

    For crying out loud, there is an infinite supply of integers; we shouldn't be squabbling over them. Bring on bigger address fields already.

    By the way, do corporations report their IP address allocations as assets? E.g., the early network participants got big chunks of space. Digital Equipment Corporation (since purchased by Compaq) had all IP addresses 16.*.*.*. That has some economic value now. Maybe the tax assessors would like to take a look.

  • I started doing name-based virtual hosting on my home system in 1996. I only have one IP address, so it was a forced decision. At the time, I had some friends check out the various host names to see what they got. The only ones who had problems were using Mosaic or ancient (1.x) Netscape browsers. Four years later, it's probably safe to assume that almost everyone has Netscape >=3.0 or aIEeee >=4.0 (or else is already having problems much bigger than hitting the wrong NameVirtualHost!).

    If you're setting up a NameVirtualHost setup, and you're truly worried about people hitting the wrong site with an older browser, then you set up a bogus "primary" site on your system (primary meaning, the one that you get if the client browser doesn't indicate the name of the host it's looking for) that contains nothing but links to the names of NameVirtualHosts that exist there. For an example of this, you could look at this site [165.254.158.24] which I've linked here by its IP address instead of its name.
    --
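    A minimal Apache 1.3-style sketch of that fallback arrangement (the IP and hostnames here are hypothetical): the first <VirtualHost> for an address acts as the default, so browsers that send no Host: header land on the link page.

```apache
NameVirtualHost 10.0.0.1

# Default site: old browsers with no Host: header end up here
# and see nothing but links to the real name-based sites.
<VirtualHost 10.0.0.1>
    ServerName default.example.com
    DocumentRoot /www/default
</VirtualHost>

<VirtualHost 10.0.0.1>
    ServerName www.site-one.example.com
    DocumentRoot /www/site-one
</VirtualHost>
```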
  • I can vouch that name based vhosting works just fine for FTP and telnet. Don't ask me how and why, but it does - I use it daily, and the ISP I use is superb.net.
  • Right now we assign an IP to every website. We have some address space, luckily.. what this will fuck up, however, is:

    * Jails. For some customers, we provide a virtual OS "jail" using FreeBSD. This basically assigns a new copy of FreeBSD to an IP, with its own /etc and such.

    This very nicely keeps sites with "suspicious" CGIs and such from affecting other sites on the same server, as well as lets them maintain accounts. Takes some disk space however.

    * Traffic monitoring. Nothing like just watching trafshow to see who is eating up what.

    * Intrusion detection. With snort, you're only going to see that "Yes, they tried to sploit my webserver", not which of the 100,000 virtuals on it. This actually isn't too bad; you can always read the tcpdumps.

    * Virtual FTP sites. Luckily, there's a new RFC which allows you to do "host based" FTP serving. I haven't seen anything support it yet.

    * DoS attacks. If you host some "controversial" sites such as www.godhatesfags.com, it's good to know why when someone tries to force an OC3 worth of UDP packets down your T1... If you're weak, you can just remove the IP and hope it stops :)

    * SSL stuff. What we did to get around this for now is a secure.ourdomain site, with subdirectories for ordering pages. This, of course, doesn't sit well with bigger customers.

    Just some observations.. helixblue
  • I don't know what you use, but I have a hostname-based v-host (with an ISP called superb.net) and ALL of the services you have listed (FTP, pop/imap, email, telnet) work just fine AND they are definitely all on the same Sun box. I verified it several times.

    Don't ask me how they do it; all I am telling you is that your post is inaccurate.
  • Don't run it off of a different port. A lot of companies firewall outgoing traffic or even proxy it only to port 80. :1234 might be blocked...

    ---
  • There are a lot of non-www based services that don't use name based virtual hosting. Name based ftp? Name based finger? Virtualizing is good, but we can't switch to just that for everything, at least not yet. Yes there actually are a few people who continue to run services other than http!
  • They don't do virtual IP. What usually happens is they link your username/password to a directory on the system. If you find another hostname with the same IP as yours, ftp in with your username and password and you will still find your site. The hostname is never transferred in the FTP protocol.
  • Most browsers only support parts of the HTTP/1.1 spec, so they broadcast themselves as HTTP/1.0, even though they are sending the host header. A better metric would be to search for Netscape 1 and IE 1/2 UserAgents. I don't think you'll find any.
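    For instance, something like this against the access log (the two sample log lines here are made up; the real patterns would depend on your log format):

```shell
# Count hits from browsers old enough to predate the Host: header
# (Netscape 1.x, IE 1/2). A two-line here-document stands in for a
# real access_log.
egrep -c 'Mozilla/1\.[0-9]|MSIE [12]\.' <<'EOF'
1.2.3.4 - - "GET / HTTP/1.0" 200 512 "Mozilla/1.22 (Windows; I; 16bit)"
5.6.7.8 - - "GET / HTTP/1.1" 200 743 "Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)"
EOF
# prints "1" -- only the Netscape 1.22 hit matches
```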
  • That's just a joke?

    -Chris
  • > Also, you are mixing RIPE general policy with
    > host policy. RIPE is assuming this stance on
    > all addresses. You have to justify anything
    > above /31 by default with them. If you are a
    > big ISP they may raise this so called
    > assignment window but not by much. This is
    > quite different compared to the US.

    This isn't true. Most RIPE local registries have
    assignment windows of at least /24, and mine was
    /23. You don't need any explicit approval for
    these, but you still need the documentation
    because they randomly audit your decision making.

    Andrew
  • That's not possible. The FTP protocol doesn't include any name information. The FTP server doesn't know what host you think you're connecting to; it just knows it received a connection to a particular IP.

    Name-based virtual hosts work because an HTTP 1.1 request includes the hostname. FTP requests don't. It's as simple as that.
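    The difference is visible right on the wire. A sketch of what each client actually transmits (the hostname and credentials are placeholders):

```shell
# HTTP/1.1: the request itself names the site, so one IP can carry
# many hostnames.
printf 'GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n'

# FTP: after the TCP connection opens, the client sends only
# credentials -- no hostname anywhere, so the server knows nothing
# but the IP (and port) that was dialed.
printf 'USER anonymous\r\nPASS guest@example.com\r\n'
```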

    Chances are your ISP is doing something like I used to have to do, with the FTP server CNAME'd to the web server, and the web server IP-based. The reverse works just as well but looks silly.

    Most people running a web site don't actually need an FTP server for the public, though, and just use FTP to upload their web pages. In that case, they should definitely be using name-based virtual hosting, and the ISP should be up front and honest: don't even bother telling them they can FTP to their "web server", just tell them to FTP to web-farm.isp.com or whatever.

    The business model this is going to kill is the one wherein the service gives the customer a whole virtual machine. That *definitely* requires an IP per customer.

  • by Karmageddon ( 186836 ) on Wednesday August 30, 2000 @04:31AM (#817514)
    man, everybody's jawing away in here: a lot of good info, reasonable suggestions, and stuff I didn't know.

    But this problem has already been solved: private property and free markets. Just auction IP addresses through a central exchange, all IP addresses, including the sacrosanct class As. You want an IP, or a block of IPs, you pay for them. How much? Who knows, who cares, we'll find out when they go up for sale.

    Some regulations are required: don't allow monopolies or cartels; declare IPs fungible to allow central administrators to reallocate or consolidate blocks for routing purposes.

    Problem solved.

  • I know a few people who serve in some capacity with ARIN, and there is a distinct bias against webhosting and the hypertext protocol in general. Too many of the ARIN and Internic directors are the type who lament what they see as the "death" of the Internet at the hands of commercial and individual entities, and therefore blame the popularity of this protocol for changing the face of the Internet.

    I wonder how much of their opinion came into play here.

    Furthermore, the real problem is not the webhosting allocations, but the host allocations to large, workstation-based networks. I know of more than a few companies who have /19, /18 and larger blocks that were assigned many years ago. Those companies should instead be actively using non-routable IP space, proxies, and DHCP rather than static IP configurations.

    It does not make sense to go after a website, FTP site or any other Internet service host before going after workstations.

    Much to their credit, though, ARIN has actively sought out unused IP space from companies, universities and other organizations assigned A and B space in the past.

  • There are lots of reasons why name-based virtual hosting won't work, namely the many protocols that are NOT HTTP.

    Why do people seem to insist that "The Internet == The World Wide Web" anymore?

    It reminds me of The Corinthians [slashdot.org] website issue. Just because a guy doesn't have a web page on a domain, or that page hasn't been updated for a while, The Powers That Be consider the domain unused. (May not be the exact case with this example, but in general that seems to be the opinion anymore.)

    Seems like nowadays, if you're NOT running a high-profile website on your domain, you just aren't officially "using it."

    EMail? What's that? FTP? CVS? Telnet? SSH? Huh?

    -=-=-=-=-

  • Users need to be able to upload to their web space, and HTTP can't upload without potentially compromising the system.
    <O
    ( \
    XGNOME vs. KDE: the game! [8m.com]
  • That's for doing IP-based vhosting.

    Read the FAQ [proftpd.net].

    -=-=-=-=-

  • Reverse proxying can be done with Apache and mod_proxy; see the documentation for mod_proxy at http://www.apache.org/docs/mod/mod_proxy.html [apache.org]. To do name-based vhosting with it, you have two options: either have the <VirtualHost> directives on the rev-proxy and forward to different URL paths on the backend (i.e. www.bletch.com/urf becomes backend.serverfarm.com/bletch/urf), or you pass the Host: field as Original-Host to the backend, and then set up a fixup handler to put it back as the Host: header. There is an example module that does something similar (passes the original request's IP as "X-Forwarded-For") at http://www.cpan.org/authors/id/ABH/mod_proxy_add_forward.c [cpan.org]; it's originally meant for use with mod_perl, but there's no reason why it wouldn't work with anything else on the backend, with a tiny bit of C hacking.
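    The first option described above might look something like this in Apache 1.3 configuration (hostnames and paths are hypothetical; mod_proxy must be compiled in):

```apache
NameVirtualHost 10.0.0.1

# Requests for www.bletch.com are forwarded to a path on the
# backend box; the client never sees the rewrite.
<VirtualHost 10.0.0.1>
    ServerName www.bletch.com
    ProxyPass / http://backend.serverfarm.com/bletch/
    ProxyPassReverse / http://backend.serverfarm.com/bletch/
</VirtualHost>
```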
  • It seems like if that machine was ever subjected to a DDoS or went down for some reason

    A well-designed Distributed Denial Of Service is impossible to distinguish from normal heavy site traffic <cough>Slashdot effect</cough>.


  • NAT is conceptually ugly because it breaks IP's basic rule "only the endpoints should know or care about established connections". but hey, it works, and it works pretty well. Of course, NAT alone doesn't make a firewall... but it sure gives you a convenient place to put your firewalling rules, and *then* it can be one.
  • All in all, dedicated Apache installations seem to be the smallest compromise. Memory is cheap, at least when compared to administration hassles.

    Also, reconfiguring and restarting an Apache with one site is a few orders of magnitude less critical an operation than doing the same with an Apache that handles 1000 sites.

    On a sidenote, security is always a compromise in shared-server solutions. But a shared HTTP server is, IMHO, one step worse.
  • No customer is going to pay the same price for https://your.domain.com:65220/ as they would for :443. Further, the number of ports available doesn't provide the kind of scale needed (hundreds of thousands or millions of sites for large ISPs); and client firewall issues make port-based solutions unacceptable.

  • right. and that works fine :)

    I guess it's only a problem for anonymous ftp servers.
  • exactly. only a problem for anonymous FTP servers where the ISP can't use the username/password to determine which customer is being accessed.

    It's only after posting my message that I realized this single issue with name based v-hosts and FTP. I can't say that I feel this is a big deal. Anonymous FTP is typically read only, in which case, why do you absolutely have to use FTP in the first place?

    As far as the ISP telling you to use web-farm-isp.com instead of your own domain name - it's simply a convenience thing. I can (hopefully) remember my own domain name. I have no interest in remembering the often obscure name they have for the server that actually hosts my domain.
  • Well, the main use is if you want to IRC from a certain hostname. IRCd checks when you connect that your forward and reverse DNS match, so if you want a custom hostname you need your own IP.

    Also, as a Demon customer (although I use Blueyonder hi-speed at my flat, BT at my parents' and Lineone on my mobile), actually we just keep Demon for email :) I'd like to see them improve the webmail service, since it's been in testing for about 3 years now and should be nearing maturity.
  • At the time the server sends a certificate, it doesn't yet have any HTTP headers; all it has is the IP address. Without the Host: header, how will it know which certificate to send?
  • by drig ( 5119 ) on Wednesday August 30, 2000 @05:55AM (#817552) Homepage Journal
    It's easy enough to set up a site that changes key/cert upon receipt of the request URI (or Host: header). Simply choose a primary key and cert, do the initial connection with that one. Then, when the client specifies the URI (or Host:), request renegotiation and choose a new key/cert pair. All major browsers support renegotiation.
  • now THAT would be a rather dumb way to go: no way in hell this could be regulated: before you know it, all IP's WILL be owned by one or two telco giants and sold for prime $$$. This would essentially give a bigger share of the internet pie to people with more money than to people who actually wanna do something useful with them. I think they're right to scrutinize your plans before allocating you blocks of IP's.

    IP addresses are essential to the functioning of the internet; this CANNOT be taken lightly and certainly not put in the dirty hands of capitalism. It would be like handing the internet over to a couple of corporations, which would, essentially, rule the world. It is absolutely vital that IP address allocation remain in the hands of an INDEPENDENT non-profit organization.

    Domain names do not affect the way the internet works. There can always be a near infinite amount of domain names available. You can't compare auctioning domains and auctioning IP blocks; that's just crazy!

  • I am worried about a precedent this can set.

    Like companies being required to use NAT, even if they don't want to and want each machine to have an Internet-routable IP. Like ISPs that serve residential customers via DSL or cable modem being required to tell their customers they cannot be on the net more than 8 hours a day on average. Why would they do that? Because even if you have dynamic IPs, you don't get any savings of IPs if everyone is holding on to them 24/7. Or even just telling ISPs they have to put all residential customers on NAT. Each ISP would get IPs for themselves, but home customers would only get "private" 10.x.y.z IPs (of course they can't serve content then, but it is likely that neither the ISPs nor ARIN would be at all upset about that.)

  • by Frank T. Lofaro Jr. ( 142215 ) on Wednesday August 30, 2000 @06:16AM (#817560) Homepage
    Maybe the people in charge want it this way.

    Here is what they want:

    Make it so only the rich and powerful can get resources (such as IP addresses). Make it so residential customers aren't allowed to host content, even if their ISP doesn't mind, since their ISP will have been ordered to use NAT and hence the customers lack an Internet-routable address to host from. No more pesky speech from the masses. Shift information transfer totally from bottom-up to top-down.

    Along those lines, eventually, make it so the shortage is so bad the government comes in and requires mandatory FCC licenses at thousands/millions of dollars each and strict regulations on who can use them and how. The justification would be "scarce resources". Does that sound totally unbelievable? Well, if it does, you need to look at the early history of radio. Used to be free, now it is extremely regulated and restricted.

  • by tshak ( 173364 ) on Wednesday August 30, 2000 @06:22AM (#817561) Homepage
    We run thousands of sites off of one IP and tested Netscape 2.0 (1% of our users) and have had no problems. SSL is no problem because we setup a central secure site for everyone. For example: https://secure.[hostingcompany].com/[customer] Now you've just used 2 IPs to run your entire web service. Then you've got your PIX, your 3600's, mail servers etc. and you don't even need a full class C!
  • 1. HTTPS is a placebo designed to make people 'think' that anything is secure.

    2. Regardless of #1, I think ARIN will understand if you use IP-Virtual for HTTPS sites..

    3. If you are providing entire (virtual or real) servers to customers, as opposed to just simple webhosting, this doesn't apply either.

    All this affects is sites providing plain virtual webhosting service that still haven't migrated to name-based virtual hosting - listing that use of IP addresses will no longer carry as much weight as if you had assigned those IPs to dynamic dialup ports, or assigned small blocks of them to many customers.

    As far as IPv6 goes, it's a neat idea, but there are lots of things to stumble over before it can be implemented.. It also seems that IPv6 is suffering from 'death by committee' - too many people have added too many overcomplications..
  • Welcome back to the consumer notion that the web *IS* the Internet.

    If that were the case and the only protocol running on the Internet that would require something like virtual hosting was HTTP, then we'd be all set.

    For those of you who don't understand what this is all about, think of HTTP like this:

    1) Your machine connects to an IP
    2) Your machine then tells the IP what webserver it wants to be talking to *BY NAME*
    3) Webserver fires back the appropriate content

    If every single protocol on the planet had the client identify the server *BY NAME* this wouldn't be a big deal; however, they don't. Very few protocols do this.

    Mail delivery does. POP3 and IMAP don't, though. Neither does FTP. Any protocol that requires reverse lookups to return a specific hostname is problematic if you are attempting to have one IP with many names (e.g. ident). Oh, and as many have mentioned, SSL certs are tied to IP *AND* NAME, so they have to be vhosted by IP.

    The only current way around this seems to be passing the server name with the user name. There are virtual FTP servers, virtual POP3 servers, etc. that allow for this. E.g. the user bob trying to access the mail server mail.foo.com to receive his email would pass the username as bob:mail.foo.com. Or when logging into a virtual FTP server, the username would be bob:ftp.foo.com.

    For most users this is a terrible inconvenience, and anyone who works tech support at a large virtual hosting provider would, I'm sure, agree. It's a tech support nightmare. For the majority of lusers out there, logging into 'mail.foo.com' as 'bob' makes life a helluva lot easier than logging in as 'bob:mail.foo.com' to check mail for the address 'bob@foo.com' .. "Why do I have to do that?" Bob says... Then you have to talk him through setting his Reply-To: header, and that's a pain. Let's let the ARIN people do 20K tech support calls about this and see how they like it.

    Perhaps the providers of the world could push back on ARIN by calling this move 'anti-competitive.' For most providers, it probably removes the ability to market a certain service - IP-based virtual hosting - a step in between virtual hosting and dedicated server services that is ideal for midsize hosting accounts.

    Grrrrr.......

    ~GoRK
  • 1) IP based virtual hosting is tremendously more manageable than name based hosting - Primarily because DNS takes 1 TTL (however long) to change over in the event of a problem that requires a workaround where one must move a website from one IP to another. If you can move the IP, instant change.

    2) IP based virtual hosting prevents unnecessary headaches for administrators of medium size sites that must endure access problems to named hosts due to misconfigured client proxies, firewalls, DNS servers, or web browsers. Also extremely old browsers - THEY ARE OUT THERE PEOPLE - even if their numbers are very very few.

    3) DNS is introduced as another point of failure in the entire system. Without proper DNS resolution there would be absolutely *NO WAY* for a website to be accessed if it were on a named host, even if you knew the IP (at least without a bunch of fiddling around). The other problem to consider is what site could potentially be brought up using the IP number of your named host? Your hosting provider's site? Someone else's site maybe? Someone else's PORN site?? -- This poses a tremendous problem for businesses who cannot afford dedicated server solutions. Pretty much every virtual server on servint.net's network is porn. Imagine if you had a legitimate business site on one of these named virtual hosts and DNS broke, so you accessed the site by IP and got a PORN site! Bad karma!

    Try it - see if your favorite website is name vhosted. nslookup the IP and use it as the URL! You'll be shocked.

    ~GoRK
  • The policy says they will no longer accept IP-based hosting (I presume this means web hosting) as justification for allocating addresses.

    It doesn't say you can't get allocations for other IP-based services.

  • Assuming that you're misinformed rather than trolling: SSL needs the signature to prevent man-in-the-middle attacks, or any host masquerading as another host. Think about it. You're going to amazon.com to buy books. I poison your DNS so that www.amazon.com points to my machine, and I act as a proxy for the real amazon.com until you're ready to make a purchase. When you do, the purchase goes through my SSL server, and without the certificate check I log your credit card information for my future use.

    With certificates, you can be reasonably sure that the www.amazon.com is the real amazon.com and not my little false fpos thing. Without it, well, it opens things up for all sorts of interesting attacks.


    --
  • No, the IPv6 architecture is so designed that IPs will be dynamically allocated. All references to any system should be by hostname only. Besides, who wants to type 3fae:b01:c54:f1f1:da3c:af5c anyway?
  • If anything, they're being too aggressive in not letting out IP addresses.. a whole 1/4 of the IP address space was just opened up a couple of months ago (64/2). That's over a billion addresses.

    Now, I will agree that some of the first allocations should be redone and forced to be returned. (MIT and some others have a class A space, or 16 million machines worth.)

    There really isn't as severe shortage of IP's as everyone makes it out to be.
  • What the hell is wrong with these people? Instead of saying 'we won't assign any more IPv4 addresses for virtual hosting', why not say 'we will only assign IPv6 addresses for virtual hosting purposes'?

    Then, at the very least, some of the people on the internet might have a good reason to drive the adoption of IPv6.
  • From my experience of being slashdotted, there are plenty of script kiddies who turn up with DOSes around the same time legit visitors do.


    --
    My name is Sue,
    How do you do?
    Now you gonna die!
  • Like that's a secure approach.


  • It is not simply a matter of coding to support name-based virtual FTP hosts; a change to the FTP protocol is needed. The HTTP protocol contains a slot for the server name; the FTP protocol does not. An upgrade of the FTP protocol, with the associated upgrade of the client base, is a _major_ undertaking.

    HTTP 1.0 didn't have a slot for it either. How many servers run HTTP 1.1-compliant code? And how many are still running HTTP 1.0?

    Yes, it's a semi-major undertaking. But it's not much different than adding Host: header to the HTTP protocol. It just has to be pushed.

  • Your random internet appliance doesn't need a routable IP.
    It does need to be a routable address, or it won't be of much bloody use. If I can't query it from work or on the road, it's not really an internet appliance.
  • It looks cool to appear as graha.ms@graha.ms and clearly identifies who I am to people who know me. It instills a far greater level of trust that I am who I claim to be than if i appear as ~graham@modem13813.uranium.pol.co.uk or any other freeserve like address.
  • Limiting the amount of bandwidth a web site uses also requires a separate IP address for each limited site. This applies to MS IIS AND Cobalt RaQ [Linux] servers. You can't do this with host headers.
  • Ah, but what happens when we start to route packets to the infinite number of other planets out there that need (want) access to our hosts?

    By that point they're going to have to find a different transport method than electricity or light anyway (who wants ping times of 2 years? hurry up with the quantum physics research) and if they can do that, implementing IPv12 shouldn't be too much of a problem :-)
    --
  • by seebs ( 15766 ) on Thursday August 31, 2000 @05:06AM (#817628) Homepage
    So, I'm not sure about this, but I did notice that HTTP 1.0 (doesn't support the by-name hack) is still about 40% of the hits in our web logs.

    Is that more modern browsers trying to be friendly, or is that people who actually *can't see* the NameVirtualHost stuff?
  • that's why telnet always has a username/password.
  • It's sort of like that already.

    Any IP request has an IP address (32 bits), which uniquely identifies a network device, and a port number (16 bits), which uniquely identifies a service. So there are 4 billion x 65 thousand = a lot of possibilities.

    You can use the port number to route to different servers as you suggest but since certain port numbers are associated with certain services HTTP=80, FTP=21, Telnet=23 etc, you only get granularity on the order of services at the IP level. Some protocols, such as HTTP, have the client send the DNS name of the host it requests which then allows you to virtualize based on the (effectively infinite) DNS namespace. To do more than this requires client cooperation such as requesting an HTTP session on an alternate port. But this is impossible in the world of firewalls where it is quite common for only port 80 connections to be allowed.

    There are other problems with the current form of IP addresses such as the difficulty of making a routing table since adjacent IP addresses may be in completely different geographical locations.

    IPv6 is the solution and it contains the concept of global and local portions of the IP address similar to what you proposed as well as a mindbogglingly big address space (128 bits) and other features. Search Google [google.com] for more info.
    --
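    The multiplication above, spelled out (a one-liner in any POSIX-ish shell with 64-bit arithmetic):

```shell
# 32-bit address space times 16-bit port space = 48 bits of
# distinct (address, port) endpoints.
echo $(( (1 << 32) * (1 << 16) ))
# prints "281474976710656", i.e. 2^48
```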
