Deep Linking 2.0 At NYTimes

Gentleman Goat writes: "The NY Times has a well-written article exploring the recent court decision about Deep Linking in closer detail. " Free registration required. This one goes deeper and talks about Web crawling bots and other issues related to deep linking. Honestly I think the spider problem is a separate issue. I think people should be able to say, "Please don't spider this page" (robots.txt for example, but it gets stickier with copyrighted content) but I don't think anyone should ever be able to say, "You may not link this page" since that is fundamentally the anti-point of the Web. Check out the ruling from Japan that linking, in some cases, is illegal.
  • This paragraph of the article seems to me somewhat like the software industry's claims of damages resulting from piracy. Given that certain people would never have purchased a product due to various factors, simply downloading a pirated version doesn't really cost them any money.

    Sigh. This is nonsense, and you know it. If it's easy to d/l a pirated version of software, there is no incentive to buy it. I will agree that there is an element that will *never* buy a piece of software, but there is a percentage of people who probably would, but because of the availability of a pirated version, don't. It's the people who could buy the software and who don't because it's "free", that the software companies want to address. And rightly so.

    ... posted anonymously to avoid needless loss of karma to idiotic moderators who actually believe that pirated software doesn't cost money.
  • by Anonymous Coward
    ...that if you put something up on the web, you've made it publicly available for people to link to. There is clearly a limit to this. Just because my financial institutions make it possible for me to conduct business via the web does NOT make it ok to deep link to my bank account information. Neither is it ok to deep link to sites that provide content on a fee-paid basis. People who provide content paid for through advertisements rightly view some types of deep linking as a danger to their economic life.
  • by mosch ( 204 )
    as if we don't all know the magic l/p which always works, anyway.
    ----------------------------
  • Now, ideally, they can make up the ways that their information is spread, and they finally have control under their own terms.

    But they *do* have that option via various techniques. It's just that the default for the web was meant to be "link to whatever you are able to". Putting the onus on the linker of making sure he has permission to link goes against the grain of the web.

    Now as I understand it, web servers can determine where the request came from, but it is possible to forge that information. I think that forgery is rather more questionable. Did the court decision touch on this?
  • You are an old skool internet hippie. Information wants to be paid for!


  • A guy is driving. Stops and asks directions:

    Simple link:

    Guy: Excuse me sir ... Where's Eighteenth National Bank?

    Joe Linker: (Pointing with finger) Right over there. They
    are an excellent bank, highly recommended...

    Guy: Thank you, sir.

    Five minutes later, the bank is robbed.


    Deep Linking:

    Guy: Excuse me sir ... Where's Eighteenth National Bank?

    Joe Linker: Next corner. And watch out: there's a guard who carries
    both an automatic .45 and a shotgun. He lunches at 12:08 every day,
    finishes at 12:32, and is sleepy after that. Now it's 12:12. There are
    four cameras, but cashier #5 is not well covered and the images are
    blurry. There's an automatic alarm if the cashier takes out all the
    money. Money arrives Mondays (today!) at 10:30. They've been robbed
    twice in the last 3 months, so everyone is scared as hell... it could
    be robbed any day.

    Guy: Thank you, sir.

    Bank is robbed at 12:34:30

    Copyright infringement:

    Guy: Excuse me sir ... Where's Eighteenth National Bank?

    Joe Linker: Open that door, give me that shotgun and let's
    take all the fsking money from there, it's a piece of cake.

    Guy: C'mon buddy!

    Bank is robbed at 12:33:32


    I think that any ruling on deep linking should be based on the
    intention of the linker. Freedom of speech doesn't mean taking others'
    work for profit, and that's what tickets.com did.


    If your friends forgot you, your enemies never will.

  • There should be NO limit to that. If your financial institutions make it possible for you to conduct business on the web, then they should take the necessary precautions with said data as well. We don't need the courts/government/whoever, telling us that we can't link anywhere. If you put the data up, it is your responsibility to manage it however you wish. If you don't want to do that, then don't put your info up for all to see on a website. That same argument goes for the people living and dying on banner ads...

    The web is basically a free-for-all: a freely accessible library usable by anyone with a web browser and a net connection. If you put data up but want it visible to only a select few, take the time to manage your data that way. Otherwise, you can't bitch.

    Besides, I can't see why people would bitch about it. They are still getting hits, and by others linking to them (deep or not), getting more exposure. This is what the web SHOULD be. It's sad that we may need the courts to decide that for us. In ticketmaster's case, I can't really believe that they are bitching because they still got the sale. Now misrepresentation is another matter entirely, but that is already illegal, web or no. That shouldn't be a determination of whether deep linking is legal or not.

  • I basically wrote this page after a company called Asimba told me I could no longer link to their pages:

    http://w3.nai.net/~perfecto/ejercisio/asimba.html [nai.net]

    They told me I couldn't link to them since I had pornography on another page [nai.net]. The funny thing is that it's probably one of the lamest pornography pages on the 'net. But anyway, I removed the link not because they threatened me but because a company this clueless doesn't deserve the benefits associated with being on the net. The net is the ultimate in free customer referral. If they don't want customers passed on to them, fine. Let them rot in hell. I do just fine with my healthy quarterly Amazon [amazon.com] checks!



    --
    J Perry Fecteau, 5-time Mr. Internet
    Ejercisio Perfecto [nai.net]: from Geek to GOD in WEEKS!

  • So what's the fscking point of him putting this Web page up in the first place, unless he wants people to look at it! If 500,000 people look at his site in the first day, he may get charged an extra $1k, the site may go down, but he's just had more interest in his site than most commercial sites get in a year. Mission accomplished, as far as I can see. He can now take down the site and get on with the project.

    --
    Barry de la Rosa,
    public[at]bpdlr.orgASM,
    tel. +44 (0)7092 005700

  • You can deny based on referer using mod_rewrite. Read some suggestions on it here:
    http://bugs.apache.org/index.cgi/full/968 [apache.org]


    Quidquid latine dictum sit, altum viditur.
  • Greetings brother spiderer,

    I don't bother with robots.txt at all. I assume the user of my bot knows what is right or wrong -- if I were to sell a gun to a guy, I wouldn't feel responsible for what (s)he did with it. You just worry about doing what you do, since you can't expect every j(ohn|ane) doe to be an upright citizen.

    When it comes to data mining you don't want to limit yourself - that's why user-agent spoofing is so widely used... I wonder how many tracking sites out there count spoofed bot requests as either Mozilla or IE.

    Of course, I'm an evil bastard who makes a spider lib with an example simple-minded mp3 spider...

    *** Information wants to be free ***

  • That would be fine and all, but the problem here is that tickets.com was using ticketmaster's content as part of their business strategy.

    (Good lord, I am defending ticketmaster. I think I am going to be sick!)

    I kinda recall people being up in arms last year, or two years ago, about one of the "Learn Perl in 21 Days" books that copied blatantly and heavily from the Perl FAQs, unattributed. That's because it's called plagiarism. Now, if the FAQ is under an open content license, you would probably be OK if you at least attributed the source. But to take someone else's work and use it as your own probably isn't OK.

    (Good lord, I am defending ticketmaster. I think I am going to be sick!)

    And yes, ticketmaster probably should use a more "secure" means of allowing access to those pages (the "lion at the gate" referred by others), but should they have to?

    (Good lord, I am defending ticketmaster. I think I am going to be sick!)

    The primary problem is that we are all suspect of ticketmaster's ability to "play nice" because they are a monopoly, and they are accustomed to their artificially created power over all tickets in the world. They don't like it when some upstart draws back the curtain and proves that there is nothing magical about what they are doing.

    I can't come up with an alternate example that would placate the slashdot masses, because very few businesses work like ticketmaster, but I have the sinking feeling that they might have a right to sue for what they are suing for. Now, deep linking as an academic point clearly can't be illegal. I think we will need a legal standard similar to the one in academia: unattributed copying is bad, m'kay?

  • Copying somebody else's material is copyright infringement, plain and simple. On the other hand, linking should be kept legal and mostly unrestricted. The only exception is that you can't deliberately mislead people into believing the link is your material, e.g. by loading another page in a frame on your site.
  • then program in an expiring tracking ID in the url. sid pid pud pig, whatever.
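    That expiring-ID idea can be sketched in a few lines: sign each deep URL with a server-side secret plus an expiry timestamp, and reject anything stale or forged. This is only an illustration; the secret, paths, and helper names are invented here, not taken from any real ticketing site.

    ```python
    # Sketch of an expiring, tamper-proof "tracking ID" in a URL.
    import hashlib
    import hmac
    import time

    SECRET = b"server-side-secret"  # hypothetical; never sent to the client

    def sign_url(path, ttl=3600, now=None):
        """Return path?exp=...&sig=... that is valid for ttl seconds."""
        exp = int((now if now is not None else time.time()) + ttl)
        msg = f"{path}|{exp}".encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{path}?exp={exp}&sig={sig}"

    def check_url(path, exp, sig, now=None):
        """True only if the signature matches and has not yet expired."""
        msg = f"{path}|{exp}".encode()
        good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        fresh = (now if now is not None else time.time()) < int(exp)
        return fresh and hmac.compare_digest(good, sig)
    ```

    The link still works for anyone while it is fresh, which is arguably the point: casual deep links go stale on their own, no lawyers required.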
  • All these people spending money on fucking lawyers when they could have just spent the money and built the capability into their web site to check the header. If it did not come from your site, reject it, send people to the front page, or redirect with _top to make sure that you kill any framing.

    Someone should write to the companies whose webmasters allowed lawyers to be used instead of a simple, basic technology solution. Of course, since it's not a standard IIS feature, it does not exist ;)
  • If they brought copyright issues up then google [google.com] would be up shit-creek without a paddle. Storing cached copies of pages that users can view? Suits you sir.....
  • Speaking of terms and conditions on sites...

    I work for a web hosting/design agency here in the UK, and a site I recently did some of the coding for has, in its terms and conditions, a section that specifically forbids the caching of pages by proxy servers.

    Now I very much doubt that they could make it stick in a court, but if nothing else, it shows you the sort of mentality some people coming to the web have these days (not to mention showing you the sort of people I have to deal with sometimes :-( )

    Cheers,

    Tim
  • This paragraph of the article seems to me somewhat like the software industry's claims of damages resulting from piracy. Given that certain people would never have purchased a product due to various factors, simply downloading a pirated version doesn't really cost them any money.

    Let's not forget those who *would have* bought the product, but don't have to because they have a free, pirated version.

    I understand that ads generate revenue and that ticketmaster would be upset by people bypassing ads. However, the "offending" deep linking still takes the user to a page containing a banner, and ticketmaster will still receive a service charge, so what are they complaining about? Perhaps they never would have made that sale had the user not gone through the site doing the deep linking. Just a thought...

    I'm sure Ticketmaster wants people to travel through *their* pages with *their* banner ads to get more impressions. Of course, these deep linking lawsuits are nothing but lousy alternatives to hiring a savvy web head to design these sites correctly so people can't deep link.


    George Lee

  • I didn't say it made sense. :) I simply said that's what Ticketmaster was complaining about.

    I agree, Ticketmaster does make money on the sales. But when have you ever known a demonstrably greedy corporation like Ticketmaster to not pursue their attempts to squeeze the last dime out of the revenue stream through any means necessary, up to and including suing the living blazes out of anyone that gets in their way?

  • Lest we want to start an all-out war here, I think you're both right and wrong.

    You're right - there is some amount of people that would not buy software because there is a free, pirated version available.

    The flip side, however, does exist - there are some amount of people that would not buy software even if they were unable to obtain a pirated version.

    When the software associations release their figures, they automatically assume that every pirated release constitutes a lost sale. You've demonstrated that in some cases, that is true. In others, however, it's not. That means that some subset of the figures that the piracy associations use are incorrect - at best, exaggerated, and at worst, purposefully misleading in order to make the problem seem worse than it is.

    In summary, while I don't think you're completely wrong (in fact, I think you're mostly right), I also think that saying the blanket equation of:

    monetary loss = (copies pirated) * (price)

    is not entirely accurate. Unauthorized software duplication is certainly something that shouldn't be done; on the other hand, saying that everyone who steals would've bought a copy is untrue as well.

  • It's not that Ticketmaster is bitching about people not buying from them. Ticketmaster is bitching because people using Ticket.com don't hit the pages with banner advertisements, and as a result Ticketmaster gets lowered revenues from click-throughs and impressions.

  • The widgets model doesn't work any more, because you are using their widgets through the deep link on your site, not your widgets.

    Well, do you think it is OK for me to stand in front of their store with a big placard saying "You can find widget XYZZY in the middle of aisle 15, left side, third shelf from the bottom"? After all the store wants their customers to wander around since they are more likely to buy something else...

    Kaa
  • Couldn't Ticketmaster.com generate pages dynamically, thus preventing any page from having a fixed location that could be linked to? This way, pages would be created each time, based on what the user clicks on the first set of pages.
  • I understand that ads generate revenue and that ticketmaster would be upset by people bypassing ads. However, the "offending" deep linking still takes the user to a page containing a banner, and ticketmaster will still receive a service charge, so what are they complaining about?

    This one is easy. If they make you travel a maze of 5 web pages to find the "deep link" you're looking for, they can place numerous ads on each page. Since they get paid by the number of people who see each ad, they want you to visit every page and see every ad 5 times so they can charge 5 times as much. The problem is, if you make it too hard for me to find what I'm looking for, I will stop looking. Also, if I don't click on the banner ad the first time I see it, what makes you think I will the 5th time?

    Quack
  • Linking directly to a zip is no problem, because if you wave your mouse over the link, you see the URL at the bottom. The real problem, I believe, is linking to another's frame, such that all the site's identification is lost. It's not linking that's the problem, it's direct usage (of frames, tables, images, what have you) that is the problem.
    I'm sure Ticketmaster would not have done squat if tickets.com had said "Buy these here" and linked to somewhere deep in their hierarchy. It's all about attempts at misrepresentation. Linking to a zip does not misrepresent like that, although apparently linking to warez/mp3s is illegal, or something.
  • Joe Developer should know better than to make a deal like this with his ISP because traffic on the web is fundamentally unpredictable. Blaming Taco is preposterous. This is a law of nature. A court can meddle with it, but only at the expense of creating strange social anomalies that would present a real danger to the economics of the web.

    The deep linking case is disgustingly easy to solve with technology. Simply configure the web server to allow access to the super-secret deep directory only when the Referer field comes from the same site. Apache probably already does this with one module or another. Compare this to the bevy of lawsuits and legal terrorism required to enforce this in court.

    The law should give people an incentive to protect their own business interests, because this is feasible, rather than to protect those of everyone else, which is not. Taco can't be expected to keep track of every wacky deal offered by every internet provider, and neither can you. The New York Times, on the other hand, can be expected to find a business model that doesn't require changing the nature of the web. Joe Developer should find an ISP that charges a flat rate. Caveat emptor.

  • Just because my financial institutions make it possible or me to conduct business via the web does NOT make it ok to deep link to my bank account information.

    If your bank publishes your financial information as a page on its web site without using some form of access control, I think it's time you changed financial institutions.

    Similarly, anyone providing fee-based content, who doesn't understand .htaccess DESERVES to get deep-linked.

    Speaking from experience (I designed a pay-per-porn site a few months ago), it's not rocket science; the first thing you realise is this: every piece of content _MUST_ be protected. I think it's pretty naive to say "Please do not bookmark this page, because once your subscription has expired, we can't stop you from viewing all our content" and expect people to actually do as you ask.

  • If a website does not want any outsiders linking to any page other than the main page, this CAN be prevented. The webserver knows the referring webpage and can therefore refuse any request from any source other than an internal page or a trusted site. All that is required is a little CGI work (if that even), and the problem is completely solved.

    Of course, instead of spending a few hours to properly configure a website, they'd rather make a legal issue out of it. Seems to be the trend these days.

    -Restil
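    For the record, the "little CGI work" really is little. A CGI script sees the Referer header as the HTTP_REFERER environment variable; here is a minimal sketch in Python with a hypothetical trusted-host list:

    ```python
    # Sketch of a referer gate for a CGI-era script. The host names are
    # illustrative. Note: many browsers and proxies send no Referer at
    # all, so an empty header is treated as acceptable here.
    import os

    TRUSTED_HOSTS = ("www.example.com", "example.com")  # hypothetical

    def entry_point(environ=None):
        """Return 'serve' for internal/trusted referers, else 'redirect'."""
        env = os.environ if environ is None else environ
        referer = env.get("HTTP_REFERER", "")
        if not referer:
            # Be permissive when the header is absent, or you will turn
            # away legitimate visitors with privacy-minded browsers.
            return "serve"
        host = referer.split("/")[2] if referer.count("/") >= 2 else ""
        return "serve" if host in TRUSTED_HOSTS else "redirect"
    ```

    A 'redirect' result would send the visitor to the front page (or break out of any frameset), which is exactly the remedy the comment describes.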
  • The widgets model doesn't work any more, because you are using their widgets through the deep link on your site, not your widgets.
  • First off, this happened years ago, but the consequences are still with us. We are even a new company and the rules still apply.

    I believe the story went that the person who took the information was actually a vendor working on another project. There was an NDA for the information the person was working on, but not for the document he read. Although the laws may have changed since then, I believe you are responsible for keeping proprietary information locked up; otherwise you risk having to give the cleaning staff an NDA.

    Steven Rostedt
  • A good example against this is internal corporate information.

    I agree with the first poster. If you put it on the web then it is like posting it on the outside of your building. The Internet is a public forum, and all information (like it or not) on the Internet is public. If you want security, then use ssh and other secure utilities. Rules against deep linking are not sufficient to secure documents. If you need an internal way to communicate in your company, then set up an internal intranet and hide it behind a firewall. This is what we do at our company, as do other companies.

    We were nailed in court: even documents left out on a desk are open for other employees to use if they leave the company. Someone actually read proprietary documents that they were not responsible for, and when they left the company they used the information they had gathered. When this was taken to court, the judge ruled that the documents were not secured and thus the person was free to look at them. Now we have to lock all proprietary documents up when they are not in use, or it is a security violation. It is different in court if someone breaks into a desk and reads documents than if someone just reads the documents on top of your desk.

    So I may contradict myself a little here. I believe that if you don't take any measures to secure your web pages, then they are free to be linked to by others. If you take "reasonable" steps (now that term could take lots of explaining itself) then those that try to link to the secured pages (via cgi or what not) are in violation.

    Steven Rostedt
  • He puts up a banner ad, this would pay for the ISP bill and he might even make a little money. In the end it's much better to be linked to. Even if you don't make any money, your ideas are being disseminated.

    This article is regarding DEEP linking, not simple linking. It's not really about the number of hits, but about bypassing any crap that might be on the frontpage, etc.

    Chris Hagar

  • You honestly thought I was flaming or trying to discredit you? If I gave you that impression, I apologize. I'm not discrediting you, I'm disagreeing with you and giving you an idea why. If you go quote McLuhan, however, I might not be able to maintain my composure for long ;)

    I'm not saying Tim is not disassociated from the web, but rather that the W3C followed Netscape and the other folks into a designed-web environment, rather than lead them to it.

    As such, the importance of the W3C has been diminished to a degree, but that's not really the issue for me. It seems to me no more relevant to ask Tim his opinion on the copyright and liability issues of deep linking than to ask the designer of the first skyscrapers his opinions on NYC zoning laws: certainly he has some, but that's not his field of authority or expertise.
  • At my place of work, we have a site that is prone to having other sites deep link to its content. We have two workarounds for this.

    1) A server-side script checks the referring URL against the real domain name. If the request is from www.domain.com, show the page; if not, redirect to the site home page.

    2) Javascript: add a "jump-out" of frames option. This is a way to remove the frameset and present your info in 100% of the browser. We have found that people were more apt to drop the "other guys'" frameset and stick with our content 100%.

    Some might view deep linking as a problem, but there are many ways to workaround the actions of others on the internet.

  • Isn't this the same company who fired a whole bunch of people for swapping emails?
  • While this may go against the grain of the general consensus that deep linking is something that should be fundamentally allowed, there is now software that provides the ability to control the entry points into your web site, as well as secure the entire site (AppShield from www.perfectotech.com). Legally, I don't believe there should be any controls in place that dictate what you do on the web, but if sites want to come up with a technical means to control who accesses their site and how, then more power to them. The point I am trying to make here is that a web site is not public property, that your access to that site is not a divine right, and that regardless of how one feels about the spirit of the web, companies do not and will not provide information to the public unless it serves their best interest. Otherwise why would they? Give sites the power to control things like deep linking, but let's not make laws about it.
  • Well, you can pursue both. There is a way around each method.
    A way around each method, for tickets.com, or for the user? If 40% of their customers can't use their ticket sales service because they are using a browser that honors the HTTP_REFERER field, then they need a different transaction model.

    Huh?? Why would Ticketmaster want to stop anyone from buying with them.
    Ticketmaster wants people to buy tickets from www.ticketmaster.com. If people buy tickets from www.tickets.com, their 'brand loyalty' goes to tickets.com, which is fine for NOW while tickets.com is using ticketmaster, but if tickets.com becomes successful, one of the things they would be able to do is drop ticketmaster. Basically, what it amounts to is that they are using ticketmaster's infrastructure (and giving ticketmaster a bit of money for it) without any sort of permission at all.

    There's also the more straightforward issue of lost ad revenue.
  • I don't understand why they feel it is more convenient to pursue a legal solution to this than a technical one. In the case in question, tickets.com is making forms whose SUBMIT buttons send the data to Ticketmaster to be processed. Ticketmaster is already doing CGI, so why is it more than a 2-minute hack to disallow purchases from people whose "referer" variable is set to Tickets.com (or even to anything other than ticketmaster.com)?
  • by Anonymous Coward
    This brought to mind a post I saw a few years ago from someone who ran a web site providing information about foreign adoptions. They were upset because a pro-pederasty web site was linking to their page. Are there no instances in which links should be forbidden? I can imagine the adoption agency would be quite upset to find their name coming up in a web search describing how to get children to abuse.
  • Deep linking (and any sort of linking) is not illegal in and of itself. On the other hand, just because it is a link does not protect you from other laws, such as "passing off", in this case, where one company pretends to be tightly connected with another. Similarly, the fact that it's a link should fail to protect you in cases of libel, fraud, and other informational crimes.

    Linking should be free, but that is not a defense against doing things that should not be free, and that's what I see the real issue here as. Finding someone liable for passing off via a link won't have a chilling effect on links, it will just have a chilling effect on passing off, which is way too common on the web.

    ----
  • Well, I'm no guru when it comes to this stuff, but in my experience, the HTTP-Referer field isn't always filled.

    That's the case with older browsers. However, if you were Tickets.com would you be willing to have 90% of everyone who clicks the buy link on YOUR site get a simple black on gray page that says "We won't sell it to you because you're a flatulating butthead"? Or perhaps a page that explains that you can't actually serve the customer, but we can! followed by a link to the home page.

    I strongly suspect that that would put a stop to it rather quickly!

  • A frame is simply a special case of a link. So, to make some extra cash, I just write:

    <HTML>
    <BODY>
    <H1>CaseyB's Amazing Web Links Database</H1>
    <!--miscellaneous banner ads, etc.-->
    <IFRAME SRC="http://www.yahoo.com/"></IFRAME>
    </BODY>
    </HTML>

    With some clever Javascript, I could probably even size and scroll the frame so that Yahoo's ads never appear on the screen.

    Now I just advertise this to unsuspecting web users, and make some cash.

    Note that this doesn't involve my copying Yahoo's data, nor even accessing their servers myself. Yet I'm making a profit off of their work.

  • by CaseyB ( 1105 )
    junk micros~1 tags. boooooooo

    I'll do that when the anemic little Netscape/Mozilla browser provides such an obvious, useful functionality.

  • I always hated people who link directly to the images on my site avoiding all the marketing crap and sucking up my bandwidth. I wonder if there will be a court case where a company gets sued for breaking links from the outside world.

    I implemented a little file name rotator for the images on the website I manage. Every 8 hours or so, the app renames all the images and updates the URLs on the site to match. In place of the old filenames, I put a message pointing users to the site where they can get the file (and all the marketing shit). This effectively broke all the links.

    Now I'm just waiting for some idiot to sue my company over this...

    -jack jsuzuki@ix.netcom.SPAMSUCKS.com [mailto]
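    The rotator described above can be sketched in a handful of lines. Everything here (directory layout, naming scheme, the simple string replace) is hypothetical, not the poster's actual app:

    ```python
    # Sketch of a periodic image-name rotator: rename every image with a
    # fresh random suffix and patch references in the HTML, so stale
    # external hotlinks break while the site's own pages keep working.
    import os
    import secrets

    def rotate_images(img_dir, html_files):
        """Rename each image in img_dir and patch the given HTML files."""
        mapping = {}
        for name in sorted(os.listdir(img_dir)):
            base, ext = os.path.splitext(name)
            # Keep only the original stem so names don't grow each cycle.
            new = f"{base.split('-')[0]}-{secrets.token_hex(4)}{ext}"
            os.rename(os.path.join(img_dir, name), os.path.join(img_dir, new))
            mapping[name] = new
        for page in html_files:
            with open(page) as f:
                text = f.read()
            for old, new in mapping.items():
                text = text.replace(old, new)
            with open(page, "w") as f:
                f.write(text)
        return mapping
    ```

    Run from cron every 8 hours, this reproduces the effect described; a real deployment would want atomic renames and smarter HTML rewriting than a plain string replace.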

  • is that "deep linking" or linking in general is
    no different than a footnote in a book, an
    entry in a bibiliography, or just a conversation
    between two people where one supplies the
    source of a piece of information to another.

    Indicating the source of a piece of information
    is in no way the same thing as supplying that
    information. DUH. Therefore there can be no
    patent violations, threats for linking to
    dangerous/controversial (to some people)
    information, etc...

    It doesn't do this issue justice to say that
    it shouldn't be allowed just because it is
    anti- the purpose of the web.
  • One of the sites I regularly visit, the Kevin and Kell [kevinandkell.com] online comic strip, asks people not to link or inline directly to the comic strip--they've had some trouble with people doing that in the past. I had some concern that this linking decision threw open the door to people to do just that--but from the NYTimes article, I see that it's still open to debate (and legal action).
  • I've had at least one site since 'the early days', and I recall how indexing bots were a real problem long before the web (e.g. gopher, FTP, etc.). Back then we were more worried about server load and bandwidth than content (which was presumed to be open and free).

    robots.txt has some very real problems. One is that the file must be placed at the root directory of a site (per the original spec), and this is not compatible with some hosting services. Another is that it is a one-stop 'shopping list' of targets for the less-than-scrupulous. And of course, as everyone knows, compliance is voluntary.

    At the very least, allowing robots.txt as a per-directory access restriction would make far more sense today: it would be a little more flexible, and would not provide a shopping list. (It was not adopted in the original because it was more bandwidth intensive.)

    However, we really need a more flexible plan from the ground up to deal with the needs presented here today. At the very least, it would help the 'cooperating' bot owner to better understand the wishes of the site owner. Today, I suspect that most sites that care about bots at all would allow indexing of some content but not all, and would like to specify access based on use.

    The compliance issue, alas, is unlikely to be resolved anytime soon. It's up there with Direct Marketing dinnertime phone calls and spam.

    __________
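    On the compliance side, a cooperating bot needs almost no code to honor robots.txt; Python's standard urllib.robotparser does the parsing. The rules string below is just an example:

    ```python
    # Sketch of a polite crawler's robots.txt check using the standard
    # library. A real bot would fetch the live file via set_url()/read()
    # instead of parsing a hard-coded string.
    from urllib.robotparser import RobotFileParser

    EXAMPLE_RULES = """\
    User-agent: *
    Disallow: /private/
    """

    rp = RobotFileParser()
    rp.parse(EXAMPLE_RULES.splitlines())

    def may_fetch(agent, url):
        """True if the parsed robots.txt permits this user-agent to fetch url."""
        return rp.can_fetch(agent, url)
    ```

    Compliance is still voluntary, as the comment says, but the cost of cooperating is this small.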

  • This story makes me wonder about how big companies interact with their legal department.

    Here we have a case where there was a cheap, more or less foolproof, technical remedy that could have been implemented in well under one man-day, most likely. Yet instead they go for the legal solution.

    This makes me wonder who's calling the shots. Is the "problem" that suits talk to their lawyers and not to their techs? Or is it that lawyers "sell" themselves to these clients:

    "Hey, Mr. CEO, I noticed Company X is deep-linking to our site. As your counsel, it is my responsibility to inform you that by doing so they're engaging in blah blah blah, and we should sue the pants off them." Is anyone out there in a position at work where they deal with corporate lawyers? Is it really the companies that "sic their lawyers on them", or do lawyers "sell themselves" to their clients by painting pictures of legal doom-and-gloom if they don't sue?

    Obviously in this case the problem could easily have been solved without an expensive lawsuit - yet we see no technical solution, and an expensive lawsuit.

    Any anecdotes would be appreciated.

    Don't forget to post as AC

  • One of the criteria the judge specified is that deep links must not mislead the user to the point where they don't understand what site they are at. I think that directly linking to a ZIP like that crosses the line.
    --
  • mod_rewrite is your friend...

    RewriteEngine on
    RewriteLog logs/rewrite_log
    RewriteLogLevel 0

    RewriteCond %{HTTP_REFERER} !^http://somehost\.com/
    RewriteRule .* - [F]

    This should reject any access not referred from somehost.com. Of course, this is off the top of my head, so I might have totally blown it.

    Woogie
  • Well, anyone from Slashdot is prejudiced in this case, really, with or without CmdrTaco's comment.

    Think about it. Without deep linking, Slashdot wouldn't exist, now would it? :)

    FWIW, I agree with CmdrTaco: the Web is primarily a broadcast medium. One of the advantages of the hypertext nature of the WWW is that hyperlinks, indexing, and the like make it easier for users to find information.

    The WWW was never designed to be a medium where you only go in to Web sites through the "front door" so-to-speak. The fact that I can read something on a topic, click on link to jump to a related page, and then keep following links to find the information I want is the whole idea: this is POWER. And if it weren't for this kind of power, I think a good number of folks who use the Web for serious research wouldn't be using it, because it wouldn't be practical.

    Take away linking and you take away power. Take away power and the users will follow. People will quit using the Web, and the minor tech stock correction you saw in the Nasdaq recently will seem like nothing.

  • Path of least resistance...

    It's much easier (and cheaper) to get a lawyer to do a PHB's dirty work than it is to find a competent geek who will 1) understand what the PHB is saying and 2) agree to do it.

    Good, competent, amoral technical people who can communicate with non-geeks are hard to find and expensive.

    Pay any old lawyer enough money and he will litigate for you until doomsday, regardless of the cause, and no technical expertise needed.
  • If Ticketmaster wanted to prevent deep linking, why didn't they just check the Referer: header instead of calling in the lawyers?

    Of course they'd have to allow requests without a referrer to get through, and one can fake a referrer, but it would stop almost all deep links from a rival site, for people using a regular browser. They could even have redirected them transparently back to their home page!
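    A check like this can be sketched in a few lines of Python (host names are hypothetical; a real site would do this in the web server itself, and as noted above, a determined rival can always fake the header):

```python
from urllib.parse import urlparse

SITE_HOST = "ticketmaster.com"  # hypothetical: the site's own domain

def referer_allowed(referer):
    """Serve the request only if the Referer is absent (many browsers
    and proxies strip it) or points back at our own site."""
    if not referer:
        return True
    host = urlparse(referer).hostname or ""
    return host == SITE_HOST or host.endswith("." + SITE_HOST)
```

    A request arriving from a rival's deep link fails the check and could then be redirected to the home page, exactly as suggested.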
  • .. it's about unfair business practices, one of which is deep linking into Ticketmaster as part of tickets.com's business process.

    The fact that a company has an unsustainable business model does not impose any obligations on anybody to change the law so that the model becomes sustainable.

    Sure, tickermaster would like everybody to go through the front page where they can be exposed to ads. So what? Ticketmaster would also like to become the sole legal source for all tickets to anywhere.

    If somebody is selling widgets for $1.00, it's perfectly legal and moral for me to open a store next to him and start selling the same widgets for $0.90. Of course this will make the guy upset, but it's not a good reason to forbid me to sell widgets.

    Kaa
  • I got two comments:

    1. Could the people posting the stories please put their comments with everybody else's instead of with the story? We don't all have the same opinion as you, and you are biasing the story.
    2. Why don't people just play nice? If you are going to link to a page and you are not sure whether the people want you to, ask them out of politeness. We shouldn't be making a law about this; just because there is no law against something doesn't mean it is right. Why does our society feel obligated to determine what is o.k. to do and what is not by making a law for everything? Simply play nice with each other. If they don't want you to link to their site, don't, out of decency - not because there is some law against it.
  • Let's say he practices what he preaches (see quote above) and says "You can't tell me not to link it!" and leaves the link up. Joe Developer gets 500,000 hits in one day and goes over his allotted bandwidth by 20GB. At $5 per 10MB over his allotment, he now owes his hosting company over $10,000 in overage fees, and Joe Developer removes his site and goes bankrupt due to the "fundamental point of the web."

    The fairly obvious solution is for 'Joe' to simply remove his page; maybe replace it with a message saying 'please return next month'. That should reduce the number of bytes transferred.

    If you don't want people to read your pages, the simple solution is to not put them on the web! Surely that is the point of the web.

  • ...deep linking DoubleClick ads? [slashdot.org]

    Practice what you preach /.^H^HAnd^H^H^HVALinux.

    --
  • Indeed, it can fairly be said that Judge Hupp left the door open for a link-averse Web operator to ban linking via a contract that a Web surfer is forced to agree to before being allowed to enter a site. He implied that those who deep link in violation of this conspicuous and assented to "agreement" would have a potential breach of contract problem on their hands.

    How would such a ruling affect "click-through" licenses? I think this is something to watch.

    The second point is the nature of the net. Everything is public. If I write a story and placard it on a billboard next to a busy section of I-40, can I stop the newspaper from printing a picture of it? I see attempts to stop any sort of linking in much the same light. How much control do I retain over works I publish in such an uncontrollable area?

  • I don't think it would be wrong for Taco to keep the link active. Joe could temporarily take down his server, remove the stuff getting slashdotted, or write a CGI that sends a small error message and apology to 99/100 requests that have Slashdot as the referrer.

    And if Joe is out of the country on vacation and only has web and mail access, what then? What if Joe is in the hospital and has no clue about the slashdotting he got until his webhost sends him the $5,000 bill? What if Joe had a Mexican lunch and is on the can for the first hour of slashdotting - he's already OVER his bandwidth limit before he can even START to code a CGI solution to his problem. What if his project has NOTHING to do with the web at all, it's simply good "news for nerds" and anything CGI is Greek to him ... then he has to find someone to stop the slashdotting for him, which will cost both time and money.

    Not everyone is a perl guru, that doesn't mean they don't have the right not to endure a slashdotting against their will.

    A lot of people say if you don't want people showing up, don't put your stuff on the web ... well I put my house on a public street, does that mean I want 500,000 geeks driving by one day to take a look? Hell no, and it's my right to do what I can to stop that since geeks usually leave a trail of Jolt Cola cans and Slim Jims behind them. heh

    Take that to the next level and put those 500,000 geeks in my front yard; then you'll have what a slashdotting is like. If I put up a no-trespassing sign, all 500,000 of you would be breaking the law regardless of how public the street (network) I live on is. So yes, the web is public, but that does not make every single page a public place where the person who owns the page (pays for the hosting, resources, etc.) has no right to control how it's accessed.

    To put it in simpler terms ... if I'm a site admin that does not want a certain amount of network traffic, and you continue to send that network traffic to my site after I ask you not to ... that my friends, is a denial of service attack. What's next, script kiddies saying they have a right to smurf you because your server is on the "public internet" and the whole point of the internet is to share data?!
  • He puts up a banner ad; this would pay for the ISP bill, and he might even make a little money.

    So now to be able to utilize the web as a publisher and still control how your content is accessed, you have to become a commercial entity? I don't buy this argument one bit.

    What if Joe's site is about why internet advertising is the downfall of the internet itself? Wouldn't quite work then now would it?

    My point is, the only simple answer is to respect those who wish to have their content not linked to, all other "solutions" avoid the overall issue here which is regardless of the "open" nature of the web or the "public" aspect of the internet, a publisher of content deserves to not be trampled over by millions if he doesn't want to be.

    And if you read the original post, you'd see I wasn't addressing DEEP linking, I was addressing Taco's statement that he feels you shouldn't be able to stop people from linking to any pages, not just DEEP linking.
  • This probably will sound hopelessly naive and uninformed to people whose solution to every problem is to sue... But if you don't want people deep-linking to your website, why not use technical means to keep them out? Check the Referer header, or only let them in if they have cookies that were set at the top level of your site... I guess by suing, you don't have to worry about implementing the above methods and then having people get around them. But I figure that if you don't want people getting to something, password-protect it. Of course, in the curious world of advertising, you can want people to see something, but only if you have control over it...
  • The way it is put here, does that mean they'll start fighting the use of Junkbuster in the future too? I mean, the result is the same as deep linking: you avoid the ads!

    What I think of this: Like others have stated before me, HTTP_REFERER isn't just there for the cat to play with! (not that that could help them with junkbuster... :)

    Thimo
    --
  • You (CmdrTaco) prejudice the question by comparing "please don't spider this page" with "you may not link this page".

    How about "you may not spider this page" or "please do not link this page"?

    I don't think there's any difference between linking and spidering (i.e. indexing). If you make something public by publishing it, other people have a right to refer to it, whether in a web page or an index. Of course you can ask them not to, but that goes for both cases.

  • A simple check of the CGI variable "HTTP_REFERER", or its equivalent in your development package, will control "deep links". If you want to annoy people with your front NASCAR-style page, you can make that come up automatically no matter where someone tries to enter your site.

    Other option, mentioned several times above: have content in the middle of each page and common borders around it.
    <rant>
    This is one of the most annoying non-issues on the web.
    </rant>

  • If we were still using Tim Berners-Lee's web, we would still be clicking in an environment where content decided appearance rather than the author.

    If you recall, the original incarnation of the web called for the tags to say what text was, not how it should be displayed. The idea was that a tag would define what the content was - a quotation, a mathematical formula, a definition, words to be emphasized, etc - and that a browser written for a college student might display this content quite differently than a web browser written for a grandmother or a scientist or a lawyer, etc.

    Now, in the days where authors fight tooth and nail to get their pages to look the same in Netscape and Internet Explorer, the anti-design contingent has lost - for good or ill.

    So given that we've moved away from the idea of a web where commerce wasn't kosher and design didn't exist, should we really look to the original design spec to address an issue that goes beyond the original scope?

    Another interesting thing to consider was Nelson's Xanadu. In the Xanadu incarnation of internetworked hypertext, "deep linking" was part of the design - the idea is that text simply would not be repeated. If the Associated Press issued a news story and 100 sites quoted from it, in the Xanadu incarnation, that AP quote would be hard linked by design.

    Maybe Nelson's views on copyright and linking might be more relevant than Tim Berners-Lee's. I'm not familiar with them, myself, maybe I should go do some research...
  • I'm not too sure what the decision exactly means, but I believe that deep-linking a PAGE should always be allowed. Maybe frame tricks and such should be prohibited, though. However, it doesn't seem acceptable to me to allow people to deep link other content such as images, video clips, sounds, etc., even if they mention the deep linking. Obviously, they would only steal resources without any gain for the deep-linked host.
  • That is the distinction between 'linking' and 'deep linking'.

    Basically deep linking allows me to pass off other people's content as my own. I can do it by using their graphics from my own <img> tags, or by linking to files directly, or perhaps even by including one of their HTML pages in my frameset.

    My opinion on this is that it's normally okay to do this. If you don't want it to happen, there are plenty of technical ways to avoid it, and it would be appropriate to employ them. (If I feel that you should only be able to go to here [ducker.org] the long way, well, it's my server and I have every right to decide which requests I honor. It is CERTAINLY not my responsibility to maintain my site such that without-permission deep links to it continue to work.)
  • The only really troubling (to me) point in the analysis is that a site's terms and conditions can prohibit deep linking. The article suggested that sites might then require you to agree explicitly to those terms as a condition of letting you use the sites. Those terms would then become part of an enforceable contract.

    The first troubling thing is how much more cluttered web browsing would become if sites got serious about that. It's tedious enough with a 56k modem; they don't need to make it worse by making you download extra Javascript and legalese before letting you use the site. I don't think anyone wants to see it become harder to get good stuff out of the net.

    The second troubling thing, though, is the attitude that some /. readers seem to have, which is that these restrictions don't bind us if they annoy us, or frustrate us, or make no sense from any perspective we can see. You don't have to justify a contract in terms of public policy or the common good.

    Site publishers have some information you want. They don't owe it to you. In an ostensibly-free society, they are entitled to decide under what conditions they're willing to share what they've created. You, in turn are free to decide to accept the conditions and access the information, or reject them and do without.

    This works both ways: no one needs to convince Mattel that the GPL attached to cp4break (I think that's the software I mean) is a socially beneficial way to distribute software--they're stuck with it. But a site doesn't have to justify a contractual prohibition on deep linking. If you accept it by visiting the site, you're stuck with it.

  • Joe should get a banner ad.

    or at the very least a non-gouge web server.
  • Who is liable according to US law when a machine "commits" a crime? I don't know this, and it would be interesting (and useful!) to know. =)

    I'd like to think that liability is based at least remotely on responsibility... But, some interesting issues come up when assigning responsibility for supposed illegal links and the "unauthorized derivative works created through framing", a case which is very similar.

    I'll examine the frameset problem a bit further first (there are more comments that are specific to the linking problem below). Your PC, created by company A with hardware designed and manufactured by companies B, C, D, and E, is running instructions of a web browser made by company F, which is acquiring a frameset-containing-webpage written by person G from a server owned by H through networks owned by I, J, and K, all at your direction. The browser then assembles all the relevant data, passes it through to the display architecture of an operating system designed by L and distributed by M, to a monitor built by company N. I could add much more detail here, but this should suffice to demonstrate that there are a lot of people and objects involved.

    Now, by general consensus, it is equally correct to speak at any level of abstraction when describing an event. That is, it is equally true to say, when picking up a penny with my hand, that "I am picking up the penny", or "My hand is picking up the penny", or "The force of friction produced on the penny by the atoms of my hand is picking up the penny".

    So, in the case of the framing problem, the creator of the web page is producing the derivative work, your web browser is producing the derivative work, your computer is producing the derivative work, your monitor is producing the derivative work, you are producing the derivative work, etc. Who is responsible? All of the above? Some of the above? None of the above? Those on the highest level of abstraction? Those on the lowest? I believe that the assignment of responsibility in such cases is somewhat arbitrary.

    So while the issue of who is responsible for creating a link is similarly hard to deal with, there is another issue that's more specific to unethical linking: If you ask those complaining about the links what part of the source code they are against, they'll probably tell you that it's not specifically the A tag or the URL, but both together in the order that they are in. So if you remove the A tag, for example, they will stop complaining, because the link that was formerly present is no longer present when the page is displayed in a normal web browser. That is to say, in the normal interpretation of the HTML code, a link is no longer present.

    However, suppose someone writes a browser that automatically interprets text fragments starting with "http://" as URLs and presents them as hyperlinks. In other words, in this new interpretation of HTML code, any URL is a link. Does that mean that the forbidden URLs are now illegal links and need to be removed (and, by the same token, if I write a browser which presents all occurrences of the letter E as some forbidden link, must the letter E be banned from the web)? Or, will the URLs be considered legal as long as the current W3C HTML standard or some other "normal" legal standard doesn't interpret them as links (leaving people free to write software which does)?

    Gödel comes to mind for some reason. Hmm... (Note: If you haven't read _Gödel, Escher, Bach: an Eternal Golden Braid_, please buy / rent / borrow / steal a copy if you don't have one, and read it RIGHT NOW!)

    Anyway, that's enough food for thought for one message. =)

    Just my $0.02 [a conservative estimate given the amount of time that I blathered on for...].

    -rak
  • I heard Ticketmaster just applied for a patent on their "25 click" technology...

    (Rimshot)


    ...5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.
  • I don't think copying of the page is what's at question here. What this is saying is: "We want to be the sole distribution point of this information." Is this legitimate?

    I would argue that it is. Case in point: I work for a standards institution. We develop many of the standards that are used to develop products that you use every day. The development of a standard is not a trivial process, and is a committee affair which lasts some time. The actual proceedings of the committee leading up to the release of the standard are destroyed, so as to limit liability.

    Once this standard is released, our organization controls the delivery of these standards. We don't allow people to link to these, since you may only view a standard in its entirety. Why do we do this?

    Again, more liability. A standard MUST be distributed with ALL relevant information. If we were to distribute part of a standard on high-strength aircraft bolts, but neglected to ensure that the reader of the standard read the section on post-heat-treatment, and the bolt design failed in use, then we, as the standards body, are liable. Standards bodies are usually not-for-profit agencies, so we can't afford lawsuits.

    Additionally, these standards are copyrighted. Our members purchase the right to view any or all of our standards. It costs LOTS of money to develop a standard. Do we, as an organization, have the right to tell our members, "This information may not be distributed to any other person or organization. Only the purchasing member has rights to it"? I think so. It's like game piracy. Do I, just because I bought some game, have the right to make copies of the game to pass out to all my friends? No. Can I make copies for myself? Sure. But distribution is illegal, and I think that this point is the substance of the argument.

    Do I believe linking is bad? Hell no. The web would not exist without it. Do I believe web sites have the right to demand that you go through them to get the information which THEY cataloged, which THEY invested in, which THEY made available as a service to those who subscribe to their services? Wholeheartedly.

    As an example, should it be legal for me, as a subscriber to some pr0n service, to start my own pr0n site which merely links to the site I belong to? No. How about this: I'm a student at a university. I have a T1 line into my room as a perk of living on-campus. Can I resell that service to others? Not likely. Should I be allowed? You decide.

  • ... it's about unfair business practices, one of which is deep linking into Ticketmaster as part of tickets.com's business process.
  • would destroy the web as a researcher's medium. We'd all be relegated to using pay-per-use research outlets like Lexis-Nexis.

    tcd004
    LostBrain [lostbrain.com]

  • CmdrTaco should have said 'UPDATE:' before the stuff about the Japanese article. That being said, that article disturbed me greatly. One line in particular.

    The personal opinion of this journalist is that the judge has made an extremely appropriate decision

    The reporter calls himself a journalist and then goes on to editorialize. I truly hope that this is not (though I expect it is) a normal part of Japanese journalism. It is completely inappropriate and exceptionally unprofessional to act as though you are reporting the facts and then proffer a very specific opinion on those facts while calling yourself a journalist.

    Of course, the journalist is wrong. Which is perfectly appropriate for me to point out since I am not purporting to report news events but rather candidly giving my opinion on a message board.

    When someone places a link on their web page to another webpage (or any other Internet content outside of their own site), that person has no way of knowing or controlling what exactly it is they are linking to. You may say that they should know because they have placed the link on the page, but at the very instant in which a person places a link on their own page, the content to which the link connects to can change. The change can be (and usually is) without any knowledge on the part of the linker.

    In this particular case, the content linked to had NOT been deemed illegal by court proceedings at the time the link was made, but only after.

    The logical conclusion this Japanese court would want us to believe is that whenever you place an HTML link onto your webpage you are responsible ever afterwards for what it links to whether or not you know it has been changed or deemed illegal. This is absurd.

    Specific to the claim that "the defendant undeniably increased the number of ways of accessing pornographic sites" and has thus run afoul of the law: the defendant has not increased the number of ways of accessing the offending material at all. There is one way to access it (so far as we know), the one URL which links to it. Publishing of the URL does NOT increase the number of ways to access. That remains but one.
  • You (CmdrTaco) prejudice the question by comparing "please don't spider this page" with "you may not link this page".
    Um, actually, no he doesn't. The difference is that Ticketmaster was trying to enforce the former, and the robots.txt "standard" is a convention that is followed by spidering programs. It's perfectly possible to write a spidering program that does whatever the hell it wants regardless of what the robots.txt file says.

    Anyway, what's the big effing deal? Why didn't Ticketmaster just configure their http server to redirect all traffic with a referer [sic] header of *.tickets.com so that instead of seeing http://www.ticketmaster.com/some/really/really/deep/link.html, it would send you to http://www.ticketmaster.com/ ? That's a much better solution than litigious legal battles. Silly corporations, dirty tricks are for kids!

    --
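    The redirect-by-referrer idea described here might look something like this in Apache's mod_rewrite (a sketch only; the pattern and host names are illustrative, and requests without a Referer pass through untouched):

```
RewriteEngine on
# Anything arriving from a tickets.com page gets sent to the front
# door instead of the deep page it asked for.
RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?tickets\.com/ [NC]
RewriteRule .* http://www.ticketmaster.com/ [R,L]
```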

  • I think that what he was referring to didn't have anything to do with copyrights.
    Picture this: You run a server at foo.com. That guy over at bar.net has an image you want to use, and it's on one of his servers. So what you do, instead of copying it to your server and linking via <img src="http://images.foo.com/graphics/picture.jpg">, you link it with <img src="http://www.bar.net/graphics/images/picture.jpg">. Of course, this means that you don't use your own bandwidth to serve it up, but the other guy's bandwidth instead.
    Side note: I heard a story about a webmaster who was moving around some directories on her server, and started getting 404s in her logs. She discovered that some other site was linking images off of her server in-line on their pages. The graphics were just bullet buttons, but it still pissed her off. She wound up creating some new graphics with the same names as the old ones, and in the same location, only the new ones contained text such as "We are lame" and "We are such losers" and stuff like that. :)

    --

  • Don't most browsers have a preference setting that lets you disable sending a referrer? (Just asking, since mine does.)
    I don't know about "most" browsers. But "most" of the general population would never have the idea of disabling it occur to them. Unless tickets.com published instructions stating "If the ticketmaster.com home page appears instead of the concert ticket page, hit 'Back' in your browser, then go to Edit|Preferences..." :) Yeah, I can definitely see that happening. ;)
    You can never, ever trust a client. All clients should be considered hostile.
    Witness the EverQuest (or whatever it is) fiasco.

    --

  • It's a lead-in. Just like headlines, they are meant to generate further interest.

    The case involves both issues.

  • Well, you can pursue both. There is a way around each method.

    Huh?? Why would Ticketmaster want to stop anyone from buying from them?

  • Some spiders are not at odds with copyright.

    Some spiders will make an analysis of a page (and maybe generate a derivative work). Some will make copies.

    A search engine (or the CyberPatrol spider) may read the page, checking keywords, and building an index or value table of sorts.

    Other spiders will just copy. I had my resume on the web, even though it was copyrighted and contained:

    Note to recruiters: Do not send requests for more information! I would be interested in valid, open job requests. That means that you may send me information about an actual job opening to see if I would be interested in that job.

    Any general recruiting requests will be treated as SPAM! It is not welcome. This is not an invitation for resume or job bank or any other services.

    A recruiting company put it in a database to which they sell access. Then I started getting spam from that company. Their spider made copies and stored them in a database.

    I suspect that a legal delineation will be made. The type of spider that builds an index or does analysis will be allowed, but the type that just makes copies will be tightly restricted.

  • You are forgetting something.

    CyberPatrol does not link to blocked sites! CyberPatrol checks your site against the list to see if it's been rated as bad.

    If your site is on the CyberNot list and it should not be, then you should file a lawsuit against Mattel. Having thousands of such lawsuits may give them the same feeling that they give people when they file their abusive lawsuits. But this would be legitimate.

  • It's true that Ticketmaster will not make as much money, not getting hits on the pages with banner ads. But...

    Ticketmaster makes money from selling tickets. They may not make as much money, but if they lose the sale, they lose more money.

    What they can do is, on the response from the sale, redirect the buyer to the Ticketmaster homepage and tell them they are better off using Ticketmaster.com, not tickets.com. Or something like that.

  • There is a difference between making a copy on your local hard drive and building a database to which access is sold.

    When you view a page, it is usually copied into a cache, but you don't sell copies of the cache.

  • I don't think that asking people not to deep link will lead to the death of the web. Why are people offended by being prevented from deep linking, but not offended by sites that require registration or subscriptions? Look at what's happening at Slate and The Street and let the consumer decide.

    That said, if you want to make your site navigable but unlinkable, why not:

    Determine which pages can be linked to from outside. Call this class O.

    Determine which pages can only be linked to from inside. Call this class I.

    Have every page set the cookie to its class: I or O.

    If you are asked for an I page, don't return it unless the cookie says the person came from an I page. Otherwise redirect to the homepage.
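    A minimal sketch of this scheme in Python (paths and classes are hypothetical). One tweak: as stated above, an I page would require a cookie from another I page, which would make the I section unreachable from the O entry pages, so this sketch admits any on-site cookie:

```python
# Hypothetical site map: class O pages may be entered from outside,
# class I pages only from inside the site.
PAGE_CLASS = {"/": "O", "/events": "O", "/events/12345": "I"}

def serve(path, cookie):
    """Return (path actually served, new cookie value).
    `cookie` is the class of the last page this visitor saw, or None."""
    cls = PAGE_CLASS.get(path, "O")
    if cls == "I" and cookie not in ("I", "O"):
        # A deep link from outside: bounce to the homepage instead.
        return "/", "O"
    return path, cls
```

    A visitor who deep links straight to /events/12345 lands on the homepage, while one who arrives via /events gets the page.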

  • Here is Ticketmaster's robots.txt:

    User-agent: *
    Disallow: /

    Now, one of the first things I learned when I started building robots was to build bots that played nice and respected this file. Obviously they don't want anyone indexing their site. They may not realize that they are shooting themselves in the foot by doing this (a site that doesn't want bots is turning away vast numbers of potential visitors - are you reading, Ticketmaster?). However, I personally will always write bots that obey the robots.txt file.

    It's fine by me if a site does not want to be indexed. I will always relish the story of the store that asked mySimon to stop indexing their site, only to beg to be listed again after experiencing a significant drop in traffic.

    Law or no law, a site that refuses bots will experience the opposite of the slashdot effect. I can hear the wind rustling through the ghost town of ticketmaster.com already.
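    A polite bot can check that file mechanically. Here is a small sketch using Python's standard-library robots.txt parser, fed the Ticketmaster directive quoted above (the bot name and URL are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Parse the robots.txt quoted above: disallow everything, for all agents.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# A well-behaved bot asks before fetching any page.
allowed = rp.can_fetch("MyBot/1.0", "http://www.ticketmaster.com/event/123")
print(allowed)  # False: the site has opted out of indexing
```

    A bot that checks `can_fetch()` before every request is exactly the "plays nice" behavior described above.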

  • This may be a bit of a stupid question, but does this mean it's legal for me to link to someone's non-HTML content?

    Say their web page contains graphics and I want to use those (assuming they're public domain), but I don't want to put them on my server. (I realise there are technical ways to stop me, and it would make me a cheap bastard to boot, but that's not the point.)

    Is this legal now? (I don't think it should be)
  • by sjames ( 1099 ) on Friday April 07, 2000 @08:22AM (#1145489) Homepage Journal

    A good example against this is internal corporate information. Putting this on the web reaps the benefits of being easily available to the employees of the company, while not being public information.

    That's what .htaccess is for. Otherwise, it's like putting the private information in a folder under the doormat and hoping nobody will stumble over it. The 'normal assumption' for info on the web is that it's public. Requiring auth is a perfectly legitimate way to indicate information that is NOT public.
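    A minimal .htaccess of the sort described might look like this (the realm name and password-file path are made up for illustration):

```apache
# Deny access unless the visitor presents a valid login.
AuthType Basic
AuthName "Internal Use Only"
AuthUserFile /home/example/.htpasswd
Require valid-user
```

    Anyone deep linking into the directory then gets a 401 challenge instead of the private content.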

  • by sjames ( 1099 ) on Friday April 07, 2000 @08:37AM (#1145490) Homepage Journal

    to conduct business via the web does NOT make it ok to deep link to my bank account information. Neither is it ok to deep link to sites that provide content on a fee paid basis.

    The link itself should be a non-issue. Other sites are perfectly free to deep link into my account info, as long as the bank server replies with "You are not authorized to view this content" or some such. A site stealing the user/pass for the info and using that to get to the data is another matter.

    Web servers are like a business establishment where, if the door is unlocked, there is implied permission for the public to enter.

    I understand that some sites get revenue from advertising, and they are free to do that. They are perfectly free to have their server refuse the request if the Referer is from an outside site. (Or be more creative and send a redirect to their index page.) If Ticketmaster had any sense, that's what they would have done. The whole lawsuit could have been avoided for $60-$200 worth of man hours. I'll bet it cost more than that just to ask their lawyer "Can they do that?" As a side benefit, their competition would have ended up with egg all over its face. (priceless)
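    A rough sketch of that Referer check as a plain function (the HTTP header really is spelled "Referer"; the host names and status codes here are just for illustration):

```python
from urllib.parse import urlparse

OUR_HOST = "www.ticketmaster.com"  # hypothetical

def response_for(path, referer):
    """Serve the page if the visitor followed an internal link
    (or sent no Referer at all, e.g. a bookmark); otherwise
    redirect them to the home page."""
    if referer:
        host = urlparse(referer).netloc
        if host and host != OUR_HOST:
            # Outside link: send them to the front door instead.
            return (302, "/")
    return (200, path)
```

    So a link from tickets.com bounces to the front page, while internal links and bookmarks (which send no Referer) still work.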

  • by Pahroza ( 24427 ) on Friday April 07, 2000 @06:02AM (#1145491)
    In addition, Ticketmaster contended that deep linking interfered with its economic relationships with advertisers, who paid handsomely to advertise on the site's home page. Finally, the company said that Tickets.com was guilty of "passing off" and "reverse passing off" -- forms of unfair competition -- because consumers might confusingly conclude that Ticketmaster and Tickets.com were connected in ways detrimental to Ticketmaster and beneficial to Tickets.com

    This paragraph of the article seems to me somewhat like the software industry's claims of damages resulting from piracy. Given that certain people would never have purchased a product due to various factors, simply downloading a pirated version doesn't really cost them any money.

    I understand that ads generate revenue and that ticketmaster would be upset by people bypassing ads. However, the "offending" deep linking still takes the user to a page containing a banner, and ticketmaster will still receive a service charge, so what are they complaining about? Perhaps they never would have made that sale had the user not gone through the site doing the deep linking.

    Just a thought...
  • by Quack1701 ( 26159 ) on Friday April 07, 2000 @08:33AM (#1145492) Homepage
    I once had someone link their eBay auction to a picture on my server (without my permission). What I did was wait until he had one bid (so he couldn't modify the auction) and then replaced the picture with some pornography. You'd be surprised at how many hits he started to get!

    In retrospect, I spammed my server more by changing the picture, but I think it was worth it.

    If you're afraid someone is deep linking your site and you don't like it, just change your links. It's not that hard. And if the Japanese ruling holds any water, you may be able to get them into legal trouble, depending on what part of the world they/you are from. *smile*

    Quack
  • by xant ( 99438 ) on Friday April 07, 2000 @06:27AM (#1145493) Homepage
    And although he dismissed the breach of contract claim, he granted Ticketmaster permission to file an amended complaint with facts showing that its "terms and conditions" created an enforceable contract, seen and agreed to by Tickets.com.

    This has come up before, and there is a strong argument against contracts that you agree to by having seen them. Could I create a contract that said:

    By viewing this text you agree to be bound by the terms and conditions of this contract. This contract stipulates that you may not view this post with moderator points remaining without moderating the post up 1 point.
    Well, you'd better hope not. Ticketmaster.com is saying they had contracts on display on the site, and that by using the site you're agreeing to those contracts. Sure . . . and hey, when I change the non-read-only license agreement on Sun's software download pages to "I 0wn j00 Sun Software", that creates a legally binding contract too . . .
  • by tjwhaynes ( 114792 ) on Friday April 07, 2000 @06:10AM (#1145494)

    In the /. intro, it says

    I think people should be able to say, "Please don't spider this page" (robots.txt for example, but it gets stickier with copyrighted content) but I don't think anyone should ever be able to say, "You may not link this page" since that is fundamentally the anti-point of the Web.

    I agree. The very nature of the web would suggest that the act of accessing a web page is making a copy of it. Therefore it is difficult to see how anyone could say "You may not copy this page," because by the time you see this message you have already made a copy. Now, can this argument be extended to making links to a web page? If you consider web pages as a broadcast, rather than a published work, then I see no problems with unaltered content being mirrored, as this is merely an extension of the broadcast route. Mirrors and partial mirrors may prove less obvious. If in the process of making a partial copy you imply something derogatory or contrary to the original by changing the context of the page, then this is covered by libel or slander laws anyway if the case is sufficiently serious. Not to say that this doesn't happen already in the newsprint media: quotes are truncated and put out of context all over the place. Of course, the waters are further muddied with trademarks and other such concerns, but I don't believe that changes the base rules. It would be a sad day if the courts stopped sites from linking to other sites. It wouldn't be a web anymore.

    Cheers,

    Toby Haynes

  • by HiyaPower ( 131263 ) on Friday April 07, 2000 @06:05AM (#1145495)
    If you desire to restrict the linking of content to a site, you do what the NYT did, have a login to the site. This keeps out spiders, bots, and other sorts of randoms (not really, but it at least declares the intent to do so), while allowing access to their content. A link past the front door on such a page is dubious.

    However, if you do not impose such a lion at the gate, then you have declared your URLs to be available to the web by whatever means folks choose. It's like saying that I can't buy a copy of a newspaper and post an interior page on a (paper and tack) bulletin board. Gee, so many of my bookmarks are "deep links". Is my bookmark file illegal? Oh well... When you can't win by legit means, litigate...

    Good manners sez you don't link into someones site as part of your content (as opposed to a link to send them there), without some "By your leave". Now if we are going to legislate some manners, I have some manners employed by drivers near the "Big Pig" in Boston that I would like to have included ;-)

  • by dattaway ( 3088 ) on Friday April 07, 2000 @07:58AM (#1145496) Homepage Journal
    So, I'd imagine this handy little trick would work in /etc/hosts

    208.48.26.217 www.nytimes.com

    which means whenever you look up www.nytimes.com, you actually get the address of partners.nytimes.com instead.
  • by Signal 11 ( 7608 ) on Friday April 07, 2000 @06:00AM (#1145497)
    NY times runs an article on deep linking... which requires registration to view. Anyone else find this ironic?

    Maybe the solution for all these up-tight corporate sites (like the NY Times) will be to make even more obnoxious use of cookies, HTTP Referer values, and more invasive authentication to "protect us".

    Well.. better login to slashdot so I can post this...

  • by |DaBuzz| ( 33869 ) on Friday April 07, 2000 @08:07AM (#1145498)
    but I don't think anyone should ever be able to say, "You may not link this page" since that is fundamentally the anti-point of the Web.

    So let's look at an example:

    Joe Developer has a good idea but not a lot of money, he hosts his site with information about his project on a $9.99/month hosting plan where he gets 200MB of transfer a month.

    Someone submits Joe Developer's page to slashdot because it's a valid "news for nerds" item and Joe gets slashdotted. Within minutes, Joe contacts Taco saying "Don't link to my page!" ... does Taco take it down?

    Let's say he practices what he preaches (see quote above) and says "You can't tell me not to link it!" and leaves the link up. Joe Developer gets 500,000 hits in one day and goes over his allotted bandwidth by 2GB. At $5 per 10MB over his allotment, he now owes his hosting company $1,000 in overage fees, and Joe Developer removes his site and goes bankrupt due to the "fundamental point of the web." (Note: I think Taco would remove such a link under these circumstances; this is just an example of why one size does not fit all.)

    So as you can see, there ARE cases where someone should have the right to say DON'T LINK TO THIS PAGE. While much of the web is built to get traffic, some pages are not meant to be slashdotted for a number of different reasons.

    So while I agree that a site that invites enormous amounts of eyeballs shouldn't deny linking (i.e. NYTimes, CNN, etc.), sites that do not aspire to get traffic should be allowed to control how they are linked.

    Now, if only Apache would deny based on referrer like the old NCSA servers did. *sigh*
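    For what it's worth, Apache's mod_rewrite can already do something close to this; a rough sketch (the host name and path pattern are hypothetical):

```apache
# Send deep links arriving from other sites back to the home page.
RewriteEngine On
# Only act when a Referer is present...
RewriteCond %{HTTP_REFERER} !^$
# ...and it is not our own site.
RewriteCond %{HTTP_REFERER} !^http://(www\.)?example\.com/ [NC]
RewriteRule ^events/.* / [R=302,L]
```

    Bookmarks and direct visits send no Referer, so they pass through untouched.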
  • Ticketmaster would also like to become the sole legal source for all tickets to anywhere.

    That's it right there. Ticketmaster does have a virtual monopoly on tickets to events it advertises. Since you have to go through them to get your tickets, they want to leverage that to force you down a clickstream that exposes you to as much paid advertising as possible.

    OTOH, sites like Amazon, which don't have monopolies on the products they sell, don't seem to be making any noise over this issue. Why? Because deep linking gets you to buy from them instead of surfing your way to someplace else.

    With Ticketmaster there is no someplace else, so that's no help to them.

  • ...that if you put something up on the web, you've made it publicly available for people to link to.

    What's the big deal anyway? If you put enough of a header/footer on your page that identifies the site, and shows links to other content (say, like ZD), then people will go to the stuff that interests them, and you've gained a reader from the "deep linking"...

    If you don't want people to link directly, protect your articles/materials behind some CGI that at least makes it more difficult.

    Just my three (Canadian) cents (that add up to 2 US cents).
  • by wowbagger ( 69688 ) on Friday April 07, 2000 @05:50AM (#1145501) Homepage Journal
    Is that it can be applied to the NY Times as well.
    For example, to bypass the login for this article use
    http://partners.nytimes.com/library/tech/yr/mo/cyber/cyberlaw/07law.html [nytimes.com]. In other words, change the www to partners.

"The identical is equal to itself, since it is different." -- Franco Spisani
