Gentleman Goat writes: "The NY Times has a well-written article exploring the recent court decision about deep linking in closer detail." Free registration required. This one goes deeper and talks about Web-crawling bots and other issues related to deep linking. Honestly, I think the spider problem is a separate issue. People should be able to say, "Please don't spider this page" (via robots.txt, for example, though it gets stickier with copyrighted content), but no one should ever be able to say, "You may not link to this page," since that is fundamentally the anti-point of the Web. Check out the ruling from Japan holding that linking, in some cases, is illegal.
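
For readers unfamiliar with the convention: robots.txt is just a plain-text file served from a site's root that politely asks crawlers to skip certain paths. A minimal sketch (the path names are illustrative, not from any real site):

```
# robots.txt -- served at the site root, e.g. example.com/robots.txt
User-agent: *          # these rules apply to all crawlers
Disallow: /private/    # please don't spider anything under /private/
Disallow: /drafts/     # same for this directory
```

Note that compliance is entirely voluntary on the crawler's part, which is exactly why the copyright question gets sticky: robots.txt expresses a request, not an enforceable restriction.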