Google Relents, Publishes Belgian Ruling 226

gambit3 writes "Google on Saturday published on its Belgian website a court order which forbids the Internet search engine from reproducing snippets of the Belgian press on its news aggregation service. The move constituted a U-turn, as Google had said on Friday that it would not comply with the court order despite facing a daily fine of 500,000 euros ($640,900) if it did not publish the ruling." From the article: "Google said its service is lawful and drives traffic to newspaper sites because people need to click through to the original publisher to read the full story. It now displays stories from news agencies, foreign newspapers and Internet sites belonging to local television stations."
This discussion has been archived. No new comments can be posted.

  • Yes, but... (Score:2, Funny)

    by Anonymous Coward
    Can you read it in China?
    • Re: (Score:3, Informative)

      by mattjb0010 ( 724744 )
      Yes, I just checked.
    • Aw, c'mon, if there were a problem reading it here in China, it would be about the Chinese Internet-site-blocking policies and not about Google. Notable sites blocked to us Internet users in China: Wikipedia (accessible through proxy); Technorati (utterly and completely inaccessible); the BBC (completely and totally blocked); anything on the Angelfire domain; Geocities (sometimes accessible through proxy); Google.com (quite often blocked, but you can usually just go to google.co.uk); Google.cn (believe it..
  • by MLopat ( 848735 ) on Saturday September 23, 2006 @05:32PM (#16170019) Homepage
    I am all for fair use. But the fact that Google copies, changes, reassembles, etc., copyrighted information without anyone's consent should be challenged. The challenge, while difficult to overcome at first, may potentially lead to Google winning the case and setting a precedent whereby all information publicly available on the Internet would be entered into the public domain, or at least break ground for fair use.
    • by Mateo_LeFou ( 859634 ) on Saturday September 23, 2006 @05:39PM (#16170071) Homepage

      Fair use is a longstanding element of copyright that "content owners" (sic) were hoping we would all just eventually forget about. Google's indexing of information (even if it involves copying without permission) is a perfect example of fair use, and hopefully this case will be high-profile enough to get people asking questions about this stuff

      • Re: (Score:1, Insightful)

        by iminplaya ( 723125 )
        ...and hopefully this case will be high-profile enough to get people asking questions about this stuff

        I wouldn't count on it. The world has acquired a few other items over the last five years that need much more urgent attention. I don't think copyright is that high on the list. Nor should it be. It is an issue for the corporations to fight out. Quit buying their products, build bullet-proof servers, and the issue will disappear into the night. It is petty to the point of being ridiculous. We need to con
      • Re: (Score:3, Insightful)

        by O'Laochdha ( 962474 )
        This is a concept of common law, however, and an intentional loophole in the treaty; individual nations don't have to allow for "fair use." Apparently, Belgium doesn't.
    • by kfg ( 145172 ) *
      . . .setting a precedent whereby all information publicly available on the internet would be entered into the public domain . . .

      That'll solve the problem alright, by eliminating publication to the web, so there'll be no news to search for. Go buy a paper.

      KFG
    • by Meltir ( 891449 )
      So...
      is this the same case as:
      http://yro.slashdot.org/article.pl?sid=06/09/13/1532230 [slashdot.org] ?

      If so, then the summary (and TFA) got it wrong, from what I can tell.
      That case wasn't so much about Google indexing the pages and putting them up on news.google.com; it was about Google caching pages while they were free and keeping the cache after they weren't free anymore (i.e. paid archive access).

      Is this even the same case?
      The previous summary mentioned $1 million; this one is talking about half a million, but they look pretty much the
    • by oohshiny ( 998054 ) on Saturday September 23, 2006 @06:36PM (#16170525)
      But the fact that Google copies, changes, reassembles, etc. copyrighted information without anyone's consent should be challenged.

      If they did, then it should be challenged, but that's not what they're doing.

      may potentially lead to Google winning the case and setting a precedent whereby all information publicly available on the internet would be entered into the public domain or at least break ground for fair use.

      If you want to put content on the Internet and not have it be indexed, archived, and/or republished, you have two simple options: use a robots.txt file or require a login.

      What is really going on is that companies like the Belgian newspapers want to destroy the public domain and fair use: if companies like Google can't assume that content that is freely available on the Internet is actually either public domain or available under fair use, then public domain and fair use are dead.

      In other words, companies like the Belgian newspapers are trying to kill the public domain and fair use through FUD. And the Belgian court has handed them a victory. It's disgusting.
      • by storem ( 117912 )
        And yet again I'm ashamed of being Belgian; luckily I can still be proud of being Flemish (a Dutch-speaking Belgian).

        I have serious doubts about these proceedings, and question the views of the court's expert in this case. I'm not surprised at all that this is happening in Belgium, only worried about what has become possible in this small European country...

        I agree with Google's response of removing all links to the French & German press in Belgium. Google should tread carefully now, because the same court may force them to
    • The challenge, while difficult to overcome at first may potentially lead to Google winning the case and setting a precedent whereby all information publicly available on the internet would be entered into the public domain or at least break ground for fair use

      "Fair Use" in the American context usually means very limited quotation. Reviews. Citations.
      It may apply to slightly extended usage rights within the home.

      It does not mean that a commercial entity like Google can sweep up everything in sight for fr

    • Could you rephrase that? I'm not sure how one challenges facts.
  • by rolfwind ( 528248 ) on Saturday September 23, 2006 @05:35PM (#16170045)
    in webtraffic.

    Good for them.

    Will they sue Yahoo/MSN next?
    • by Ronald Dumsfeld ( 723277 ) on Saturday September 23, 2006 @06:46PM (#16170593)
      You bet they'll see a drop in traffic: try a site: search for http://www.lesoir.be/ [lesoir.be] on google.be or news.google.be. You don't just get the ruling, you get a message that thousands of results have been deleted. Dutch-language papers, such as http://www.hln.be/ [www.hln.be], are still available and in the cache.

      If you do the right search in Google, you'll turn up the following message:
      In response to a legal request submitted to Google, we have removed 1260 result(s) from this page. If you wish, you may read more about the request at ChillingEffects.org.
      and the following link [chillingeffects.org] and comparison [chillingeffects.org]
  • I don't get it (Score:4, Insightful)

    by mikesd81 ( 518581 ) <.mikesd1. .at. .verizon.net.> on Saturday September 23, 2006 @05:38PM (#16170067) Homepage
    I still fail to see how it is a copyright infringement to link to news articles. It's not like Google is hosting the article on its own website... it's linking. It's a shame that companies are so money-hungry that they want to be paid for someone directing traffic to their site. Next, businesses will want money from taxi drivers for delivering customers.
    • Re:I don't get it (Score:5, Informative)

      by Scrameustache ( 459504 ) on Saturday September 23, 2006 @05:45PM (#16170131) Homepage Journal
      I still fail to see how it is a copyright infringement to link to news articles? It's not like Google is hosting the article on its own website.

      According to the ruling I'm reading right now on google.be, I can sum up your misunderstanding in two words: Google cache.
      • I never followed the stories enough to ever consider Google Cache.
      • does google.be not follow robots.txt?
        • does google.be not follow robots.txt?

          Hard to say; the last part of the ruling mentions the court's dismay that Google refused to take part in the technical assessment portion of the trial, which is where such details would have been timely and constructive to divulge.

          I think google shot itself in the foot there.
        • Comment removed (Score:4, Informative)

          by account_deleted ( 4530225 ) on Saturday September 23, 2006 @07:43PM (#16171011)
          Comment removed based on user account deletion
          • by hublan ( 197388 )
            robots.txt is an opt-out. The law in Belgium requires an opt-in, not an opt-out.

            So you mean, basically, if there is no robots.txt file, then don't index Belgian sites at all? Sounds like a reasonable compromise to me.

            Heck, do that for all websites, maybe we'll see a brief drop in linkfarms. Well, until they catch on...
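The opt-in/opt-out distinction being debated above can be sketched with Python's standard-library robots.txt parser. This is only an illustration: the crawler names and paths are hypothetical, chosen to show that the protocol's default answer is "allowed" unless a rule says otherwise.

```python
from urllib import robotparser

# Hypothetical robots.txt: opts one crawler out, leaves everyone else in.
ROBOTS_TXT = """\
User-agent: Googlebot-News
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The named crawler is excluded; unlisted crawlers fall through to '*'.
print(rp.can_fetch("Googlebot-News", "/2006/09/story.html"))  # False
print(rp.can_fetch("SomeOtherBot", "/2006/09/story.html"))    # True

# With no rules at all, the answer defaults to "allowed", i.e. the
# opt-out model under discussion.
empty = robotparser.RobotFileParser()
empty.parse([])
print(empty.can_fetch("AnyBot", "/2006/09/story.html"))       # True
```

Note the last check: an absent or empty robots.txt means "crawl away", which is exactly the default that an opt-in legal regime, as described above, would reverse.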
          • Re: (Score:2, Insightful)

            You opted in when you put your content on the world wide web.
          • When you go to a web site, it downloads the information from their server and places a copy on your computer that your browser then displays. If you want to go with an opt-in method for copyrighted material on the Internet (every web page is copyrighted unless intentionally released to the public domain), you need to write to the webmaster and get permission (most countries require copyright privileges to be in writing) to download his/her site. If it is in the public domain you might as well write them an
      • Re:I don't get it (Score:4, Insightful)

        by laughingcoyote ( 762272 ) <barghesthowl.excite@com> on Saturday September 23, 2006 @06:26PM (#16170437) Journal

        According to the ruling I'm reading right now on google.be, I can sum up your misunderstanding in two words: Google cache.

        I can respond in one filename: robots.txt.

        • Comment removed based on user account deletion
          • While I dislike spam as much as anyone, I really don't see how you can "outlaw" the harvesting of email addresses either. If you post your email address to a website, you really have no reasonable expectation that it is any longer "private information". I certainly don't have any expectation that the email address I use here, even given the auto-obfuscation of it, will in any way remain private. I have a -truly- private email address which I give only to friends, family, and business associates, and that on

            • Comment removed based on user account deletion
              • Ah, I can see that you do not program. Or I hope you don't!

                When setting something up, the defaults should be whatever the majority will use. That's the difference here. In the case of spam, the majority does not want it, so the default should be no until you explicitly sign up for a list. On the other hand, most websites do want to be indexed and cached, so the default should be yes until you explicitly say no thanks. It's no different than anything useful. Most people will want the "find" command to

            • by Pastis ( 145655 )
              I don't mind them harvesting my email address. They can do whatever they want to do with my address. They can read it, color it, print it, place it in a big database... It's public information, you're right.

              I just don't want them to send me mails I did not ask for.
      • Re: (Score:3, Interesting)

        by kimvette ( 919543 )
        Actually, if the newspaper staff themselves had ever submitted their URL to google for inclusion after Google had deployed their caching technology, Google should appeal this and countersue the paper for willful negligence, fraud, extortion, and anything else their legal team can dream up.

        On top of removing and permanently banning them from the Google index.
    • "It's not like Google is hosting the article on it's own website"

      Actually, an earlier article explained that it is exactly like that for certain older stories no longer on the original publishers' sites.

      (This does not make the thing less stupid, though)
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Copyright infringement is just the foil. Suing big US corps like Microsoft, Google etc. - for huge "fines" is part of the EU's raison d'être. And they complain about US commercialism!
      • I don't like this. I used the Google cache numerous times because our ministers like to say something one day and then claim something entirely different the next day (and in some cases the press removes the original message; I wonder why :-) ).

        So I used it to see the original message.

        For the record, I'm a Belgian citizen.

    • Next business will want money from taxi drivers for delivering customers.

      You mean like airports charging taxi drivers both for a license to pick up in the airport and each time they pick up as is the situation in Dublin?

      Businesses charge based on what they think will get them the most money. The Belgian newspapers are no different. They understand the situation pretty well. It's just that there's money to be made.

      For the amount of traffic google news generates they could pressure media to pay them

    • by baadger ( 764884 )
      What I don't get is why the newspaper in question didn't just throw up a robots.txt file [robotstxt.org], blocking Google's news spiders, and then ask politely for Google to remove all existing content from their indexes.

      I guess they'd just rather flex some highly paid lawyer muscle and deal with the expenses of a court battle than get some web monkey sat in a broom cupboard somewhere to take 10 minutes out of their busy schedule and do this...
      • by vrt3 ( 62368 )
        The problem is not with new articles; the problem is that the newspaper has a model where everybody can read new articles, but only paying subscribers can see old archived articles.

        If Google stores the articles, everybody can read the old articles without paying for a subscription.
  • Minitruth (Score:2, Interesting)

    by gsfprez ( 27403 )
    Apparently, in Europe, the Ministry of Truth is working well, making sure that old news doesn't rear its ugly head to compete with the news of today.
  • by Rude Turnip ( 49495 ) <valuation AT gmail DOT com> on Saturday September 23, 2006 @05:44PM (#16170129)
    I went over to www.google.be. No one will know what's going on--the whole thing is written in Belgian. Brilliant, Google!

    • It's in French, except for the first line; that's Dutch. (Belgium: 35% French-speaking, 60% Dutch-speaking, 1% German-speaking.)
    • by hey ( 83763 ) on Saturday September 23, 2006 @06:28PM (#16170455) Journal
      Your post seems to be in American.
  • Incompetence at work (Score:2, Interesting)

    by Aminion ( 896851 )
    Any competent web developer should know how to use the Robots Exclusion Protocol [robotstxt.org] to prevent crawlers from crawling or indexing a web site. Why news sites would not want to be visited by Google is really beyond me: it is free advertising! Visitors still have to visit the news sites if they want to read anything but a short article summary.
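As a sketch of what the comment above describes: the exclusion is a few lines in a robots.txt file served from the site root. The site name, paths, and crawler tokens below are assumptions for illustration; the exact user-agent strings a given search engine honors are documented by that engine.

```
# http://www.example.be/robots.txt (hypothetical site)

# Keep the news crawler out entirely.
User-agent: Googlebot-News
Disallow: /

# Keep the paid archive section out of the general web index.
User-agent: *
Disallow: /archives/
```

A directive like this stops future crawling; removing pages already cached still requires a separate removal request to the search engine.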
    • Re: (Score:2, Insightful)

      by sammck ( 306016 )
      The news sites want to be crawled and indexed by Google, and show up in their search results. They just want to control the user experience for what articles are visible, what advertising is displayed, etc. Google news takes that control away from them.
    • Re: (Score:3, Interesting)

      by RonnyJ ( 651856 )
      However, if a site doesn't have a robots.txt file, should search engines just presume that the site has given permission for the page to be cached?

      As a web user, I prefer it like that, but I can understand the point of view that permission should be actually granted, not just assumed (i.e. have a Robots *Inclusion* Protocol instead).
      • by bigpat ( 158134 )
        As a web user, I prefer it like that, but I can understand the point of view that permission should be actually granted, not just assumed

        It comes down to fair use, not permission. If the copying is a fair use, then no permission needs to be assumed, and honoring robots.txt is just a courtesy, not a legal requirement.


    • So, according to your logic, all spam is legit as long as you can opt out afterwards?

      That's a dangerous attitude.

  • ... that by banning Google from reprinting their stories, they have shot themselves in the revenue-hungry foot. Without Google serving up ads for them or redirects to their pages that contain ads, I predict a massive drop in their internet based income. It could very well be enough to kill the already fragile print media (or at least that one outlet).

    Eventually news corporations will realize that they need Google a hell of a lot more than Google needs them.

    (It's kind of scary that Google has become so p
    • by banning Google from reprinting their stories, they have shot themselves in the revenue-hungry foot. Without Google serving up ads for them or redirects to their pages that contain ads, I predict a massive drop in their Internet based income.

      I know it is heresy to say this on Slashdot, but there are other search engines than Google.

      Belgians will move to the one that serves them best. Unlike the Geek, they aren't bound to make the pilgrimage to Mountain View.


    • There are about 4 million French-speaking Belgians.

      That is a small market; these newspapers don't earn any money worth mentioning from their websites anyway.

      The websites are a service to existing newspaper customers.

      If those customers can use Google as an archive, the service becomes useless.
  • A) Feed a lot of children in Africa
    B) Donate to cancer research
    C) Buy me a new graphics card

    ...among other things. As others have said, the Belgian French-speaking press needs to be taught a lesson in humility, and perhaps another concerning the workings of the Tubes.

    If only we had more New Yorkers on the Google high board.
  • Missing the Point (Score:5, Interesting)

    by Pinky3 ( 22411 ) on Saturday September 23, 2006 @06:22PM (#16170395) Homepage
    The issue isn't about linking or copyright or caching. Google lost the case. They removed the offending content.

    The issue was whether the judge could require Google to publish his opinion on the front page of Google.

    Question 1) If the NY Times lost a case, could a judge order them to use the whole front page to publish her opinion?

    Question 2) If you lost a case, could a judge order you to buy the front page of the LA Times to publish his opinion?

    Perhaps this is some Belgian thing, where a judge can require losing defendants to publish the judge's opinion on the front page of a national paper.

    To our Belgian friends: is this a common practice?

    Al
    • Re: (Score:2, Insightful)

      by gerbouille ( 663639 )
      It's common in Belgium and in France (maybe it's a Civil Law thing?). For "press crimes" like slander and libel, the publication of the verdict is usually required, sometimes in several newspapers.
  • One word... (Score:3, Interesting)

    by D H NG ( 779318 ) on Saturday September 23, 2006 @07:16PM (#16170821)
    Belgium [wikipedia.org]!!
  • by falsified ( 638041 ) on Saturday September 23, 2006 @08:05PM (#16171167)
    I will now boycott Stella Artois!

    ....eh, fuck this. *cracks another one open*

  • So, if I design some sort of Internet utility, and I happen to break the laws of some piss-ant country, and they sue...wtf should I care? They have no authority over me anyway...provided I don't have any sort of branch in that country, I'm safe, right?
    • No. European countries have banded together to make each other's laws effectively apply to people in all European countries. If a judgment is found against you in a European country where you have no presence, but you do have a presence in another European country, they'll just get the country you do have a presence in to enforce the judgment against you. These countries will even do it against their own citizens.
      Case in point: a British couple bought some land in the Turkish part of Cyprus and built a holiday v
  • Do No Evil? (Score:3, Insightful)

    by DavidD_CA ( 750156 ) on Sunday September 24, 2006 @02:36AM (#16172827) Homepage
    Google's motto of "Do No Evil" has one very distinct flaw:

    People disagree on what "evil" means.

    Obviously Google thinks it's doing the right thing by spreading information to the masses, like the information on this newspaper's website.

    The newspaper, on the other hand, thinks that action is quite evil. They are losing ad revenue because of it.
  • I believe this decision is stupid. I only ever accessed those sites through Google News; no more now.
    But there are, I think, two 'real' reasons behind this action, if you listen to what media people say here:

    - Targeted marketing: with Google News, Google gets the marketing info for its personalised-ads database. Not the newspaper.
    I believe this is The reason behind the lawsuit. They don't care about their content; they know they will lose hits; they know people don't read the news in the Google cache but come to their si
  • The Belgian court's decision in the Google case creates an interesting precedent. This decision could be used by anyone in Belgium whose content is the target of a link. On the basis of a link to a .be site, anyone could find themselves targeted and fined by a Belgian court.

    Every so often a court issues a ruling that makes it impossible to know what's legal and what's not, and leaves one open to liabilities that could not possibly be predicted. This, like the EU's rulings against Microsoft (also out of Brussel

  • Yeah, because if you want to keep people from reading something except by your explicitly defined methods, putting it on the Internet is a great way to keep it locked down.

    Idiots. Everything posted on the net is fair game, imho. Suggesting otherwise is just silly.
