
Comment Re:Well, duh... (Score 1) 210

But that isn't what this law is about, at least not as I understand it. For the example you give, there should be a specific law designed to handle it properly. This law is about forgetting things you did (and that somebody wrote about on the internet), not about cases where you are the victim of a crime.

As I understand it, the law has been around for 20 years. It's about letting people whose data is being collected by a company demand to see what the company is recording about them, demand the correction of data that isn't factual, and demand to be erased from the company's records if they have no business relationship (or no longer have one) with the company.

Comment Re:Well, duh... (Score 1) 210

Except what you're talking about is not so much helping you remember events, but rather helping you discover old events you didn't know about.

Think of it like this: lots of people today weren't born when the Watergate scandal happened, and lots of people don't even know who Richard Nixon was. But they can google him, and in this case Google isn't operating as a memory assistance device, but rather as a teaching device.

The problem is that whereas Watergate is a well-known historical fact of some importance, most of the other facts that people discover using Google are hearsay, rumours, and opinion. The quality of information on the internet as a whole is worse than on Wikipedia. At least on Wikipedia the articles can be edited. Google's index should similarly be editable; I think that would hopefully raise its value to Wikipedia's level.

Comment Re:Well, duh... (Score 0) 210

If you want to call it something else, like the-right-to-prevent-undesirable-information-from-being-copied-and-published, I have no objection. It's a mouthful, though. And there are so many different possible reasons someone might have to request a removal that it wouldn't be reasonable to make special rules for all of them.

Moreover, it may in fact be none of anyone else's business why. Does the battered wife really need to tell some Google employee that she let her husband beat her up for years, just so she can justify the removal of a link to her address? It's kind of nobody's business. And if the husband goes around telling everyone she stole some money from him, how many people are going to assume she's a scumbag who's trying to wipe her slate clean?

Comment Re:Well, duh... (Score 1) 210

Sure, but not all true facts should remain in Google's index either. For example, half of all slashdot readers regularly argue that disclosing true documents to the public became traitorous the moment Snowden did it, and that Google shouldn't link to them. Or consider this: disclosing the true address of a battered wife can lead to her husband finding her and beating her up, or worse. I'm sure you can find plenty of examples yourself if you apply your mind to it.

Comment Re:Well, duh... (Score 0) 210

What we DON'T want, however, is that if I go raping or beating people I can get news articles about me suppressed.

Why? That's silly and wrong. What we DONT want suppressed is the court records of that guy's conviction. He has been convicted of a crime, right? It's not just some news article that claims, nudge nudge wink wink, that there's been raping and beating of people, right? It's not just some search engine that collects that news article automatically, without reading it, and recommends that everybody should go read it when they search for the word grape ("did you mean rape? Here are some links for you").

Requiring the removal of unverified data from private third parties is perfectly reasonable. That says nothing about requiring the removal of verified public record data from the courts and official public information sources. Let's not confuse the issues.

Comment Re:Blaming Google (Score 1) 239

Google doesn't publish any of the information it indexes.

Google cache. Google news. Enough said.

Google makes no claim to the veracity of their information, beyond trying to keep obvious attempts to game results out of their searches.

Which causes all sorts of potential slander and defamation headaches. Slander can be as simple as repeating an untruth, thereby sullying someone's reputation. As a private company without any special legal status, Google should embrace the possibility that a simple request for removal from their index might prevent a more protracted legal proceeding. In fact, refusing to remove material when reasonably requested could easily qualify as obstruction.

Not really, they are following an accepted practice since the start of the internet [...] If you don't want them to crawl your site, a simple HTML tag will stop them unlike window washers who you may have to pull a weapon on to convince them to leave your property alone.

Except that, because Google is indiscriminate, they will repeat and amplify, for profit, any untruths that they happen to find on obscure websites. Suppose I accuse you by name of being a terrorist. You can attack me legally, but most likely my blog isn't worth the effort. But Google enhances and duplicates my outrageous claims to anyone, especially if I've been clever about it. So now you have a problem. While technically I originated the terrorist claim, Google is slandering you in this case orders of magnitude more than I. And both legally and practically, you really need to tell Google to stop.
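As an aside, the "simple HTML tag" the parent post alludes to is the robots convention: a robots.txt file at the site root, or a noindex meta tag per page, tells compliant crawlers to stay away. A minimal sketch of both forms, with hypothetical paths and rules:

```shell
# Hypothetical robots.txt a site could serve at its root to opt out of crawling
cat > /tmp/robots.txt <<'EOF'
User-agent: *
Disallow: /
EOF

# The per-page equivalent: a meta tag in the HTML head
cat > /tmp/page.html <<'EOF'
<meta name="robots" content="noindex, nofollow">
EOF

# A compliant crawler reads the rules before fetching anything
grep '^Disallow:' /tmp/robots.txt
```

Note that this only keeps honest crawlers out; it does nothing about content that other people publish about you on sites you don't control, which is the case at issue here.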

I would say that is one POV. Independent of whether or not you consider Google to publish information, the challenge is how to decide what can legitimately be removed. When is there a demonstrable public interest in the information that outweighs a right to privacy?

I disagree with this interpretation. Google is an unregulated private company. They have no obligation to the public, nor are they held to any higher standards imposed by law. They are solely responsible to their shareholders, to maximize returns within the ordinary bounds of the law.

The problem of deciding what is legitimate or not isn't a problem for Google to solve; it is a problem for the courts to decide on a case-by-case basis. But since that is obviously highly impractical, I feel that giving the subjects of the information the right to censor it from Google searches is the next best solution. The same requirement about data should apply to all ordinary private companies without special status.

Alternatively, if Google is to get special privileges to use other people's data in ways that can harm them, then Google should become a public agency, legally regulated, and probably owned and controlled by the state. Think NASA. At least that way there is a real social contract and tradeoff.

Should the press be shut out from Google searches because what they publish could be embarrassing, damaging, and possibly wrong? Get a bad review? Take it down. This path, taken to an extreme, means no negative information would be searchable, whether it is true or not.

I would imagine that much of the press would like this, as it means that they regain control over the information they produce. People will have to visit their sites rather than reading the stories for free through Google.

However, consider what your argument really implies. Google would merely have to institute a policy on content to deal with the deep-linking problem. You don't get to search web pages deep within a news site; instead you are presented with the front door of the site only. Once you enter, your dealings are directly with the news site. If they slander you, you can complain to the source. If you read their articles, they get feedback and show you ads, and so on. It's really much more logical not to have a third party processing the content and offering an alternative presentation of it without assuming responsibility for the content.

Finally, how do you address cases where a company has no presence in the EU, but is reachable from the EU? Should they comply with removal requests? Should EU based companies with no presence in China remove material the Chinese find offensive, threatening or otherwise want removed?

The de facto accepted approach to this problem is censorship, I believe. The underlying question you should be asking is: does it make more sense for Google to comply with data ownership laws while remaining accessible, or to become completely inaccessible? The fact is that Google doesn't have the power in this relationship. Note that they faced exactly this problem in China a few years ago, and lost that market. If they refuse to comply with EU laws, they could lose the EU market too.

I am not saying there shouldn't be a mechanism to address a right to privacy, but in the absence of clear guidance it can have many unforeseen consequences.

Yes, we both agree on that point. Where we disagree is that I feel the right to censor one's information from private third parties where there is no direct business relationship is already a clear and practical way to address the problem.

Note again that if Google were a highly regulated public agency with a clear mandate to balance and advance the public good, as many people seem to assume it is (it certainly is not, in any way), then the problem would be more complex.

Comment Re:Blaming Google (Score 0) 239

It appears the "right to be forgotten" rules have no provision for appeal, or to give the supplier of the information the right to decide if it was a valid request.

Why should there be a provision for appeal? It's the person's data, and Google isn't an organ of the state offering a social service. They're just some private company that collects data indiscriminately, whether true or false, and publishes it for profit. At best there should be strict identity checking to prevent fraud.

The best way to view this is like those car window washers at the red lights. They start washing your windscreen without asking, and then expect payment as if you'd agreed to this. Similarly, Google goes around publishing stuff about everyone without asking, and without quality control, to make money.

It's perfectly natural to be able to tell Google to stop publishing rumours and hearsay, or even true facts that are embarrassing.

Comment Re:Blaming Google (Score 0) 239

I don't know why the journalist is blaming Google for this ("So why has Google killed this example of my journalism?") when it's obvious they're not doing this voluntarily.

More to the point, why is he blaming Google for something that is not his business? It's their servers, not his. His blog is physically unaffected. Sometimes, Google makes changes to their listings, on their own computers, to serve their own customers or to follow the law. This is nothing new - save for the fact that he's trying to write a story to rile up the masses.

Because the people in charge are terrified of Google, the Internet, and their citizens' use of it. So the BBC, kowtowing as usual to power, but still with enough journalistic testicles to make some form of protest, blames Google.

Conspiracy Theory much? Google spies commercially on people all the time. That's a fact. Their spying powers in Europe are being looked into, and ordinary people, whose data is being spied on by Google (and Facebook too!), have received the right to have some of their spied data be removed or altered. What this has to do with the BBC "kowtowing as usual to power" is beyond me.

However, it's high time that the rampant commercial spying that's going on is reined in - and starting with Google is a great idea.

Rather than pinning the blame on the corrupt shitpile of lawyers and wonks who forced Google to do this in a desperate attempt to make money the deciding factor in information control and suppression.

Really? Lawyers are a shitpile because they forced Google to allow people some say in what Google publishes about them? Why don't you go live in Russia or China? They have spy networks where anonymous people can say anything they like about you, and you don't get to tell anyone it's not true. You seem to like that.

Comment Re:more interessting,.. (Score 1) 219

The line is easy: users must be informed at all times what their commercial relationship with Facebook is. What makes this ethical problem easy to solve is that whenever a Facebook engineer is in doubt about some course of action he intends to follow (or even when he isn't), he should inform the affected users of what's about to happen. Often, even just imagining informing the users is enough to decide whether some idea is going to be unethical.

Users have a specific relationship with Facebook which both parties have agreed to, and which is not unlimited. When in doubt, inform the user.

Where do you draw the line? If Facebook realised that showing more negative stories (by monitoring what people already see) makes people more likely to click adverts, is that really any better or worse than them artificially increasing or decreasing the amount of positive stories a user sees?

Ok, you have some doubt. Let's apply the rule: inform the users. Suppose they receive a message such as "Dear X, we have implemented an algorithm to artificially increase and decrease the amount of positive stories you see." How do you think the user would react? If the user appreciates the honesty, you're on to a winner. If the user doesn't like it and complains, you now have a better understanding of the effects of your proposal, and you can improve the service. If the user threatens to sue you, you've struck gold! You can stop the algorithm in its tracks before it causes damage to both Facebook and the user.

If Google was having a hard time deciding if a page was junk or not, would it be unethical to put it in the results for some users and see how they react? Clearly that's an experiment without user knowledge, but it certainly doesn't sound unethical to me, and stopping that kind of experimentation, or flooding sites with notices about it, wouldn't make things better for users.

If in doubt, inform the user. Put the junk page in the results, marked as an "unsure" result. Give the user a button to provide feedback. Now both of you know what's going on, and nobody gets taken advantage of. But, you might say, if people know what's going on you won't get unbiased results. True, but people haven't consented to being spied on in the first place, so Google has no right to expect unbiased results.

People aren't lab rats. What's perfectly fine to do to a rat, such as spying on its behaviour without telling it, is unethical to do to a human. Why? Because they are human. No other reason required. People have more rights than rats. That's just how it is.

Comment Re:Hey Larry ... (Score 1) 186

Incorrect. Collecting data can be evil, for the same reason that poisoning someone is evil. It's not that the poisoner killed the person; obviously the person died of the effects of the poison. But the poisoner made the death a near certainty. Similarly, collecting data makes the use of that data a near certainty. And just as poison needs a certain minimal concentration, or amount, to be lethal, so does a collection of data need a certain size, or universality, to be lethal.

So yeah, collecting health data on everybody and analyzing it and then selling it is evil. Damn right it is.

Comment Re:Not convinced (Score 3, Interesting) 176

Unfortunately, natural language is less powerful than the CLI. The problem is that it is very difficult to express a sequence of precise instructions in a natural language. Very quickly, you end up with paragraphs full of clarifications, which read very much like legal documents (legal documents are in fact an early attempt at using natural language to express precise ideas, and it doesn't work very well there either).

A CLI is simpler and more regular than natural language, and succeeds in allowing complex instructions to be expressed in few words.
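To illustrate, compare a made-up task in both forms. In English: "find the log files that mention an error, count the lines in each of those files, and sort the counts numerically" — and even that sentence needs clarifying (which files? lines or bytes? ascending or descending?). The shell pipeline states it unambiguously in one short line (file names and contents below are invented for the example):

```shell
# Set up two sample log files for the demonstration
mkdir -p /tmp/cli_demo
printf 'ok\nerror: disk full\n' > /tmp/cli_demo/a.log
printf 'ok\nok\n' > /tmp/cli_demo/b.log

# List files containing "error", count the lines in each, sort numerically
grep -l error /tmp/cli_demo/*.log | xargs wc -l | sort -n
```

Three words of natural language per stage ("files containing error", "count lines", "sort numerically") become three short commands, and the pipe makes the sequencing explicit rather than implied by sentence order.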

Comment Re:Bad (Score 1) 138

Merely hiding the information means that when Google sells its database to companies, those hidden records will be sold too. Then the issue propagates.

You have to understand that Google is a business whose primary responsibility is to its shareholders. It must do anything that will improve shareholder value, which right now only means spying on people, but when the next great competitor arises and they're struggling, they'll be selling their assets. Anything they have collected by then will be fair game. So it's important to *remove* data that doesn't belong to them well before then.
