Comment Re:Yes. It's called "informed consent." (Score 1) 141

> There is only a question of degree and depth of analysis.

I take it you didn't read the first half of my post. That is not the only difference between what is effectively a passive popularity contest that you happen to measure in revenue or subscriptions, and an attempt to deliberately change people's emotions as a direct result of my actions. Seriously, this is not a complicated distinction.

> The Common Rule does not apply here.

I see you are not up to date on various state laws. I suggest starting with Massachusetts.

> There's nothing objectively special about name-calling it "human experimentation".

Are you seriously not aware of the long history of unethical experimentation? Really?

Or are you taking offense at the comparison? Remember, the entire point I'm trying to make is that while experiments are a very good thing, because of the very serious problems that have happened disturbingly often in the past when humans were involved, there is now an ethics requirement to get somebody else to sign off first. This is a trivial requirement for experiments like the one Facebook did. I expect that the tech industry could create an "internet research"-specific IRB that reduced the time involved to almost nothing, because simply by being involved at all, an IRB discourages most of the problematic experiments from even being proposed.

> Every single person who is offended by this, seems to be on a bandwagon to nowhere.

So, not only are you not listening to what those people are saying, you bring out the content-free insults that don't actually address the arguments being discussed.

Never mind; in return for your overgeneralization I'll just overgeneralize you as someone who is obviously invested in abusing their users and therefore blind to the damage they could cause. Just like arguing with creationists, such people are not worth the effort; as Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

Comment Re:Yes. It's called "informed consent." (Score 1) 141

This is true, of course.

Which is why so many people are angry at Facebook - they went far beyond simply changing the layout or pagination or similar features. Instead, they set out to see if they could manipulate the emotions of their users, by the indirect means of selecting bits of content the user had requested that had certain properties and changing how they would be presented (effectively hiding those items for the testing period).

Now, they may have intended to improve users' emotions. By most external reviews, they were unsuccessful and failed to have much of an effect regardless. None of that matters. You don't get to decide on your own to conduct experiments like that, even if they are well intentioned. (EVERY experiment is "well intentioned", at least by the experimenter.)

Why is it that people who are supposedly highly educated, experienced [observation: your low /. UID], and used to dealing with complex issues have such an insane ignorance with regard to the Common Rule? I know techie nerds/geeks (myself included) stereotypically have less than ideal social skills, but this Facebook issue seems to have revealed a deep sociopathy and lack of empathy in much of the tech industry. It's like people simply cannot abide the idea of even trivial external checks - checks meant to prevent some of the serious problems that have happened in the past when people experimented without any oversight - and so they dream up all kinds of excuses and bad arguments to try and deflect the topic. I'll try to point out these problems, using your post as an example. (I'm not trying to pick on you personally.)

Changing how your website performs text output is not experimenting with users.

Total straw man, as that's not what Facebook did. This might suggest a failure to read the actual paper, or maybe a serious misunderstanding of the difference between changing your own product (to which people might react emotionally) and setting out to manipulate those emotions as a goal in itself.

There's no need for consent

That may or may not be needed in an actual experiment. Which is why you ask the IRB, who can waive the consent requirement in some cases. Requirements such as getting informed consent are only settled after you talk to the review board.

when I move a button, nor when facebook changes an algorithm.

Of course not. That kind of change is totally off topic.

Take a breath and reconsider.

This type of casual dismissal is what I was talking about above. It suggests to the rest of the world that they shouldn't trust the tech industry, because it apparently doesn't care about ethics issues or is frighteningly ignorant of basic social constructs.

Comment Re:ethical science (Score 1) 141

No, you don't get a blanket exemption from informed consent simply because it makes your experiment *hard*.

You ask an IRB for a waiver, each time, like you're supposed to. The whole point is that you are not supposed to be running experiments *on humans* without supervision. We've had way too many problems with that in the past, and so the requirement of getting a 3rd party to sign off first was invented as a deterrent against running unethical experiments.

For some reason, there are a lot of people who are *shockingly ignorant* on this subject. They see this requirement as some sort of hostile or confrontational situation. Well, who do you think staffs an IRB? Other scientists, of course. It's not like they want to prevent people from doing experiments. In many cases where the experiment is not possible under the usual rules, they can *grant waivers* to some of the requirements. Facebook's study probably would have been an example of that: initial consent waived or deferred into a debriefing. Or something else - if you work with the IRB, maybe other workarounds could be found.

The point being, you don't get to make this decision on your own as the experimenter. For details, check with your lawyer. Seriously. This stuff can be a *felony* in some situations, and some *state* laws are even stronger. A real lawyer is required in almost all cases.

Comment Get approval of an IRB, like everybody else (Score 2) 141

The whole point here is that you need somebody else who is not the experimenter to sign off on the experiment when humans are involved. We call those people the IRB.

What's that, tech industry? You think A/B testing would be impacted? That's a popularity contest, not an experiment. Facebook's experiment had the specific goal of being able to manipulate the emotions of their users, which goes far beyond simply asking which website layout they find more attractive or useful.

What's that, tech industry? You think it would take way too much time if you had to get approval for experiments? Then throw together a multi-company group to found your own IRB. I'm sure there are universities that would be willing to partner with that group to lend their advice and help the group get started quickly.

What's that, tech industry? You think that there is no way you could conduct your experiments if you had to get proper informed consent (which has specific criteria - an EULA or TOS does not count)? First: welcome to the club. Sometimes, doing proper and ethical experiments is hard. Many disciplines have to deal with that, and I guarantee it is easier to find alternative ways to test your theories about "social media" than it is for the psychologist trying to investigate complex mental health issues, and both of those areas of research get to skip the whole "untested, unknown, and probably horribly dangerous new drug" mess that some doctors have to find a way to test without killing the participants.

Worse - and this betrays the total and complete ignorance of the people at Facebook who ran this experiment - if they had bothered to ask an IRB like you are supposed to, there is a good chance that some requirements, such as having to get informed consent in advance, could have been waived. Their experiment simply wasn't that risky compared to most experiments involving human testing.

TL;DR - If the tech industry decided to work with the process and bothered to ask an IRB, they would have avoided a lot of bad PR. Their failure to do this - and their insistence afterwards that even a trivial "trust but verify" check is the kind of thing that only applies to *other people* - only serves to make people fear the entire industry. Justifiably. Would you want to buy stuff from people who avoid every ethics regulation?

Of course, I haven't addressed any of the state laws, some of which have even stronger requirements...

Submission + - U.S. Law Enforcement Seeks to Halt Apple-Google Encryption of Mobile Data (bloomberg.com)

schwit1 writes: U.S. law enforcement officials are urging Apple and Google to give authorities access to smartphone data that the companies have decided to block, and are weighing whether to appeal to executives or seek congressional legislation.

The new privacy features, announced two weeks ago by the California-based companies, will stymie investigations into crimes ranging from drug dealing to terrorism, law enforcement officials said.

“This is a very bad idea,” said the chief of the Washington Metropolitan Police Department in an interview. Smartphone communication is “going to be the preferred method of the pedophile and the criminal. We are going to lose a lot of investigative opportunities.”

Submission + - Calling Mr Orwell, rejigged executive order makes collecting data not collecting (techdirt.com)

sandbagger writes: '...it is often the case that one can be led astray by relying on the generic or commonly understood definition of a particular word.' Specifically, words offering constitutional protections against unreasonable search and seizure. TechDirt looks at how the term "collection" has been redefined under Executive Order 12333 to allow basically every information dragnet, provided no one looks at it. "Collection" is now defined as "collection plus action." According to this document, data still isn't collected, even if it's been gathered, packaged, and sent to a "supervisory authority." No collection happens until examination. It's Schroedinger's data, neither collected nor uncollected until the "box" has been opened. This leads to the question of aging off collected data/communications: if certain (non-)collections haven't been examined at the end of the 5-year storage limit, are they allowed to be retained simply because they haven't officially been collected yet? Does the timer start when the "box" is opened or when the "box" is filled?

Submission + - "The internet poses one of the greatest threats to our existence" (zdnet.com)

An anonymous reader writes: "The internet poses one of the greatest threats to our existence," said [Australian] Senator Glen Lazarus on Thursday night. Hah! A former rugby player says something dumb, that's always funny, right? No. This mix of ignorance, fear, and sometimes plain laziness infests so many of Australia's lawmakers — and right now that's dangerous.
The Australian Senate was debating new national security laws for Australia. Those laws passed. They give the Australian Security Intelligence Organisation (ASIO) expansive powers to spy on all Australian internet users, and dramatically restrict freedom of the press.

Australian spies will soon have the power to monitor the entire Australian internet with just one warrant, and journalists and whistleblowers will face up to 10 years' jail for disclosing classified information.

The government's first tranche of tougher anti-terrorism bills, which will beef up the powers of the domestic spy agency ASIO, passed the Senate by 44 votes to 12 on Thursday night with bipartisan support from Labor.
The bill, the National Security Legislation Amendment Bill (No. 1) 2014, will now be sent to the House of Representatives, where passage is all but guaranteed on Tuesday at the earliest.

Comment Re:launchd (Score 3, Interesting) 469

I'm not talking about *init systems* - systemd was never "just an init system". Remember, it has absorbed stuff like network management and system authentication. That kind of feature often requires linking to (L)GPL code, and you can trigger the GPL's requirements depending on how you do that.

So Poettering wants to move all those function calls to (k)dbus. In his own words, "... the primary interfacing between the executed desktop apps and the rest of the system is via IPC (which is why we work on kdbus and teach it all kinds of sand-boxing features)".

Comment Re:Not a boycott but a confirmation (Score 2) 469

That's exactly my point. I'm suggesting the goal is to avoid creating a derivative work. The GPL describes various ways to recognise a project as having "derived" from covered code, and linking copyleft and proprietary code together is one of them (with some variation depending on whether we are talking GPL or LGPL).

Remember that one of Poettering's goals is, in his own words, "... the primary interfacing between the executed desktop apps and the rest of the system is via IPC (which is why we work on kdbus and teach it all kinds of sand-boxing features)".

The point is that if I want to do (for example) some sort of user authentication, I may have to link against libpam.so. This is something that would be reasonably common in embedded systems, and linking covered code into your embedded device (and having to distribute libpam.so with your product) could easily make it a derivative work. (Details matter; ask your lawyer about specific projects.)

Once those features are absorbed into Poettering's project, you avoid all that risk, because you don't interface with the system features directly and instead use "local RPC". This changes the project from being a potentially infringing derivative work into something that merely uses the tool. Merely using a tool that is licenced under the GPL is explicitly exempted, as the GPL only covers redistribution and not use ("the GPL is not an EULA"). This is a major change in legal status for your typical embedded device, which often wants a minimal OS to host its embedded app. Its makers would also really like to avoid having to handle anything GPL. Switching to "local RPC" for all system interaction neatly fixes that problem.
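To make the contrast concrete, here's a rough sketch of my own (not taken from any systemd or PAM documentation; error handling trimmed, and sd-bus is used purely as a convenient bus client): the first half links libpam.so into the binary and calls it directly, which is exactly the situation where derivative-work questions come up; the second half asks logind a question over D-Bus, so the library that actually implements the feature never appears in your link line - only serialized messages cross the boundary.

    /* sketch.c -- illustrative only: direct linking vs. "local RPC".
     * Build (roughly): gcc sketch.c -lpam -lsystemd
     */
    #include <stdio.h>
    #include <security/pam_appl.h>   /* direct API: pulls libpam.so into the link */
    #include <systemd/sd-bus.h>      /* bus client: only messages cross the boundary */

    /* Trivial PAM conversation callback so the example is self-contained
     * (never actually invoked, since we never call pam_authenticate). */
    static int conv_fn(int n, const struct pam_message **msg,
                       struct pam_response **resp, void *data) {
        (void)n; (void)msg; (void)data; *resp = NULL;
        return PAM_CONV_ERR;
    }

    int main(void) {
        /* 1) Traditional route: call the library directly.  Your binary now
         *    links against libpam.so -- the situation the (L)GPL linking
         *    questions are about. */
        struct pam_conv conv = { conv_fn, NULL };
        pam_handle_t *pamh = NULL;
        if (pam_start("login", "nobody", &conv, &pamh) == PAM_SUCCESS)
            pam_end(pamh, PAM_SUCCESS);

        /* 2) "Local RPC" route: ask logind a question over D-Bus.  The code
         *    implementing the feature lives in another process; nothing from
         *    it is linked into this program. */
        sd_bus *bus = NULL;
        sd_bus_error err = SD_BUS_ERROR_NULL;
        sd_bus_message *reply = NULL;
        if (sd_bus_open_system(&bus) >= 0 &&
            sd_bus_call_method(bus, "org.freedesktop.login1",
                               "/org/freedesktop/login1",
                               "org.freedesktop.login1.Manager",
                               "ListSessions", &err, &reply, "") >= 0)
            printf("got a session list via IPC, no link against the implementation\n");

        sd_bus_message_unref(reply);
        sd_bus_error_free(&err);
        sd_bus_unref(bus);
        return 0;
    }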

We don't run across this pattern with traditional RPC tools, because it's bad for performance to needlessly serialize everything when you could simply call a function directly.

Comment Re:Not a boycott but a confirmation (Score 2) 469

The traditional RPC tools don't force a change in API for local requests - they link against the same traditional .so file that any local app would use. That is very different from forcing dbus to be the only exposed API even for local use. Apache may provide features over sockets, but apxs(1) still exists and apr.h still exposes a traditional API.
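A tiny sketch of what that "traditional API" means in practice (my own example, not from Apache's docs; the declarations actually live in apr_general.h, apr_pools.h, and apr_strings.h, with apr.h pulled in by them): you include the headers, link libapr-1.so, and make plain in-process function calls - no bus daemon, no serialization.

    /* apr_sketch.c -- a traditional, directly linked library API.
     * Build (roughly): gcc apr_sketch.c $(pkg-config --cflags --libs apr-1)
     */
    #include <stdio.h>
    #include <apr_general.h>
    #include <apr_pools.h>
    #include <apr_strings.h>

    int main(void) {
        apr_initialize();                    /* plain function call into libapr-1.so */

        apr_pool_t *pool = NULL;
        if (apr_pool_create(&pool, NULL) == APR_SUCCESS) {
            /* Memory comes from an in-process allocator; nothing was
             * serialized and no IPC was involved. */
            char *msg = apr_pstrdup(pool, "allocated via a directly linked API");
            puts(msg);
            apr_pool_destroy(pool);
        }

        apr_terminate();
        return 0;
    }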

I'm not a lawyer either, but this is obviously unexplored territory for the GPL (which doesn't have a lot of court precedent regardless of the current issue).

It's not like we'll ever find some smoking gun proof. This is simply the best theory I've heard.

Comment Re:launchd (Score 1) 469

systemd is designed to replace APIs based on {static or dynamic} linking with the dbus/kdbus IPC mechanism, as a way to use (L)GPL libraries without being bound by the (L)GPL.

Note that despite uselessd's much saner approach to technical features, the exposed dbus API is still required. Switching to the uselessd implementation still enables this new type of "tivoization".

Comment Re:Not a boycott but a confirmation (Score 4, Interesting) 469

That's the whole point of all of this mess: {,k}dbus

Neither an init system nor vertical integration is the goal. The one consistent thing in all of the "systemd mess" is to leave the actual implementation officially a moving target, where the traditional .so-based library APIs are either hidden and undocumented or left out entirely. This forces you to use an IPC mechanism (dbus/kdbus) instead of simply linking to the functions you need and calling them directly. Forcing data to be serialized/deserialized so it can be sent over IPC is not nearly as efficient as calling a dynamically loaded local function. The systemd people love fast things ("boot time!", etc.), so why would they require this slow IPC everywhere?

*** If you never need to link to a library to use it, you can "link" to and distribute GPL code without being bound by the GPL. Poettering's cabal and systemd are an attempt to enable a new form of "tivoization". ***

If you are technically only "using" a library (no linking, no modifications to the library), you have not "infected" your proprietary code with the GPL. It's slower, but computers have gotten fast enough that it doesn't really matter.

The nasty part is that by forcing arbitrary, incompatible interfaces through systemd, to run stuff like GNOME you have to provide the key dbus features even if you don't use systemd. The end-run around the GPL still works with uselessd or any other "systemd replacement".

Unfortunately, Lennart's cabal has everybody discussing technical features, so this obvious goal isn't even addressed.

Comment So what's wrong with systemd, really? (Score 5, Insightful) 385

(paraphrasing a previous post of mine, because more people should see this)

It breaks existing promises, and makes few new promises in return.

There has been a lot of talk about the various technical problems with systemd and its developers' inexperience-betraying design decisions. As bad as those are, they miss the larger point. There has also been a lot of very important talk about the philosophy of design ("the Unix way") that again shows how little experience the developers have, and their disregard for the work people have already done and will have to do to fix the systemd mess.

These topics are valid, but miss the larger problem that systemd represents and the threat it is to Free Software in the Linux ecosystem.

## The problem with systemd's design: embrace and extend ##

The excuse for all the vertical integration Poettering's cabal have been busy agglutinating into what they still sometimes claim is "just an init system" has been the laughable claim that systemd is in any way "modular". They claim that "modular" is a *compile time* feature, or some property related to the fact that they build several ELF binaries. This is not modularity, because it does not represent some form of stable, well-defined API.

What is an API (Application Programming Interface)? It's not a technical feature. It is not documentation that describes how to use some set of features. It is not a calling convention. So what is it?

An API is a PROMISE .

It is a social feature, not a technical one.

The functions and documentation are just a particular implementation of that promise. The key attribute that makes an API an API is that it is a promise by the developer: "If you want to interact with some feature, this is the way to do so, because while other internal stuff may change at any time, I promise this set of functions will be stable and reliable".

Binding previously separate features into one project is bad design, but it is not, by itself, the problem with systemd. The problem came when Poettering stripped away the barriers between features with the specific goal of removing established APIs (and breaking existing promises that developers relied on). His stuff may compile into various separate programs, but Poettering is very careful to keep various key interfaces "unstable" (despite being good enough for RHEL), specifically so as not to make any promise about how those interfaces will work in the future. He likes to call this hiding of interfaces "efficiency" or "removing complexity". What he never mentions is that many of us relied on those promises, and by removing them he has at best forced others to do a lot of work to fix the breakage, or at worst made various features impossible.

A good example is logind, which was absorbed into systemd just so promises about its future behaviour ("stable APIs") could be removed.

The reason many of us who have been watching Poettering's cabal for years now suggest these changes are intentional and malicious is based on this. Occasionally removing features because of a technical need or bug or security requirement is understandable. Purposefully stripping out entire sets of features - that is, the promises that allow other groups to develop with confidence that some feature won't simply vanish - is something entirely different.

If MS acted like Poettering's cabal and removed a formerly public API that competitors used - while promoting their own product that happens to use internal, not-publicly-promised APIs - the world would be screaming "monopoly". This has happened before, and resulted in several high-profile court cases.

## systemd threatens the GPL ##

It goes without saying that many people would like to distribute various GPL-licenced software without being bound by its terms. The fact that some of these same people use the courts to threaten people who do the same to their software is noted, but off topic for now. The problem is the linking clauses in the GPL. Link the wrong way with GPL software, and the so-called "viral" nature kicks in.

systemd (via kdbus) is an end-run around this. By turning function calls into "IPC", you don't have to link to the GPL-licenced code. A lot of players are willing to take the loss in performance for the benefit of distributing GPL software "unmodified".

You may have noticed the "systemd way" (and to some extent, the "GNOME way") has been to ONLY provide access across dbus (soon, kdbus) instead of providing a local library .so and .h you can use directly. When the "local" forms even exist, they are often poorly documented and usually unstable. You may have also noticed that for "compatibility" (by fiat), the "not-systemd" replacements tend to talk over dbus, as that is the mandated "correct" interface.

Embracing and extending Linux with systemd is only a tool. The goal here is a new form of "tivoization" - to let proprietary businesses use GPL code while never opening up their part.

Is this really what you want to support by using systemd?

//now that you know this, guess what the point of systemd's control of cgroups is really about

//hint: think proprietary/GPL isolation
