Comment: Re: GNOME (Score 3, Informative) 552

by Endymion (#48816723) Attached to: SystemD Gains New Networking Features

You might want to read this post from a few years ago, when GNOME and GTK 3.x were replacing their 2.x branches. Of particular interest are the quotes from Allan Day (GNOME dev and RedHat employee):

Facilitating the unrestricted use of extensions and themes by end users seems contrary to the central tenets of the GNOME 3 design. We’ve fought long and hard to give GNOME 3 a consistent visual appearance, to make it synonymous with a single user experience and to ensure that that experience is of a consistently high quality. A general purpose extensions and themes distribution system seems to threaten much of that.

[...]

I’m particularly surprised by the inclusion of themes. It seems bizarre that we specifically designed the GNOME 3 control center not to include theme installation/selection and then to reintroduce that very same functionality via extensions.

[...]

One particular issue is the ability for users to modify the top bar via extensions. This part of the UI is vital for giving GNOME 3 a distinctive visual appearance. If we do have extensions, I would very much like to see the top bar made out of bounds for extension writers, therefore. We have to have at least *something* that remains consistent.

[...]

The point is that it decreases our brand presence. That particular user might understand what it is that they are running, but the person who sees them using their machine or even sees their screenshots on the web will not. The question we have to ask ourselves is: how do we make sure that people recognise a GNOME install when they see one?

So not only is this about enforcing a monoculture, the reason to enforce a monoculture is that the desktop isn't about getting work done. No, the desktop - according to GNOME - is for branding/advertising.

*sigh*

While we're on the subject, I recommend everybody read this post by the same author. It's speculative, but it does explain a lot of what has been happening to linux over the last few years... and how it may fit into the larger picture.

Comment: "on the net" is not always a good thing (Score 1) 43

by Endymion (#48674653) Attached to: PlayStation Game-Streaming Service Comes To Samsung Smart TVs In 2015

Download cap? It's a "smart tv", so I expect the uploads for the microphone-related features to be painful under bandwidth caps. Really, though, anybody who buys into these spy^H^H^H "smart" (networked) products has no right to complain about the 1984-style future. "Voice activated"? Yes, only after uploading the room's audio to a remote server for processing. It turns out it only took rebranding surveillance devices as "smart", and consumers will pay money to have their house bugged.

I like the internet, and have spent a lot of my life working on parts of it, but.... it is a very bad idea to put everything on a network.

Comment: Re:Hope and change (Score 1) 83

by Endymion (#48553035) Attached to: FISA Court Extends Section 215 Bulk Surveillance For 90 Days

As you examine larger and larger groups of people, no matter the type of group, their similarity to the average population approaches 1.0. A small group is brought together for some specific reason that may differentiate it; at the other extreme, a sufficiently large group of people simply is the entire population.

National armies and other armed forces will usually be some of the larger groups, so they will have a lot of similarities with the average population... including most of the political arguments. If the army were given such extreme orders, I suspect they would end up just as divided as the rest of the country.

Comment: Re:hum (Score 5, Insightful) 647

by Endymion (#48481353) Attached to: Debian Forked Over Systemd

Ahh, the usual misrepresentation of why we oppose systemd that always shows up. Calling us haters while trying to reframe the discussion away from the real issues isn't convincing - it just adds evidence that systemd gains position by propaganda and politics instead of design and implementation quality. No, you are not going to scare us away from linux. Some may retreat to FreeBSD, which is fine (it's a good OS). The rest of us are going to stay with linux, even if large parts of linux leave and become part of the systemd monoculture. We've been here before, after all, over a decade ago.

The varied technical issues with systemd are bad enough, but they have already been discussed, and are a central reason why the sysadmins are forking Debian. Many systemd advocates try to steer discussions back to these technical issues - while denying that systemd doesn't actually work for everybody - to avoid talking about the fundamental design problems and philosophical changes that systemd forces on Linux. While it is currently popular to "move fast and break things", those of us with more experience understand the value in not breaking everything. None of this means that those who are better served by systemd should stop using it! We're only angry about the attempts to force a monoculture by breaking compatibility for political reasons, when there was no technical need. You know, like Microsoft does with their "not invented here" attitude.

Still, those are philosophical issues about the software itself. That is not the primary problem some of us have with systemd, which is not about technical problems, but is instead an attack on our preferred method of licensing. The systemd takeover is an attempt to separate Linux and many userspace tools from the GPL, so that software can be used under LGPL terms instead.

What is the big difference between the GPL and the LGPL? Linkage. Linking against a GPL library requires you to follow certain requirements, while the LGPL specifically allows that usage. (k)dbus provides the workaround, by replacing what would be a normal function call into a library with an IPC call. It's slower, but so what, computers are way faster than needed. In the end, while you can still choose to release your code as GPL, if everything useful has to go through an IPC mechanism, the license requirements that actually apply end up being more like the LGPL. For a better explanation, see this post by stevel in the Gentoo forums.
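To make the mechanical difference concrete, here is a toy sketch in Python - not real D-Bus code, just a `socketpair` standing in for the bus - showing the same operation done as a direct in-process call (what linking gives you) versus as a marshalled round trip across a process boundary (what an IPC-based API gives you):

```python
import socket

def add(a, b):
    """Stand-in for a library function you would normally link against."""
    return a + b

# Direct "linked" call: one in-process function invocation.
assert add(2, 3) == 5

# The same operation as an IPC round trip, standing in for a bus call:
# marshal the arguments into bytes, cross a socket boundary, unmarshal,
# compute, then marshal the result and cross back.
client, server = socket.socketpair()
client.sendall(b"2 3")                              # "caller" side
a, b = (int(x) for x in server.recv(64).split())    # "service" side
server.sendall(str(add(a, b)).encode())
result = int(client.recv(64))                       # answer arrives as bytes
print(result)  # prints 5
client.close()
server.close()
```

The result is identical either way; the point of the sketch is only that the second form never "links" to the function - it just exchanges serialized messages with whatever process happens to implement it, which is exactly the property the licensing argument above hinges on.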

Well, if I wanted to release under the LGPL, I would. What I'm not going to do is undermine my choice of license just because a bunch of embedded developers (and others) want to use what were traditionally GPL projects without having to be bound by the copyleft requirements. If this was proprietary software, you would call that kind of behavior "stealing" or "piracy".

So don't bother with claims about "faster desktops" or "easier programming". When your solution also bundles a forced monoculture ("unifying the differences between distributions") and contains a loophole around the license some of us chose, it is simply not an option for those of us who place "freedom" as the most important feature. /how much does JTRIG (or their equivalent) pay for these propaganda attempts, anyway? //It's a waste of money regardless, given how transparent these comments are ///some of this post is reused from a post I made on HN

Comment: Re:No trust (Score 1) 581

by Endymion (#48419713) Attached to: Debian Votes Against Mandating Non-systemd Compatibility

We've been explaining this for years, only to face the slurs and word-game attacks by the systemd advocates.

Once more unto the breach, dear friends

First, re: monolithic. What IS a factual error is the idea that, when discussing software, the term "monolithic" has anything at all to do with the number of files a project happens to compile into. To claim that something has a "monolithic design" is to claim that the features are far too tightly coupled, and cannot be replaced individually should the need arise. That need can range from personal opinion to some horrible security flaw being discovered. Really, this is an extension of the idea of trying to write "modular code" instead of an unmaintainable pile of spaghetti. Systemd is the worst case I've ever seen of a project heading down the "spaghetti" route, and maintainability is going to be painful once the needs of the real world start making demands against the "simple" design systemd started as. In this sense, it is a repeat of the "hal" fiasco. Only worse, given how much more money and time has been invested so far.

That's just a general technical critique. The real problem with systemd is not technical, but the fact that it is actively trying to remove the unix nature of linux and replace it with a more windows-ish style. Yes, that is opinion. Some of us are of the opinion that we came to unix to get a better OS than the hard-to-use, obscure-by-design windows style of OS. So we will never be using systemd, as the very nature of what systemd enforces goes against the very reason we chose to use linux in the first place. The only reason you see anger here is because Lennart chose the wrong way to implement these goals. He could have forked off and made his own sandbox where he is free to do whatever he wants. Instead, he is ripping apart a place others call their home.

There is an even deeper problem in play here, too. The maintainability is enough of a reason for the sysadmins to fear systemd, and my personal opinion and personal requirements are enough for me to avoid systemd, but those are both local concerns. The bigger problem, which is rarely talked about due to the systemd advocates yelling about everything else and distracting a lot of people with technical minutiae, is a problem of ideology and Free Software. As we used to say here on /. many years ago, sometimes the "free as in speech" of Free Software is more important than the "free as in beer" ($$$/cost) of Open Source. See the usual sources like the FSF for why; what matters is that some of us choose to release projects under the GPL, for ideological reasons, and not the "Lesser" variant, the LGPL. This makes it harder to use some software in proprietary code, which was the intent behind the choice of the GPL over the LGPL.

Well, that pisses off a lot of people, who would like to use Free Software in their products (distribution), but not be bound by the GPL's requirements. Which brings us to how systemd is an end-run around the GPL in a new variant on "tivoization". (k)Dbus is simply an excuse to say you're not "linking" to GPL code, by turning all API calls into RPC. As someone who has worked on building a community of Free Software, this will be a devastating setback. (stevel explains it in more detail at that link)

Of course, the fact that systemd's compartmentalization (with cgroups) creates a purposefully opaque box you're not supposed to care about is exactly how you would pull off some scheme to force DRM into linux, and I'm sure the NSA just loves having such a huge, overly complicated pile of new C code placed into such a key position. These are good reasons for avoiding systemd, but like the technical arguments, are not particularly important.

//Just watch: I'll be accused of being "stupid" or "paranoid" for posting this, yet nobody will refute the main point about how you force the GPL/LGPL distinction to vanish when all API calls *must* be done through (k)dbus. I wonder if JTRIG is paying any of them?

Comment: Re:Another Annoying Dependency? (Score 1) 581

by Endymion (#48418849) Attached to: Debian Votes Against Mandating Non-systemd Compatibility

Oh, and I forgot to address this bit:

it does nothing for you unless your app uses ALSA libraries, so it doesn't help your any with your /dev/dsp using app.

So which is it? Lying? Or you've never actually used ALSA and don't know how it works?

This is part of the config file I mentioned in my other post, that handles the OSS->dmix routing.

# try this file as your ${HOME}/.asoundrc
# for the dsp0 sections to work, you will need the OSS compatibility
# kernel drivers. If you never get a /dev/dsp, try running:
# sudo modprobe snd-pcm-oss snd-mixer-oss
#

pcm.dmixer {
        type asym
        playback.pcm {
                type "dmix"
                slave {
                        channels 2 # assuming 2-ch stereo
                        pcm {
                                type hw
                                # adjust these to fit your hardware
                                card 0
                                device 0
                                format S16_LE
                                rate 44100
                        }
                }
                bindings {
                        0 0
                        1 1
                }
        }
        capture.pcm "hw:0"
}

# route /dev/dsp to dmix

pcm.dsp0 {
        type plug
        slave.pcm "dmixer"
}

ctl.dsp0 {
        type plug
        slave.pcm "dmixer"
}

# repeat the above as pcm.dsp1/ctl.dsp1 as needed,
# if you have more than one sound device.

# set default ALSA route to the same dmix
pcm.!default {
        type plug
        slave.pcm "dmixer"
}

ctl.!default {
        type plug
        slave.pcm "dmixer"
}

Comment: Re:Another Annoying Dependency? (Score 5, Insightful) 581

by Endymion (#48418365) Attached to: Debian Votes Against Mandating Non-systemd Compatibility

I don't think you remember how things were before PulseAudio.

No, we remember quite clearly. ALSA worked just fine, with only one easily fixed issue: distros needed to set asoundrc to use dmix by default. Those of us with multiple soundcards and more demanding requirements (music production) went through the minor trouble of setting up jackd, which solved all the rest of the problems regarding synchronization and very-low-latency data processing.

Really, the only thing that ALSA needed was a nice GUI editor/frontend for the config file. Those of us that used jackd already had such an editor (qjackctl, among others).

Oh, what's that? You want to claim that PA forced better drivers? That may be true, but it is not a feature of PA, nor a reason to use it. (Driver fixes are orthogonal to which software uses the driver.) Some of us actually read the hardware compatibility lists before buying our hardware, too, and never had a problem with stability.

PA basically handles everything and provides interfaces for everything,

Yes, it is a wrapper around ALSA (unless you somehow used some other type of sound driver than the ALSA snd-card-*.ko kernel modules). It adds latency and a giant pile of useless overengineering, when a simple config file was all that was needed (and maybe a GUI editor for that file). Any of the fancier features it provides were better served by jackd anyway.

Oh, and that's when it works. Even just a year ago, when I last tried PA, it introduced a shocking number of compatibility problems for no good reason, and still added a LOT of latency (and I'm not even talking about the non-sound issues!). I fixed all of that by simply uninstalling PA so everything fell back to using ALSA by itself. The list of problems is so large now that even non-technical people I know make jokes about how bad PA is.

As usual, while there is some need for improvement in ALSA (and other linux features), the bloated, non-working, latency-adding mess called PulseAudio is *not* the solution.

Comment: Re:Not true. There's a different division (Score 5, Insightful) 863

You're close - the split is indeed between the older Unix types and people who just want to be "users", but you need to recalibrate their relative positions. Those of us who are against being forced to use[1] systemd see this in a different light. As computers became inexpensive over the last decade, a new generation of younger people joined the Linux community. They were young and inexperienced, and often made well-known mistakes in their software. That was ok - we were all n00bs at first, and many of us tried to gently nudge the inexperienced developers in useful directions. Very few listened, and they have now decided that anything "old" is bad.

Listening to those who came before you is important, if you want to avoid making the same mistakes. A lot of those lessons are collected under what many refer to as the "Unix Philosophy". Mostly, that "philosophy" is just a handful of tricks that make maintenance saner. A lot of the stuff that people claim is "overcomplicated", "messy" or an "archaic design" is in such an "ugly" state for a reason, and those messy bits are bugfixes. The nice ideal design we all start with rarely fits exactly when we introduce it to the problems and unforeseen circumstances of the real world. That ugly spaghetti-code-style hack that seems to ignore and bypass the "correct" way? That is probably a bug fix, and by removing it you probably reintroduce the bug.

You call us luddites, but heed our warning at your own peril. Some bugs and bad designs have happened before, and a key reason why we don't like systemd is that it makes one of the worst mistakes you can ever make when designing software: it throws out the supposedly "old" or "ugly" parts. I suggest reading Joel Spolsky's famous essay on this topic:

you can ask almost any programmer today about the code they are working on. "It's a big hairy mess," they will tell you. "I'd like nothing better than to throw it out and start over."

Why is it a mess?

"Well," they say, "look at this function. It is two pages long! None of this stuff belongs in there! I don't know what half of these API calls are for."
[...]
The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. There's nothing wrong with it. It doesn't acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that's kind of gross if it's not made out of all new material?

Back to that two page function. Yes, I know, it's just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes.

Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it's like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.
[...]
When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

Systemd is still at the early stage, where it can get away with this kind of bad design, but as more and more people start to use it and the never-ending list of Real World Problems starts to creep in, the systemd developers - and the distros that joined them - are going to have one nasty mess on their hands. It is going to be a nightmare to rediscover all of the bugfixes and real-world messiness that were thrown away because they were "old".

We tried to warn them, and were labeled luddites.

Well, as B5's Londo Mollari put it:

"Ah, arrogance and stupidity all in the same package. How efficient of you."

As for the users who like systemd because "it works", this is probably true. When you simply want an appliance that "just works", systemd is probably the better choice. When you have simple needs, an appliance is just fine.

Those of us who do not want an appliance, on the other hand, will keep our general purpose computers, because "ease of use" just isn't that important compared to the larger problems threatening those devices.

[1] Note: I didn't say "against systemd" - if YOU want to run it, you should. If systemd provides better features for your requirements, then that's great. This doesn't mean it fits everywhere or that it satisfies all possible requirements.

Comment: Re:Yes. It's called "informed consent." (Score 1) 141

> There is only a question of degree and depth of analysis.

I take it you didn't read the first half of my post. That is not the only difference between what is effectively a passive popularity contest that you happen to measure in revenue or subscriptions, and an attempt to effect change in people's emotions as a specific goal of my actions. Seriously, this is not a complicated distinction.

> The Common Rule does not apply here.

I see you are not up to date on various state laws. I suggest starting with Massachusetts.

> There's nothing objectively special about name-calling it "human experimentation".

Are you seriously not aware of the long history of unethical experimentation? Really?

Or are you taking offense at the comparison? Remember, the entire point I'm trying to make is that while experiments are a very good thing, because of the very serious problems that have happened disturbingly often in the past when humans were involved, there is now an ethics requirement to get somebody else to sign off on it first. This is a trivial requirement for experiments like the one Facebook did. I expect that the tech industry could make some "internet research"-specific IRB that reduced the time involved to almost nothing, because by being involved at all, they would discourage most of the problematic experiments from even being proposed.

> Every single person who is offended by this, seems to be on a bandwagon to nowhere.

So, not only are you not listening to what those people are saying, you bring out the content-free insults that don't actually address the arguments being discussed.

Nevermind; in return for your overgeneralization I'll just overgeneralize you as someone who is obviously invested in abusing their users and therefore blind to the damage they could cause. Just like arguing with the creationists, such people are not worth the effort; as Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

Comment: Re:Yes. It's called "informed consent." (Score 1) 141

This is true, of course.

Which is why so many people are angry at Facebook - they went far beyond simply changing the layout or pagination or similar features. Instead, they set out to see if they could manipulate the emotions of their users, by the indirect means of selecting bits of content that the user requested that had certain properties, and changing how they would be presented (effectively hiding those items for the testing period).

Now, they may have intended to improve users' emotions. By most external reviews, they were unsuccessful and failed to have much of an effect regardless. None of that matters. You don't get to decide on your own to conduct experiments like that, even if they are well intentioned. (EVERY experiment is "well intentioned", at least by the experimenter.)

Why is it that people who are supposedly highly educated, experienced [observation: your low /. UID] and used to dealing with complex issues have such an insane ignorance with regards to the Common Rule? I know techie nerds/geeks (myself included) stereotypically have less than ideal social skills, but this Facebook issue seems to have revealed a deep sociopathy and lack of empathy in most of the tech industry. It's like people simply cannot abide the idea of even trivial external checks - checks meant to prevent some of the serious problems that have happened in the past when people experimented without any oversight - and so they dream up all kinds of excuses and bad arguments to try and deflect the topic. I'll try to point out these problems, using your post as an example. (I'm not trying to pick on you personally.)

Changing how your website performs text output is not experimenting with users.

Total straw man, as that's not what Facebook did. This might suggest a failure to read the actual paper, or maybe some serious misunderstanding of the difference between changing your own product (to which people might react emotionally) and setting out to manipulate those emotions as a goal in itself.

There's no need for consent

That may or may not be needed in an actual experiment. Which is why you ask the IRB, who can waive the consent requirement in some cases. Requirements such as getting Informed Consent only apply after you talk to the review board.

when I move a button, nor when facebook changes an algorithm.

Of course not. That kind of change is totally off topic.

Take a breath and reconsider.

This type of casual dismissal is what I was talking about above. It suggests to the rest of the world that they shouldn't trust the tech industry, because they apparently don't care about ethics issues or are frighteningly ignorant about basic social constructs.

Comment: Re:ethical science (Score 1) 141

No, you don't get to blanketly avoid informed consent simply because it makes your experiment *hard*.

You ask an IRB for a waiver, each time, like you're supposed to. The whole point is that you are not supposed to be running experiments *on humans* without supervision. We've had way too many problems with that in the past, and so the requirement of getting a 3rd party to sign off first was invented as an incentive against running unethical experiments.

For some reason, there are a lot of people who are *shockingly ignorant* on this subject. They see this requirement as some sort of hostile or confrontational situation. Well, who do you think staffs an IRB? Other scientists, of course. It's not like they want to prevent people from doing experiments. In many cases where the experiment is not possible under the usual rules, they can *grant waivers* to some of the requirements. Facebook's study probably would have been an example of that: initial consent waived or deferred into a debriefing. Or something else - if you work with the IRB, maybe other workarounds to this problem could be found.

The point being, you don't get to make this decision on your own as the experimenter. /for details, check with your lawyer. Seriously. This stuff can be a *felony* in some situations, and some *state* laws are even stronger. A real lawyer is required in almost all cases.
