60% of the DNA is _definitely_ junk, as it consists of known repeated elements (LINEs, SINEs and others) and defunct genes. This is not an 'absence of evidence'; we know exactly how this DNA came about.
How do we know that those repetitions are not needed to accelerate (by parallel processing) some important process which, with a single expression, would otherwise be too slow to survive?
We don't fully understand how the phenotype develops from the genotype, and it very well might depend on statistical properties of gene appearances in the genome, not just their presence or absence. Is there anything in biology that could rule out such a possibility?
The world could have collaborated and built the modern Internet just fine on BSD licensed software, which is itself a variation of public domain.
True, but the nature of collaboration with BSD software would have been much more enterprise-y and committee-centered.
The idea of grass-roots FLOSS development only happened after Stallman's ideals of "giving back to the community" spread around with strong guarantees that their contributions would remain open, which didn't happen with the "public-domain-but-not-quite" that plagued the BSD-licensed but patent-encumbered UNIX systems.
I believe Stallman is credited for this because the average user never heard of Open Source or Free Software until the arrival of the GPL and its enabling of systems built with the Linux kernel and GNU userland.
You make it sound as if it was a coincidence that FLOSS took off at the very precise time the GPL was created. It was not. The GPL is different from other permissive licenses in that it perpetuates the openness of the system it's used in; therefore it has a natural mechanism for the survival of the project, which confers a competitive advantage and makes it more robust against closed forks.
for whatever reason, FOSS and similar ideas were completely unknown to average users until the GPL took off.
This "whatever" reason may very well be that the GPL was created. The fact that the source for all published modifications had to be released back to the project was, way back then, a strong incentive for FLOSS developers to prefer contributing to GPL projects over BSD ones. It may not be as relevant nowadays because the sharing culture has grown stronger, but the idea of "copyleft" and "share and share-alike" was a necessary element to make it happen.
The Unix wars were still fresh in our minds, so we were all painfully aware of what happens when Big Megacorp forks a free system and releases a strong closed fork with commercial support, competing with the original project. In that context, copyleft is a guarantee that the project you've volunteered your time to will not be strangled by a strong vendor using the software without giving anything back.
The idea that big companies should support open source software as proof of their good will, even if the license doesn't require it, was mostly a consequence of the dynamics reinforced by the GPL mindset. Prior to that, BSD-licensed UNIX vendors fought over the patents obtained on their particular variants and treated them as intellectual property to be defended, not as a shared resource to be cultivated.
Most of those will send an email link to the email address you had already registered with them.
And thus comes the danger of having all your logged-in email addresses accessible to whoever steals your phone, which was my original point.
So, you've never encountered a site with an "I've forgotten my password" option that sends you an email to log in?
Anyway, it's bad enough that a thief can access all data in the logged in service even if they can't change the password.
Q. If your Android phone is unlocked, how easy is it to change the passcode?
You have to enter the old passcode before entering a new one, same thing to disable it altogether.
But it's more than enough time to access all the services you're logged in to in your browser, and possibly change your password in them.
There's a frequent misunderstanding when people talk about freedom with respect to the GPL. The concept of "freedom" is itself not well defined, and historically there are at least two competing and somewhat opposing definitions, "positive" freedom (which is about maximizing the amount of things people are able to do) and "negative" freedom (not interfering with things that others want to do). The GPL is primarily concerned about the former, and your complaints are about the latter.
The goal of the GPL is that everybody can use the software for any purpose, and learn how any changes and modifications work; this is seen as a requirement to increase the amount of things that can be done with the software, guaranteeing that it can be adapted to any hardware or platform, with no commercial secrets getting in the way.
In order to achieve this goal, the GPL doesn't come "free" (as in "gratis", i.e. no cost): it has a cost that you must pay if you want to use it; but for anyone willing to pay it, there are no further restrictions imposed by anyone, for any modified version. In your case you *could* have merged your application code with the GPL library* for any purpose**, but you would have had to be willing to pay the cost, which is to release your own source code when you republish the software. So, your negative freedom is reduced (you are forbidden from keeping your version of the software hidden and publishing just the binaries), but the positive freedom of the system is increased: overall there are more people who know how to use your modifications and adapt them for other uses, which couldn't happen if you kept your modifications secret.
The expectation is that by adding contributions from many users to the pool of knowledge, the whole society sees an increase in the number of possible uses of the software ("positive freedom"); it's the same principle that motivated the patent system in the Renaissance. The "release what you know" cost is intended to publish knowledge that otherwise would not be shared, and thus cumulatively improve the whole system. Now, there are valid concerns that the upfront cost may instead work as a disincentive to participate in the system (both with patents and copyleft software), but that argument doesn't make it less free.
* Assuming the original license and copyright law allowed it. (This is why the FSF recommends using only GPL-compatible licenses).
**(Including selling it, although with FLOSS software this typically only works once for each release).
In contrast, I've always thought that the primary concern was towards the interests of the software.
I agree with your view, but that terminology anthropomorphizes an inanimate object, which IMHO makes it difficult to understand the benefits of that approach. If the GPL achieves "what's best for the project", few people will care.
I've been recently describing what's good about the GPL by highlighting the knowledge about the software.
Compared to other FLOSS licenses, the GPL/copyleft ones are the ones which best protect users' interest in learning how the software-as-a-project works. Permissive licenses which allow users to close the source of their forks provide more individual freedom, at the cost of losing the knowledge about those published forks; with copyleft, that knowledge is preserved.
As you see, the logic of my explanation is the same (you maintain everything in the project), but providing a concrete reason why users of the software should care about keeping it evolving in the public sphere.
Convenience trumps ideals more often than not.
Ideals are not there to achieve convenience. They exist to steer us away from convenience, to avoid short-term gains that would push us into some long term dead-ends.
So ideals are not useful because we live by them on a day to day basis, but because they warn us when we deviate too far from them. Of course, having a few idealists that *do* live by their principles is a useful reminder for the rest of us that agree with them, but are nonetheless swayed by convenience.
Remember the Ubuntu phone? Remember what people were excited about regarding it? Notice how it hasn't been achieved due to various business roadblocks, thus leaving the gate wide open for someone bigger to step in? Hint hint.
Actually, there are a couple of Ubuntu phones. Bq's Aquaris E4.5 was launched last month in Europe. (It's not the flagship device the Ubuntu Edge was supposed to be, but rather a semi-budget offering). And the Meizu MX4 Ubuntu Edition is a mid-range device.
Software testing doesn't protect against a user pressing the wrong button, which then works as expected. I agree it's a management error, but the failure in such cases is a lack of user testing.
Systems should be designed to follow the interactions that users are most likely to make, not the other way around, forcing users to follow the path that a developer thought would make sense. Unfortunately, user-centered design is still a foreign concept to a good chunk of developer houses.
Or you can blame the idiot designer who didn't properly explain the consequences of "doing this" in their black-box interface, so that the user could make an informed decision.
Like wiki pages, Flow posts have their own revision history. Flow-enabled pages have a wiki-style header. Each thread has a summary which can be community-edited. Threads can be collapsed and un-collapsed by anyone. All actions are logged. In short, wiki-style principles and ideas are implemented throughout the system.
However, a core property of wikis (that the structure of the page can be edited into any shape without the need for programming) is missing. Flow is a threaded conversation system by design, and only a threaded conversation system: it can't be tweaked by its users into something else, and the sequence of comments is shown in an order enforced by the tool. All discussion of how the tool could be generalized to support other kinds of collaboration workflows, or such basic needs as reordering and merging comments, which are trivial in the basic wiki "everything is a stream of text" model, was dodged or deferred for study in future "more complex" use cases. That didn't provide any confidence that those needs were understood by the design team.
I find your post interesting, and your points in many ways are an accurate analysis of many major problems with Wikipedia - yet I still find your point 11 ("The wiki is the problem") a non sequitur. A wiki is in essence a model for data storage, where the expectations for interaction and data management are closer to version control than to the classic CRUD cycle. As such, it's a neutral tool that could be used in many other ways and improved to cover most of the current shortcomings; in particular, there's no reason why those other "practical solutions" and workflows for organizing content couldn't be built on top of a wiki-like storage layer, so the contradiction you see doesn't exist in essence.
The problems you mention are for the most part caused by the community dynamics and rules, with a few caused by the current wiki platform, rather than the wiki storage model itself.
The only point directly related to organizing things as a wiki is point 6, "Page ownership" - which is a real problem, but one that only exists because of the decision to build an encyclopedia where each page is an article that can be edited by anyone, not because the pages are stored in a wiki system. Every other point is caused by the project's original vision as an anarchist playground, which permeates all its policies, not by any inherent limitation of the software.
As for the approach taken by newslines.org, I agree that there's a need to give visibility to contributions from any user without giving the next editor in line the possibility of removing them completely without a trace; though that doesn't negate the benefits of a wiki.
Newslines is good for news-driven topics, but there's a need for an encyclopedia-like description of the topic that a list of unrelated news items doesn't cover; there needs to be a coherent text that describes the highlights of the topic and how each part relates to the whole, and a wiki page covers that need. Compare the pages for Ebola at Wikipedia and at Newslines - which one would you prefer for first learning about the disease, and which one for staying up to date with recent developments? It's clear that they serve different, complementary purposes.