So, as an aside, isn't the entire point of a tech aggregator to provide a technical summary? Not just copy and paste the article's summary... anyway...
FTFA:
Intel went to great lengths to highlight the new P-states and C-states in which it can completely shut down the clock of a core. The firm said the operating system needs to provide "hints" to the processor in order to make use of power states and it seems likely that such hints are presently not provided by the Linux kernel in order to properly make use of Clover Trail.
In other words, Intel has added new capabilities to Clover Trail that allow enhanced power management, and Linux doesn't currently support them. Anyone who thinks that this will continue to be the case for much longer is a moron, especially if Intel continues to release its architecture datasheets, which we have no reason to think it won't.
The article really says: It can't run Linux because there's no support for it in Linux, and there's no support for it because it's literally brand-new.
Unless "Do Not Track" is actually an explicit expression of a user's conscious intent, it will face the same hypothetical fate and become yet another ignored standard.
So you think most users WANT to be tracked by every shitty ad server on the Internet and only a few people don't?
It really doesn't matter what I think. This is how the standard was designed and implemented, and IE's use of it in its current form is clearly abuse. Apache's solution not only restores the (admittedly little) value that the tag has; it's also one of the only ways Apache can actually take a stand against MS's abuse.
If DNT ever does get worked into law such that ignoring it carries fines and/or legal penalties, then its value ceases to be derived solely from web sites' acknowledgement of a user's explicit request and, instead, becomes largely derived from the threat of financial penalties. This, being a much stabler source of "power" (although, you know, rights and all), would nullify Apache's argument for their response and they should (and, I would bet, would) remove their new behavior.
Until then, it is what it is, and their response is very justifiable under those conditions. MS has turned a well-intentioned standard based on a gentleman's agreement into a marketing bullet point at the standard's expense, and Apache is fighting back. Props to them.
This is not an attack on privacy. This is the only valid option.
If you look at the details of the Do Not Track Header, you'll see that there's not much to it. It's an optional HTTP header that represents the user's request not to be tracked. There is no mechanism to actually enforce this choice; any party can easily just ignore the header and track you regardless. The entire purpose of the header is to express a user's intent, and, therefore, the entire value of the header is derived from that intent.
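Since there is no enforcement mechanism, the header boils down to a single optional key/value pair that a server is free to check or ignore. A minimal sketch of what "honoring" it even means (the helper name and the plain headers dict are my own illustration, not from any spec or server):

```python
# A DNT request header is just "DNT: 1"; nothing compels a server to honor it.
# Hypothetical server-side check, assuming headers arrive as a plain dict.
def wants_no_tracking(headers):
    # "1" means the user asked not to be tracked; an absent header means
    # no preference was expressed either way. Acting on it is voluntary.
    return headers.get("DNT") == "1"

print(wants_no_tracking({"DNT": "1"}))  # True
print(wants_no_tracking({}))            # False
```

That one conditional is the entirety of the "standard" on the server side, which is exactly why its value lives or dies with the user's intent.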
It's like the "Baby on Board" car signs: If I place one in my car's window, polite drivers should see that sign and grant me additional driving space and courtesies, and I may be able to drive in the carpool lane. Imagine, now, that everyone always puts that sign in their car by default because they want the additional driving space and courtesies. The value of my sign is significantly diluted; not only does standard driving operation make it impossible to honor those requests, but my own actual situation gets lost in the noise. Drivers will surely ignore the little yellow sign altogether, and it becomes worthless.
Unless "Do Not Track" is actually an explicit expression of a user's conscious intent, it will face the same hypothetical fate and become yet another ignored standard. Its only value is derived from its explicit intent, and Apache and Fielding are taking steps to ensure that the value is not compromised.
Freedom to post whatever you want in a public forum is important in our world today. Wikileaks seems to be self-destructing and isn't necessary in the grand scheme of things.
Came here to say this. There will always be a vacuum for leaking facilitators, especially with the vast-reaching scale of the Internet and strong cryptography and anonymization technologies, and it will always be filled. Even without Wikileaks, there are other sites like Cryptome. Hell, even Gawker's filling that role. Hell, here's a compiled list. With decentralized file-sharing sites, any torrent tracker or public file server can operate as a host for information. As Brand famously said, "Information wants to be free", and the "99%" of any country will continue to be hungry consumers of that information.
It doesn't matter if Assange wants to be a showman or douche things up. He doesn't matter at all in the grand scheme of things. He's merely the current public face of a system that has always existed and will always continue to exist. You can't make an example out of a thing like that.
The Powers that Be aren't stupid. They have to know this. Our job as the Public is to systematically remove any alternatives that they have to being good and respectful to their fellow man, and leaking is a critical and inevitable part of that mission. With the Internet, we are closer than ever to having the tools to actually accomplish this. This doesn't mean that all leaks are good and noble; it does, however, mean that we need to respect their role in making the world a better place. It also means that legislating against this inevitability is both futile and self-destructive in the short term.
You've made a few errors in your fun theoretical musing:
Oh goodie, someone who talks like this...
1) Most of our DNA is, in fact, superfluous, as far as we can tell. Less of it is superfluous than we thought a few years ago, but more than we thought ten years ago.
Sounds like we've got it right this time, though! Assuming you're referencing Junk DNA, there's a world of difference between "no discernible function" and "superfluous". Additionally, even with an upper bound on DNA functional density, there's no reason to assume there isn't also an optimal upper bound on the superfluous-to-functional DNA ratio. Adding a massive chunk of DNA to an organism is going to have some effect, you have to agree, and with no functional purpose there's very little evolutionary reason not to just whittle it down to nothing. After all, if there were actually a benefit to more superfluous DNA, evolution's had plenty of time to add it.
So, I guess, thanks for really not saying anything at all.
2) Evolution does not tend towards optimization. It trends towards "good enough". Extra DNA only matters if you're a bacterial cell, and the rate-limiting step in your growth is the replication of your entire cellular DNA. In many ways, for a human, noncoding DNA is beneficial - random errors and strand breaks are less likely to corrupt important parts of your file if a good chunk is noise anyway.
There is a lot of naiveté in this part of your response. First, "good enough" is a form of optimization; it's just an optimization across factors other than straight efficiency. Second, there is a cost to copying useless DNA, bacterial cell or not, and unless there is a benefit to offset the cost, an organism that sheds that DNA will be fitter than one that doesn't. If, for example, I stuffed a kilogram of extra DNA into your cell, it'd probably matter, even if you aren't bacteria. You're asserting, without any logic, that this cost fits into some magical "good enough" threshold you have just conjured. Cool threshold bro.
3) It has, technically, already been done (although not released). Venter's synthetic life form has genetic "watermarks" embedded in it. Nothing as awesome as an entire book, but the premise is there.
It's painfully obvious that my "what if it's already been done" statement was not referencing other synthetic human works, but rather the natural genome. Just a heads up, but your genes may be missing some padding around your Broca's Area expression.
Encode the data into DNA, splice the DNA fragment into a self-reproducing organism, and release it into the environment. You end up with trillions of copies of the original data distributed all over the world. (Error-correction codes would deal with transcription mistakes.)
Future generations, even future sentient life forms millions of years later, would then be able to decode the data. It would be very obvious as soon as they had sequencing technology: organisms with large parts of their DNA that don't code for anything useful...
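As a back-of-the-envelope sketch of the encoding step, arbitrary bytes pack into a base sequence at 2 bits per base. (The base-to-bit mapping here is an arbitrary choice of mine, and the error-correction layer is assumed rather than shown.)

```python
# Pack arbitrary bytes into a DNA base string at 2 bits per base and back.
# The assignment A=00, C=01, G=10, T=11 is arbitrary; real schemes also add
# error-correction coding (e.g. Reed-Solomon), which this sketch omits.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # four 2-bit chunks per byte
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):         # four bases per byte
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

print(bytes_to_dna(b"Hi"))                  # CAGACGGC
print(dna_to_bytes("CAGACGGC"))             # b'Hi'
```

At this density a megabyte of text is four million bases, which gives a feel for how "large parts of their DNA" such a payload would actually be.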
It's a cool thought. Another possibility, though, is that evolution would, within a few (relatively speaking) generations, completely reject the superfluous DNA as inefficient and/or unfit. Duplicating it costs energy and matter, and transcription errors and/or cross-gene sharing may actually ruin critical parts of the animal. Given evolution's tendency towards optimization, it seems almost inevitable that the information wouldn't survive in even the short (again, relatively speaking) term.
Another independent (and conflicting) fun thought is: "what if it's already been done?" Would be cool if we were walking books
About time that the Ukraine accepted what most governments of the world have already accepted--that the U.S. is your master and you had goddamn well better do whatever the fuck we tell you to!
Now sit, rollover, and say "We're your bitch!"
This does raise a worthwhile issue: I couldn't find anything in the article that says that the US requested that Demonoid be shut down for this meeting.
Now, the US Authorities are likely quite happy that it was shut down, but that's a different point. Doing something to please a trade partner isn't necessarily being its "bitch". People, corporations, and countries, the US included, suck up to each other all the time as a sign of respect, deference, and/or good faith and to gain a more favorable status. That sounds like what this is: the Ukraine knew that the US would view the move positively, so they did it as a gift to strengthen their status.
If the Ukraine knew that Obama loved candied walnuts and consequently brought him a few bags for the meeting, nobody would say that they were his "bitch". Just because this gift is despicable doesn't change that fundamental intention.
EA can't claim to be the originator of online People/Life Simulations because of these programs released in the mid-80s (on Commodore 64):
Home: http://en.wikipedia.org/wiki/Little_Computer_People
Online: http://en.wikipedia.org/wiki/Habitat_(video_game)
Sequel: http://www.bing.com/images/search?q=habitat+club+caribe
You sneaky jerk! Now I can no longer honestly say I've never used Bing.
Hepatitis C++? Hepatitis C#?
Objective Hepatitis C. *shudders*
From a defensive point of view, what is the minimum number of compromises that one should run in their own network to provide themselves with sufficient plausible deniability from this type of thing?
Some ISPs provide this for their customers by giving them all a secondary semi-open wifi network. For example, BT Broadband customers have their own private wifi network, but the router also broadcasts a second BT OpenZone SSID that allows other BT subscribers to get internet access after logging in. Fon offers something similar. The deal is that you provide free wifi to other subscribers in exchange for use of the same service when you are out and about.
Can you prove I didn't have malware? What if I sold a computer recently - it must have been infected, since none of the ones you confiscated are - and wiped the disk beforehand?
Can they confiscate your computers? In the UK they can't because copyright infringement is a civil matter. They can ask to examine it and you can tell them to fuck off because the burden of proof is on them and you are not required to aid them in any way, other than sharing evidence you yourself intend to rely on.
Well here's the thing - assuming that they can, through some judicial voodoo, examine all of your computers and other systems, how could they ever hope to prove that you didn't have malware on your system at the time the alleged crime occurred that has since been removed (by itself or by you)? The burden of solid proof just seems impossible to meet.
As a quick follow-on regarding "preponderance of evidence" (and legal burdens of proof in general) mentioned in another post: If I'm infected with a downloader malware, or if I have an open WiFi point, I could argue that this points to the likely scenario being that I didn't download anything illegally.
In the case of downloader malware, if someone finds stolen art in my basement, and, upon further investigation, discovers that someone else has built a hidden tunnel into my basement and used that area to store tons of stolen art, no person in their right mind would say that I likely stole that one specific piece of artwork.
In the case of an open WiFi access point, if a car used in a hit-and-run was found parked in a parking garage amidst several other random cars, no person in their right mind would say (by that fact alone) that it's likely the parking garage owner committed the hit-and-run.
I suppose all pirates should self-infect with some malware and run open access points just for plausible deniability. Sandboxed, of course...
So in all of these cases, as a technical person, I can't help but wonder how they're connecting an IP address to positive evidence of a specific person's deliberate action. There are countless plausible scenarios in which a person can own a number (an IP address) involved in a crime and yet not be aware of or involved in said crime: an open WiFi access point, downloader malware, a machine that has since changed hands, and so on.
In all of these scenarios, the crime could have been committed without any knowledge of the defendant. In some of these scenarios, the defendant has little-to-no chance to detect or thwart the crime. How does any lawyer convince any judge or jury that the person on trial committed a crime in light of this?
From a defensive point of view, what is the minimum number of compromises that one should run in their own network to provide themselves with sufficient plausible deniability from this type of thing?
Furthermore, from an activist's point of view, imagine someone built a malware variant that monitors browser usage (Google, Facebook, etc.) for movie names and automatically downloads any movies that are mentioned to a hidden directory. I've now got a piece of malware that automatically, without any user knowledge or intervention, downloads illegal files that that user is interested in. What if the malware instead downloads new movie releases by monitoring public release knowledge bases for titles? Is being infected by such malware enough for innocence? If enough people were thusly infected, would the entire concept of using IP subpoenas for prosecution fall apart?
Just food for thought. I'd really like to know how someone can be held criminally liable unless the prosecution caught them using the illegal file or captured an attributable confession.