
Comment Use this to reflect on privacy as a whole! (Score 4, Insightful) 513

Keep in mind that Facebook and countless other sites already admit to collecting the same information and behavioral associations (and more), often with just as little publicly released detail, accountability, and oversight. They then use that data actively and aggressively to manipulate every single person (American or otherwise) into altering their financial behavior, public perceptions, political persuasions, social interactions, and much more.

This is obviously not the same as a government agency per se, but it is useful to reflect on the differences and (more so) the similarities between what is specifically unsettling about a government and a large corporation having this information. Throughout this series of revelations, I've found it useful to contemplate any concern that I feel regarding my government possessing this degree of intimate information in the context of the Facebooks, Googles, and LinkedIns of the world. They are (to a far wider degree) actively targeting you (and everyone you know) directly and collecting and using all of the same associations with no need for suspicion of terrorism, illegal associations, FISA courts, or any real oversight. They sell this information in troves to the highest bidder with loose terms and are willingly or unwillingly subject to their members' respective governments' information request laws. They and their associates and clients are applying that information actively to change you.

While I can't stress enough that the gravity of a government's actions should not be conflated with that of like-minded corporations, I do worry that Internet corporations are collecting more information with less oversight and accountability and using it in far more objectionable ways against a far wider audience! It's a different kind of threat, but in many ways I fear them far more than the government.

I (personally) hope that the outcome of this series of revelations is a global reflection on privacy and information sharing and not just a narrow-minded focus on a particular agency's actions.

Comment Re:I am Jack's total lack of surprise... (Score 4, Insightful) 221

Google knows what it's doing when it comes to search (including maps), and (after several years) Android - everything else is stuff built/rolled out/supported by disparate uncoordinated groups with no coherent strategy or purpose beyond "hey, this looks like something the PR guys would like."

What a stupid statement. "They only knew what they were doing those times they did well." Most of their projects, with the exception of search, started out as disparate uncoordinated groups with no coherent strategy.

Comment Re: Its hard to tell (Score 0) 440

The reason we have rules of war is so we can react with some degree of international cohesion when they are broken. It's a deterrent for the little guys: follow the rules or the wrath of god will come down on you, supporting you will be poison, and in all likelihood you will have damned your cause.

Honestly, it seems like a win-win for humanity. The premise, of course, is that larger nations have the incentive and wherewithal to avoid total war. That premise then becomes the foundation for pulling nations without the same maturity or international presence up to these standards.

Obviously, if any nation, alliance, or cohesive group is adequately threatened, it will never take total war off the table. Rules of war and engagement help keep it off the table in regions where armed conflict is a far more frequent threat than it is in the first world. Frankly, autonomy and sovereignty have not existed since economies globalized, and that's probably a good thing. We're moving towards a humane and cohesive world, but we're not there yet, and constraining war is a triage measure until we are.

A world without war is not a pipe dream; it is, I would argue, an inevitability as countermeasures and technology continue growing in destructive power and ease of acquisition. However, looking at the state of the first world, much less its trading partners and the rest of humanity, we all have a lot of rapid cultural development and normalization to go through before we're ready.

The Internet will be the herald of all of this. So, tech world, keep up the good work.

Comment Re:I HATE this (Score 2) 473

It wasn't one instance of it, though; it was more than 350 women. If you steal one orange, you'll get a slap on the wrist; steal a truckload, and the penalty is a totally different thing.

I would personally disagree that blackmailing even 350 people is worse than murder. Regardless, I think OP's point still stands. Things like murder are in a completely different category of crime.

Comment Re:Will not work on 64 bit (Score 2) 208

The address space of 64-bit processes is vast compared to available memory. The process will run out of memory before the address space can be filled.

Unfortunately, many browsers still run as 32-bit processes even on 64-bit systems because of plugin compatibility. Time to move to 64-bit browser processes.

Note also that this attack is only feasible against browsers. Like other ASLR bypasses, it will not work against e.g. Outlook or Word, where the attacker has very limited ability to control memory allocation.

It's worth mentioning that the critical component here is using client-executed trusted/sandboxed code (in this case, JavaScript) to exhaust the memory space. The code must be able to allocate memory, and it must be able to identify the virtual address of the memory that it allocated; otherwise, when it opens a hole to make the ASLR placement deterministic, the shellcode won't know where to target.

Any client-side language that can allocate arbitrary memory and identify the allocated address should be usable in this capacity. JavaScript is what the PoC identifies, but I wouldn't be surprised at all if VBA can do the job in Office (e.g., Word/Excel/Outlook), or if other trusted/sandboxed runtimes (Java, C#, etc.) can do the job elsewhere.
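To make that concrete, here's a toy Python simulation (entirely my own construction, not the PoC's code) of why exhausting the address space defeats ASLR: as the script allocates a growing fraction of a model address space, the number of positions where a library could have been randomly placed collapses, and its load address becomes predictable.

```python
import random

def candidate_bases(space_size, lib_size, allocated):
    """Count the positions where a lib_size-page library could still be
    randomly placed, given a set of already-allocated page indices."""
    return sum(
        all(p not in allocated for p in range(base, base + lib_size))
        for base in range(space_size - lib_size + 1)
    )

# Toy model: a 1000-page address space and a 10-page library.
space, lib = 1000, 10
random.seed(0)
pages = list(range(space))
random.shuffle(pages)

# As the embedded script allocates more of the space, the number of places
# ASLR could have put the library shrinks toward zero or one.
for frac in (0.0, 0.5, 0.9, 0.99):
    allocated = set(pages[: int(space * frac)])
    print(f"{frac:>5.0%} allocated -> {candidate_bases(space, lib, allocated)} possible bases")
```

The numbers are a stand-in for real virtual-memory granularity, but the trend is the point: the attacker doesn't defeat the randomness, he just removes the room it needs.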

One obvious defense is application-enforced restrictions on embedded-language allocations (e.g., the application can allocate up to 10 GB of memory, but its embedded script can only allocate 1 GB), which guarantees the presence of some randomized areas. Another option would be to allocate dynamically loaded library memory from a different restriction context than standard process memory, so that their respective allocations don't draw from the same pool.
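A minimal sketch of that first mitigation, with hypothetical names of my own invention (no real browser API implied): the host draws from the full pool, but embedded-script allocations are capped separately, so a script can never exhaust the whole space.

```python
class QuotaAllocator:
    """Hypothetical per-context allocator: embedded-script allocations are
    metered against their own quota, separate from the host application's."""

    def __init__(self, script_quota_bytes):
        self.script_quota = script_quota_bytes
        self.script_used = 0

    def script_alloc(self, nbytes):
        # Refuse the allocation once the script's private quota is spent,
        # regardless of how much memory the host process still has.
        if self.script_used + nbytes > self.script_quota:
            raise MemoryError("embedded-script allocation quota exceeded")
        self.script_used += nbytes
        return bytearray(nbytes)  # stand-in for a real allocation

alloc = QuotaAllocator(script_quota_bytes=1 << 20)  # scripts get 1 MiB total
buf = alloc.script_alloc(512 * 1024)                # fine
try:
    alloc.script_alloc(600 * 1024)                  # would exceed the quota
except MemoryError as e:
    print(e)
```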

Comment Re:Gnome: I never got the hype or the recent rage (Score 4, Informative) 378

Not much else to do but agree. However, you should really give KDE4 another shot. Ever since KDE4.5 or so, it has been a fully-usable (albeit heavy) desktop environment. It's achieved the level of maturity and configurability I've always associated with the KDE3 line, and has added several features that are genuinely useful (such as window grouping, tiling support, a full semantic desktop, and several powerful UI scripting techniques), as well as the traditional KDE integration technologies. After some early 4.x struggles, I'm once again in love with the full KDE user experience.

I've done my tour of GNOME2, XFCE, KDE3, Enlightenment, xmonad, GNOME3, Unity, and KDE4, and I would, for primary desktop purposes, choose KDE4 without hesitation at the moment. Definitely worth giving it another shot if you haven't already.

Comment Really, Linux won't (currently) support CT (Score 5, Informative) 434

So, as an aside, isn't the entire point of a tech aggregator to provide a technical summary? Not just copy and paste the article's summary... anyway...

FTFA:

Intel went to great lengths to highlight the new P-states and C-states in which it can completely shut down the clock of a core. The firm said the operating system needs to provide "hints" to the processor in order to make use of power states and it seems likely that such hints are presently not provided by the Linux kernel in order to properly make use of Clover Trail.

In other words, Intel has added new capabilities to Clover Trail that allow enhanced power management, and Linux doesn't currently support them. Anyone who thinks that this will continue to be the case for much longer is a moron, especially if Intel continues to release its architecture datasheets, and we have no reason to think it won't.

The article really says: It can't run Linux because there's no support for it in Linux, and there's no support for it because it's literally brand-new.

Comment Re:Nobody's attacking privacy... (Score 1) 375

Unless "Do Not Track" is actually an explicit expression of a user's conscious intent, it will face the same hypothetical fate and become yet another ignored standard.

So you think most users WANT to be tracked by every shitty ad server on the Internet and only a few people don't?

It really doesn't matter what I think. This is how the standard was designed and implemented, and IE using it in its current form is clearly abuse. Apache's solution not only restores the (admittedly little) value that the header has; it's also one of the only ways Apache has to actually take a stand against MS's abuse.

If DNT ever does get worked into law such that ignoring it carries fines and/or legal penalties, then its value ceases to be derived solely from web sites' acknowledgement of a user's explicit request and, instead, becomes largely derived from the threat of financial penalties. This, being a much stabler source of "power" (although, you know, rights and all), would nullify Apache's argument for their response and they should (and, I would bet, would) remove their new behavior.

Until then, it is what it is, and their response is very justifiable under those conditions. MS has turned a well-intentioned standard based on a gentleman's agreement into a marketing bullet point at the standard's expense, and Apache is fighting back. Props to them.

Comment Nobody's attacking privacy... (Score 4, Insightful) 375

This is not an attack on privacy. This is the only valid option.

If you look at the details of the Do Not Track Header, you'll see that there's not much to it. It's an optional HTTP header that represents the user's request not to be tracked. There is no mechanism to actually enforce this choice; any party can easily just ignore the header and track you regardless. The entire purpose of the header is to express a user's intent, and, therefore, the entire value of the header is derived from that intent.
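The header really is that thin. A server that chooses to honor it needs nothing more than a check like this minimal sketch:

```python
def should_track(headers):
    """Honor the Do Not Track header ("DNT"): a value of "1" means the user
    has expressed a preference not to be tracked. Nothing enforces this;
    the server simply chooses to respect the stated intent."""
    return headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))   # False: user opted out
print(should_track({}))             # True: no preference expressed
```

That's the entire mechanism: a request, not a restriction, which is exactly why its value lives or dies on what the header actually means.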

It's like the "Baby on Board" car signs: if I place one in my car's window, polite drivers should see it and grant me additional driving space and courtesies, and I may even be able to drive in the carpool lane. Now imagine that everyone puts that sign in their car by default because they want the extra space and courtesies. The value of my sign is significantly diluted; not only does it become impossible for drivers to honor every request, but my own actual situation gets lost in the noise. Drivers will surely start ignoring the little yellow sign altogether, and it becomes worthless.

Unless "Do Not Track" is actually an explicit expression of a user's conscious intent, it will face the same hypothetical fate and become yet another ignored standard. Its only value is derived from its explicit intent, and Apache and Fielding are taking steps to ensure that the value is not compromised.

Comment Re:Internet, not necessarily Wikileaks (Score 5, Insightful) 257

Freedom to post whatever you want in a public forum is important in our world today. Wikileaks seems to be self-destructing and isn't necessary in the grand scheme of things.

Came here to say this. There will always be a vacuum for leak facilitators, especially given the far-reaching scale of the Internet and strong cryptography and anonymization technologies, and that vacuum will always be filled. Even without Wikileaks, there are other sites like Cryptome; hell, even Gawker's filling that role, and here's a compiled list. With decentralized file sharing, any torrent tracker or public file server can operate as a host for information. As Brand famously said, "Information wants to be free", and the "99%" of any country will continue to be hungry consumers of that information.

It doesn't matter if Assange wants to be a showman or douche things up. He doesn't matter at all in the grand scheme of things. He's merely the current public face of a system that has always existed and will always continue to exist. You can't make an example out of a thing like that.

The Powers that Be aren't stupid. They have to know this. Our job as the Public is to systematically remove any alternatives they have to being good and respectful to their fellow man, and leaking is a critical and inevitable part of that mission. With the Internet, we are closer than ever to having the tools to actually accomplish this. This doesn't mean that all leaks are good and noble; it does, however, mean that we need to respect their role in making the world a better place. It also means that legislating against this inevitability is both futile and self-destructive in the short term.

Comment Re:Near perfect backup (Score 1) 160

You've made a few errors in your fun theoretical musing:

Oh goodie, someone who talks like this...

1) Most of our DNA is, in fact, superfluous, as far as we can tell. Less of it is superfluous than we thought a few years ago, but more than we thought ten years ago.

Sounds like we've got it right this time, though! Assuming you're referencing junk DNA, there's a world of difference between "no discernible function" and "superfluous". Additionally, even with an upper bound on functional DNA density, there's no reason to assume there isn't also an optimal upper bound on the superfluous-to-functional ratio. Adding a massive chunk of DNA to an organism is going to have some effect, you have to agree, and with no functional purpose there's very little evolutionary reason not to whittle it down to nothing. After all, if there were actually a benefit to more superfluous DNA, evolution has had plenty of time to add it.

So, I guess, thanks for really not saying anything at all.

2) Evolution does not tend towards optimization. It trends towards "good enough". Extra DNA only matters if you're a bacterial cell, and the rate-limiting step in your growth is the replication of your entire cellular DNA. In many ways, for a human, noncoding DNA is beneficial - random errors and strand breaks are less likely to corrupt important parts of your file if a good chunk is noise anyway.

There is a lot of naiveté in this part of your response. First, "good enough" is a form of optimization; it's just an optimization across factors other than straight efficiency. Second, there is a cost to copying useless DNA, bacterial cell or not, and unless there is a benefit to offset the cost, an organism that sheds that DNA will be fitter than one that doesn't. If, for example, I stuffed a kilogram of extra DNA into your cells, it'd probably matter, even though you aren't a bacterium. You're asserting, without any logic, that this cost fits into some magical "good enough" threshold you have just conjured. Cool threshold, bro.

3) It has, technically, already been done (although not released). Venter's synthetic life form has genetic "watermarks" embedded in it. Nothing as awesome as an entire book, but the premise is there.

It's painfully obvious that my "what if it's already been done" statement was not referencing other synthetic human works, but rather the natural genome. Just a heads up, but your genes may be missing some padding around your Broca's Area expression ;)

Comment Re:Near perfect backup (Score 1) 160

Encode the data into DNA, then splice the DNA fragment into a self-reproducing organism and release it into the environment. You end up with trillions of copies of the original data distributed all over the world. (Error-correction codes would deal with transcription mistakes.)

Future generations, even future sentient life forms millions of years later, would then be able to decode the data. It would be very obvious as soon as they had sequencing technology: organisms with large parts of their DNA that don't code for anything useful...

It's a cool thought. Another possibility, though, is that evolution would, within a few (relatively speaking) generations, completely reject the superfluous DNA as inefficient and/or unfit. Duplicating it costs energy and matter, and transcription errors and/or cross-gene sharing may actually ruin critical parts of the animal. Given evolution's tendency towards optimization, it seems almost inevitable that the information wouldn't survive even the short (again, relatively speaking) term.

Another independent (and conflicting) fun thought is: "what if it's already been done?" Would be cool if we were walking books :)
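For fun, here's a toy sketch of the parent's scheme (entirely my own construction): bytes mapped to nucleotides, with a crude repetition code standing in for real error correction, so a majority vote recovers the data after a point mutation. Actual proposals use far stronger error-correcting codes; this is just the premise.

```python
from collections import Counter

BASES = "ACGT"  # two bits of data per nucleotide

def encode(data, repeat=3):
    """Map each byte to four nucleotides, writing each nucleotide `repeat`
    times as a simple repetition (error-correcting) code."""
    strand = ""
    for byte in data:
        for shift in (6, 4, 2, 0):
            strand += BASES[(byte >> shift) & 0b11] * repeat
    return strand

def decode(strand, repeat=3):
    """Majority-vote each repeated group back to one nucleotide, then
    reassemble the original bytes."""
    symbols = [
        Counter(strand[i:i + repeat]).most_common(1)[0][0]
        for i in range(0, len(strand), repeat)
    ]
    out = bytearray()
    for i in range(0, len(symbols), 4):
        byte = 0
        for s in symbols[i:i + 4]:
            byte = (byte << 2) | BASES.index(s)
        out.append(byte)
    return bytes(out)

strand = encode(b"walking books")
corrupted = "T" + strand[1:]   # a single "transcription" mutation
print(decode(corrupted))       # b'walking books': the repetition code absorbs it
```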
