
Comment Re:typical (Score 5, Insightful) 471

You, sir, are sorely mistaken. I don't know the proper name for this rhetorical device, so let's call it the defeatist's fallacy. You're certainly not the only one to spout it, but if you think about the implications you ought to be able to see where it goes awry and why it's such a devious thing to say.

It goes a little like this: Because an arbitrary someone already knows your name, the only sensible thing you can do is shout your name from the rooftops, tag it everywhere, and be sure that every single little thing you do has your only real name attached to it. Yes, this is hyperbole, but think about why it's such a silly thing to say. What you say is silly in a similar fashion.

People do have multiple identities, even with a more or less identical name attached to them. Some of us have multiple identities with differing names attached to them. It does not follow that everyone must automatically pack all their identities together for combined inspection, even though facebook thinks that's really neat for making them money.

If you share your entire life on facebook, then yes, adding a nickname isn't going to help much. But if you don't, well, then having separate accounts with different names attached might help. That you'll also have to block "like" buttons everywhere and never ever use facebook's "identity services" (mostly a data gathering vehicle) for other sites (or only for a well-defined set used solely in the context of that nickname's identity), perhaps even need differing proxy services for different accounts, is beside the point. Even the fact that you can often datamine multiple identities together with high probability is beside the point. And if that amounts to a false sense of security in some sense, well, since internet privacy enforcement is mostly law-based so far, we can turn it into legally actionable security should we need to.

I do keep this account separate, for example. If you'd like, try and find a "real" name to go with it; report back here. Even text similarity analysis with the entire web will not help you much. If you go back far enough you might find enough leads for some good old humint legwork, but purely electronically you'll have a challenge yet.
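
For the curious, here's a toy sketch of the sort of text-similarity analysis I mean. It assumes scikit-learn is available and the placeholder texts are yours to fill in; real deanonymisation pipelines are far more elaborate (function-word profiles, huge corpora), but the principle is this:

```python
# Toy stylometry check: cosine similarity between TF-IDF character
# n-gram vectors of two text samples. High score = plausibly the same
# author; a real pipeline would compare against millions of candidates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_posts = "text published under a real name goes here"
pseudonymous_posts = "text published under the nickname goes here"

vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
tfidf = vec.fit_transform([known_posts, pseudonymous_posts])

score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"style similarity: {score:.2f}")
```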

That datamining is getting ever cheaper, and is already much more feasible than most people, even techies, are aware, does not mean that it is free, and with some effort you can make it expensive enough to not be worthwhile. Though it's really but a last refuge, you can try being a thoroughly uninteresting needle in a needlestack.

Your argument goes that because the choice is of no use for people who dump too much information into facebook (directly or indirectly) in the first place, it's okay to remove the choice for every user of facebook. And that, my dear zazzel, just doesn't fly.

Comment Re:sub-CA hell (Score 4, Insightful) 39

And why is that? This is actually exactly how the CA structure was designed to work, unlike that commercial "we'll protect you from anyone we don't take money from" crap involving RAs and other unchecked entities that can use a CA to vouch for something they haven't even checked themselves, a practice that somehow made it into the gold standard.

The DFN is the German academic research network, so the people running that network can vouch for every organisation connected to it. Each organisation, in turn, is supposed to be able to vouch for the certificates it issues. What's your problem with that?

Personally, I think the whole PKI thing is FUBAR, since only one super is allowed to vouch for a sub and you're effectively forced to trust someone else's CA collection (down to a certain vendor silently undoing your changes to the store on your operating system come every update check). To make digital trust workable I, the end user, have to be able to choose whom to trust, a choice I currently do not have, in fact cannot have lest my intarwebz stop functioning!
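
For illustration, here's a minimal Python sketch of what "choosing whom to trust" could look like where the tooling allows it: verifying a TLS peer against a CA bundle you curated yourself instead of the vendor-supplied root store. The bundle path is hypothetical:

```python
# A minimal sketch of trusting only whom *you* choose: anything not
# chaining to this hand-picked bundle fails verification, regardless
# of what the OS or browser vendor decided to ship this week.
import socket
import ssl

MY_TRUSTED_CAS = "/etc/my-trust/dfn-chain.pem"  # hypothetical, curated by me

ctx = ssl.create_default_context(cafile=MY_TRUSTED_CAS)

with socket.create_connection(("www.example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.example.org") as tls:
        print(tls.getpeercert()["subject"])
```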

But in the case of the DFN, the hierarchy is exceptionally clear, one of the few places where it actually makes sense. And maintaining 200 sub-certificates is a lot less work than maintaining millions upon millions of certificates issued for a couple bucks and a grainy copy of your passport. What does that prove, anyway?

Comment What's really going to happen: (Score 1) 182

The webmonkeys get hold of it. Do everything with it. They're ecstatic! Finally something that runs their javascript nice and fast!

So they throw more js into their webpages. Drop in a few more libraries, for their convenience. Of course, they're testing the stuff on a dev server that's at least as fast as the production server but sees only a small fraction of the load, and they have gigabit from desktop to server.

Thus, their websites become that much more crappy for everyone else: for everyone who doesn't have the latest accelerator, or a nice and fast connection to an overspecced and mostly idle server.
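
Some back-of-the-envelope arithmetic (figures illustrative, not measured) on why "fast on my gigabit LAN" says nothing about the rest of the world:

```python
# Time to fetch one more 500 KB convenience library over various
# links. Parse and execute time on a slow single core comes on top.
extra_js_bits = 500 * 1024 * 8

for name, bits_per_s in [("gigabit dev LAN", 1e9), ("16 Mbit DSL", 16e6),
                         ("2 Mbit DSL", 2e6), ("3G, bad day", 384e3)]:
    ms = extra_js_bits / bits_per_s * 1000
    print(f"{name:>15}: {ms:8.0f} ms just to fetch it")
```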

It's happened before, and it'll happen again. Feh, if your desktop is old enough (single core, less than 2GHz, these days) then between the crashes due to low memory you can actually notice when, say, jquery gets an update: everything that uses it gets slower.

This is the state of websites, and as things stand, faster browsers mean slower websites for non-webmonkeys.

Comment "they halt operations when they [fail]" (Score 2) 112

Ah-ha-ha! You wish failing kit would just up and die!

But really, you can't be sure of that. So things like ECC become minimum requirements, so that you might at least have an inkling something isn't working quite right. Because otherwise your calculations may be off and you won't know about it.
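
To make that concrete, here's a small Python demonstration of what a single undetected bit flip can do to a value; without ECC or some end-to-end check, nothing tells you this happened:

```python
# One flipped bit in a double: flip the exponent's low bit and the
# value silently halves.
import struct

def flip_bit(x: float, bit: int) -> float:
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))[0]

balance = 1234.56
corrupted = flip_bit(balance, 52)  # bit 52 = least significant exponent bit
print(balance, "->", corrupted)    # 1234.56 -> 617.28
```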

And yeah, propagated error can, and on this sort of scale eventually will, strike completely out of left field and hit like a tacnuke. At which point we'll all be wondering: what else went wrong that we didn't catch in time?

Comment Not so absurd after all. (Score 3, Interesting) 128

To techies the idea seems absurd, but it's not. Sure, your server, your rules. But what you pull onto it is another matter entirely, and the American view that anything not behind closed curtains must be public doesn't scale.

Compare, of all places, Japan, where it is in fact customary to "not see" things that are pretty much out in the open out of sheer necessity because too many people are living too close together. In a sense, the internet is worse than Tokyo.

There's irony here: the techies are deriding politicians for doing boneheaded things with far too much data. Well, this is part of that, but in reverse, and if they're doing it wrong it's up to us to find ways to do it right and nudge them in the right direction.

DRM became a bad word because big media deployed it to control their customers, who thought they'd bought something, only for the seller to pull a legalised fast one afterward. David losing to Goliath, until dvdjon came along.

Data protection in this case wouldn't involve money changing hands in the reverse direction. It's more like, well, you put DRM on your SSN when you sign up (and pay) for something that requires it, and you can more or less reliably wipe your SSN out of their databases once they no longer need it.

No longer having to trust some faceless large entity on their woolly word-salad assurances and their pretty face is a nice boon for the individual. Bit of a different power balance there.

Yet the only real fix is to not store all that data in the first place. This means that a lot of data that's being gathered now must not be gathered at all, or perhaps some other data needs to be gathered instead. Zero-knowledge proofs will likely have a big place in that, say to prove you're old enough without showing your ID card with all the extra data you're currently forced to give out. This'll need new technology, but it will prove necessary if we're to really scale out our data use without building databases of ruin.
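
By way of illustration, and emphatically not a real zero-knowledge proof, here's a toy Python sketch of the weaker idea that's buildable today: an issuer attests to the bare predicate "over 18" so the relying party never sees a name or birthdate. Names and keys are made up; a real system would use public-key signatures so the verifier need not hold the issuer's secret:

```python
# Toy predicate-only disclosure: the issuer signs the single claim
# "over_18", so the shop learns nothing else about you.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-only-secret"

def issue_claim(predicate: str, value: bool) -> dict:
    claim = {"predicate": predicate, "value": value}
    msg = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "tag": hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()}

def verify_claim(token: dict) -> bool:
    msg = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"])

token = issue_claim("over_18", True)        # issuer checked the ID once
print(verify_claim(token), token["claim"])  # shop learns only: over_18
```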

Comment Re:Undeniably real and dangerous (Score 1) 43

Honestly, no, I'm not. If you keep confusing petty theft and murder you're not going to do much about the crime rate.

The threats are being named to perpetuate an indulgence racket where we should've switched vendors ages ago. We keep cheaping out in favour of quick fixes that never seem to end up fixing much of anything. The risk to our systems is irrelevant in the face of our unwillingness to do what really needs to be done.

Comment Re:Undeniably real and dangerous (Score 1) 43

The IT security industry arguably is made out of oversimplification. It's like, cyber, you know?

Take, for example, the word "hacker". It's not enough to know you're one a them "hackers", you need to show what hat colour you wear. And even then it's not enough. Why? Because it's been so overused everyone got confused.

Do fishy things with computers? Hacker.
Filch some access codes over teh intarwebz? Hacker.
"Security researcher"? Hacker.
Script kiddie? Hacker.
Do fishy things while there's a computer tangentially involved somehow? Hacker.
Plant a (physical) keystroke logger? Hacker.
Do fishy things while there's a computer in the next room? Hacker. Obviously.

Where the word once indicated someone with great original skill, and in general meant moby technological creativity, requiring your respected fellows to bestow it on you, it has now become an epithet as easy as the FBI's "mail fraud" indictment: you get 'em for free with whatever else you do. Notably "talk to a journalist", or even "be in the vague vicinity of a journalist hack's next piece".

That the term nevertheless got used in criminal legislation is telling of its devolution by overuse. And of legislators a bit too keen not to be seen as hopelessly behind the times.

APT is the latest way for this "IT security industry" full of "hackers" to show that they're down with the cyber, baby. Where "cyber" nowadays is the clueless' way of saying intarwebbertoobz when they don't want to sound like complete hicks, or want to sound officious, notably governments.

So I say the whole bunch of "hackers" is very cyber these days. Cyberhackers with their cyberwhite cyberhats. Selling us cyberprotection against cyberAPTs.

But hackers they're not. Which is ironic, since hacking is just about the exact opposite of oversimplification. But then, these people don't do ironic. They're dead serious about selling you their definitions and their protection.

Comment Google is far from the only one (Score 1) 190

I'd prefer cash. Around here, though, it's actively being marginalised in the name of "security", which in practice shifts risk to me and costs me privacy and flexibility to boot.

It really doesn't matter who owns your wallet; as long as it's not you, you're being shafted. And this is why we need truly electronic payment mechanisms, not just online, but in our wallets too.

The problem is that the people who can give you such a thing have a perverse incentive not to. This includes, but is not limited to, google.

Comment Look into emulating the ABI (Score 1) 193

FreeBSD and NetBSD have an ABI wrapper feature (aka the "linuxulator" when wrapping linux) that lets you translate syscalls for older versions of the same OS, or even for different unixoid OSes. Add userland libraries from the original environment, and you can run the original app unchanged, as long as the application doesn't try to access hard-to-duplicate features, talk to hardware directly, that sort of thing. This gets you a modern and virtualisable OS that can run your old programs.
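
To give a feel for the mechanism, here's a toy Python sketch; the real wrappers live in the kernel and are written in C, and the numbers and names below are made up, not actual syscall tables:

```python
# The core idea: a translation table from foreign syscall numbers to
# native handlers, plus per-call argument fixups.
LINUX_TO_NATIVE = {
    4:  ("native_write", lambda args: args),            # write(fd, buf, len)
    78: ("native_gettimeofday", lambda args: args[:1]), # drop obsolete arg
}

def emulate_syscall(nr, args):
    try:
        handler, fixup = LINUX_TO_NATIVE[nr]
    except KeyError:
        raise OSError(f"ENOSYS: no translation for foreign syscall {nr}")
    print(f"dispatching {handler}{tuple(fixup(args))}")

emulate_syscall(4, ("fd", "buf", "len"))
```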

If there isn't a suitable ABI wrapper for your platform now, one could at least be added relatively easily. Possibly a long shot, but at least it's an option to look into.

Comment Shouldn't even want to virtualise everything. (Score 2) 320

Virtualisation is great, but there are a few things that cause horrible chicken/egg problems if you virtualise them.

So I'd reserve at least two separate boxes to "do infrastructure": DNS, NTP, remote logging, trap receiving, bastion, and so on. You simply plunk a unix on them and put the individual services in jails or the local equivalent. They don't even need much in the way of performance, so any old 1U box will do fine. Heck, a soekris or an alix board will do. Those are short enough that you can stick 'em in any old wiring closet too. Great for geographic dispersal.

If you're stumping up for infrastructure that can host hundreds of VMs, then of course that is enough capacity to also run "little boxes", but it'd be stupid not to also shell out the little extra to make your infrastructure robust, instead of risking hypervisor dependencies on not-yet-booted VMs in your private cloud, or whatever you'd call it. "Seems to work" is not enough: turn off the entire datacentre and then try and cold boot it, remotely. If it's fully virtualised, including the necessary basic supports, it'll take more time and trouble than if you don't virtualise the pillars on which you built up the rest.

If all I had was exactly two boxes, I'd still run NTP and local DNS next to the hypervisor, not under a guest. NTP in particular; I've had my fill of (windows) boxes claiming to be stratum two yet being off by two minutes because they only update once a week. Of course, on a virtualised unix it'll be much less, but I don't want to find out the hard way that the VM distorted timekeeping in unexpected ways, so this is one thing that needs its own box. There are similar scenarios for the other basics, but I'll leave those as an exercise. The gains of virtualising, saving a bit on hardware and power, simply do not outweigh the trouble at the moment you can least afford it.
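
Checking whether a box's time claims hold up is cheap, by the way. Here's a minimal SNTP query in Python (RFC 4330 packet layout; the server name is just an example) that compares a server's clock to the local one:

```python
# Ask an NTP server for its time and diff against the local clock.
# Ignores round-trip delay, which is fine for catching a "stratum two"
# box that's minutes off.
import socket
import struct
import time

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900 (NTP) and 1970 (unix)

def ntp_offset(server: str = "pool.ntp.org") -> float:
    packet = b"\x1b" + 47 * b"\0"  # LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    transmit = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_DELTA
    return transmit - time.time()

print(f"clock offset vs server: {ntp_offset():+.3f} s")
```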

Comment Re:Welcome to the club (Score 1) 213

I guess I'm looking at it

Sorry to have to disappoint you there.

I'm looking for a new desktop, and I found I had to re-assess my shortlist when I realised that the usual single-core benchmarks are not merely equally cherry-picked, but also factory-rigged in hard-to-compare ways. Notice that the balanced MP cherry-picking is your contribution; I just said not to focus on single-core benchmarks only, as there are 4 or 8 cores available. My requirements mean that MP is not unimportant, but sheer gaming prowess is.

What about the price, you ask? Perception. intel is sitting pretty, so they can (and do) put prices at very neatly balanced gouging points, whereas AMD has had a thorough trouncing in the press, so they have to provide more perceived value this time round. The bulldozer failure may or may not have been alleviated with thread-affinity mods in the OS, but even if they fixed that now, the damage has been done. Point is: it doesn't need to be an engineering fail to require fixing through price drops.

Also note that AMD does drop prices over time, and intel pretty much hasn't for a while. intel can get away with keeping prices up even with newer and presumably better parts appearing in the same marketing and pricing slots; AMD cannot. That's not all engineering, that's intel sitting pretty and AMD treading water.

Comment Re:Welcome to the club (Score 5, Insightful) 213

Your argument doesn't stack up.

First you say they're bringing an 8 core chip to compete with a 4 core chip. Fine. Then you complain the cores cannot keep up 1:1. So you're expecting AMD's chips to be twice as good as intel's to be able to compete.

That, of course, is rigging the test, and so is dishonest.

One could also say that with per-core performance not much worse than the competition's, double the number of cores, and a lower price to boot, you get better value. More so if you can make good use of the doubled number of cores.

And that's before considering that single-core benchmarks are entirely unrepresentative of multi-core performance, thanks to various tricks like turbo core and turbo boost that aren't 1:1 comparable. You'd have to run full, sustained benchmarks on all cores simultaneously to find out which chip delivers the most sustained instructions per second.
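
For what it's worth, here's a Python sketch of the sort of benchmark that would actually answer the question (the workload is illustrative; any fixed CPU-bound chunk will do): load every core, sustained, and measure aggregate throughput.

```python
# Load *all* cores for a sustained period so turbo modes can't flatter
# a single-core burst; report aggregate chunks per second.
import time
from multiprocessing import Pool, cpu_count

def spin(_):
    total = 0
    for i in range(10_000_000):  # a fixed chunk of CPU-bound work
        total += i * i
    return total

if __name__ == "__main__":
    n = cpu_count()
    start = time.perf_counter()
    with Pool(n) as pool:
        pool.map(spin, range(n * 4))  # keep every core busy, repeatedly
    elapsed = time.perf_counter() - start
    print(f"{n} cores, {n * 4} chunks: {elapsed:.1f} s "
          f"({n * 4 / elapsed:.2f} chunks/s sustained)")
```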

Meaning that AMD's offering takes more marketing footwork, but technically is not all bad. Not at all.

Comment Re:Are there any reasons to wait for the crime? (Score 2) 106

what reasons would be there to wait for the crime to happen?

The freedom to act must include the freedom to fuck up. (Though it does not imply immunity from consequences.)

So it depends highly on just what consequences you're proposing to attach to "detected pre-crime". If all you do is "just happen" to appear on the scene, and in the case of private property ask people to leave, then fine, you can do that.

If you are proposing to impose sanctions on not-yet-committed "crimes", or even "merely" build tracking databases, then you've reduced freedom some more; in fact, you've stooped to thought policing. It means that the country in which this is deployed is no longer free.

On the other hand, stopping a crime that wouldn't have happened anyway has no negative repercussions for this person.

Does it not? I think that's terribly naïve for a regular reader here, in fact for anyone with an internet connection.

Plus, there is the issue that such systems do effect changes in behaviour; you're effectively making people the string puppets of the technology. That's putting the cart before the horse, so this is a class of things we simply ought not to want.

Comment Re:XML (Score 1) 39

Having worked on a project done in XML (XML-in-SOAP actually, for double the XML, without specifying where and how to switch to quoted-xml-in-xml input), sort of, by people who obviously had no fscking clue about specifying anything, much less interop standards, I'm pretty sure that fixed-field ascii would've been a massive improvement over that steaming pile of crap.

There are very definite requirements to specifying interop formats, and the things XML imposes are almost completely orthogonal to them, making it not a solution for the problem it's supposed to solve.

What you need is people who understand what interoperability entails and how to specify it, in any format. Those are very rare, and none of them are at the W3C. Just read their specifications: They're incredibly one-sided, written entirely for the "website author", and not for the browser writer. No wonder no two browsers act entirely the same on identical input.

There's a big secret that most spec writers fail to grasp, but I'll let you in on it here: For a computer interop format to be of any use, you need both an encoder and a decoder. So, provide reference encoders and decoders along with the spec, so people can check their implementations.

Ideally also provide faulty input so that it may be rejected. For an even more advanced approach there are model checkers that take a state machine representing your protocol and tell you where it'll break down.
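
To show how little it takes, here's a toy reference encoder/decoder pair in Python for a made-up fixed-field record, complete with a round-trip test and known-bad inputs that must be rejected; this is the minimum any format spec should ship with:

```python
# Reference encoder and decoder for a made-up 19-byte fixed-field
# record: 16 chars of name, 3 digits of age.
def encode(name: str, age: int) -> bytes:
    if len(name) > 16 or not 0 <= age <= 999:
        raise ValueError("field out of range")
    return f"{name:<16}{age:03d}".encode("ascii")

def decode(record: bytes) -> tuple:
    if len(record) != 19:
        raise ValueError("bad record length")
    return record[:16].decode("ascii").rstrip(), int(record[16:19])

# Round-trip: every conforming encoder output must decode losslessly.
assert decode(encode("Ada", 36)) == ("Ada", 36)

# Reference *faulty* inputs: implementations must reject, not guess.
for bad in (b"too short", b"x" * 20):
    try:
        decode(bad)
        raise AssertionError("accepted malformed input")
    except ValueError:
        pass
print("reference round-trip and rejection tests pass")
```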

But fail to do even the most basic reference designing, and you get something that may never parse, as was the case with the above "SOAP" project. XML actually seems to have worsened the interop hell, as well as made processing and storage requirements worse. So, on balance, it's snake oil.
