
Comment Re:I've been wondering why this took so long (Score 2) 127

If you read the TfL page about this, that's exactly what they say their plan is - more track barriers, and allowing "current drivers to work for the rest of their careers". Of course I doubt the RMT will be willing to see itself slowly fade into the sunset via natural ageing, but they don't want to push it too far. London Underground engineering is incredibly efficient: they pack a lot of maintenance work into the 3-4 hour engineering window they get each night (the Tube never really shuts down per se). The timing is so tight that a lot of the upgrades require rehearsals in mockups of the stations. If there were a sustained strike then a crapton of automation upgrades could be completed quite quickly.

Comment Re:Or howabout IMAP? (Score 2) 74

More generally, 2-step verification disables the risk-analysis-based login security. If you set up 2SV then you can use your account via Tor.

However, note that - as observed in a comment below - you cannot create a Gmail account via Tor without passing phone verification. Thus if you're logging in to a Gmail account via Tor successfully that probably means it was created outside of Tor and so has some non-Tor IPs associated with it at some point.

The key point is that email and Tor don't mix, for obvious spam reasons. It's not a Google-specific thing. People may wish to look into Pond, a secure messaging system designed to be used over Tor from beginning to end.

Comment Re:it solves some unicode issues (Score 4, Interesting) 774

I haven't used desktop Linux for about a year now, but before that I used it for about a decade and in the early 2000s even did development for it, so I read this post with interest.

I feel the money quote is this one:

People on the email thread have claimed we had an agenda. That's actually certainly true, everybody has one. Ours is to create a good, somewhat unified, integrated operating system. And that's pretty much all that is to our agenda. What is not on our agenda though is "destroying UNIX", "land grabbing", or "lock-in". Note that logind, kdbus or the cgroup stuff is new technology, we didn't break anything by simply writing it. Hence we are not regressing, we are just adding new components that we believe are highly interesting to people (and they apparently are, because people are making use of it now). For us having a simple design and a simple code base is a lot more important than trying to accommodate for distros that want to combine everything with everything else. I understand that that is what matters to many Debian people, but it's admittedly not a priority for us.

For what it's worth, this paragraph makes a ton of sense to me. The biggest problem with Linux, both on the desktop and to a lesser extent on the server, was the fact that you got a basically half-baked set of components that were hardly integrated at all. Basic stuff like setting the timezone graphically ended up being handled by distro-specific apps and hacks because there was no common API for it, and everything was held together by giant piles of shell scripts and Python which might or might not be something you could actually contribute to or work with, but was certainly never usefully documented.
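
That particular gap is exactly the kind of thing systemd-style integration is meant to fill: systemd's timedated service exposes a D-Bus API (org.freedesktop.timedate1), with timedatectl as its command-line front end. A minimal sketch of driving it from code, assuming a systemd-based distro and suitable polkit permissions (the zone name is just an example):

import java.util.concurrent.TimeUnit

// Ask systemd's timedated service to change the system timezone.
// timedatectl talks to org.freedesktop.timedate1 over D-Bus on our behalf.
fun setTimezone(zone: String) {
    val proc = ProcessBuilder("timedatectl", "set-timezone", zone)
        .inheritIO()
        .start()
    if (!proc.waitFor(10, TimeUnit.SECONDS) || proc.exitValue() != 0) {
        error("timedatectl failed to set timezone to $zone")
    }
}

fun main() {
    setTimezone("Europe/London")
}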

Basically, the experience of using or developing on Linux gave you the impression of a man in a slightly dishevelled, ill-fitting suit. All the parts of a smart suit were there, but none of them quite fitted or lined up, and there were lots of small tears everywhere. And waaaaaay too many people liked this state of affairs because they had made "I am a UNIX user" a part of their identity and had managed to convince themselves that an OS architecture dating from the 1970s was actually totally elite, and that any attempt to reform it was "ignoring the UNIX philosophy" or some shit like that.

Result: Mac OS X absolutely ate Linux's lunch on the desktop, despite the fact that Linux was free and Macs .... decidedly not free. Heck, Linux didn't even make much headway against Windows, even though under Ballmer the Windows team basically sat on their ass for a decade rewriting the Start menu.

To a (now) outsider looking in, this whole systemd fiasco looks a lot like Linux finally being dragged into the 21st century through the sheer willpower of one man, who has an apparently infinite ability to withstand faeces-throwing by the UNIX peanut gallery. Don't like systemd? OK, stick with Debian Stable or FreeBSD and don't get the new features. Stick it to the man and keep your "I Love *Nix" t-shirt on. Me? Between reading about GNOME 3 and systemd I'm starting to wonder if it's time to revisit Linux and give it another shot. If that community can conquer its UNIX fetish and build a modern OS, it has a lot of potential.

Comment Re:Corporate Malfeasance (Score 1) 293

If Infosys is in fact guilty of discriminating against American workers by refusing to hire American workers for American jobs, then such malfeasance should be punished

How do you know, though? I mean, surely any company that has an office in America and hires any non-American worker would fail your proposed test? How would international companies ever expand into the USA if hiring any non-American for an "American job" would result in their US assets being immediately liquidated?

Ultimately foreign companies have to be able to set up base and hire in other countries, and hire the people they think are best qualified. The Indian manager's comment here might well be highly offensive, but it doesn't actually say "don't hire Americans because they're too expensive". It says "don't hire them because they suck" .... a comment that I'm afraid I've read Americans making about Indian developers many, many times.

Comment Re:So what you're telling me (Score 2) 146

TrustZone-based devices also have fused per-device keys which act as the root of trust. The devices that I'm familiar with also have a hardware AES coprocessor which can load and use these per-device keys but will not reveal the actual key bits, not even to secure world code. Secure world code can request operations be performed with the keys, but not see them. Non-secure world code can't do anything except make requests of the secure world code.

I did not know this. That changes a lot - if even code running in the TrustZone secure world can't access the per-device key directly, then it would appear to give security equivalent (or actually better) to what Apple is doing.

It would be nice to know which devices implement exactly what kind of security, but it seems everything is heading in the right direction, which is very good to hear.

Comment Re:So what you're telling me (Score 5, Insightful) 146

However, I can definitely confirm that there is a hardware-backed crypto service in most of the better Android devices. It's called keymaster. Google creates the API and the code that uses it, and device makers have to implement it, or not. To see if your device has it, go to Settings->Security->Credential Storage->Storage type and see if it says "Hardware-backed".

I think it's worth noting something here about the Android implementations. Based on the articles and other published documentation, the Android "hardware-backed" key stores are in fact not hardware-backed at all, but rather based on the ARM chips' TrustZone technology. This creates a "secure world" inside the CPU which is isolated from the main operating system. The secure world can store data and do computation on the main CPU without being exposed to viruses or root-level access from Android itself.
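
Incidentally, an app can ask whether its keys ended up in that secure world rather than squinting at the Settings screen. A rough sketch against the Android keystore API (the key alias and parameters here are just placeholders, and isInsideSecureHardware needs a reasonably recent API level):

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyInfo
import android.security.keystore.KeyProperties
import java.security.KeyFactory
import java.security.KeyPairGenerator

// Generate a key inside the "AndroidKeyStore" provider, then ask whether the
// private key actually lives in secure hardware (e.g. a TrustZone-backed keymaster).
fun isKeyHardwareBacked(alias: String = "demo_key"): Boolean {
    val generator = KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    generator.initialize(
        KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY)
            .setDigests(KeyProperties.DIGEST_SHA256)
            .build()
    )
    val keyPair = generator.generateKeyPair()

    val factory = KeyFactory.getInstance(keyPair.private.algorithm, "AndroidKeyStore")
    val keyInfo = factory.getKeySpec(keyPair.private, KeyInfo::class.java)
    return keyInfo.isInsideSecureHardware
}

On devices without a hardware keymaster the key still works; it just falls back to a software implementation and this returns false.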

But this comes with a huge caveat. This "secure world" is in fact just the same CPU running a program written in C, and such programs can of course have exploits. In the past this is exactly what has happened: I believe some Motorola device was rooted this way because the TrustZone-protected program had some kind of overflow bug in it, and that was enough to take control of the secure space.

What's more, I think it's deeply uncertain how exposed programs running in this secure space are to side-channel attacks, e.g. via timing or cache-line games. People keep discovering clever ways to recover secret keys from code running on the same physical CPU, in ways that shouldn't be possible according to the rules of the sandbox. And where does this secure program get its entropy from? A hardware RNG? Maybe, but as far as I can tell that's entirely up to the phone manufacturer, and in a competitive environment where everyone is trying to get costs down I suspect some manufacturers would choose to save money by skipping it. After all, bad randomness looks the same as good randomness.

The Apple implementation, in contrast, appears to have the per-device key blown into the chip at manufacturing time and then hard-wired to the AES circuitry. That is, it's actually hardware-based, and there is no chance of a "Verilog overflow" bug or equivalent breaking the security of the system.

Anyway, I'd like to give kudos to swillden here for taking part in the discussion and being honest about how his work on Android currently stacks up against Apple's. That takes some bravery. Also, there's more to security than disk encryption. The Apple celebs drama wasn't caused by the NSA breaking disk encryption; it was a bunch of pimply 4-channers phishing or guessing account recovery details on the cloud service. Whilst Apple has historically been ahead of Android in on-device security, they have been behind Google on cloud account security, and that is in many ways equally important.

Comment Re:Backdoor in TPM chips? (Score 1) 146

If anything, elliptic-curve cryptography is the weak-link. We're being forced onto it by being told everything else is weak. It's not as well-researched or understood as the algorithms that have been attacked for nearly 40 years. Implementations are few and based on published curves. And there's NOTHING being said about our move to it.

I want to address this statement here, because I feel it's misleading.

Elliptic curve cryptography has been researched since the 1980s. It is not at all crazy or new. It is the current state of the art: RSA is obsolete, and learning-with-errors is the current best candidate for a next-gen post-quantum crypto system, as far as I can tell.

Nobody is being "forced" onto ECC by being told other things are weak. To the best of anyone's knowledge, RSA is not weak given big enough key sizes. However, ECC is just as strong when used with much smaller keys and signatures, and can be much faster. Basically, with ECC the security level is about half the key size. So a 256-bit key gives about 128 bits of security, i.e. the best known attacks need on the order of 2^128 operations, which is physically infeasible. Whereas with RSA you need a 2048-3072 bit key to get a comparable level of security. Additionally ECC has various other nice features that can be useful in certain contexts.
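
To make the size difference concrete, here's a minimal sketch using the stock Java KeyPairGenerator, with key sizes chosen to mirror the levels above (purely illustrative, nothing vendor-specific assumed):

import java.security.KeyPairGenerator

fun main() {
    // A 256-bit ECC key (the JDK picks a matching named curve) targets
    // roughly the 128-bit security level.
    val ecPair = KeyPairGenerator.getInstance("EC").apply { initialize(256) }.generateKeyPair()

    // RSA needs a 2048-3072 bit modulus to land in a comparable range.
    val rsaPair = KeyPairGenerator.getInstance("RSA").apply { initialize(3072) }.generateKeyPair()

    // The encoded public keys show how much smaller the ECC material is.
    println("EC  public key (DER): ${ecPair.public.encoded.size} bytes")
    println("RSA public key (DER): ${rsaPair.public.encoded.size} bytes")
}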

Implementations of ECC are widespread. Every OS comes with one out of the box and implementations are available for every reasonable language and platform.

Finally, "based on published curves". Yes, ECC requires people to standardise on some public parameters to use it. That's no different to HTML or TCP or any other standard. The one caveat here is that for the secp256r1 curve (which is widely used in ECC-based SSL) the curve parameters were chosen by the NSA. However, (a) nobody knows any way that this could introduce a weakness despite decades of research into ECC, and (b) there are many other curves whose parameters were not chosen by the NSA, for example Curve25519, where the parameters were not only chosen by djb but every parameter comes with an explanation of why it's the best value that could be chosen. There's no flexibility or magic numbers in its design at all, but it's still ECC.

The good news is that the secp256r1 curve has basically no redeeming qualities other than being one of the oldest, and therefore the most widely supported. All new applications that get a free choice of curve are being built on a modern one like Curve25519 or secp256k1 (the Bitcoin curve), where the parameters are much more rigid and there is no room to insert back doors.
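
For a feel of how little ceremony is involved in using one of these curves, here's a minimal X25519 (Curve25519 key agreement) sketch on top of the JCA support that ships with Java 11 and later; this is just an illustration, not a vetted protocol:

import java.security.KeyPairGenerator
import javax.crypto.KeyAgreement

fun main() {
    val generator = KeyPairGenerator.getInstance("X25519")
    val alice = generator.generateKeyPair()
    val bob = generator.generateKeyPair()

    // Each side mixes its own private key with the other side's public key...
    val aliceSecret = KeyAgreement.getInstance("X25519").run {
        init(alice.private)
        doPhase(bob.public, true)
        generateSecret()
    }
    val bobSecret = KeyAgreement.getInstance("X25519").run {
        init(bob.private)
        doPhase(alice.public, true)
        generateSecret()
    }

    // ...and both arrive at the same 32-byte shared secret.
    check(aliceSecret.contentEquals(bobSecret))
    println("Shared secret: ${aliceSecret.size} bytes")
}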

Comment Re:Good luck with that. (Score 5, Informative) 124

OK, I read the paper.

The money quote is at the end:

The evaluation results from Section 4 show that work still needs to be done before program obfuscation is usable in practice. In fact, the most complex function we obfuscate with meaningful security is a 16-bit point function, which contains just 15 AND gates. Even such a simple function requires about 9 hours to obfuscate and results in an obfuscation of 31.1 GB. Perhaps more importantly (since the obfuscation time is a one-time cost), evaluating the 16-bit point obfuscation on a single input takes around 3.3 hours. However, it is important to note that the fact that we can produce any “useful” obfuscations at all is surprising. Also, both obfuscation and evaluation are embarrassingly parallel and thus would run significantly faster with more cores (the largest machine we experimented on had 32 cores).

Translated into programmer English, a "16-bit point function" is basically a function that returns true for exactly one secret 16-bit input value and false for every other input. It would correspond to the following C++ function prototype:

bool point_function(short input);

In other words you can hide a 16-bit "password" inside such a function and discover whether a given input matches it. Obviously, obfuscating such a function is just a toy to experiment with. Checking "SHA256(x) == y" is also a point function, and one that can be implemented in any programming language with ease - short of brute forcing the input space, there is no way to break such an "obfuscated point function" (and with only 16 bits of input, brute force is trivial anyway). Thus using iO for this doesn't presently make a whole lot of sense. However, it's a great base to build on.
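
As a sketch, here's what the toy looks like in code - a plain 16-bit point function next to the hash-comparison version described above (the secret value is just an example):

import java.security.MessageDigest

// The secret 16-bit "password" the point function recognises (illustrative only).
const val SECRET = 0xBEEF

fun sha256(x: Int): ByteArray =
    MessageDigest.getInstance("SHA-256").digest(byteArrayOf((x shr 8).toByte(), x.toByte()))

// The plain point function: the secret is sitting right there in the code.
fun pointFunction(input: Int): Boolean = (input and 0xFFFF) == SECRET

// The hash-based version stores only SHA256(secret); someone reading the code sees
// the hash, not the secret, and must brute force the input space to recover it.
// With 16-bit inputs that's only 65536 guesses, so this is purely illustrative.
val secretHash: ByteArray = sha256(SECRET)

fun hashedPointFunction(input: Int): Boolean =
    sha256(input and 0xFFFF).contentEquals(secretHash)

fun main() {
    check(pointFunction(0xBEEF) && hashedPointFunction(0xBEEF))
    check(!pointFunction(0x1234) && !hashedPointFunction(0x1234))
}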

I should note that the reference to AND gates above doesn't mean that the program is an arbitrary circuit - it means that the "program" being obfuscated is in fact a boolean formula. You can translate boolean circuits into boolean formulas, but often only at great cost, and regular programs can only be translated into circuits at great cost too. So you can see how far away from practicality we are. Nonetheless, just last year the entire idea that you could do this at all seemed absurd, so to call the progress so far astonishing would be an understatement. Right now the field of iO is developing so fast that the paper's authors note that whilst they were writing it, new optimisations were researched and published, so there are plenty of improvements left open for future work.

Comment Re:Good luck with that. (Score 5, Informative) 124

The objective of "mathematically proven security properties" via program obfuscation is definitely not achievable. After all, it's a given security principle of "security through obfuscation" is unsupportable. If an adversary is capable of obtaining the executable of a program, they can also reverse engineer that same executable. It may take a lot of effort, but it is always achievable.

That is the standard consensus view in the software industry, yes. I'm afraid to tell you, though, that it's wrong.

Last year there was a breakthrough in the field of what is called "indistinguishability obfuscation", a mathematical approach to program obfuscation with sound theoretical foundations. This line of work could in theory yield programs whose functioning cannot be understood no matter how skilled the reverse engineer is.

It is important to note a few caveats here. The first is that iO (to use the cryptographers' name for it) is presently a theoretical technique. A new paper came out literally 5 days ago that claims to discuss an implementation of the technique, but I haven't read it yet. Will do so after posting this comment. Indeed, it seems nobody is quite sure how to make it work with practical performance at this time.

The second caveat is that the most well-explored version of it only applies to circuits, which can be seen as a kind of pure functional program. Actually a circuit is closer to a mathematical formula than a real program; you cannot write circuits in C or any other programming language we mortals are familiar with. Researchers are now starting to look at the question of obfuscating "RAM programs", i.e. programs that look like normal imperative programs written in dialects of, say, C. But this work is still quite early.

The third caveat is that because the techniques apply to pure functions only, they cannot do input or output. This makes them somewhat less than useful for obfuscating the sort of programs that are processed with commercial obfuscators today, like video games.

Despite those caveats the technique is very exciting and promising for many reasons, none of which have to do with DRM. For example, iO could provide a unifying framework for all kinds of existing cryptographic techniques, and enable cryptographic capabilities that were hitherto only conjectured. For instance, timelock crypto can be implemented using an iO obfuscator and Bitcoin.

Comment Re:Makes Sense (Score 1) 225

There's a whole generation of people using the Internet who literally don't know how to browse to a website directly ..... And browser makers increasing trend to monkeying with the address bar's function only makes it worse.

Wait, which is it? Do you want browser makers to try and fix the address bar so more people know how to use it, or do you want to preserve the status quo?

I'd prefer browser makers to radically step up the level of monkeying with the address bar. The address bar is stupid. It's easily the worst part of the web's entire design and has given us a generation of phishing sites and other crap that exploit the fact that web browsers/apps basically dump a part of their internal memory state onto the user interface. No other app does this ..... because it's stupid.

Comment Re:the solution: (Score 1) 651

Otherwise, it's just lip service. Your government is already ignoring your Constitution on a large scale, but apparently nobody gives a damn

I am not American; still, I do truly believe that hundreds of millions of Americans do give a damn.

The problem is not a lack of giving a damn. The problem is that guns are a stupid way to try and change governments, and everyone there must intuitively understand this. I keep reading comments by 2nd amendment fundamentalists saying they're packing guns so they can overthrow the government .... in case it becomes tyrannical. But that day will never arrive, no matter what the US Gov does.

The first problem is that if you go it alone, as a solo shooter, you can't achieve anything and will be killed immediately, then written off as mentally unstable. This does happen in the USA, and in at least one case the shooter did claim they were rebelling against the government. Regardless, such events have zero impact.

The second problem is that if you try to team up with like minded people and form a group of armed citizens who are going to engage in a revolutionary coup, you will need to communicate in order to find such people, and at that point you are very likely to attract the attention of law enforcement who have totalitarian surveillance powers and the ability to move against "cults" or "terrorists". And almost by definition if you're trying to overthrow the government through force of arms instead of the ballot box you can be described as a domestic terrorist. You will end up sitting in jail for many years, and most people will likely never hear of you, or if they do read about your case in the papers they will just forget about you.

The third problem is that if you do somehow overcome the first two problems and succeed in forming some kind of revolutionary militia, taking over some territory and defending it against the US army in a new American civil war, you will need a system of government for that territory. How exactly you prevent that new government from eventually going the same way as the existing government would be an open question - attempting to encode the principles of the new state in a constitution apparently doesn't work very well, and I don't see many other ideas from the "guns give us freedom!!" crowd. This is the problem repeatedly encountered by countries in the Middle East where governments are overthrown (without guns, normally) and then tend to get immediately replaced with something worse.

So for these reasons the notion that Americans are free because of guns just doesn't seem to line up with common sense, to me. I cannot imagine any situation in which civil war in the USA would be allowed to happen - civil war is so universally catastrophic that an overwhelming majority of Americans would strongly support forcible suppression of an armed uprising using all the tools of a professional army. Your Glock ain't gonna do anything against a Predator drone.

Comment Re:CloudFlare is a f.ing nightmare for anonymity (Score 1) 67

Occam's Razor says ...... networks like Tor which are incapable of handling abuse by design ...... get a lot of abuse! So, not surprisingly, networks that have advanced anti-abuse controls in place throttle it a lot. Otherwise you're just asking to get crawled by SQL injection scanners and so on. This is not CloudFlare's problem; it's inherent in how Tor works and what it's trying to achieve. Solving it means finding a way to trade off anonymity against accountability using user reputation systems or the like, but the Tor project has shown little interest in implementing such a thing, so all Tor users get treated as one.

Comment Re:There is no political solution. (Score 5, Insightful) 212

It would be nice if that were the case. Unfortunately it's hard to see how it can be. The technology industry has a poor track record of deploying truly strong end-to-end privacy protections, partly because the physics of how computers work mean that outsourcing things to big, powerful third parties that can be easily subverted is very common. For example, my mobile phone can search gigabytes of email from the last decade in a split second and rank it by importance, despite having nowhere near enough computing capacity to really do that itself, only because it relies on the Gmail servers to help it out.

That same phone can receive calls only because the mobile network knows where it is. How do you build a mobile phone that is invulnerable to government monitoring of its location? It doesn't seem technically possible. The only solution is to ensure that anonymous SIM cards are easily obtained and used, but many countries have made those illegal as part of the war on drugs.

This trend towards outsourcing, specialisation and sharing of data to obtain useful features is ideal for governments, which can then go ahead and silently obtain access to people's information without those people knowing about it. I do not see it reversing any time soon. The best we're going to achieve in the near term is encryption of links between devices and datacenters, but that doesn't help when politicians are simply voting themselves the power to reach into those datacenters.

Ultimately the only long-term solutions here can be political, and I fear a far longer and larger history of abuses will need to become visible before the majority really shifts on this. The problem is a large age skew. Older people skew heavily authoritarian, if you believe the opinion polls, and are much more likely to support this kind of spying. Perhaps they associate it with the Cold War. Perhaps the old adage "a libertarian is a Republican who hasn't been mugged yet" has some truth to it. Whatever the cause, the 1960s baby boom means that, demographically, older people can outvote younger people as a bloc, and for this reason there aren't really any fiscally conservative, economically trusted AND individual-rights-respecting parties in the main English-speaking countries. People get to pick between borrow-and-spend socialists with an authoritarian bent and fiscal conservatives with an authoritarian bent, so surprise, surprise, we end up with people in power who are authoritarians.
