Comment Re:Help them realize they're the asshole, with a b (Score 1) 356

Interesting. I once managed a very bright young developer whose coding was exceptional. He was very often (but not always) right. But he was also rude and completely lacking in social graces. And it wasn't enough that he was right - he also made everyone else feel stupid and frustrated.

I had to find solo projects for him, as the rest of the team ended up flat out refusing to work with him - and I didn't blame them.

Comment Whitening on chip (Score 1) 566

I believe one of the issues with this instruction as a source of random numbers is that the instruction whitens the output, with no access to the raw entropy data. Any physical process that acts as an entropy source will have some (possibly small) biases - its raw output will deviate from perfect randomness in particular, characteristic ways.

That raw output can be audited to check that it conforms to the physical process which is claimed to produce it.

If the instruction whitens the output through some algorithmic transform (e.g. hashing) to give apparently random numbers as output, there is no way to distinguish that from, say, encrypting a counter with a secret key - whose output will also appear to be random, but is trivially predictable to anyone who knows the secret key.
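To make that concrete, here's a toy Python sketch (purely illustrative - not Intel's design) of why a consumer can't tell the two apart:

    import hashlib

    def whitened_entropy(raw_blocks):
        # Whitening a (possibly biased) physical source by hashing each
        # raw sample. The output looks uniformly random.
        for block in raw_blocks:
            yield hashlib.sha256(block).digest()

    def fake_entropy(secret_key: bytes):
        # A counter "encrypted" under a secret key (modelled here by
        # hashing key || counter). This output also looks uniformly
        # random, but is fully predictable to anyone holding secret_key.
        counter = 0
        while True:
            yield hashlib.sha256(secret_key + counter.to_bytes(8, "big")).digest()
            counter += 1

    # Both generators pass statistical randomness tests; only access to
    # the raw entropy (or the key) reveals which one you're talking to.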

So it becomes an exercise in trust in Intel, rather than something which can be independently verified. There was a good comment on the cryptography mailing list about this - that it would be better to expose the raw hardware entropy sources, leaving the final steps of random number generation to software.

Comment Re:RSA = out of date (Score 1) 282

Easy to confuse all this crypto stuff! I work with it regularly and still have to look up quite basic things if I haven't touched them for a while! Yes, I am that Matt Palmer, but no longer at the National Archives... I'm now doing contract security architecture for a consultancy.

The trust issues with IBE are kind of like trusting a CA, except there are no certificates and therefore no CA. Instead there is a very powerful trusted party who can decrypt anyone's information. The way it works is that there are some all-powerful master secrets, from which a set of public parameters is generated.

Anyone with the public parameters can generate a public key for anyone (e.g. using your email address as the public key) and encrypt a message for you. The issue is that to decrypt the message, you have to ask the trusted party for the private key corresponding to that public key, which it can generate for you automatically using the master secrets.

One security issue with this system is how the trusted party authenticates that you really are who you claim to be, and how it distributes that private key to you. Another, possibly more serious, objection is that the trusted party can fundamentally generate private keys for any identity under its parameters, so it can decrypt everyone's data. You have to *really* trust that trusted party.
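As a very rough sketch of that trust relationship (real IBE, e.g. Boneh-Franklin, uses pairing-based crypto; this Python toy models only the escrow property, not the mathematics, and all names are illustrative):

    import hmac, hashlib

    MASTER_SECRET = b"known only to the trusted party"   # illustrative value
    # In real IBE, public parameters derived from the master secret are
    # published, letting anyone encrypt to any identity string.

    def extract_private_key(identity: str) -> bytes:
        # Only the trusted party can run this - but note that it can run
        # it for *any* identity, which is exactly the escrow concern.
        return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

    alice_key = extract_private_key("alice@example.com")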

The only place I've seen IBE used commercially is by Voltage Security. One use case is allowing payment terminals to automatically generate a new public key for each payment. Since the payment provider is supposed to be able to decrypt all of these communications (it is the trusted party), this works quite nicely.

Comment Re:RSA = out of date (Score 1) 282

Sorry, but this is just wrong.

The whole point of public key encryption is that you don't need to do a key exchange. You have the public key, which is, well, public. The problem then becomes trusting that you have the correct public key. Signatures provided by some other trusted party, usually in certificates, are used for this. There still needs to be some pre-established trusted root or web of trust to enable it. Identity-based public key encryption even does away with the need for this, allowing the generation of arbitrary public keys for someone (although there are other security issues with that sort of encryption scheme, which I won't go into here).

Diffie-Hellman key exchange is unauthenticated and completely vulnerable to a man-in-the-middle attack. It is used to create a shared secret between two parties, which becomes a shared key, usually for symmetric encryption. It's very old now but still amazingly cool - I love the somewhat counter-intuitive fact that two parties can create a shared secret between themselves using only public communications. As long as you accept that neither party has any idea at all who they are creating the shared secret with.
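A toy worked example in Python (tiny numbers purely for illustration - real DH uses 2048-bit-plus groups or elliptic curves):

    import secrets

    p, g = 23, 5                       # toy group parameters

    a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

    A = pow(g, a, p)                   # Alice sends this in the clear
    B = pow(g, b, p)                   # Bob sends this in the clear

    # Each side combines its own secret with the other's public value;
    # the shared secret itself is never transmitted.
    assert pow(B, a, p) == pow(A, b, p)

    # Nothing above authenticates the peer: an active man-in-the-middle
    # can simply run one exchange with each party.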

Comment Re:Uh , since around 1998? (Score 1) 371

I think you may be suffering from the same confirmation bias problem. In some cases the Java optimizer can prove that an out-of-bounds access is impossible and optimize the additional bounds checks away. For example, a loop that runs from zero to array length - 1 allows the per-access bounds checks to be removed. So not every array access in Java is necessarily bounds checked at runtime.

But most of them probably are, and in practice C mostly runs faster than Java. But not always...

Comment Re:Security through obscurity? (Score 1) 168

I think we're actually in violent agreement. I completely agree that obscurity doesn't give you any real security, and yes people need to understand this.

But in the specific situation where something in widespread use turns out to have a security flaw, disclosing the vulnerability before there has been a reasonable amount of time for a fix to be prepared doesn't make anyone safer.

If you agree with that, then you are also acknowledging that the obscurity may be providing very temporary security for some people. If you don't agree, then you seem to be saying that revealing vulnerabilities immediately, before a fix can be prepared, does not weaken anyone's security...?

Comment Re:Security through obscurity? (Score 1) 168

Sorry, replying to my own post, but I forgot to make the point I wanted to!

Obscurity definitely doesn't give you real security. But if all you have is obscurity, then it is better to have that than nothing.

It might confer no actual security, but stripping the obscurity away immediately will definitely make no one safer. The possibility exists that some people are being protected by the obscurity, at least in the short term. It just can't be relied upon.

Comment Re:Security through obscurity? (Score 1) 168

Well, I can't say that I speak for the entire crypto and security community, but I do work in the field and I have thought about this a bit.

"No security by obscurity" isn't meant to inform how we approach the entire process of vulnerability disclosure. It just makes the point that relying on obscurity for security will give you no real security. This is what we need people owning, building and maintaining things with security requirements to understand.

When thousands or millions of fielded products are already out there with a vulnerability, then giving the manufacturers time to fix the issue is just responsible disclosure.

Disclosing after some reasonable period of time is also responsible, as it provides an incentive to actually fix the issue. We take the obscurity away eventually, so vendors can't argue that obscurity is all their customers need. But we don't start by revealing everything before a fix exists - that makes no one more secure.

Comment Re:Self signed? (Score 2) 276

Good question. The short answer is that they don't know it's really from you. A root CA certificate is the root of trust - it is self-signed by the CA, so it cannot by itself prove that it is genuine.

In a corporate environment where you control the infrastructure, you could distribute the root certificate to your users automatically via Group Policy or some other trusted distribution mechanism. If you don't control the infrastructure, you need some other out-of-band method to establish that the certificate is genuine. Maybe you could publish a hash of the certificate on a web site you control, or in some other place your users already trust.
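For example, a hypothetical out-of-band check could be as simple as publishing a SHA-256 fingerprint of the certificate file (both sides must hash the same encoding, e.g. the DER file exactly as distributed):

    import hashlib

    def cert_fingerprint(path: str) -> str:
        # Hashes the certificate file as distributed; users compare the
        # result against the fingerprint published somewhere they
        # already trust.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    print(cert_fingerprint("my-root-ca.crt"))   # illustrative filename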

It's not turtles all the way down...

Comment Re:Bye bye Dropbox? (Score 1) 404

SpiderOak is good - client-side encryption. If you don't use the web interface, they cannot decrypt your files. It supports Windows, Mac and Linux well, which I need as we don't really use Windows at home. I had a few problems with it at one point, but their support resolved them quickly and gave me a few free months in compensation. It isn't as cheap as some, but it's not bad unless you want to store terabytes of data. You can back up multiple machines on one plan and sync between machines automatically. You can also share files with others who aren't on the SpiderOak service.

The most interesting thing about it is that they do some kind of de-duplication even though the data is encrypted. Questions have been asked about whether this means there's no semantic security, but I haven't found a satisfactory answer yet.
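One common way to reconcile deduplication with encryption is convergent encryption - to be clear, I don't know that this is what SpiderOak actually does - and a toy sketch shows why it gives up semantic security:

    import hashlib

    def _keystream(key: bytes, length: int) -> bytes:
        # Toy keystream from hashing key || counter; a real scheme would
        # use AES with the derived key.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def convergent_encrypt(plaintext: bytes) -> bytes:
        key = hashlib.sha256(plaintext).digest()   # key derived from content
        return bytes(p ^ s for p, s in zip(plaintext, _keystream(key, len(plaintext))))

    # Identical files produce identical ciphertexts, so the server can
    # deduplicate them - but that determinism lets an observer confirm a
    # guess of your file's contents, which semantic security forbids.
    assert convergent_encrypt(b"same file") == convergent_encrypt(b"same file")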

Anyway, it's recommended by me (& my family).

Comment Re:This... is a very good idea. (Score 1) 110

Interesting - I hadn't thought about synchronising passwords across systems by keeping a decryptable password. I guess the other way is to synchronise them all at the point of password change. I'm currently advising an enterprise on security - hopefully not with snake oil! We have a real problem migrating from one legacy system to a new one, as resetting 100,000 passwords is kinda hard in practice.

The legacy system is already using salted hashes (top marks!), although many of the users only have 4-digit PINs (not so secure!). The vendor maintained that reversing them was not possible, thanks to the security magic of salting and hashing. There is a lot of security snake oil about, from all sides. It amused my colleagues when I demonstrated that reversing those PINs wasn't so hard - it only took about 10ms per PIN. Of course, the longer passwords are not so easily crackable - the salted hashes are doing their job very nicely there.
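For the curious, brute-forcing a salted 4-digit PIN looks roughly like this in Python (the hash construction here is an assumption for illustration - the vendor's actual scheme wasn't described):

    import hashlib

    def crack_pin(salt: bytes, target_hash: bytes) -> str | None:
        # Only 10,000 candidates: the salt defeats precomputed tables,
        # but it cannot enlarge a 4-digit keyspace.
        for pin in range(10_000):
            candidate = f"{pin:04d}".encode()
            if hashlib.sha256(salt + candidate).digest() == target_hash:
                return candidate.decode()
        return None

    salt = b"per-user-salt"
    stored = hashlib.sha256(salt + b"0042").digest()
    print(crack_pin(salt, stored))   # "0042", near-instantly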

I'm still struggling to understand what a daily salt does for you, other than creating a lot of complexity. Would love to see the threat model for that! Reversible encryption (not deterministic - with semantic security) would have solved quite a few problems for the migration - were I able to travel back in time and re-engineer the legacy system, of course...

Comment Re:I'll Raise You.. (Score 1) 110

Well, the paper does acknowledge that approach:

"Sometimes administrators set up fake user accounts (\honeypot accounts"), so that an alarm can be raised when an adversary who has solved for a password for such an account by inverting a hash from a stolen password le then attempts to login. Since there is really no such legitimate user, the adversary's attempt is reliably detected when this occurs."

The only reason why this approach may not work, according to the paper, is:

  "However, the adversary may be able to distinguish real usernames from fake usernames, and thus avoid being caught."

I guess this is a concern, since your system must also be able to distinguish between them in order to raise an alert when a fake one is used. Which probably means having a "honey account" server, like the honeyword server... At this point, I can't see any real difference between the two proposals - it's just a different bit of honey data.
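Either way, the shape is the same: a separate, hardened service holds the one bit of honey data the main system must not leak. A minimal sketch (the names are mine, not the paper's):

    import logging

    HONEY_ACCOUNTS = {"jsmith_backup", "svc_printer2"}   # held on a separate server

    def check_login_attempt(username: str) -> bool:
        # Returns True (and raises an alarm) if a decoy account was used,
        # which strongly suggests the password file has been stolen.
        if username in HONEY_ACCOUNTS:
            logging.critical("honeypot account login: %r", username)
            return True
        return False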
