Comment Re:Uh , since around 1998? (Score 1) 371

I think you may be suffering from the same confirmation bias problem. In some cases the Java optimizer can determine that out-of-bounds access is not possible and it optimizes the additional bounds checks out. For example, a loop that goes from zero to array length - 1 can have its array bounds checks removed. So not every array access in Java is necessarily bounds checked.
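
For illustration, here's a minimal sketch (my own, not from the original discussion) of the kind of counted loop where a JIT such as HotSpot can usually prove the index stays in range and drop the per-access check, contrasted with an indirect access where it generally cannot:

```java
// Illustrative only: a loop shape where a JIT such as HotSpot can usually
// prove 0 <= i < data.length and elide the per-access bounds check.
public class BoundsCheckDemo {
    static long sum(int[] data) {
        long total = 0;
        // Canonical counted loop: the compiler can hoist the range proof
        // out of the loop instead of checking every data[i].
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }

    static long sumIndirect(int[] data, int[] indices) {
        long total = 0;
        // Here the index comes from another array, so the JIT generally
        // cannot prove it is in range and a check remains on each access.
        for (int idx : indices) {
            total += data[idx];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};
        System.out.println(sum(data)); // 15
    }
}
```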

But most of them probably are, and in practice C mostly runs faster than Java. But not always...

Comment Re:Security through obscurity? (Score 1) 168

I think we're actually in violent agreement. I completely agree that obscurity doesn't give you any real security, and yes people need to understand this.

But in the specific situation where something in widespread use turns out to have a security flaw, disclosing the vulnerability before there has been a reasonable amount of time for a fix to be prepared doesn't make anyone safer.

If you agree with that, then you are also acknowledging that the obscurity may be providing very temporary security for some people. If you don't agree with that, then you seem to be saying that revealing vulnerabilities immediately, before a fix can be prepared, does not weaken anyone's security...?

Comment Re:Security through obscurity? (Score 1) 168

Sorry, replying to my own post, but I forgot to make the point I wanted to!

Obscurity definitely doesn't give you real security. But if all you have is obscurity, then it is better to have that than nothing.

It might confer no actual security, but taking the obscurity away immediately definitely makes no one safer. The possibility exists that some people will be protected by the obscurity, at least in the short term. It just can't be relied upon.

Comment Re:Security through obscurity? (Score 1) 168

Well, I can't say that I speak for the entire crypto and security community, but I do work in the field and I have thought about this a bit.

"No security by obscurity" isn't meant to inform how we approach the entire process of vulnerability disclosure. It just makes the point that relying on obscurity for security will give you no real security. This is what we need people owning, building and maintaining things with security requirements to understand.

When thousands or millions of fielded products are already out there with a vulnerability, then giving the manufacturers time to fix the issue is just responsible disclosure.

Disclosing after some reasonable period of time is also responsible, as an incentive to actually fix it. We take obscurity away after some time, so they can't argue that the obscurity is all their customers need. We don't start with revealing everything when there isn't yet a fix. That makes no one more secure.

Comment Re:Self signed? (Score 2) 276

Good question. The short answer is that they don't know it's really from you. A root CA certificate is the root of trust - it is self signed by the CA. It cannot by itself prove it is genuine.

In a corporate environment where you control the infrastructure you could automatically distribute the root certificate to your users with group policy or some other trusted distribution mechanism. If you don't control the infrastructure, then you would need some other out-of-band method to assert that the cert is genuine. Maybe you could publish a hash of the certificate on a web site you control or in some other place they already trust.
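
As a rough sketch of the hash-publishing idea (my own illustration; the file name is a placeholder, not anyone's real setup), you could compute a SHA-256 fingerprint of the root certificate and publish that value somewhere your users already trust, so anyone importing the cert can compare fingerprints:

```java
import java.io.FileInputStream;
import java.security.MessageDigest;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

// Sketch: print a SHA-256 fingerprint of a root certificate so it can be
// published out-of-band and compared by anyone who imports the cert.
// "root-ca.pem" is a placeholder path.
public class CertFingerprint {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream("root-ca.pem")) {
            cert = (X509Certificate) cf.generateCertificate(in);
        }
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(cert.getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02X:", b));
        }
        // Drop the trailing colon before printing.
        System.out.println(hex.substring(0, hex.length() - 1));
    }
}
```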

It's not turtles all the way down...

Comment Re:Bye bye Dropbox? (Score 1) 404

SpiderOak is good: client-side encryption, so if you don't use the web interface they cannot decrypt your files. It supports Windows, Mac and Linux well, which I need as we don't really use Windows at home. I had a few problems with it at one point, but their support resolved them quickly and they gave me a few free months in compensation. It isn't as cheap as some, but the cost is not bad unless you want to store terabytes of data. You can back up multiple machines on one plan, and sync between machines automatically. You can also share files with others who aren't on the SpiderOak service.

The most interesting thing about it is they do some kind of de-duplication even though the data is encrypted. Questions have been asked about whether this means there's no semantic security, but I haven't found a satisfactory answer to this yet.
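
I don't know SpiderOak's actual design, but the technique usually cited for deduplicating encrypted data is convergent encryption: derive the key from a hash of the plaintext, so identical data encrypts identically. The sketch below (entirely my own illustration, not a claim about their implementation) shows why that is in tension with semantic security: anyone who can guess the plaintext can derive the key and confirm the guess.

```java
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of convergent encryption (NOT a claim about SpiderOak's design):
// the key is derived from the plaintext itself, so two users encrypting the
// same data produce the same ciphertext and the server can deduplicate it.
// The price is that it cannot be semantically secure.
public class ConvergentEncryptionDemo {
    static byte[] encrypt(byte[] plaintext) throws Exception {
        byte[] key = MessageDigest.getInstance("SHA-256").digest(plaintext);
        // Deterministic IV (all zeros) keeps identical inputs identical;
        // that determinism is the whole point, and also the weakness.
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(key, "AES"),
                    new IvParameterSpec(new byte[16]));
        return cipher.doFinal(plaintext);
    }

    public static void main(String[] args) throws Exception {
        byte[] a = encrypt("same file contents".getBytes("UTF-8"));
        byte[] b = encrypt("same file contents".getBytes("UTF-8"));
        System.out.println(java.util.Arrays.equals(a, b)); // true -> dedupable
    }
}
```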

Anyway, it's recommended by me (& my family).

Comment Re:This... is a very good idea. (Score 1) 110

Interesting, hadn't thought about synchronizing passwords across systems by having a decryptable password. I guess the other way is to synchronise them all at the point of password change. I'm currently advising an enterprise on security - hopefully not with snake oil! We have a real problem migrating from one legacy system to a new one, as resetting 100,000 passwords is kinda hard in practice.

The legacy system is already using salted hashes (top marks!), although many of the users only have 4-digit PINs (not so secure!). The vendor maintained that reversing them was not possible, due to the security magic of salting and hashing. There is a lot of security snake oil about, from all sides. It amused my colleagues when I demonstrated it wasn't so hard to reverse those - it took only about 10ms per PIN. Of course, the longer passwords are not so easily crackable - salted hashes are doing their job very nicely there.
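
To show why the salt doesn't save a 4-digit PIN, here's a minimal sketch. I'm assuming a single SHA-256 over salt plus PIN purely for illustration; the legacy system's real scheme may differ, but the point is the same: with the salt and hash leaked, there are only 10,000 candidates to try.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Sketch: brute forcing a salted 4-digit PIN. The hash construction here
// (SHA-256 over salt || PIN) is an assumption for illustration only.
public class PinCrackDemo {
    static byte[] hash(byte[] salt, String pin) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        md.update(pin.getBytes(StandardCharsets.UTF_8));
        return md.digest();
    }

    static String crack(byte[] salt, byte[] target) throws Exception {
        // Only 10,000 possible PINs: try them all against the leaked record.
        for (int i = 0; i < 10_000; i++) {
            String candidate = String.format("%04d", i);
            if (Arrays.equals(hash(salt, candidate), target)) {
                return candidate;
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = {0x01, 0x02, 0x03, 0x04};
        byte[] stored = hash(salt, "4821");      // stand-in for a leaked record
        System.out.println(crack(salt, stored)); // prints 4821 almost instantly
    }
}
```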

I'm still struggling to understand what a daily salt does for you, other than creating a lot of complexity. Would love to see the threat model for that! Reversible encryption (not deterministic - with semantic security) would have solved quite a few problems for the migration - were I able to travel back in time and re-engineer the legacy system, of course...

Comment Re:I'll Raise You.. (Score 1) 110

Well, the paper does acknowledge that approach:

"Sometimes administrators set up fake user accounts (\honeypot accounts"), so that an alarm can be raised when an adversary who has solved for a password for such an account by inverting a hash from a stolen password le then attempts to login. Since there is really no such legitimate user, the adversary's attempt is reliably detected when this occurs."

The only reason why this approach may not work according to the paper is:

  "However, the adversary may be able to distinguish real usernames from fake usernames, and thus avoid being caught."

I guess this is a concern, since your system must also be able to distinguish between them in order to raise an alert if a fake one is used. Which probably means having a "honey account" server, like the honey word server... At this point, I am unable to see any real difference between the two proposals - it's just a different bit of honey data.

Comment Re:Strange failsafe (Score 1) 110

Sorry, replying to myself - I didn't think that through very clearly - you are right.

There are only two reasons to mount a denial of service attack against the honey word server. One is pure denial of service, to prevent anyone logging in. The other would be if they had already compromised the password database offline and wanted to exploit it.

I guess you could take a specific DOS on your honey word server as an indication that your password database may have been compromised.

Comment Re:Strange failsafe (Score 1) 110

Well no, it doesn't let everyone in. If you have chosen your honey words carefully, they should be about as secure as the original password was. It would just mean that more than one password could theoretically be used to log in, and you couldn't use them to detect a breach of your password database during that time. This clearly weakens security a bit, but not by much, and it maintains availability of the service (presumably temporarily).
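
Here's a rough sketch of that fail-open behaviour as I read the proposal (the names and structure are my own, not the paper's): the login server stores k sweetwords per user, only the separate honeychecker knows which index is the real password, and if the honeychecker is unreachable the server accepts any sweetword to keep the service available, at the cost of breach detection for that window.

```java
import java.util.List;

// Rough sketch of honeyword checking with a fail-open fallback when the
// honeychecker is unavailable. Illustrative only.
public class HoneywordLoginDemo {
    enum Result { ACCEPT, REJECT, ALARM }

    static Result login(List<String> sweetwords, String submitted,
                        Integer realIndexFromHoneychecker) {
        int idx = sweetwords.indexOf(submitted);
        if (idx < 0) {
            return Result.REJECT;                 // not even a sweetword
        }
        if (realIndexFromHoneychecker == null) {
            return Result.ACCEPT;                 // honeychecker down: fail open
        }
        return idx == realIndexFromHoneychecker
                ? Result.ACCEPT                   // the genuine password
                : Result.ALARM;                   // a honeyword: likely DB breach
    }

    public static void main(String[] args) {
        List<String> sweetwords = List.of("red42kite", "blue7anchor", "green9maple");
        System.out.println(login(sweetwords, "blue7anchor", 1));  // ACCEPT
        System.out.println(login(sweetwords, "red42kite", 1));    // ALARM
        System.out.println(login(sweetwords, "red42kite", null)); // ACCEPT (fail open)
    }
}
```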

Comment Re:This... is a very good idea. (Score 1) 110

Well, that's an interesting proposal, but has weaknesses all of its own. I don't understand what you mean by using daily one time use salts with MD5 or SHA1. RC4 is a stream cipher, not a hash algorithm, and PKCS7 is cryptographic message syntax, not a padding specification. Given that a password is likely to be less than 16 characters in length, you are only going to have a single encrypted block, so I'm not sure what CBC mode gets you. So I'll ignore the cryptographic buzzword part of your proposal - please feel free to elaborate on it.

For the rest, complexity is the enemy of security. Using reversible encryption certainly lets you change the key every now and again, but now your super secret key must be present in the process that validates passwords, archived securely, etc. It can't just reside on a java card.

What additional security does using reversible encryption buy you? It prevents offline brute force attacks on the password database, but on the other hand, compromise of the key automatically compromises all passwords in that database.

What additional security does changing the key buy you? It lets you decrypt and re-encrypt existing passwords, changing the value recorded in the database without the user having to change their password. Someone who had compromised your password database could now... what? Strong encryption already prevented offline brute force attacks, so changing the key regularly is only useful if someone has compromised your key, or you suspect they have. If they have done that, they already have all your users' existing passwords, requiring you to issue new passwords to all your users anyway. So key changing only mitigates the vulnerability that the key can decrypt all passwords - something salting doesn't suffer from in the first place.

Salting has the great advantage that it is simple, requires no cryptographic secrets to be kept, and is still good enough for most practical purposes. Compromise of a salted hash only lets an attacker mount a brute force attack against that single password. If you also use a tunable iterative password hashing algorithm, you can selectively increase the strength every year to keep up with advances in hardware. There are even hash algorithms designed not to work well on parallel GPU architectures.
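
As a concrete example of a tunable iterated scheme (PBKDF2 is just one option, and my choice here, not something the discussion above specifies), the iteration count is stored alongside the salt and hash so it can be raised in later years as hardware improves:

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch of a tunable iterated password hash using PBKDF2 from the JDK.
// Store the salt, the iteration count and the resulting hash together.
public class SaltedPasswordDemo {
    static byte[] hash(char[] password, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec)
                               .getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);   // per-user random salt
        int iterations = 200_000;             // tune upward over time
        byte[] stored = hash("correct horse".toCharArray(), salt, iterations);
        System.out.println(stored.length + " bytes; store salt, iterations, hash");
    }
}
```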

By the way, I'm not saying that salting is the only way to do this, or that reversible encryption should never be used. Just that your proposal doesn't give me any confidence that my security would improve a lot, but does give me a lot of extra complexity and cost to manage.

Comment Re: No surprise (Score 4, Insightful) 61

Been there, done that, made redundant.

This was at a software house selling payment processing middleware that had to be PA-DSS compliant. Achieved compliance, role made redundant.

They clearly made a risk reward calculation and decided the benefit of securing the product was outweighed by the cost of slowing development. Particularly as everyone else's security also sucked and there was no particular liability for them if a breach occurred. It's a classic externality.

I'm also on the steering committee for an initiative trying to improve software security and resilience. They also figured out that the market was failing here, and that only legislation for software liability or some other mechanism to correct the externality had any chance of improving the situation. But the cure might be worse than the disease...

Comment Re:Is that really the problem? (Score 1) 297

That's a good one. Ironically though, I find that estimates are optimistic precisely *because* the estimator understands the problem. It's a form of cognitive bias.

The estimator conflates the difficulty of understanding the problem with the effort of implementing a solution to it.

Since they understand the problem and can immediately think of ways to solve it, it doesn't seem that bad. They forget all the things which will occur during the period they have to implement a solution - holidays, sickness, unexpected urgent events and emergencies, old code that needs updating to fit with the new architectural framework they have to use now, testing, bug fixes, creating test data, documentation, release notes, endless meetings caused by another part of the business, etc. etc.

I once had a young developer confidently tell me he could implement a robust and scalable multi user database in a few hours from scratch. Now granted, he was also very inexperienced, but he didn't seem to think his estimate was way, way, way out. I told him to get on with it and I'd look forward to his demo after lunch.

Comment Re:Windows has been "over" for me for years (Score 2) 863

Yes, I liked 10.10 a lot, and held off upgrading until 12.04 due to Unity fears, many of which appear to have been quite well founded in the early releases. I had to reinstall anyway, and thought I'd give it a go, since I could always reinstall 10.10 if it came to it.

And... after about a month I found I actually liked Unity. I discovered that I really didn't use multiple windows all at once on the same desktop much - but I could still do that if I wanted to. It's a simpler interface with less configuration available... but I'd already stopped endlessly configuring and tweaking the desktop.

The only thing I still don't like is switching between multiple windows in the same application. I wish I could just click on the icon in the launcher to cycle between them, rather than having the screen zoom back to show all open windows together - which can then be a bit too small to easily distinguish. And the effect just gets on my nerves - just let me cycle the full size windows, dammit! But other than that I find it to be a calm experience working in Unity. It mostly gets out of your way so you can focus on what you're actually doing.

Consider giving it a genuine try - you might be surprised. YMMV, of course :)
