It looks like a message authentication code, but it isn't. Hash(Key || data) is vulnerable to a length extension attack.
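To illustrate the difference, here is a minimal sketch using Python's stdlib (the key and message values are hypothetical). The first construction is the vulnerable Hash(Key || data); the second is HMAC, which is the standard fix and is not subject to length extension:

```python
import hashlib
import hmac

key = b"shared-secret"           # hypothetical key
data = b"amount=100&to=alice"    # hypothetical message

# Vulnerable: Hash(key || data).  Because SHA-256 is a Merkle-Damgard
# hash, an attacker who sees this tag can compute a valid tag for
# data + padding + chosen-suffix without ever learning the key.
bad_tag = hashlib.sha256(key + data).hexdigest()

# HMAC wraps the hash in a keyed inner/outer construction, which
# defeats length extension:
good_tag = hmac.new(key, data, hashlib.sha256).hexdigest()

# Verify in constant time to avoid timing side channels:
ok = hmac.compare_digest(good_tag, hmac.new(key, data, hashlib.sha256).hexdigest())
print(ok)  # -> True
```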
I think we need a "Misleading" category.
Without the salts, the hashes are essentially uncrackable, provided the salts aren't incredibly short. So don't waste your time trying to crack these.
Salts are not secrets. They are usually stored right alongside the account details in the password database.
If your solution is to make the salt secret, you're not using salts anymore. Per-account salts protect against pre-computation attacks and do not need to remain secret to provide this protection. They are a cheap and effective defense for this purpose.
If you want to keep your salts secret, they are technically called "keys", and are expensive and difficult to manage securely.
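A minimal sketch of per-account salting with a stdlib KDF (the function names, iteration count, and salt length here are illustrative choices, not a prescription):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # The salt is random per account but NOT secret: it is stored in
    # the database right next to the resulting hash.  Its only job is
    # to defeat precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # -> True
print(verify_password("wrong guess", salt, digest))                   # -> False
```

Note that nothing breaks if an attacker learns the salt; they just can't reuse precomputed tables across accounts.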
The software running on the POS is completely known and controlled. In a big organisation there are lots of them, so you want to be able to update over the network. Updates are tested and bundled with any whitelist updates required. It's the perfect environment for whitelisting.
I'm curious why you think it won't work on a POS with remote updates?
I attended a conference on XML back in roughly 2004. A police technical architect was describing the ANPR system. He pointed out that the current deployments of the time were entirely local and not joined up nationally - but went on to say that it wasn't a very big step to do this, allowing the tracking of vehicle movements on a national scale. He looked embarrassed and uncomfortable as he said this.
I got the very strong impression at the time that he was trying to give a warning on where this technology was heading.
I'm no expert on id-based encryption, although I can just about understand how it works. It has some attractive properties as well as some serious downsides.
* An encryptor can pick a public key at random for a recipient known to the decrypting authority.
* No prior arrangement is required except for knowledge of the public parameters of the authority, and a recipient to send a message to.
* The private key of the recipient can be calculated at any time by the decrypting authority.
* The recipient must authenticate to the decrypting authority to receive the private key for the sender-chosen public key.
* All messages in the past and in the future can always be decrypted by the decrypting authority at any time.
* You have to trust this authority absolutely.
The fact that the private key can be calculated from the public key and the master secrets is actually a pro as well as a con. This is what lets the sender choose a public key of their choosing with no prior arrangement.
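The escrow property in the list above can be illustrated with a plain keyed KDF. This is emphatically not real identity-based encryption (that needs pairing-based schemes such as Boneh-Franklin); it only shows why a single master secret lets the authority derive any recipient's private key at any time:

```python
import hashlib
import hmac

# Held only by the decrypting authority; all private keys follow from it.
MASTER_SECRET = b"authority-master-secret"   # hypothetical value

def derive_private_key(identity: bytes) -> bytes:
    # The authority can compute this for ANY identity, past or future,
    # on demand -- which is exactly the escrow property listed above.
    return hmac.new(MASTER_SECRET, identity, hashlib.sha256).digest()

# A recipient authenticates to the authority and is handed their key;
# no prior arrangement between sender and authority is needed.
alice_key = derive_private_key(b"alice@example.com")
print(len(alice_key))  # -> 32
```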
I've seen this work quite well in one setting: payment messages from secure PIN entry devices to the payment processor. In this case, the payment processor can decrypt all payment messages at any time, but each message is sent using a different key for each transaction, chosen by the low-power PIN entry device and requiring no interaction between it and the processor.
On reflection, it's probably not a good candidate for inclusion in a protocol that would replace TLS. I can't really see how it provides anything useful in that setting. Still, it was just an example of some of the cool ideas being realised in more modern cryptography.
Well, I can't really make out what you're proposing here.
As far as I can see, the client side has three secrets to maintain - the GUID, master password and salt. If the GUID is unique to a computer, your accounts only work from a single machine, and if you lose the GUID then you lose access to all your accounts. Correct?
The nonce is a "number used once" - i.e. randomly generated for each session in a cryptographically sound way.... so how do the server and client negotiate the nonce for each session? Does one pick it and encrypt it to send to the other? Do they both participate in picking it? Do they use something like Diffie-Hellman to arrive at the value?
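For comparison, here is the Diffie-Hellman option in toy form, showing how both sides can contribute to a value that is never transmitted (the tiny prime is purely illustrative; real systems use large standardized groups):

```python
import secrets

# Toy Diffie-Hellman.  p and g are public parameters; the prime is
# far too small for real use and is here only to show the mechanics.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret

A = pow(g, a, p)   # client sends A over the wire
B = pow(g, b, p)   # server sends B over the wire

# Each side combines its own secret with the other's public value and
# arrives at the same shared number, without ever sending it.
shared_client = pow(B, a, p)
shared_server = pow(A, b, p)
print(shared_client == shared_server)  # -> True
```

Note that plain DH like this still needs authentication on top, or a man in the middle can run one exchange with each side.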
I really don't understand your point that changing the salt equals changing your logins without affecting your password. Do you mean that if I wanted to lose access to all my accounts everywhere and begin again, I wouldn't have to change my password?
And... how do you know you're talking to the right server in the first place? I don't see any server authentication at all in your proposal.
That's enough for now. The one thing I've learned from studying protocols is that they're really, really hard to get right, and not because the people creating them are dumb or malicious. It may well be time to start creating a new protocol to replace TLS eventually, using what we now know about trust, authenticated encryption, protecting the handshake and side-channel attacks. And possibly using some new techniques in there, like identity-based encryption...
Go for it.
Don't worry - you won't get anywhere close, but I guarantee you will learn a lot.
Start by trying to define what you are protecting from whom, and how two arbitrary endpoints who have never met can know they are talking to each other and not a man in the middle.
This is a really good point. The inefficiency of physical ballots requiring large numbers of people to participate is a security feature!
A belief in global warming is entirely compatible with having a beachfront house. The problem is that it is slow-moving but inexorable.
Personally, I'm with the vast, vast majority of scientists who claim it's real and extremely dangerous. From what I've seen of the human race, we won't do anything until we get badly burned.
I guess everyone will know for sure one way or the other in a few decades. I just hope we can live with it.
I thought we already covered this in the Linux rdrand story. It's called unauditable because it whitens the raw entropy output using on-chip encryption, making even quite non-random source data appear to be random. It is not called unauditable because it's a black-box design; the paper states that the design is very well known.
The attack described in this paper is to modify both the entropy source output "c" and the post-processing encryption key "K", undetectably setting a fraction of them to constant bit values. This weakens the effective random number generation to some chosen n bits of entropy, instead of 128 bits. But because the AES encryption post-processing stage does a very good job of making its output appear random, it will still pass random number tests.
If we had access to the raw entropy source, we could see that it was not providing nearly enough entropy to the encryption post-processing stage.
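The effect is easy to demonstrate with a stand-in whitener. In this sketch SHA-256 plays the role of the on-chip AES stage (a hypothetical simplification to stay within the stdlib), and the "entropy source" secretly delivers only 8 bits of entropy; a naive statistical test still passes:

```python
import hashlib

# Hypothetical backdoored generator: the entropy source is stuck at
# one of only 256 values (8 bits of entropy) and the post-processing
# key is a constant chosen by the attacker.  SHA-256 stands in for
# the on-chip AES whitening stage.
FIXED_KEY = b"constant-bits-set-by-the-attacker"

def backdoored_rand(seed_byte: int, counter: int) -> bytes:
    weak_entropy = bytes([seed_byte])          # the only "real" entropy
    msg = FIXED_KEY + weak_entropy + counter.to_bytes(8, "big")
    return hashlib.sha256(msg).digest()

# 64 KiB of output derived from a single 8-bit "entropy" sample.
stream = b"".join(backdoored_rand(0x42, i) for i in range(2048))

# A naive bit-balance test passes, even though anyone who knows
# FIXED_KEY can reproduce the stream after guessing just 8 bits.
ones = sum(bin(b).count("1") for b in stream)
print(abs(ones / (len(stream) * 8) - 0.5) < 0.01)  # -> True
```

Only inspection of the raw, pre-whitening source would reveal how little entropy is actually going in, which is the paper's point.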
Interesting. I once managed a very bright and young developer, whose coding was exceptional. He was very often (but not always) right. But he was also rude and completely lacking in any social graces. And it wasn't enough that he was right - he also made everyone else feel stupid and frustrated.
I had to find solo projects for him, as the rest of the team ended up flat out refusing to work with him - and I didn't blame them.
I believe one of the issues with this instruction as a source of random numbers is that it whitens the output, with no access to the raw entropy data. Any physical process that acts as an entropy source will have some (possibly small) biases; its output won't be perfectly random, and will deviate from randomness in particular, measurable ways.
Those raw deviations can be audited to confirm that the output conforms to the physical processes described.
If the instruction whitens the output through some algorithmic transform (e.g. hashing) to give apparently random numbers as output, there is no way to distinguish that from say encrypting a counter with a secret key - whose output will also appear to be random - but is trivially crackable if you know the secret key.
So it becomes an exercise in trust in Intel, rather than something which can be independently verified. There was a good comment on the cryptography mailing list about this: that it would be better to have hardware entropy sources, leaving the final steps of random number generation to software.
I guess the point the OP was making is that the remaining crime may become more violent. But I agree that a lot of opportunistic crime would essentially disappear.
If you want to cite human rights, your personal right to travel doesn't trump my right to be reasonably safe from hurtling lumps of metal driven by untrained morons.
It's bad enough as it is - so bring on the self driving cars...