I fully agree. I like the GTK+ API and I still use it because shifting my apps to another toolkit would be costly. But I'm losing patience:
GTK+ development has become an unprofessional mess. Functions get deprecated even between minor version changes: you develop your app against version 3.xx and distribute it. Then people move to 3.yy (where yy > xx) and, bang, your app no longer works because someone decided to *remove* a function from GTK+ without any consideration for the existing apps out there. Sometimes the fix involves a new function that does not exist in the previous version of the library, so you can't even find a single fix that works with all versions from 3.xx up. You just add some ugly preprocessor macros to your code to deal with the different GTK+ versions at the source level...
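In C this pattern is usually a GTK_CHECK_VERSION preprocessor guard. Here is the same idea as a runtime sketch in Python, using the real set_margin_left/set_margin_start rename from GTK+ 3.12 as the example; the FakeWidget class is a hypothetical stand-in so the sketch runs without GTK+ installed:

```python
# Sketch of the version-gating pattern described above, translated from
# C preprocessor guards into a runtime check. GTK_VERSION would be
# queried from the installed library; here it is hard-coded.

GTK_VERSION = (3, 14)  # assumption: version reported by the linked library

def set_widget_margin(widget, margin):
    """Call whichever margin setter this GTK+ version provides."""
    if GTK_VERSION >= (3, 12):
        widget.set_margin_start(margin)   # newer API (3.12+)
    else:
        widget.set_margin_left(margin)    # older API, deprecated in 3.12

class FakeWidget:
    """Hypothetical stand-in for a GTK+ widget, so the sketch is runnable."""
    def __init__(self):
        self.margin = None
    def set_margin_start(self, m):
        self.margin = m
    def set_margin_left(self, m):
        self.margin = m

w = FakeWidget()
set_widget_margin(w, 8)
```

Every call site then has to route through a shim like this (or through #if guards in C), which is exactly the ugliness being complained about.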
With the Safe Harbor agreement, American companies basically "promise" to follow some privacy rules that are compatible with European values. But for such an approach to be effective, someone has to verify that the "promises" are kept and impose sanctions when they are not. That someone is -- in theory -- the FTC.
The problem with Safe Harbor is that it has been very weakly enforced. In the first decade after it was created, there was no real enforcement action that I'm aware of. This gives the impression that Safe Harbor is pretty toothless. The FTC only recently (2014) began to enforce the framework, because the Europeans threatened to abandon it.
My experience as an end-user in a research project:
I've tried to install OpenStack on a small group of 4 machines (a controller, a network manager and two compute nodes). It was a real mess to install. The documentation contains omissions and mistakes. You need to write your own shell scripts to get the work done (and redone). Understanding what went wrong from the cryptic Python debug messages is like banging your head against a wall. The only way I finally managed to test things was to scale back to a "one-node" system (everything on the same machine) and use DevStack. That works great, but it's really far from a "cloud". I guess you need to be HP or Rackspace to get this working well.
Contrast that with OpenNebula. This platform gets much less hype but works much better. Even when you hit a bump in the road, you can actually understand the logs, and even debug things yourself. I got a 4-node system working with all storage on iSCSI, and I can add more compute nodes seamlessly.
Do you know what PFS is or how it works? Clearly the answer is "NO" but you should educate yourself.
The OP is right: smartcards mostly rely on symmetric algorithms. In fact, the OP never said anything about PFS, and bank cards, for example, which use RSA, do not implement PFS, because it is not needed in that context.
Do you know what a smart card is and how it works? Clearly the answer is "NO" but you should educate yourself.
Why is this even a thing? All reversible encryption (which in itself is a tautology) is searchable.
Plaintext record ID > encryption (key + salt, etc.) > ciphertext record ID. Search for the ciphertext record ID. Bring the encrypted record back from the database. Encrypted record > decryption > plaintext record.
How is this a marketable product?!
Searchable encryption is more complicated than you think. For example, if I encrypt the sentence "I like reading Slashdot" with traditional encryption, I get a block of binary data that is meaningless. Now suppose I want to check whether that block contains the word "slashdot". Your "ciphertext record ID" approach won't be of much help. You need a few tricks to do it correctly, notably adding some metadata and additional cryptographic mechanisms. To make things more complicated, you often need the encryption to be "format preserving": if you encrypt a string field you get a string field, and if you encrypt a number field you get a number field, whereas traditional cryptography outputs binary data.
Note that you may have misunderstood how encryption works if you believe that all reversible encryption is searchable. Good encryption is randomized: if you encrypt the same plaintext twice with the same key, you get two different ciphertexts (to use your analogy, you must use different salts).
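A toy demonstration of both points, using only hashing primitives from the standard library (this is NOT production crypto, just an illustration of why ciphertext-matching fails and what kind of extra metadata -- a keyed "blind index" -- searchable schemes add instead):

```python
# Toy illustration: randomized encryption defeats naive ciphertext
# search; a deterministic keyed tag stored next to the record restores
# equality search. NOT a real cipher -- illustration only.
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)

def toy_encrypt(plaintext: bytes) -> bytes:
    """Randomized toy cipher: XOR with a SHA-256 keystream under a fresh nonce."""
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(KEY + nonce).digest()
    assert len(plaintext) <= len(stream)  # toy: short messages only
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    return nonce + ct

c1 = toy_encrypt(b"slashdot")
c2 = toy_encrypt(b"slashdot")
# Same plaintext, same key -> different ciphertexts, so an exact-match
# search on the stored ciphertext finds nothing.
print(c1 != c2)  # True

def blind_index(word: bytes) -> str:
    """Deterministic keyed tag stored alongside the record for equality search."""
    return hmac.new(KEY, word, hashlib.sha256).hexdigest()

# The blind index IS equal for equal words, so the server can match it
# without ever seeing the plaintext.
print(blind_index(b"slashdot") == blind_index(b"slashdot"))  # True
```

Real searchable-encryption products layer more machinery on top (range queries, format preservation, leakage control), but this is the basic shape of the trick.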
It's called "searchable" encryption. It already exists in a few commercial products.
In practice hashing is often much less secure than encryption for passwords. The devil is in the details.
Here it seems that Adobe made some poor design choices in their encryption scheme. Yet, despite these flaws, and assuming the encryption key is not compromised, they might still be better off with their encryption than with a poor hash mechanism such as the one used by Sony, for example, and revealed in the PlayStation hack by Anonymous.
In general, if the encryption key is not compromised, encryption provides much more security than pure hashing, or even salted hashing. The reason is that with encryption, the security of the password depends on the strength of the secret key, while with hashing it depends on the strength of the password. This is a significant difference. So if your password is 4 characters long, even the best hash algorithm will fail to protect it from a brute-force attack. If that same password is encrypted, however, you need to brute-force the key, which would take centuries assuming the key is long enough.
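To put rough numbers on that difference (the guess rate is an assumption, picked only for scale):

```python
# Back-of-the-envelope brute-force comparison: a 4-character password
# vs. a 128-bit encryption key.
pw_space = 95 ** 4        # 4 printable-ASCII characters: ~81 million guesses
key_space = 2 ** 128      # a 128-bit key

rate = 10 ** 9            # assumption: 1 billion guesses/second (modest GPU rig)
pw_seconds = pw_space / rate                      # a fraction of a second
key_years = key_space / rate / (3600 * 24 * 365)  # ~1e22 years
print(pw_seconds, key_years)
```

The password space falls in well under a second; the key space outlasts the universe by many orders of magnitude. That asymmetry is the whole argument for encrypting (or keyed-hashing) rather than plain hashing.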
To be more precise:
1) Pure hashing (applying SHA1 alone for example) is almost the same as having no security at all.
2) Hashing with a salt is a bit better, but still won't resist long given the computational power offered by GPUs and cloud computing.
3) An iterated hash function with a salt (see PBKDF2) is much better and buys you some security, but it is still vulnerable to brute-force attacks using GPUs and pooled cloud resources.
4) A "sequential memory-hard" hash function (with salt and iteration) such as "scrypt" is pretty safe today.
Unfortunately in reality most companies use either (1) or (2)...
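The four tiers above can all be expressed with Python's standard library (the iteration and cost parameters here are illustrative, not recommendations):

```python
# Tiers (1)-(4) of password protection, weakest to strongest.
import hashlib
import os

password = b"correct horse"
salt = os.urandom(16)

# (1) Pure hash: fast to compute, fast to brute-force, rainbow-table friendly.
h1 = hashlib.sha1(password).hexdigest()

# (2) Salted hash: defeats precomputed tables, but still one cheap hash per guess.
h2 = hashlib.sha1(salt + password).hexdigest()

# (3) Iterated salted hash (PBKDF2): every guess now costs many iterations.
h3 = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

# (4) Memory-hard hash (scrypt): expensive in RAM as well as CPU,
#     which hurts massively parallel GPU attacks.
h4 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```

Note how little extra code separates (1) from (4); the gap is purely a matter of choosing the right primitive and parameters.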
The drawback of encryption is that you need to keep your key safe. Once the key is compromised, you're toast. This means you should not put the key on the same system that hosts the password database (it may sound obvious, but I've seen it done). It requires putting the key in an HSM (Hardware Security Module) or on a separate, highly secured server, distinct from the password database.
Of course, if you can keep a key secure, the best option is to use a *keyed* hash function (an iterated, salted version of HMAC, for example), getting the best of both worlds...
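A minimal sketch of that combined scheme: an iterated, salted hash wrapped with a keyed HMAC (sometimes called a "pepper"). In a real deployment the key would come from the HSM or the separate server, never be generated next to the database as it is here:

```python
# Sketch: salted iterated hash (PBKDF2) + server-side keyed HMAC.
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)   # assumption: in reality fetched from an HSM

def protect(password: bytes, salt: bytes) -> bytes:
    # Stretch the password first (slow, salted)...
    stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
    # ...then bind it to the server-side secret key.
    return hmac.new(SERVER_KEY, stretched, hashlib.sha256).digest()

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(protect(password, salt), stored)

salt = os.urandom(16)
record = protect(b"hunter2", salt)
print(verify(b"hunter2", salt, record))   # True
print(verify(b"wrong", salt, record))     # False
```

An attacker who steals the database alone gets nothing brute-forceable without the key; an attacker who steals the key alone still faces the iterated salted hash.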
The US has a lot to offer that Europe doesn't have, but when it comes to privacy, I think Europeans do have a strong lead:
* For a start Europeans have real privacy legislation (it's called Directive 95/46/EC). US Internet corporations will fight with big money against any similar initiative in the US.
* Each European Member State has a data protection authority that enforces privacy legislation, monitors the use of personal information and tries to educate the public. Some of these authorities even have inspection powers (see, for example, what they did over Street View's interception of Wifi data). It has a slightly bureaucratic feel to it, but it works.
* Culturally in Europe, there's always a tendency to find a balance between each party’s legitimate interests and rights. Even in the workplace, people's right to privacy can't fully be obliterated by corporate needs.
The only caveat is that we are living more and more in a global world. My employer might be Chinese, my datacenters might be in the US, and my job might be in Europe. Which law applies then? That's the challenge!
I know this will surprise many Slashdot readers, but using your fingerprint as described by the poster, for the purpose of clocking in and out of work, would be illegal in many countries across Europe (with the possible exception of the UK). In France, for example, you can actually be fined by the data protection authority for doing so.
It's true that most of these devices don't store an image of your fingerprint but rather a "template": a description of some distinctive features of your fingerprint. But that doesn't change the problem.
Indeed, many data protection authorities across the EU consider that biometrics pose several security and data protection issues and must therefore be used with caution. Fingerprint biometrics are of special concern, in particular when the biometric data (templates) are stored in a central database. The big problem with fingerprints is that we leave them everywhere, on every object we touch. Someone can pick up your fingerprint and test it against the templates in the database. (Sounds crazy or technically impossible? It's much easier than you think: I've tested it myself; that's part of my job.) There are other issues with fingerprint biometrics that I won't detail here.
In the end, data protection authorities in the EU consider that the use of a central fingerprint database is excessive if your only objective is clocking people in and out. Instead, they encourage the use of a smartcard to store the biometric data: you show your finger to the biometric reader and it gets compared with the data stored on the smartcard. This solution offers the same benefits in terms of security, but you keep control of your biometric data.
In practice, smartcards are often used as tamperproof devices representing a third party, such as a bank. In France, for example, credit card smartcards carry the bank's private key (for a Guillou-Quisquater RSA variant) as well as some additional secret information.
This information is not readable by any external device but is used internally for cryptographic computations.