
Comment Re:Wait a minute... (Score 3, Insightful) 327

Why don't you have him just sign something with that public key signature rather than divulging the private key to the world?

You're right, that's a better idea. He can sign something with the EK rather than publishing the private key. It accomplishes the same thing but maybe causes less disruption to the TPM world.
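For concreteness, here is a minimal sketch of how anyone could check such a signature against the public EK using Python's cryptography package. The file names are hypothetical placeholders, and this assumes a signature with the EK can be produced at all:

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical inputs: the public EK (PUBEK), a verifier-chosen challenge,
# and a signature allegedly produced with the extracted private EK.
pubek = serialization.load_pem_public_key(open("pubek.pem", "rb").read())
challenge = open("challenge.bin", "rb").read()
signature = open("signature.bin", "rb").read()

# verify() raises InvalidSignature unless the signature was made with the
# private key matching PUBEK -- i.e. unless he really holds the private EK.
pubek.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
print("signature verifies against PUBEK")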

Comment CHALLENGE TO TARNOVSKY (Score 4, Insightful) 327

I've been reading about this hack for days, but something seems fishy. Some of the earlier reports had him hacking the SLE 66 CL processor chip which is embedded in the TPM, not the TPM itself. This article also describes him as having to work with many copies of the chip to discover its secrets, yet describes those chips as inexpensive ones from China. The problem is that Infineon is a German company, and I don't think you can get Infineon TPMs cheaply from China. Putting this together, it's not clear to me that he has truly hacked an Infineon TPM. He may have hacked a similar chip and assumed that the same attack would work on the TPM.

However, there is a way for him to easily prove that he has done what he said. Every Infineon TPM comes with an RSA secret key embedded in it, called the Endorsement Key or EK. This key is designed to be kept secret and never revealed off-chip, not to the computer owner or anyone. And Infineon TPMs also come with an X.509 certificate on the public part of the EK (PUBEK), issued by Infineon. If Tarnovsky has really hacked an Infineon TPM and is able to extract keys, he should be able to extract and publish the private part of the EK (PRIVEK), along with the certificate by Infineon on that key. The mere publication of these two pieces of data (PRIVEK and Infineon-signed X.509 cert on PUBEK) will prove that his claim is true.
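A rough sketch of the check anyone could run on the published data (file names are hypothetical, a real EK certificate would likely be DER-encoded, and validating the chain up to Infineon's CA is omitted here):

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Hypothetical artifacts: the Infineon-issued EK certificate and the
# extracted private EK as published by the researcher.
cert = x509.load_pem_x509_certificate(open("ek_cert.pem", "rb").read())
privek = serialization.load_pem_private_key(
    open("privek.pem", "rb").read(), password=None)

# The published private key must correspond to the public key that
# Infineon certified (same RSA modulus and public exponent).
assert privek.public_key().public_numbers() == cert.public_key().public_numbers()
print("PRIVEK matches the certified PUBEK; cert issued by:",
      cert.issuer.rfc4514_string())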

Comment Avatar 3D at home? (Score 1) 218

Wouldn't it be great if somebody created a cam capture of Avatar 3D with one camera looking through the left lens of the glasses and the other camera looking through the right lens? Then they could package the two streams together in some format, and people could watch them on existing 3D monitors that use glasses. I looked at some movie sites and they have Avatar "telesyncs" but no 3D versions, too bad. I wonder if any of the 3D TVs at CES will be showing Avatar; that would be good too.

Programming

Haskell 2010 Announced 173

paltemalte writes "Simon Marlow has posted an announcement of Haskell 2010, a new revision of the Haskell purely functional programming language. Good news for everyone interested in SMP and concurrency programming."

Comment Re:Great for Spinal Cord Injury but... (Score 1) 42

Actually, although ALS kills motor neurons in the spinal cord, these cells extend from there to the muscles. As the nerves begin to fail, they first withdraw from the muscles; they die from the muscle end back to the central location in the spinal cord. Once the nerves die, the muscles atrophy and eventually shrink away to almost nothing. However, I believe that electrical stimulation still works to make the muscles contract. E-stim can maintain muscles in ALS, but normally there is no point, since there will be no more nerves connecting to them. With this new technology, though, it may become possible to make muscles contract electrically, controlled by sensors in the brain.

Comment Re:Dr. Stephen Hawking (Score 1) 149

Keep in mind that Hawking lives in England, not the United States, so the FDA has no jurisdiction over him. In practice the British health service may largely follow FDA recommendations, so the treatment is not likely to be available there anytime soon either. But it is certainly possible for him to travel to continental Europe or even to Asia to get treatment if he wants it. There are clinics in Germany, Mexico and China, at least, that are doing experimental stem cell treatments for ALS and similar diseases, many with rather extravagant claims of improvement but none with patients who are walking around cured.

Comment Re:Backend mining (Score 1) 227

How can you trust that the operating system image you are running is what you want to be running? Suppose you generate a CentOS image with your applications on it and give it to a cloud provider. You save a SHA-1 hash of the image to detect tampering. When the image is booted by the cloud, is there any way for the virtualized operating system to verify that it is running from an image that matches the original hash value? I don't think there is a way to do that now. This means a cloud provider could tamper with your images in ways that are undetectable to you. How much can you trust the calculations of your image now?

Great question, to which there is an answer, but it is an active area of research. The answer is Trusted Computing. TC provides exactly this ability: knowing the hash of the software running on a remote machine. The TPM chip validates the hash as the software loads and produces cryptographic signatures over that data. Software that implements this kind of functionality includes Oslo, Trusted Grub, TBoot, and patches for Xen and Linux to make them use the TPM. Using these tools, someone could put together a cloud storage provider that not only supplies TPM signatures on OS images; the design could even keep system operators from inspecting encrypted OS data, and the TPM could validate that as well.
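For comparison, the owner-side check mentioned in the question is easy to write, but it only covers the image at rest; verifying what actually got booted is exactly what the TPM-based attestation above addresses. A minimal sketch of the at-rest check (file name is a hypothetical placeholder):

import hashlib

# Hash the OS image before handing it to the cloud provider, and record
# the digest somewhere the provider cannot touch.
def image_digest(path, algo="sha1"):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

recorded = image_digest("centos-appliance.img")
# Later, re-hashing the image and comparing digests detects tampering at
# rest -- but says nothing about what the provider actually booted.
assert image_digest("centos-appliance.img") == recorded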

Unfortunately, Richard Stallman and the FSF have demonized this technology to the point that hardly anyone is working on it for fear of being tarred with the "Treacherous Computing" brush. They refuse to recognize the value of being able to prove to others what software you are running. Apparently they are afraid that people will choose to run software that RMS does not approve of. They want to keep people from having the ability to make these kinds of attestations "for their own good". Of course, taking away choice under the guise of making people freer is the standard modus operandi for the FSF, as in their promotion of the GPL over less restrictive licenses like BSD. So we should not be surprised at their ideological desire to suppress research on a technology that would give people entirely new kinds of choices and abilities in the software they run, to be able for the first time to make credible commitments to third parties about how software will behave.

Comment Clinical trials on chocolate (Score 1) 158

For chocolate lovers who don't fit the demographics: peruse this list of ongoing clinical trials, and you might get lucky:

http://clinicaltrials.gov/ct2/results?term=chocolate

I think the article describes this one, FLAVO, which compares flavonoid-enhanced chocolate with unenhanced:

http://clinicaltrials.gov/ct2/show/NCT00677599?term=chocolate&rank=18

For study subjects: "Flavonoid compounds from cocoa (including epicatechin) and soy to be consumed for 365days in the experimental intervention (versus placebo consumption). 27g chocolate bar the vehicle for flavonoid enrichment."

For controls: "27g placebo chocolate bar to be consumed for 365 days."

27g is about 1 oz. Typical commercial chocolate bars are maybe 1.5 oz.

Comment What low carbon prices mean (Score 1) 425

I was hoping to see some logic in this thread, but too bad. I've often seen this supposed failure of the EU carbon markets cited, without anyone ever pointing out the obvious implications.

So carbon emission permit prices were very low. What does that mean? It means there was little demand for carbon permits, right? Too much supply, too little demand. And why is that? It means that emitters were already able to meet their emission targets without using the permits much. It means there were plenty of permits available, more than were needed to meet the targets.

It all points to the same thing: the caps were high, so that it was easy to meet them.

That's not necessarily bad! You're phasing in the system, and you don't want it to be too disruptive at first. So you begin by setting caps that are generally easy to meet, then gradually tighten up. That's how government always does these things.

Then one thing that happened was that the world economy slowed. This caused production to decline, and therefore carbon output declined, which also reduced demand for the permits. Again, that's not bad! It means the carbon emission targets were met without a great deal of pain, or at least without adding extra pain to what was already going on due to the recession. It's a good thing that a cap and trade system has this kind of flexibility: when the economy slows down, its bite decreases; if the economy overheats and starts growing rapidly, the permits become much more expensive. It tends to smooth out economic fluctuations.

The bottom line is that a cap and trade system allows the government to set the desired carbon emission level. How that level is met is up to the market. Markets are good at finding the least painful and expensive ways to meet resource constraints, and that is exactly what they have done. It often turns out that initial reductions in resources (whether oil inputs or carbon outputs) can be met surprisingly cheaply, because of the economic notion of marginal production. This is the least efficient and most expensive production which still barely makes economic sense to operate. As costs rise, it is the marginal production which is cut first, not the average production. It means that the production which is taken off-line is the production you cared about the least, the most inefficient and wasteful. It means you can reduce your costs without reducing your profits much. This probably goes a long way towards explaining why carbon prices ended up much lower than people predicted.
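A toy numeric illustration of that marginal-production point (the figures are invented): cutting the most expensive production first reduces costs much faster than it reduces output.

# Hypothetical plants: (output units, cost per unit)
plants = [
    (100, 10),   # efficient base production
    (50, 20),    # mid-cost production
    (20, 45),    # marginal production: expensive, cut first
]

def totals(plist):
    output = sum(q for q, _ in plist)
    cost = sum(q * c for q, c in plist)
    return output, cost

before = totals(plants)
after = totals(sorted(plants, key=lambda p: p[1])[:-1])  # idle the costliest plant

print("output cut: %.0f%%" % (100 * (1 - after[0] / before[0])))  # ~12% less output
print("cost cut:   %.0f%%" % (100 * (1 - after[1] / before[1])))  # ~31% less cost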

Keep in mind too that political opponents of these measures will have exaggerated the likely consequences and how painful the caps would be to deal with. They would have been the last people to explain the points I have made here about how much easier than expected it might turn out to be to meet the caps. This too may have led to unrealistic expectations for carbon market prices.

In the end, low carbon market prices are a great sign. They mean that the carbon caps are holding and reductions in carbon are happening, without much negative impact on the economy. We should all hope that our U.S. markets encounter the same fortunate outcome.

Comment Are carbon emissions from cars going to be taxed? (Score 1) 425

In all the thousands of words of discussion I have read on this issue, I haven't seen one mention of the question of whether gasoline use will be affected by the carbon caps. That seems strange.

"Motor vehicles are responsible for almost a quarter of annual US emissions of carbon dioxide"

Comment Clarification on the technology (Score 4, Informative) 199

A few misconceptions continue to circulate here; let me try to shed some light.

First, the encryption system is apparently not practical in its current form. Maybe improvements will someday make it practical, maybe not. It is still a major theoretical breakthrough, because fully homomorphic encryption had often been thought to be impossible. It has been a long-sought goal in cryptography, and it is remarkable to see it finally achieved. So in practice, nobody is going to be doing spam filtering, income tax returns, or anonymous Google searches this way any time soon.

Second, several people have gotten tripped up over an apparent weakness: if you can calculate E(X-Y) you can get an encryption of 0; if you can calculate E(X/Y) you can get an encryption of 1; and from these you could get other encryptions and potentially break the system. This idea fails for two reasons: first, it is a public-key system, so you don't need to go through all this rigamarole to get encryptions of 0, 1, or anything. In public key cryptography, anyone can encrypt data under a given key, without knowing any secrets. So it is already possible to get encryptions of known values, even without the special homomorphic properties. Second, in order for public key systems to be secure, they need to have a randomization property. In randomized encryption, there are multiple ciphertext values that encrypt the same plaintext. Basically, the encryption algorithm takes both the plaintext and a random value, and produces the ciphertext. Each different possible random value causes the same plaintext to go to a different ciphertext. The decryption algorithm nevertheless can take any of these different ciphertext values and produce the same plaintext.

This may be confusing because the best-known public key encryption system, RSA, is not randomized. At the time it was invented, this aspect was not well understood; shortly afterwards it became clear how important randomization is. Other encryption systems like ElGamal do use randomization, and RSA was adapted to allow randomization via a "random padding" layer, standardized as PKCS #1. This added randomness allows RSA to be used securely.
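To see randomized encryption in action, here is a small sketch using RSA with OAEP padding via Python's cryptography package: the same plaintext encrypts to different ciphertexts each time, yet both decrypt to the same value.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Encrypting the same plaintext twice yields different ciphertexts...
c1 = pub.encrypt(b"attack at dawn", oaep)
c2 = pub.encrypt(b"attack at dawn", oaep)
assert c1 != c2

# ...but both decrypt to the same plaintext, which is why
# guess-encrypt-and-compare attacks fail against a randomized scheme.
assert key.decrypt(c1, oaep) == key.decrypt(c2, oaep) == b"attack at dawn"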

One other point is that people are getting hung up on what "fully" homomorphic encryption covers. Exactly what operations can you do? I think the best way to think of it is to go down to the binary level. We know that in our computers, at the lowest level, everything is 1's and 0's. These get combined with elementary logical operations like AND, OR, NOT, XOR, and so on. Using these primitive operations, all the complexity of modern programs can be built up.

In the case of the homomorphic encryption, it is probably best to think of the values being encrypted in binary form, as encryptions of 1's and 0's. Keep in mind the point above about randomized encryption: all the encryptions of 1 look different, as do all the encryptions of 0. You can't tell whether a given value encrypts a 1 or a 0. Given these encrypted values, you can compute AND, OR, XOR, NOT and so on with these values, and get new encrypted values as the answers. You don't know the value of the outputs, they are encrypted. Only the holder of the private key, who originally encrypted the data, could decrypt the output. But you can continue to work with these output values, do more calculations with them, and so on.

Let me give an example of how you could do an equality comparison. Suppose you have two encrypted values and want to determine if they are the same. Recall that we are working in binary, so you actually have two sequences of encrypted bits; some are encrypted 1's and some are encrypted 0's, but you can't tell which. So the first thing you compute is the XOR of corresponding bits in the two values: XOR the 1st bits of each value; XOR the 2nd bits of each value, and so on. Now if the values are equal, the results are all encryptions of 0's. If the values are different, some of the results will be encryptions of 1's. But again, you can't tell them apart. So now you compute the OR of all of these result bits. If all the result bits are 0's, the output will be an encryption of 0; if any of the result bits were 1's, the output will be an encryption of 1. So this output bit tells whether the original values were the same: if it is an encryption of 0, they were the same; if an encryption of 1, they were not. Once again, you can't tell which it is, but it holds the answer to your equality question.
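Here is a toy sketch of that equality circuit. The "ciphertexts" below are not real encryption at all; they are just wrappers that hide a bit behind a random nonce, so the gate-level structure of the computation is visible.

import secrets

class Enc:
    """Pretend ciphertext: one hidden bit plus a random nonce, so two
    'encryptions' of the same bit look different (mimicking randomization)."""
    def __init__(self, bit):
        self._bit = bit
        self._nonce = secrets.randbits(64)

def xor(a, b):          # homomorphic XOR gate
    return Enc(a._bit ^ b._bit)

def or_(a, b):          # homomorphic OR gate
    return Enc(a._bit | b._bit)

def encrypt_bits(value, width=8):
    return [Enc((value >> i) & 1) for i in range(width)]

def equal(xs, ys):
    """XOR corresponding bits, then OR all the results.
    The output 'encrypts' 0 exactly when the two inputs were equal."""
    diffs = [xor(a, b) for a, b in zip(xs, ys)]
    acc = diffs[0]
    for d in diffs[1:]:
        acc = or_(acc, d)
    return acc

# The server never looks inside ._bit; in a real FHE scheme only the
# holder of the private key could decrypt this final result.
result = equal(encrypt_bits(42), encrypt_bits(42))
print(result._bit)   # 0 -> the values were equal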

The point is that you work with encrypted bits: you can perform any calculation you like on them, building the equivalent of a logic circuit, and all the way through, every intermediate and final result is an encrypted bit too. You don't learn anything about any of the values you are working with; a 1 looks the same as a 0 to you. But you can compute any circuit you want. In the end you send the output back to the person who owns the private key, and only they can decrypt it and learn the final result.

Comment Re:OK, I don't understand (Score 1) 199

It doesn't prevent equality tests in a single encrypted domain. But in a single encrypted domain, two ciphertexts for the same plaintext (i.e. including an extra block for obfuscation/resolution is cheating) are the same anyways.

No, they are not. This is what is called randomized encryption, and in fact is the only way to make public key encryption secure. Otherwise you could do as you say, guess the plaintext for a particular ciphertext, encrypt your guess yourself (remember in public key cryptography anyone can encrypt data), and compare it with the ciphertext. Systems which allow such guessing are totally insecure!

So of course this new scheme does not allow guess-encrypt-and-compare attacks. No respectable author would propose such an encryption scheme today. Instead, modern public key encryption is always randomized. It means that there are multiple ciphertexts corresponding to the same plaintext.

In the homomorphic scheme, equality tests are possible but the result is encrypted, and only the person who provided you the encrypted data (or more precisely, the person who holds the private keys under which the data is encrypted) can decrypt the result and learn the answer.

Comment Re:But what if it took... a TRILLION times longer? (Score 3, Informative) 199

I read the paper and my guess is that a TRILLION is actually an understatement. It looks to me like it might be almost INFINITELY slower. In other words, completely impractical and only of theoretical value.

However, now that the first step has been taken, it's possible that someone will come up with an improvement that makes the idea practical someday.

Comment Re:OK, I don't understand (Score 2, Informative) 199

What are the operations for which this is homomorphic?
It has to be quite limited. Otherwise for example, lets suppose I have an integer (encrypted of course) and I have comparison and addition/subtraction and multiply/divide.

I can very easily find the encrypted values of both 0 (a-a for any a) and 1 (a/a)

The article neglected to mention that the underlying encryption system is randomized public key encryption. This means (A) you can easily discover encryptions of 0, encryptions of 1, and encryptions of anything else, because it is a public key system and you can encrypt anything you like.

It also means (B) this won't help you with decryption because every encryption of 0 looks different. So knowing some encryptions of 0 will not let you recognize whether a given encrypted value is an encryption of 0, of 1, or of anything else.

And, I don't see how you can prevent equality tests in the encrypted domain. You might have to calculate a Kernel but surely there is no way to prevent that.

You certainly can do equality tests in the encrypted domain. It's just that the result of the equality test is encrypted; for example, it is an encryption of 0 or an encryption of 1, but you have no idea which. Only the client who supplied the encrypted data (more precisely, whoever holds the private key for the public key encryption scheme) can decrypt the result of your equality test, or of any other calculations you do on encrypted data.
