
Crypto Guru Bruce Schneier Answers

Most of the questions we got for crypto guru Bruce Schneier earlier this week were pretty deep, and so are his answers. But even if you're not a crypto expert, you'll find them easy to understand, and many of Bruce's thoughts (especially on privacy and the increasing lack thereof) make interesting reading even for those of you who have no interest in crypto because you believe you have "nothing to hide." This is a *long and strong* Q&A session. Click Below to read it all.

First Bruce says, by way of introduction...

"I'd like to start by thanking people for sending in questions. I enjoyed answering all of them.

"I've written on many of these topics before, and often I will point to existing writings on my Web site or in my Crypto-Gram newsletter. This isn't to be annoying; it just seems useful to point to things I have already written. I urge anyone interested to sign up for a free subscription to Crypto-Gram. I write it monthly, and I regularly answer questions such as these, or write about topics in the news (especially ones that have been reported on badly). To subscribe, visit my Web site at"

ryanr asks:
I've heard you say many times that unless a particular crypto alg. has undergone lots of public review, it should not be considered safe. Unless possibly it's from the NSA. (Excluding, of course, the NSA stuff that is INTENTIONALLY backdoored.)

The implication there is that the NSA has applied so many resources to the crypto problems that they are as good as the rest of the cryptographers put together.

My question is: Do you really think that a private process, no matter how many resources are applied, can equal the public process?

Yes. One way of looking at a public process is as a large and distributed private process. If the NSA collected all of the academic cryptographers, gave them a clearance, and locked them in a basement somewhere, it would become a private process. The real issue is whether or not the NSA has equivalent expertise to the public academic community, and whether it can apply that expertise in an effective manner.

The NSA has a lot more cryptographic experience in the narrow fields of making and breaking algorithms. They have been doing this, and nothing else, for decades. I don't believe that they have much expertise in weird digital signature schemes, or zero-knowledge protocols, or even more bizarre electronic commerce and voting schemes, because they aren't really of practical interest. But the NSA certainly has a very strong practical interest in algorithm design and analysis, to a much greater degree than we in the public community do. And they have seen a lot more ciphers: both designs that they have proposed internally and designs in production systems that they have tried to attack.

The NSA also has the ability to target its analysis resources. The public academic community is scattershot. We work on what interests us, and we are each interested by different things. A director at the NSA has the ability to take the top ten cryptanalysts in the building and say: "You. Go into that room and don't come out until you've broken RC4. I don't care if it takes two years." That ability to direct resources at particular problems gives them an edge that we don't have.

But how much of an edge? Until recently, I would have stated unquestionably that the NSA is a decade ahead of the state of the art in cipher design and analysis. Now, I'm not so sure.

Over the past five years, there has been a lot of open research in cryptography. We have discovered many different types of attacks, and have learned a lot about how to design ciphers. The best and brightest cryptographers are staying in the open academic community, and are not being swallowed up by the NSA (or by its counterparts in other countries). There is a vibrant academic community in cryptography; people can exchange ideas, share research, and build on each other's work. We've seen attacks against the NSA-designed algorithm Skipjack -- impossible-differential cryptanalysis -- that almost certainly were not known by the NSA, and other attacks that, I believe, were not known by the NSA either. The public research community is now doing cutting-edge research in cryptography.

Now this doesn't mean we are better than they are. Certainly the NSA knows more about cryptography than the public community does. They read everything we publish, and we read nothing that they publish. Almost by definition, they know what we do. That imbalance alone will always give them an edge in knowledge. But I think that edge is closing rapidly.

And on a related topic, I don't think the recent press flap about the NSAKEY means that the NSA has a back door in Microsoft Windows; I wrote about this in Crypto-Gram. But I do think that the NSA deliberately puts back doors in products.

Sajma asks:
Your book describes a slew of interesting applications for crypto protocols, including electronic money orders, digital time-stamping, and secure multi-party computation. What are the remaining crypto problems of interest to the general public which have not been solved? (secure distribution of digital media comes to mind -- can you sell someone a music file, allow them to use the file anywhere, but make sure no one else can use it?)


randombit asks:
OK, hypothetical question. You rub a magic lamp, and a genie comes out. Specifically, a cryptographic protocol genie. He can come up with an efficient, secure protocol for any activity you want (assuming a protocol is possible, of course). What would you pick, and more importantly, why?

Two questions; one answer. We actually have all the protocols we need. It's true that I described all sorts of interesting protocols in _Applied Cryptography_. The reality is that none of them is actually useful. What is useful are the few simple primitives -- signatures, encryption, authentication -- and the different ways to mirror real-life trust models using them. These protocols are simpler, easier to understand, and more useful.
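
Those few primitives really are simple to use. As a rough sketch using Python's standard library (the key and messages here are made up for illustration), message authentication is just a keyed hash that the receiver recomputes and compares in constant time:

```python
import hashlib
import hmac

def make_tag(key: bytes, message: bytes) -> bytes:
    """Authentication primitive: a keyed hash (HMAC-SHA256) over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    """compare_digest runs in constant time, avoiding a timing side channel."""
    return hmac.compare_digest(make_tag(key, message), tag)

key = b"shared secret"           # illustrative only; real keys come from a key-setup step
msg = b"transfer $100 to Alice"
tag = make_tag(key, msg)

assert verify_tag(key, msg, tag)                              # genuine message accepted
assert not verify_tag(key, b"transfer $100 to Mallory", tag)  # forgery rejected
```

The primitive itself is the easy part; as the answer goes on to say, the insecurities live in everything wrapped around it.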

The real problem with protocols, and the thing that is the hardest to deal with, is all the non-cryptographic dressing around the core protocols. This is where the real insecurities lie. Security's worst enemy is complexity.

This might seem an odd statement, especially in the light of the many simple systems that exhibit critical security failures. It is true nonetheless. Simple failures are simple to avoid, and often simple to fix. The problem in these cases is not a lack of knowledge of how to do it right, but a refusal (or inability) to apply this knowledge. Complexity, however, is a different beast; we do not really know how to handle it. Complex systems exhibit more failures as well as more complex failures. These failures are harder to fix because the systems are more complex, and before you know it the system has become unmanageable.

Designing any software system is always a matter of weighing and reconciling different requirements: functionality, efficiency, political acceptability, security, backward compatibility, deadlines, flexibility, ease of use, and many more. The unspoken requirement is often simplicity. If the system gets too complex, it becomes too difficult and too expensive to make and maintain. Because fulfilling more of the other requirements usually involves a more complex design, many systems end up with a design that is as complex as the designers and implementers can reasonably handle. (Other systems end up with a design that is too complex to handle, and the project fails accordingly.)

Virtually all software is developed using a try-and-fix methodology. Small pieces are implemented, tested, fixed, and tested again. Several of these small pieces are combined into a larger module, and this module is tested, fixed, and tested again. The end result is software that more or less functions as expected, although we are all familiar with the high frequency of functional failures of software systems.

This process of making fairly complex systems and implementing them with a try-and-fix methodology has a devastating effect on security. The central reason is that you cannot easily test for security; security is not a functional aspect of the system. Therefore, security bugs are not detected and fixed during the development process in the same way that functional bugs are. Suppose a reasonable-sized program is developed without any testing at all during development and quality control. We feel confident in stating that the result will be a completely useless program; most likely it will not perform any of the desired functions correctly. Yet this is exactly what we get from the try-and-fix methodology with respect to security.

The only reasonable way to "test" the security of a system is to perform security reviews on it. A security review is a manual process; it is very expensive in terms of time and effort. And just as functional testing cannot prove the absence of bugs, a security review cannot show that the product is in fact secure. The more complex the system is, the harder a security evaluation becomes. A more complex system will have more security-related errors in the specification, design, and implementation. We claim that the number of errors and difficulty of the evaluation are not linear functions of the complexity, but in fact grow much faster.

For the sake of simplicity, let us assume the system has n different options, each with two possible choices. Then there are n(n-1)/2 -- about n^2/2 -- different pairs of options that could interact in unexpected ways, and 2^n different configurations altogether. Each possible interaction can lead to a security weakness, and the number of possible complex interactions that involve several options is huge. We therefore expect that the number of actual security weaknesses grows very rapidly with increasing complexity.
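
To put rough numbers on that growth (a back-of-the-envelope sketch of my own, with arbitrary option counts):

```python
# Pairwise interactions grow quadratically; full configurations grow
# exponentially, which is why exhaustive review quickly becomes hopeless.
def option_pairs(n: int) -> int:
    return n * (n - 1) // 2      # two-option interactions to check

def configurations(n: int) -> int:
    return 2 ** n                # each of n options has two choices

for n in (10, 20, 40):
    print(f"{n} options: {option_pairs(n)} pairs, {configurations(n)} configurations")
# 10 options: 45 pairs, 1024 configurations
# 40 options: 780 pairs, 1099511627776 configurations
```

Checking 780 pairwise interactions is a big job; checking a trillion configurations is out of the question.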

The increased number of possible interactions creates more work during the security evaluation. For a system with a moderate number of options, checking all the two-option interactions becomes a huge amount of work. Checking every possible configuration is effectively impossible. Thus the difficulty of performing security evaluations also grows very rapidly with increasing complexity. The combination of additional (potential) weaknesses and a more difficult security analysis unavoidably results in insecure systems.

In actual systems, the situation is not quite so bad; there are often options that are "orthogonal" in that they have no relation or interaction with each other. This occurs, for example, if the options are on different layers in the communication system, and the layers are separated by a well-defined interface that does not "show" the options on either side. For this very reason, such a separation of a system into relatively independent modules with clearly defined interfaces is a hallmark of good design. Good modularization can dramatically reduce the effective complexity of a system without the need to eliminate important features. Options within a single module can of course still have interactions that need to be analyzed, so the number of options per module should be minimized. Modularization works well when used properly, but most actual systems still include cross-dependencies where options in different modules do affect each other.

A more complex system loses on all fronts. It contains more weaknesses to start with, it is much harder to analyze, and it is much harder to implement without introducing security-critical errors in the implementation.

This increase in the number of security weaknesses interacts destructively with the weakest-link property of security: the security of the overall system is limited by the security of its weakest link. Any single weakness can destroy the security of the entire system.

Complexity not only makes it virtually impossible to create a secure system, it also makes the system extremely hard to manage. The people running the actual system typically do not have a thorough understanding of the system and the security issues involved. Configuration options should therefore be kept to a minimum, and the options should provide a very simple model to the user. Complex combinations of options are very likely to be configured erroneously, resulting in a loss of security. There are many stories throughout history that illustrate how management of complex systems is often the weakest link.

I repeat: security's worst enemy is complexity. The most serious protocol problem is how to deal with complex protocols (or how to strip them down to the bone).

Get Behind the Mule asks:
Bruce, thanks very much for making cryptography so much more accessible to us all.

You wrote in Applied Cryptography that IDEA was your "favorite" symmetric cipher at the time. Is that still true today?

It depends what you mean by "favorite." If I needed a secure symmetric algorithm for a design, and performance were not an issue, I would choose triple-DES. No other algorithm has been as well-studied, so nothing can compare in confidence.

The problem is that triple-DES is slow; on a 32-bit microprocessor it encrypts data at a rate of 108 clock cycles per byte. (You have to remember that DES was designed in the mid-1970s for discrete hardware. It is very slow on 32-bit microprocessors.) If I needed a faster algorithm, I would use Blowfish, which encrypts data at a rate of 18 clock cycles per byte. Information on Blowfish is on my Web site.

Neither is my favorite algorithm, though. Currently, my favorite algorithm is Twofish, our submission to AES. It is still too new to use operationally, but I hope it will see wide use as people analyze it and as confidence grows in its security. It's even faster than Blowfish, and (I think) much better designed. Information on Twofish is on my Web site.

Faster algorithms are more problematic. I don't really like RC4. SEAL is better, but patented by IBM. I don't care for WAKE. I would probably use one of Belgian cryptographer Joan Daemen's designs.

I don't recommend IDEA anymore, for several reasons. One, it isn't very fast; on a 32-bit microprocessor it encrypts data at a rate of 50 clock cycles per byte. Two, IDEA is patented, and the licensing terms change regularly. Three, attacks against IDEA have steadily eaten away at its security margin: IDEA has eight rounds, and the current best attack breaks 4.5 of them. There are still no attacks against the full eight-round cipher, and there is no reason to believe that any are possible. Still, since there are algorithms with much better performance, it seems imprudent to recommend IDEA.
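
Cycles-per-byte figures translate directly into throughput once you fix a clock speed. A quick sketch (the 200 MHz clock is an arbitrary example of mine, not a figure from the interview):

```python
def throughput_mb_per_sec(cycles_per_byte: float, clock_mhz: float) -> float:
    """clock_mhz * 1e6 cycles/sec divided by cycles/byte gives bytes/sec; scale to MB/s."""
    return clock_mhz * 1e6 / cycles_per_byte / 1e6

# Cycles-per-byte figures quoted above, on a hypothetical 200 MHz 32-bit CPU:
for name, cpb in [("triple-DES", 108), ("IDEA", 50), ("Blowfish", 18)]:
    print(f"{name}: ~{throughput_mb_per_sec(cpb, 200):.1f} MB/s")
```

The ratios are what matter: at any clock speed, Blowfish moves six times as much data per second as triple-DES.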

Speed comparisons of other algorithms, a detailed paper comparing the performance of the AES candidates, and a current summary of attacks against various algorithms are all available on my Web site.

Remember, though -- breaking the cryptographic algorithm is almost never the way to attack a security product. There is almost always an easier way to break the security. I've written about this extensively.

Tet asks:
Scott McNealy claims we've already fought and lost the war for personal privacy. Do you agree with him or not, and why?

One hundred years ago, everyone could have personal privacy. You and a friend could walk into an empty field, look around to see that no one else was nearby, and have a level of privacy that has forever been lost to today's technology. The framers of the Constitution never explicitly put a right to privacy into the document; it never occurred to them that it could be withheld. The ability to have a private conversation, like the ability to keep your thoughts in your head and the ability to fall to the ground when pushed, was a natural consequence of the world. When the Supreme Court found a right to privacy in the Constitution, it's because the language of the Constitution assumed its existence.

Technology has demolished that worldview. Powerful directional microphones can pick up conversations hundreds of yards away. Pinhole cameras -- now being sold over the Internet -- can hide in the smallest cracks; satellite cameras can read the time on your watch from orbit. And the Defense Department is prototyping micro-air vehicles, the size of small birds or butterflies, that can scout out enemy snipers, locate hostages in occupied buildings, or spy on just about anybody.

In the aftermath of the terrorist takeover of the Japanese embassy in Peru, news reports described audio bugs hidden in shirt buttons that allowed police to pinpoint everyone's location. Van Eck devices can read what's on your computer monitor from halfway down the street. (I heard that the CIA demonstrated this for Scott McNealy at Sun; they captured his password from a van in the company's parking lot.) Lasers bounced off windows can measure the Doppler shift caused by the compression and rarefaction of air by sound waves, and so eavesdrop on conversations happening on the other side. An attacker who can tap into your power line can read your monitor from even farther away. Purchased anything lately? Unless you paid cash, the what, where, and when were recorded in a database. And in many stores, a security camera has recorded your presence while the helpful sales clerk captured your name and personal information.

The ability to trail someone remotely has existed for a while, but it is only used in exceptional circumstances. In 1993, Colombian drug lord Pablo Escobar was found partly by tracking him through his cellular phone usage. Timothy McVeigh's truck was found because the FBI collected the tapes from every surveillance camera in the city, correlated them by time (presumably the explosion acted as a great synch pulse), and looked for it. During Desert Storm the U.S. dropped thousands of miniature robots -- millimeters in diameter -- on Iraq that looked for signs of biological warfare.

The technology to automatically search for drug negotiations in random telephone conversations, for suspicious behavior in satellite images, or for faces on a "wanted list" of criminals in on-street cameras isn't here yet, but it's just a matter of time. Face-recognition software can pick individual faces out of a crowd. Voice recognition will soon be able to scan millions of telephone calls, listening for a particular person; it can already scan for suspicious words or phrases. Moore's Law, which says the industry can double the computing power of a microchip every 18 months, affects surveillance computing just as it does everything else: the next generation will be smaller, faster, and a lot cheaper. As soon as the recognition technologies can find the people, the computers will be able to do the searching automatically.
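
The Moore's Law arithmetic quoted above is easy to make concrete (the time spans below are arbitrary examples of mine):

```python
def moores_factor(years: float) -> float:
    """Computing-power multiplier after `years`, doubling every 18 months."""
    return 2 ** (years * 12 / 18)

print(moores_factor(1.5))   # 2.0 -- one doubling after eighteen months
print(moores_factor(15))    # 1024.0 -- ten doublings in fifteen years
```

A thousandfold improvement in fifteen years is why "not here yet, but just a matter of time" is the right way to think about automated surveillance.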

At the same time, the fear of crime is facilitating a great deal of surveillance, not all of it instigated by the police. Some U.S. airports automatically record the license plates of anyone coming onto airport property, even if it is just to pick up someone. Some cities are installing directional microphones to pinpoint gunfire; others are setting video cameras on lampposts to deter crime. It's getting difficult to walk into a store without being videotaped. Timothy McVeigh couldn't drive a truck through downtown Oklahoma City without it showing up on an in-store surveillance camera, and these cameras were positioned to protect the store, not to track goings-on outside the windows.

The U.S. is initiating a program called "computer-assisted passenger screening," or CAPS. The idea is to match commercial air travelers against profiles of evildoers, using such items as the traveler's address, credit card number, destination, whether or not he is traveling alone, whether the ticket was paid in cash, when the ticket was purchased, whether it was one-way or round trip, and about three dozen other factors that are being kept secret. Needless to say, groups like the ACLU have objected to stopping and searching people based on stereotypes. Not to mention that the data is saved, just in case the government needs to peek into people's pasts. No warrant required, of course.

More is coming. Out of concern for public safety, the FCC has ruled that by 2001, cellular and PCS companies must be able to locate users who dial 911 to within a radius of 125 meters. Consumers will foot this bill through a user tax, and you can be sure that wireless operators will introduce a plethora of other services based on this technology. The companies are probably going to use the cellular technology to locate people, although if they can wait a couple of technological generations they can drop miniature GPS receivers in the phones and do even better. One way or another, people will end up carrying technology that allows them to be digitally tailed. And currently, no warrant is required.

The surveillance infrastructure is being installed in our country under the guise of "customer service." Some hotels track guest preferences in international databases, so that customers will feel at home even if it is their first stay in a particular city. Caterpillar Corporation is installing diagnostic chips into all new farm machinery. These chips alert the local dealer, via satellite, when a part is failing. The dealer can then drive to the farm with a replacement, often before the machine has even broken down. This is great; I'll bet farmers really like the prompt service and the reduction in downtime. But the same technology can be used for other, less benign, purposes.

Automobile surveillance is almost automatic. Rental cars, equipped with GPS navigational systems, can keep a complete record of exactly where that car has been. Mercedes Benz is planning on embedding a Web server into its cars, so that technicians can spot service problems remotely. At least two companies plan on marketing a smart car locator that uses a GPS receiver and a cellular phone to alert the authorities to your whereabouts in case of an emergency. It only takes a slight modification to allow the locator to work automatically when queried by the police. Lojack, the device that can track your car if it has been stolen, can also be used for surveillance. Will net-connected smart cars give police the ability to track everybody in the country simultaneously? Already systems like Lojack do this, as do car phones.

GPS is a dream technology for surveillance. One company is selling an automatic warehouse inventory system, using GPS and affixable transmitters on objects. The transmitters broadcast their location, and a central computer keeps track of where everything is. Spies have probably been able to use this kind of stuff for years, but it's now becoming a consumer item.

Individual privacy is being eroded from a variety of directions. Most of the time the erosions are small, and no one kicks up a fuss. But there is less and less privacy available, and most people are completely oblivious of it. It is very likely that we will soon be living in a world where there is no expectation of privacy, anywhere or at any time.

rise asks:
As one of the stronger voices behind the proposition that only peer reviewed, open, and thoroughly tested algorithms can be trusted you've widely disseminated several algorithms, Solitaire and Yarrow among them. What attacks or interesting analyses have surfaced since their release?

For those who want to know what he is asking about: descriptions of both Solitaire and Yarrow are on my Web site, along with my position on the importance of using public, peer-reviewed algorithms, my snide comments about proprietary cryptography, and my dismissal of cracking contests.

There has been some excellent analysis of Solitaire by Paul Crowley. He has posted his results to the sci.crypt newsgroup, and you can look them up. Briefly, he found a bug in the code and a problem with the algorithm. I will fix the bug in the code as soon as I get around to it, but the problem with the algorithm is more disturbing. We hope to write a joint paper documenting the problem, and proposing a fix.

I don't think it is a problem operationally, though. Solitaire is a pencil-and-paper cipher designed for very short messages, and the attack will require a lot of ciphertext. Still, it is a problem and one that I should fix.

As to Yarrow, I don't know any outside cryptanalysis. I'd like to see some.

Thagg asks:
I bought your first edition of Applied Cryptography, and you say two things that bother me, with respect to your submission of Twofish as a Federal standard for encryption.

In the foreword, you describe how you got interested in cryptography, and that you had no background or training in the field, but you thought it was interesting. Also, several times throughout the book you caution people not to trust cryptosystems from amateurs.

Clearly you have become well versed in the history and application of cryptography, your book makes all other descriptions of the state of the art invisible by comparison. Still, it appears to me that cryptosystem design and analysis requires fairly extreme mathematical proficiency, which I do not believe that you have.

Now, of course, Twofish is published in detail, and the best people in the world have attempted to crack it (and I think that the competitive process the US Gov't has promoted is a spectacular way to get the best people to attack each other's ciphers). But I remain somewhat worried: is there something missing at the foundations that a PhD in mathematics and number theory would have seen?

The winner of this competition will likely be the next DES, and will provide security for a fairly large percentage of the planet. The stakes are high. I'm sure that you have an answer to this criticism, and I'm eager to hear it.

Certainly you should not trust cryptographic algorithms designed by people who have no experience designing and analyzing cryptographic algorithms. But the question you ask is different: does a Ph.D. in mathematics and number theory give someone special insights that someone without one would miss? I believe that cryptographic skill is learned through both training and experience, and that someone with a Ph.D. is not automatically a better cryptographer.

Cryptography is interesting because there are no absolute metrics. Anyone can design an algorithm that he himself cannot break. This means that anyone, from the best cryptographer to the sorriest man on the street, can design a cryptographic algorithm that works: one that encrypts and decrypts data properly, and that the designer cannot break. The false reasoning that often follows is this: "I can't break it, therefore it is secure." The first question that anyone else should ask is: "You say you can't break it; well, who the hell are you?" I've written more on this topic elsewhere.
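
A toy illustration of the fallacy (my own construction, not an example from the interview): a homebrew repeating-key XOR "cipher" that its designer cannot break falls instantly to anyone who knows a few bytes of plaintext.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # A homebrew "cipher": repeating-key XOR. Encryption and decryption
    # are the same operation, since x ^ k ^ k == x.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"K3Y"
ciphertext = xor_cipher(b"ATTACK AT DAWN", secret_key)

# Known-plaintext attack: XORing the ciphertext with a guessed plaintext
# prefix strips the message away and leaves the repeating key itself.
known = b"ATTACK"
leak = bytes(c ^ p for c, p in zip(ciphertext, known))
recovered = leak[:3]            # the key repeats every 3 bytes

assert recovered == secret_key
assert xor_cipher(ciphertext, recovered) == b"ATTACK AT DAWN"
```

The designer couldn't break it; an attacker with six bytes of known plaintext breaks it in one line. "I can't break it" tells you nothing.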

The experience of the designers is something that I look at very carefully when I evaluate an algorithm. I can't devote the months and years necessary to convince myself that an algorithm is secure, so I want to know about the people who are convinced. And I don't look at their academic degrees; I look at what else they have broken.

The Twofish team has dozens of published cryptanalytic attacks, breaking all kinds of ciphers; lists of Counterpane's papers and of David Wagner's published papers are available online. These are impressive results: mod n cryptanalysis, boomerang attacks, slide attacks, side-channel cryptanalysis, related-key differential cryptanalysis, and attacks against Skipjack, Speed, Akelarre, RC5a, CMEA, ORYX, TwoPrime, etc., etc., etc. Interestingly enough, all five AES finalists were designed by teams with similarly impressive lists of published cryptanalytic attacks. With a couple of exceptions, the non-finalists had no cryptanalysts on their teams.

Another thing to look at is the quality of the designer's analysis. I like designs that have long and detailed documents that discuss how the designers have attacked their own design. You can see this in the submissions for Twofish, and for Mars, RC6, and E2. I worry about a cipher like Serpent that does not come with any analysis. Either the designers didn't do any, which is bad -- or they did it and are hiding it, which is worse.

I think these things speak more to the strength of the design than academic degrees.

In fact, I have seen many systems designed by Ph.D. mathematicians with little cryptographic experience, that have been quickly broken. Experience in cryptography is much more important than experience in general mathematics.

It is certainly possible that there are attacks against an algorithm that the designers missed. This is why AES is a public process. Before AES is chosen, dozens of people with Ph.D.s in mathematics will be performing their own analyses on the submissions. If Twofish is chosen, it will be because none of those Ph.D.s have found any weaknesses.

But if you want Ph.D.s on the Twofish team, co-designer Doug Whiting has a Ph.D. in computer science from Caltech. His dissertation was on building Reed-Solomon error-correcting codes in VLSI, so it had a heavy math content.

Enoch Root asks:
It was noted in your biography that you hold a degree in Physics in addition to your M.S. in Computer Science. This seems to be a developing trend in IT, as many Physics graduates turn to CS. Neal Stephenson undertook studies in Physics before becoming a writer. I am myself a physics graduate turned computer geek.

What impact do you think your science studies have on your current career? I suspect the high mathematical background of physics prepared you for cryptology, but what other aspects of a science degree come into play in your line of work? Would you call your B.S. in Physics an advantage or a disadvantage?

The unfortunate answer is that it wasn't very relevant. It was neither an advantage nor a disadvantage, although it was harder to get a job out of college with a physics degree. Physics teaches mathematics, and that was helpful.

If you want to become a cryptographer, study mathematics and computer science. I wrote an essay on this very topic; it's on my Web site.

Hobbex asks:
One would think that cryptographers, who study the mathematical means for controlling information (not just secrecy, but also signatures, zero knowledge proofs etc) would be the least inclined to support the artificial limits to information set up by our legal system, and yet the field is littered with patents (probably more so than any other field of mathematics).

You, on the other hand, have been very generous with your algorithms and cryptos. Is there a political, ideological, or practical reason behind this?

It is impossible to make money selling a cryptographic algorithm. It's difficult, but not impossible, to make money selling a cryptographic protocol.

Look at algorithms first. There are free encryption algorithms all over the place: triple-DES, Blowfish, CAST, most of the AES submissions. It makes sense for a designer to use one of these public algorithms. If I had patented Blowfish, no one would have used it. No one would have analyzed it. (Why would they do work for free, if I were making money off of it?) Only because Blowfish is free is it in over 100 products; a list is on my Web site. If I had patented it and charged for it, it would be much less widely used. IDEA is a good example of this. IDEA could have been everywhere; for a while it was the only trusted DES replacement. But it was patented, and there were licensing rules. As a result, IDEA is barely anywhere. SEAL is a great-looking stream cipher, but because IBM has a patent on it, no one uses it.

The early public-key patents are the only exception to this. Because the patents controlled the concept of public-key cryptography, there was no way to do public-key encryption, key exchange, or digital signatures without licensing the patents. This exception is over; the Diffie-Hellman patents have expired.

Regularly I hear from algorithm inventors who want to patent their new cool algorithm and then sell it. This business plan has absolutely zero percent chance of succeeding. I recommend that people give their cryptography away, and use the PR benefit to make a living. If the IDEA patent holders did that, they would be much better off.

Protocols are a little bit different. You can patent a protocol and turn it into a useful business. It's hard; most of the time a competitor can engineer around a protocol patent. But it is possible, and many companies are making a go at marketing patents in authentication, certificate revocation, digital content protection, etc. I'm not thrilled by this, but it's the reality of American business today.

aheitner asks:
As many know, your Twofish algorithm is one of the (many) submissions to become the AES standard. The goal for these algorithms is to be able to implement them extremely cheaply in hardware -- say on a 6800 with 256 bytes of RAM. In other words, cheaply enough to put on a smart card.

But IBM's team alleges that any algorithm that simple can be fairly easily cracked by doing a power usage analysis on the chip (by watching fluctuations in the electrical contacts with the reader) and that the necessary equipment to protect against power analysis would be equivalent to a much more complex processor -- so much so you might as well just implement a different and more complex (and hopefully power-random) algorithm. Of course IBM suggests their own implementation.

What do you think? Is there a way to build a simple smart card so that power analysis isn't a problem? Perhaps the whole question will become irrelevant since we'll be carrying around so much processing power in our PDAs that we'll just use them?

Power analysis involves breaking a cryptographic algorithm by looking at the power trace from a chip executing that algorithm. This is a specific case of what I call "side-channel attacks." I wrote about side-channel attacks at, and some excellent information on power analysis can be found at

I don't believe it is possible to create a cryptographic algorithm that, because of its mathematics, is immune from side-channel attacks.

Any cryptographic primitive, such as a block cipher or a digital signature algorithm, can be thought of in two very different ways. It can be viewed as a mathematical object; typically, a function taking an n-bit input and producing an m-bit output. Alternatively, it can be viewed as a concrete implementation of that mathematical object. Traditionally, cryptanalysis has been directed solely against the mathematical object, and the resultant attacks necessarily apply to any concrete implementation. The statistical attacks against block ciphers -- differential and linear cryptanalysis -- are examples of this; these attacks will work against DES regardless of which implementation of DES is being attacked.

In the last few years, new kinds of cryptanalytic attacks have begun to appear in the literature: attacks that target specific implementation details. Both timing attacks and differential fault analysis make assumptions about the implementation, and use additional information garnered from attacking certain implementations. Failure analysis assumes a one-bit feedback from the implementation -- was the message successfully decrypted -- in order to break the underlying cryptographic primitive. Related-key cryptanalysis also makes assumptions about the implementation, in this case about related keys used to encrypt different texts. Side-channel attacks are a generalization of this idea.

These attacks don't necessarily generalize -- a fault-analysis attack just isn't possible against an implementation that doesn't permit an attacker to create and exploit the required faults -- but can be much more powerful. For example, differential fault analysis of DES requires between 50 and 200 ciphertext blocks (no plaintext) to recover a key.

A side-channel attack occurs when an attacker is able to use some additional information leaked from the implementation of a cryptographic function to cryptanalyze the function. Clearly, given enough side-channel information, it is trivial to break a cipher. An attacker who can, for example, learn every input into every S-box in every one of DES's rounds can trivially calculate the key. What is surprising is how little side-channel information is necessary to break an algorithm. I have published a paper on side-channel attacks against block ciphers at

You can't design the math to solve this problem. You can try to design the hardware so that side-channel information does not leak; this seems to be far more difficult than it appears. Or you can design your cryptographic protocols so that side-channel attacks don't matter. This makes the most sense. Choosing AES based on side-channel resistance is short-sighted.
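The distinction between attacking the mathematics and attacking the implementation is easy to demonstrate in software. The toy sketch below (my own illustration, not from the interview; the function names are mine) shows the classic timing side channel: a naive byte-by-byte comparison of secrets returns at the first mismatch, so its running time leaks how many leading bytes of a guess are correct, while a data-independent comparison does not.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Compare secrets byte by byte, returning at the first mismatch.
    The early return leaks, through running time, how many leading
    bytes of the guess are correct -- a classic side channel."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare in time independent of where the mismatch occurs."""
    return hmac.compare_digest(a, b)

secret = b"s3kr1t-key-bytes"
# A guess sharing a long correct prefix takes measurably longer to
# reject with naive_equal on a real target; compare_digest's work is
# independent of the data, so it leaks nothing useful.
print(naive_equal(secret, b"s3kr1t-key-byteX"))        # False
print(constant_time_equal(secret, b"wrong guess!!!!!"))  # False
```

Against a real target, an attacker measures response times over many trials and recovers the secret one byte at a time. Note that the fix is an implementation and protocol decision, not a mathematical one, which is exactly the point above.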

A long digression on AES, for those who haven't been following the process and for those who care about the outcome: AES is the Advanced Encryption Standard, the new encryption algorithm that will replace DES. NIST is in the process of defining this new standard, which will have longer key lengths (128, 192, and 256 bits) and a larger block size (128 bits), will be faster than DES and patent-free, and hopefully will remain strong for a long, long time.

The process is an interesting one. In 1997, NIST sent out a request for candidate algorithms. They received fifteen submissions by the June 1998 deadline, five from the U.S. and ten from other countries. In August, we (my own algorithm, Twofish, was one of the submissions) presented our algorithms to the world at the First AES Candidate Conference.

There was a Second AES Candidate Conference in Rome in March 1999, where people presented analyses of the algorithms. NIST chose five finalist algorithms this summer: Mars, RC6, Rijndael, Serpent, and Twofish. There will be a third AES Candidate Conference in New York in April 2000. NIST is also accepting public comments on the algorithms through 15 April 2000. Finally, NIST will choose an algorithm to become the standard (or perhaps more than one algorithm).

Cryptographers are busily analyzing the submissions for security. It's tempting to think of the process as a big demolition derby: everyone submits their algorithms and then attacks all the others...the last one standing wins. Really, it won't be like that. I strongly believe that at the end of the process most of the candidates will be unbroken. The winner will be chosen based on other factors: performance, flexibility, suitability.

This means that we need your input into this process. I know you're not cryptographers, and you won't be able to comment on the mathematics of the various submissions. But you can comment on your encryption requirements, and whether the algorithms will suit your needs.

AES will have to work in a variety of current and future applications, doing all sorts of different encryption tasks. Specifically:
  • AES will have to be able to encrypt bulk data quickly on top-end 32-bit CPUs and 64-bit CPUs. The algorithm will be used to encrypt streaming video and audio to the desktop in real time.
  • AES will have to be able to fit on small 8-bit CPUs in smart cards. To a first approximation, all DES implementations in the world are on small CPUs with very little RAM. It's in burglar alarms, electricity meters, pay-TV devices, and smart cards. Sure, some of these applications will get 32-bit CPUs as those get cheaper, but that just means that there will be another set of even smaller 8-bit applications.
  • AES will have to be efficient on the smaller, weaker, 32-bit CPUs. Smart cards won't be getting Pentium-class CPUs for a long time. The first 32-bit smart cards will have simple CPUs with a simple instruction set. 16-bit CPUs will be used in embedded systems that need more power than an 8-bit CPU, but can't afford a 32-bit CPU.
  • AES will have to be efficient in hardware, in not very many gates. There are lots of encryption applications in dedicated hardware: contactless cards for fare payment, for example.
  • AES will have to be key agile. There are many applications where small amounts of text are encrypted with each key, and the key changes frequently. This is a very different optimization problem than encrypting a lot of data with a single key.
  • AES will have to be able to be parallelized. Sometimes you have a lot of gates in hardware, and raw speed is all you care about.
  • AES will have to work on DSPs. Sooner or later, your cell phone will have proper encryption built in. So will your digital camera and your digital video recorder.
  • AES will need to be secure as a hash function. There are many applications where DES is used both for encryption and authentication; there just isn't enough room for a second cryptographic primitive. AES will have to serve these same two roles.
  • AES needs to be secure for a long time. Infrastructure is hard to update. Like DES, AES hardware is likely to be installed and used for decades. A radical new algorithm, with interesting and exciting ideas, just doesn't make sense. A conservative algorithm is what is needed.
Choosing a single algorithm (or even a pair of algorithms) for all these applications is not easy, but that's what we have to do. It might make more sense to have a family of algorithms, each tuned to a particular application, but there will be only one AES. And when AES becomes a standard, customers will want their encryption products to be "buzzword compliant." They'll demand it in hardware, in desktop computer software, on smart cards, in electronic-commerce terminals, and other places we never thought it would be used. Anything we pick for AES has to work in all those applications.
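One item in the list above notes that AES will have to be secure as a hash function as well as a cipher. The standard way to build a hash from a block cipher is a construction such as Davies-Meyer, where each message block keys the cipher and the output is folded back into the chaining state. A minimal sketch follows (my own illustration; the stand-in "cipher" is deliberately a toy, and a real design would also pad with the message length):

```python
import hashlib

BLOCK = 16  # bytes; matches AES's 128-bit block size

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in for a real block cipher such as AES -- NOT secure,
    just a fixed-width keyed transformation for illustration."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def davies_meyer_hash(message: bytes) -> bytes:
    """Hash built from a block cipher: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}.
    Each message block is used as the cipher key, chaining the state."""
    # Pad with 0x80 then zeros to a multiple of the block size
    # (length-strengthening omitted for brevity).
    padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % BLOCK)
    state = bytes(BLOCK)  # fixed all-zero IV
    for i in range(0, len(padded), BLOCK):
        m_i = padded[i:i + BLOCK]
        e = toy_block_cipher(m_i, state)
        state = bytes(a ^ b for a, b in zip(e, state))
    return state

print(davies_meyer_hash(b"hello").hex())
```

The design point is that one cryptographic primitive serves both roles, which is exactly what the constrained applications above (one algorithm, very little ROM) require.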

So how do you comment? NIST is accepting formal comments either on paper or by e-mail. See for instructions. Be sure to identify who you represent and what cryptography interests you have. And if you have any weird cryptography applications or environments, tell me. I'd like to know. Remember, the AES is going to be your cryptography standard for the 21st century. Tell NIST what you think.

Christopher B. Brown asks:
Several announcements have been made lately about ciphers being assortedly vulnerable/invulnerable against Quantum cryptography.

Quantum physics seems to be the "magical" form of physics, and its application to cryptography even more magical. I don't think I properly understand "quantum cryptography," and I don't think that most of the people that have made public comment on it understand it terribly well either.

Could you comment on the present state of Quantum cryptography, and its probable relevance in public matters short term (which appears nonexistent), medium term (where the research of today may be in 5-10 years), and longer term?

There are two separate applications of quantum mechanics to cryptography: quantum computing and quantum cryptography. I think you are conflating the two. Let me take them up in turn.

Quantum cryptography is a means of using quantum mechanics for key exchange. Basically, because it is impossible to measure the state of a quantum system without disturbing it, the physics of the key-exchange protocol allows you to detect eavesdroppers. It's a cool idea, and I spend a few pages in _Applied Cryptography_ explaining it in detail.

Quantum computing is newer. It turns out that it is theoretically possible to build a quantum computer. And, if you can build such a beast, it can factor large numbers and calculate discrete logarithms efficiently. Hence, it renders pretty much all of public-key cryptography insecure.

Both of these things are pretty far out. Quantum cryptography has been demonstrated in the lab; British Telecom researchers have exchanged keys over a 10-km fiber-optic link. This is interesting, but I don't see it being used very much. The mathematics of cryptography, while not perfect by any means, is the one thing we can do well. There are so many easier ways of breaking into systems that it doesn't make any sense to replace mathematics with physics. And math is always cheaper.

Quantum computing is far out technologically. These computers are theoretically possible to build. We actually have no idea how to build them. Figure it will be ten or twenty or more years before this has a possibility of being a reality. An excellent article was published in a magazine called _The Sciences_. You can find it at

And when it becomes a reality, it does not destroy all cryptography. Quantum computing reduces the complexity of a brute-force search by a square-root factor. This means that key lengths are effectively halved. 128-bit keys are more than secure enough today; 256-bit keys are more than secure enough against quantum computers.
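The halving is simple arithmetic: Grover's quantum search finds the right key among 2^n candidates in roughly 2^(n/2) steps, so an n-bit key offers about n/2 bits of security against it. A quick sketch (mine, not from the interview):

```python
def effective_key_bits(n: int) -> int:
    """Security level in bits against Grover-style quantum search:
    an n-bit key falls to roughly 2^(n/2) quantum steps."""
    return n // 2

# Classical brute force needs ~2^n trials; quantum search ~2^(n/2).
for n in (56, 128, 256):
    print(f"{n:3d}-bit key: classical ~2^{n}, quantum ~2^{effective_key_bits(n)}")
```

So a 256-bit key retains roughly 128-bit security against a quantum adversary, which is why doubling the key length is the usual prescription.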

Neville asks:
What's your response to the notion that the web's reliance on centralized Certificate Authorities for secure commerce is ultimately flawed? There are those, like the Meta Certificate Group, who feel that a hierarchical chain of certificates leading back to only a couple of elite organizations won't hold up in the distributed environment of the Internet. The entire framework of e-commerce seems to stand on the private keys of Verisign and Thawte. Do you feel this is a danger, and will there be viable alternatives?

I agree with you 100%. The notion of a single global public-key infrastructure (PKI) makes no sense. Open your wallet, and you will see a variety of different authentication credentials: driver's license, credit cards, airline frequent flyer cards, library cards, a passport, and so forth. All of these are analog certificates, and will eventually have digital equivalents. There's no reason in the world why Visa can't use your driver's license number as a credit card number; it's just a pointer into a database, after all. But Visa never, ever will. They want control over issuance, update, and revocation. Similarly, digital credentials only make sense when the entity who cares about the credential controls the issuance, use, update, and revocation of that credential.

I have jointly written a paper with Carl Ellison called "Ten Risks of PKI: What You Aren't Being Told About Public Key Infrastructure." It will be published in the Winter 2000 issue of the _Computer Security Journal_. It's not on my Web site yet, but it will be there by December. (Watch for details.) You can find a lot more good material on the problems with PKI at Carl Ellison's Web site:

jovlinger asks:
In a recent Crypto-Gram, you write that most symmetric ciphers need more entropy than people can remember and hence supply. Even with biometrics adding more bits, it is not really worth the effort to construct ciphers with more than 128 bits of entropy in the key, because people won't give them that much entropy in the pass phrase.

However, social and technological pressures make longer and longer keys a necessity. What promising approaches do you see for making it practical to remember and enter usefully long passphrases? Even though I have long passages of text memorized, I don't want to type them in for each e-mail I want to send.

I.e., to paraphrase, would you discuss the state of the art of cipher/human interaction, as it pertains to key management?

For the rest of you, the Crypto-Gram article he mentions is on my Web site. In it, I argue that people just can't remember keys and passwords complicated enough to be immune from brute-force attacks (for example, by L0phtcrack). Some of us can, but the masses that are using the Internet aren't able to, can't be bothered to, and won't be cajoled to.

The other way to carry around a large pool of random bits is on a data storage mechanism: a smart card, a Dallas Semiconductor iButton, a chip inside a physical key (like the device DataKey sells). These mechanisms are more annoying than passwords and passphrases, but they work. There's no real alternative; if something is too large to memorize, the only solution is to store it somewhere.
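To put numbers on the memorability problem: the entropy of a passphrase chosen uniformly at random is at most length × log2(alphabet size), and human-chosen phrases carry far less. A sketch (my own; the function name is an assumption):

```python
import math

def passphrase_entropy_bits(alphabet_size: int, length: int) -> float:
    """Upper bound on the entropy of a passphrase whose elements are
    chosen uniformly at random: length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# Eight random lowercase letters vs. ten random Diceware-style words
print(round(passphrase_entropy_bits(26, 8), 1))     # ~37.6 bits
print(round(passphrase_entropy_bits(7776, 10), 1))  # ~129.2 bits
```

Eight random letters give under 38 bits; even ten truly random Diceware-style words are needed to clear 128 bits. That is exactly the argument above: a key that large has to be stored somewhere, not memorized.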

I don't believe that biometrics will ever become cryptographic keys; I have written about why. Biometrics does have use as an authentication mechanism (note the difference), if it is engineered properly.


Next week: Mick Morgan, "the Queen of England's Webmaster," will answer questions about why not only the Royal Family's Web site but also the huge site (and more than 80 other official UK Web sites) now run on Linux.

This discussion has been archived. No new comments can be posted.
  • Dr. Brian Gladman has already independently implemented each of the AES candidates. The source code is available on his Web site (yep, in the United Kingdom, i.e., not the U.S.).

    The idiocy about U.S. export restrictions is that I cannot give Dr. Gladman a copy of a program whose only cryptographic component is Dr. Gladman's own code. See the bottom of my Ocotillo page at [] for more details.


  • I believe the NASA Mars Pathfinder mission used something that might be called super-resolution. The idea was to take several pictures, from slightly different locations. These pictures were then mathematically combined and filtered, to provide an equivalent picture that was higher resolution than the original raster, but with the same field of view.

    I.e., NASA seems to have worked out how to get an 800x600 image out of a bunch of 640x480 pictures, without mosaicing them.

    Many of these algorithms came out of trying to solve the "oops, wrong mirror" problem with the Hubble Telescope.

    The exact mathematics are beyond me, and I couldn't find them the last time I looked at NASA's information on the Web, though they did have sample images (before/after processing).

    Altogether, an interesting idea. Much better than the guesstimating done by the average scanner to go from 600x600 optical to 4800x4800 "take more space than necessary" dpi... NASA takes info from multiple images, and the scanner just takes one.

    Astronomical data processing is amazing.
  • Well I hate to be obvious, but if you're really worried about the NSA spying on you then you might want to encrypt your communications. I mean, really: encrypting an email is not that hard. Get GnuPG or PGP and have a ball.


    "You can't shake the Devil's hand and say you're only kidding."

  • Almost true. Strangely, you can't select a URL in NS and paste it into the window from which it was selected. You must paste it to another NS window (or plop it into the ``Location:'' bar).
  • by Anonymous Coward on Friday October 29, 1999 @01:45PM (#1576968)
    (I'm normally a confirmed lurker on Slashdot, but the message I'm replying to is sufficiently mistaken (esp. for a "3") that I feel obliged to respond.)

    C. Worth said:
    I think Bruce made an error in his last answer: yes, a quantum computer would reduce the difficulty of forcing a key by a square root - but that applies for every qubit you build into the system.

    The number of qubits in a quantum computer has nothing whatsoever to do with its speed, any more than the number of bits in a classical computer; all it affects is the size of the computation you can perform.

    To expand on the "square root" result Dr. Schneier mentioned (due to Lov Grover of Bell Labs): What Grover's algorithm does is speed up exhaustive searches. If it takes time T to check out one candidate key and there are n bits of key, then a classical computer would take time 2^{n-1} T to find the right key (assuming a secure algorithm). On the other hand, a quantum computer would take time 2^{n/2} T. Under suitable assumptions, it can be shown (I forget the reference at the moment) that this is best possible on a quantum computer. In other words, Dr. Schneier got it right.

    It may be worth pointing out that these comments only apply to exhaustive search. This cuts both ways. On the one hand, if we have a classical attack that beats exhaustion, Grover's algorithm won't necessarily speed it up by a full square-root factor (a good example is finding collisions in hash functions; classically, it takes O(\sqrt{N}), but the best known quantum algorithm takes O(N^{1/3}), not O(N^{1/4})). On the other hand, while we can't break black box ciphers any faster than O(\sqrt{N}), any actual cipher could conceivably be vulnerable to a quantum attack (just as it could conceivably be vulnerable to a classical attack). Since quantum computing is really only a few years old (especially where cryptanalysis is concerned), there's really no way to tell what quantum vulnerabilities (if any) might exist.

    Regarding quantum cryptography: I'd like to expand on Dr. Schneier's comments somewhat. There's a definite analogy between quantum cryptography and one time pads. In each case, there's a "proof" that the system is perfectly secure, but neither is a panacea. Not only is expense an issue ("math is always cheaper"), but in the quantum case, the very physics that giveth security also taketh away. The basis of quantum cryptography is the fact that quantum information cannot be copied, so any eavesdropper has to remove quantum information. This actually gives a very simple attack on quantum cryptography: eavesdrop! You won't get any information, but your "victims" won't be able to exchange key. So they'll have to fall back on another (presumably more vulnerable) mechanism anyway. Another difficulty is that quantum cryptography absolutely requires both a quantum channel and an authenticated classical channel. The latter isn't easy, especially if people have quantum computers that can break most of the known public key authentication algorithms...

    A. Coward
  • This seems like an interesting mindbender to me. But I'll need a more detailed description of it (I don't have a clue what you are trying to say :).
    Maybe if you had some code implementing it or something.

    LINUX stands for: Linux Inux Nux Ux X
  • It's not going to change. I guess I'm just a pessimistic American. Will moving to Sweden restore my faith in Gov't? If so, lemme know. :>

    Quite possibly. Then, after 2 minutes, it will destroy your faith in the democracy that put that government in place...

    /. is like a steer's horns, a point here, a point there and a lot of bull in between.

  • But he didn't post the comment. Most media will retain the copyright for what you say when they interview you, I believe.

    I mean, I can't take an interview with me from Spin (cause you know I'm in there all the time) and sell it straight out to Rolling Stone...

    /. is like a steer's horns, a point here, a point there and a lot of bull in between.
  • by Anonymous Coward
    Recent developments have allowed us to obtain near-Hubble-quality imagery from ground-based optical telescopes, by performing what I imagine amounts to single-point interferometry with vibrating mirror surfaces. Pretty neat stuff. It's not unreasonable to expect similar technology in surveillance satellites before long.
  • Van Eck devices can read what's on your computer monitor from halfway down the street. (I heard that the CIA demonstrated this for Scott McNealy at Sun; they captured his password from a van in the company's parking lot.)

    When I first heard of Van Eck phreaking (imaging of computer screens by detecting the radiation they emit) I was skeptical, but I was eventually convinced it would be possible (my laptop screen lets off a LOT of noise as windows or pointers are moved, so picking this up doesn't seem impossible).

    However, in reading the above it seems to me that either McNealy has his password printed in plain text on the screen (I don't see why he would, all password entries I know of either don't show characters or just show `*'s), or somehow the demo applied Van Eck methods to something other than the screen (keyboard? memory??). Does anyone have details on this?

  • Bruce mentioned my work on Solitaire in his article; you can read about that in Problems with Bruce Schneier's "Solitaire" []. It also includes a C implementation of the cipher...
  • Link: /group/super-res/2d/mpf/
    Mars Pathfinder Super-resolved images []
  • I think that's a safety feature. Mildly annoying, but I can live with it.
  • I don't think you can necessarily say quantum computers will "kill public-key cryptography forever". Even if, 10 years from now, it's possible to build a device for, say, $1e6 that can quickly break RSA, I'd still feel comfortable using ssh RSA authentication to protect my computer, because nobody in their right mind would go to that kind of effort to break into my machine. Public key crypto is just too damn useful to disappear anytime soon, and you can bet that significant effort will be applied to keep it viable. Quantum computers of that ability are far enough off that it's not unlikely that public-key algorithms that are resistant to such methods will be developed by the time RSA or El-Gamal are broken.
  • "ryanr" Wrote:
    You want your 8-bit smartcard to be able to communicate with your 64-bit desktop, don't you? If they're not using the same alg. & protocol, that won't work.

    Does this really make sense? While it's arguable that an 8-bit smartcard might be limited to a single low-end algorithm designed for a small memory footprint, is it reasonable to assume that on the other end a 64-bit processor couldn't support the first algorithm as well? The point isn't to make an arbitrary separation where one encryption standard is used here, while another there -- thus forcing incompatibility where none need exist -- but to use the best software for a specific purpose.

    All Bruce says on the matter is:
    "Choosing a single algorithm (or even a pair of algorithms) for all these applications is not easy, but that's what we have to do. It might make more sense to have a family of algorithms, each tuned to a particular application, but there will be only one AES."

    Which, at face value I can accept, but am still curious as to why. It looks like a political, not technical, decision... if so, is this appropriate?
  • He actually says that in the interview:
    It might make more sense to have a family of algorithms, each tuned to a particular application,

    he continues:
    but there will be only one AES. And when AES becomes a standard, customers will want their encryption products to be "buzzword compliant." They'll demand it in hardware, in desktop computer software, on smart cards, in electronic-commerce terminals, and other places we never thought it would be used.

    so, no, we don't need one algorithm to do everything. but there can only be one standard, and when there is, everyone will want to use that standard, even if it's not the best tool for the job. so they want to choose the standard that is as close as possible to being the best tool for all of these jobs.
  • Yep. They can't brute-force your encrypted message, but they can look in your swap partition for fragments of your passphrase, or even the decrypted key itself. The same goes if you've ever typed your passphrase in a telnet session or on an X server where the client was elsewhere on the network, etc. etc. etc.

    Also, if you've been using crypto in a crime (or if they accuse you of using crypto in a crime), they can create powerful incentives for you to give up the key.

    Truth is, you should protect that passphrase like all get-out. You should keep your private keys on a CD-R and you should carry it with you. You should throw it on a fire when you are done with it. You should use gpg and pay attention to the secure memory features. Now you have a crypto system that is so difficult to use, it's very annoying. That's just as well. You'll only use it when you really need it. The less ciphertext made with a given key, the better.

    The NSA is probably better at breaking things than you think because, as Bruce says, the weak links are not the crypto algorithms.
  • I have to agree. Bruce's responses are spectacularly well thought out. I thought his answer to my (somewhat snotty, I'll admit) question covered the ground completely, and from angles I hadn't thought of.

    I'm serene, now, about the quality of the work embodied in the AES submissions, including (and especially) Twofish. We'll see what happens.


  • The Federation of American Scientists [] sponsors the Project on Government Secrecy [] that has done a lot of very good work aimed at shining some light on the problem. From their site,
    "Through research, advocacy, and public education, the Project on Government Secrecy works to challenge excessive government secrecy and to promote public oversight. The Project supports journalists and fosters enhanced public awareness of secrecy issues through publication of the Secrecy & Government Bulletin."
    This group has sued the CIA in order to force them to disclose more information about their annual budgets. I think this is a very savvy strategy, and it appears to be effective. FAS sponsors a lot of very worthy projects. Go ahead and /. 'em!
  • Since the FSF, Linus Torvalds and Alan Cox only distribute source code, and since tons of very smart people are studying that code in detail, their room to be sneaky is limited.

    If I were the NSA, I wouldn't attempt to corrupt them. I'd get inside Red Hat and f*ck with the binaries they distribute. Who'd know? You may think that you can recompile from source and see if it matches the binaries, but what if they've messed with the compiler (along the lines of the famous Ken Thompson hack [])?

  • This topic has already been discussed to a brief extent here. [] Perhaps adaptive optics would be a good idea for a full slashdot topic?

  • I don't quite understand the equivalence of software pirates and the NSA. A software pirate is violating my property in an oblique way whereas someone monitoring my activities is violating my privacy in a direct way. It would be closer to the truth to equate the NSA to a credit bureau or internet ad agency or medical insurance company, which all have the potential of invading my privacy in a direct and consequential manner. I think it is right and proper that we curb the abilities of all powerful corporations, government agencies, and individuals from invasion of privacy.

    Implementing privacy controls is quite difficult once the data is collected and centralized, but there are large barriers and costs to collecting the data in the first place, and presumably even once it is collected and centralized, there can be reasonable limits on its use.

    On the collection side, we need to resist efforts to force telecommunications companies to build taps into digital communications, and we need to resist the introduction of large-scale video monitoring systems. It is not that these systems are ineffective in reducing crime, but that they cost too much -- both in lost privacy for law-abiding citizens and in money that could have gone to proven crime-reduction measures such as economic development and education. Most such large-scale systems have not been implemented yet, so we still have time to allocate those resources more effectively toward the goals of security and privacy.

    On the centralization side, the largest danger is the merger and growth of large corporations, especially financial institutions, telecommunications corporations, and medical insurers, which have the power and desire to collect increasingly complete personal information. Unfortunately, much of this consolidation has already taken place. We are left with weak options for legislation and regulation. Much more than the government, the mega-corporation is the most immediate threat as a potential Big Brother, since its power transcends national borders and is largely closed to public review.

    Unfortunately, opting out of such a system would cost me more than the loss of privacy incurred by living in it. One can still live outside the system, even if that means leaving your nation of origin. I can only hold on to the glimmer of hope that, with encryption, at least my private correspondence can be fairly private. I do not expect my financial or medical privacy to be well kept. But that should not discourage me from fighting for every last safeguard.
  • Your machine will probably be safe from a US$1e6 solution, after all, for US$1e6, I can break into your computer today (well ok, maybe next week, I'm kind of busy). This is true even if your computer is not connected to the network and has reasonably good physical and password security.

    The real issue is not you at all, but large scale financial transactions. In 10 years I'd be surprised if less than US$1e12 in consumer transactions were conducted annually over the internet. If banks begin to use the internet for interbank transactions, that would probably be closer to US$1e14 per day. Now you can begin to see the advantages of spending a few billion to produce an encryption breaking machine. The money laundering applications alone (as opposed to direct stealing, which is more noticeable) are probably worth that.

    So I wouldn't be too concerned about you just yet, unless you're hearing odd clicks on your phone already. But I'd be a bit worried about your bank.

  • Reportedly such systems are already in use at racetracks to screen customers from whom they will not accept bets. I just heard about this recently. I don't know enough about racetrack operations to understand why they wouldn't take bets from just anyone.
  • I had a long response citing Reflections on Trusting Trust [], the Ken Thompson article in which he describes trojaning the C compiler to trojan the C compiler to trojan login.c. But it vanished before my eyes (Damn you Microsoft... or was it the NSA...) So I'll try to be more brief.

    In short, I don't think we need to worry about the source code generated by Linus, Alan et al., or even the asm for that matter. Peer review is great at picking out back doors. One should worry about every binary installed on your computer and every binary that touched those binaries. Not to mention all the hardware. To be safe, I think one would have to boot the computer by hand (your computer does have a front panel with toggle switches, right?), toggle in a compiler that you completely understand, then use that to compile an open source compiler, then use that compiler to compile every piece of software on your system. Even then you'd be only sort of safe.
  • Um, yeah, transparency has some nice features. I want my mugger to get caught too.

    The problem is, despite our wishes, everyone will be transparent, but most people will be blind :)

    It costs money to get access to surveillance. The CIA, NSA, etc have money. They can watch us. I don't see a near future where we can watch them back.

    So we'd be infinitely accountable for our actions, but they will not. People do scary shit when they aren't accountable. That's why we need to at least attempt to guard our own privacy as long as possible.

    It is possible that soon, your average slashdot-reading geek will have access to surveillance, but it'll be a damn long time before the uneducated poor does. That's scary too.
  • by NatePuri ( 9870 ) on Friday October 29, 1999 @04:00PM (#1577002) Homepage

    S.Ct. has stated one thing that will not change. A person *always* has a reasonable expectation of privacy in one's home. It is still the last bastion of privacy rights we have. I will *never* be willing to exchange that expectation for *any* so-called 'greater public interest.'

    I'm going to be harsh with you, poster, and with you, Mr. Schneier. The notion that we should just accept privacy erosion as an inescapable inevitability is un-American, defeatist and cowardly. When I speak of privacy erosions, I am speaking strictly of those mandated by the government, *and not* the conflict that occurs when one private party's right to bodily and property integrity means an erosion of my personal privacy. Namely, shopowners have a right to surveil their private property. Parents have a right to surveil their home while the babysitter is there. In short, we have a right to spy on each other when we access each other's private property.

    However, where the government fails us or intrudes upon us, the harm occurs in the following ways. First, it is not currently illegal for private parties to surveil another's private property. Peeping-Toms & Tinas are not prohibited from spying. This is a failure of the government to act in our interests. And the fact that it has so failed us is intentional. They want us to spy on each other because it gives them free labor. They would love to pat you on the back and say what a good citizen you are when you ratted out your neighbor based on 'your suspicions.' The fact that you have no actual proof or that you had no right to obtain that evidence against your neighbor is not relevant to the Executive. It seeks to gain as much power as it possibly can in an effort to gain informational advantage. Second, the state and federal Executive takes active steps to gain legal advantage over your informational world. It lobbies heavily in legislatures and exerts intimidating influence over corporations and researchers to maintain its advantage. The question is 'so what? What's the cost?'

    You are the cost. You are the thing that must be overcome. You are the obstacle. We speak in generic terms about 'privacy' and 'public interest.' Let me put to you all in different terms. The inability to tell someone else to leave you alone if you wish it means that you are a subject in the other's reign. In old times, you could not have told the King 'no' to anything. We do have more ways to say 'no' now but only one really matters. The power to say 'no, go away' is the cornerstone of American ordered liberty. The ability to say openly 'you do not have the right' without being labelled a dissident is the essence of your communicative freedoms.

    Freedom in this country is envisioned as two-fold. First, freedom to manage your bodily integrity. It is physical control over yourself. The second is freedom to say what is on your mind in whatever meaningful way you choose. It is control over your own mind.

    If you are satisfied to live without either of these rights, particularly the right to speak freely (i.e., the first right to go when the state takes over), then you would have loved Hitler's Germany, Franco's Spain, Mussolini's Italy, or Stalin's USSR. If you don't *know* in your heart of hearts that the Executive is the right wing branch of our three branch system (where the judicial is more left, considering societal goods, and the legislative is centrist) then you do not have enough of an understanding of our system. The trend this century has been for the Executive to aggrandize rights at the expense of the other branches. We have seen the rise of Administrative agencies (the so-called fourth branch) where the Executive has expanded its scope and reach to an amazingly extreme degree. There has never in the history of the world been a more powerful organization than the American Executive Branch. If you don't fear it, you should. Congress and the courts have few effective controls over it. They do, at times, exercise some controls, but in comparison to the power wielded they are minimal. The adage, 'power corrupts,' does not just apply to individuals. It applies to organizations too.

    Poster, you are wrong that we must fight the power. That is futile. Rather, collectively we must *take* individual powers. We must demand our controls. If there is technology that can shed privacy, then there is technology that can regain it. Information technology is not synonymous with transparency. That's merely the social trend. And one that I will not accept. I will take my controls from you and the government using the same technology that has adulterated it. Nor is privacy synonymous with dissidence. That is the propaganda from the powers that be.

    I realize that there are practical realities and inevitabilities. But rather than acquiesce, we should make the Founders proud, declare ourselves in a public state of emergency and put all our smartest people to the task of developing policies, technologies and laws that aggrandize power to the individual rather than to society or an organization that claims to be its representative.

    This is long and there are typos, o well.

  • Tanenbaum still holds the record for
    misspellings. How come nobody misspells
  • I prefer to refer to the NSA as the Ministry of Love, or Miniluv if you like.
  • OOPs: I meant 10 nanoradians. 575 nanodegrees. Blame my HP 15C.
  • I'm glad to hear the lack of privacy is lessening.
  • I think Bruce made an error in his last answer: yes, a quantum computer would reduce the difficulty of brute-forcing a key by a square root - but that applies for every qubit you build into the system. It's not the limit for any quantum computer.

    Granted, these things decohere easily, and it's possible even a ten-qubit quantum computer will never be built. But it'd be dangerous to assume that...Chris []
  • I believe the man's name is Bruce Schneier.
    Not Schneir.
    See his website. []
  • One method which Schneier did not mention for getting better security out of the amount of entropy you can remember is using a workload factor - apply a function which wastes CPU and possibly memory to the passphrase before using it as a key. You can probably spare 512k and 200ms any time you decrypt your private key for use, but it will make brute-forcing your passphrase much harder.

    My question is - what function would you use? The trivial answer would be to run the hash function over and over again, but this is not necessarily a good idea: these functions would run very fast on a dedicated hardware cracker. I believe it may be better to use an algorithm which is known to be the most efficient solution for a certain problem (something from Knuth?), an algorithm which makes good use of the features of a general purpose CPU so an FPGA will not really be any faster.
    Add a cryptographic hash function here and there just to make sure there are no shortcuts or invertible stages.

    Any suggestions?
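    A minimal sketch of the trivial iterated-hash approach mentioned above, in Python. The function name and iteration count are illustrative, and as the post itself notes, plain hash iteration is exactly the case that runs fastest on dedicated hardware - this just shows the baseline idea of the workload factor:

```python
import hashlib

def stretch_passphrase(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Burn CPU time deriving a key, so each brute-force guess costs
    the attacker the same `iterations` hashes it costs the user."""
    state = salt + passphrase.encode("utf-8")
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

# Tune `iterations` to whatever delay (e.g. ~200ms) you can tolerate
# each time you decrypt your private key.
key = stretch_passphrase("correct horse battery staple", b"per-user-salt")
```

    The same passphrase and salt always derive the same key; only the cost per guess changes.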

  • by Skyshadow ( 508 ) on Friday October 29, 1999 @07:53AM (#1577012) Homepage
    The discussion fairly early in the article about public/academic review vs. the focused efforts of the NSA reminded me quite a bit of the Thumb from the Hitchhiker's Guide -- half the scientists in the galaxy are trying to come up with new ways to jam the Thumb, while the other half are trying to come up with new ways to jam the jamming.

    What really worries me about the NSA is the seeming lack of oversight. Their charter should protect me against them (I'm a US citizen, and no I don't care about the rest of you =) ). Does anyone really know what mechanisms exist to keep this agency in check? I mean, Echelon must intercept my email whenever I send a "whazup" to anyone overseas; how can I be sure that my privacy rights are being protected?

    Hell, how can we be sure that this agency isn't involved in even deeper black ops? I've heard the NSA described as basically a think-tank for the other government agencies like the CIA, BATF and FBI, but is this really true? It seems rather ironic that I'm paying for someone to spy on me.

    Of course, this is just a symptom of the government's larger problem: routinely keeping secrets from the people. There was probably a time when this really was in the interests of National Security, but in an era where newspaper articles are routinely stamped "Top Secret", it seems to me that this is no longer the case. Time to reform the system, but it won't happen 'cause Americans just don't care.


  • by technos ( 73414 ) on Friday October 29, 1999 @08:32AM (#1577013) Homepage Journal
    Bruce's main site. []
    Information on Skipjack []
    Information on impossible-differential cryptanalysis []
    Information on attacks unknown to the NSA []
    About the Windows NSAKEY flap []
    Probable NSA backdoors []
    Information on the Blowfish algo []
    Information on the Twofish algo []
    Speed comparison of known algos []
    Speed comparison of the AES candidates []
    Summary of attacks on various algos []
    Breaking crypto isn't the best way to beat security. Article 1 [] Article 2 []
    Information on the Solitaire algo []
    Information on the Yarrow algo []
    Importance of peer-reviewed crypto []
    Comments on proprietary encryption []
    Dismissal of cracking contests []
    "You say you can't break it; well, who the hell are you?" []
    Twofish team's published papers []
    David Wagner's published papers []
    So you wanna become a cryptographer? []
    Information on side-channel attacks []
    Information on power-analysis attacks []
    More information on side-channel attacks []
    Article on Quantum computing []
    The problems with the public-key infrastructure []
    The problem with longer keys []
    l0phtcrack []
    Biometrics as keys? []
  • by Signal 11 ( 7608 ) on Friday October 29, 1999 @08:03AM (#1577014)
    I've been using computers since I was 5 years old. History has taught me that while technology is a tool, and as such can be neither good nor evil, there is good and evil in the world. As such, the technology is used by both sides.

    The SPA considers software piracy wrong. Federal law says it's illegal. It's also widespread, and I haven't found many geeks that care what's written in the books - the software is free, the risk is low, and the information is infinitely reproducible.

    Is there any reason to think surveillance won't follow the same logic? We may tell the NSA and CIA and everybody else what they can and cannot do, but the technology will still be used. As a computer-user, I gave up on the legal system a long time ago - it's hopelessly out of date and broken. I'm sure many gov't agencies view this in a similar light.

    We can pass all the laws and legislation and make all the fuss we want and strip away everybody's rights to try to enforce that... but you'll still wind up in the same position whether you do that or not. We must address the underlying issues here - one of which is accountability. If you want to invade my privacy, I can't stop you... but if you use that information against me, there needs to be a clear, defined, and enforced method of repayment for the damage you've caused. If the NSA spies on thousands of "innocent" people, they should be held directly accountable for that. The question is... how?

    True direct democracy would solve a lot of these problems by replacing them with another set of problems. Your call.


  • Sometime in the far future, space-based telescopes in the visual spectrum range will be able to perform very long baseline interferometry.

    This is an enormous technical undertaking, as it requires positioning a constellation of telescopes over hundred, thousands, or tens of thousands of kilometers to within nanometers of accuracy.

    Difficult, but not impossible. NASA is going to build and fly several missions in the next ten years to develop the concept. Flying satellites in formation, laser metrology (measuring distances to the nanometer), and ultimately, building a nulling interferometer that can image planets around distant stars.

    There is nothing in the laws of physics that prevents the Hollywood fantasy of spy sats from eventually becoming true. Atmospheric haze can be fixed by interferometry, maximum-likelihood-estimation style techniques, and super-resolution methods.

  • by zorgon ( 66258 ) on Friday October 29, 1999 @08:11AM (#1577016) Homepage Journal
    This was a very interesting and enjoyable session. Thanks /. and Schneier. I am none the less compelled to object to one small statement in the otherwise interesting discussion of privacy. Satellite cameras cannot and probably never will be able to read your wristwatch from orbit (unless you are standing on a celestial body like the moon). Even if you could overcome the basic optics problems (resolution of 1 mm at a distance of 100 km is about 575 nanoradians) you still have the atmosphere to contend with. Contrast Hubble Space Telescope imagery with comparable telescopes located on the Earth. Astronomers are planning clusters of widely spaced, (possibly adaptive optics) mirrors to reproduce or exceed Hubble capabilities on Earth, but it would be challenging to say the least to do this in orbit... I think your watch or even your PalmPilot display is safe from orbital surveillance for the foreseeable future.
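    The basic optics problem is easy to check with a back-of-the-envelope diffraction-limit calculation. This sketch assumes the Rayleigh criterion and a visible wavelength of 550 nm; note that 1 mm at 100 km works out to 1e-8 radians:

```python
def rayleigh_aperture(wavelength_m: float, angle_rad: float) -> float:
    # Rayleigh criterion: theta ~= 1.22 * wavelength / aperture,
    # solved here for the aperture diameter D.
    return 1.22 * wavelength_m / angle_rad

# Resolving 1 mm at 100 km subtends an angle of 1e-3 / 1e5 radians.
angle = 1e-3 / 100e3
aperture = rayleigh_aperture(550e-9, angle)
# Roughly a 67-metre mirror, before even considering the atmosphere.
```

    Even a perfect optical system would need an aperture tens of metres across, which is why reading a watch from orbit stays in the Hollywood category.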
  • by Ledge Kindred ( 82988 ) on Friday October 29, 1999 @08:41AM (#1577017)
    I think this has got the be the best interview /. has yet done. While brief and amusing answers are fun and keep one amused for a time while reading them (like the cDc interview) Bruce's long and thoughtful answers have much more brain food in them and will keep me thinking for quite some time.

    I love all the links he's included in the text as well. It's not only long and well-thought-out but he even has more stuff to say about a lot of these topics!

    I would like to know who holds copyright on this text (/., Bruce, Andover?) and if they would be willing to allow us to reproduce portions of this interview as long as we maintain the original source?

    Specifically, I'd love to have Bruce's answer to Tet's question on privacy made up into a poster I could hang up above my computers and printed out in pamphlets I could just hand to people whenever I try to explain my views on privacy; Bruce just does it so much better than I've ever been able to and having a wonderfully detailed and well-argued statement like that might keep people from just seeing me as "one of those privacy kooks. He must have something to hide to be so pro-privacy. I bet he watches X-Files all the time and cheers for Mulder any time he proposes another government sponsored conspiracy theory." No, I just value my privacy and I'm more and more frightened every day at how industry and government continue to chip away at it and how the majority of the rest of the public just doesn't see a problem.


  • Umm..Ok, I'll feed the troll..

    Now, keep in mind that linear algebra was difficult for me, and I've repressed most of it. :)

    2) What's a Norman Transform?

    3) Clarify "join" in this case. Is M going to be invertible in every case?

    4) What's a Gery-Sinner transform?

    Do you have any program code for an example? How about a walkthrough of one round w/example data and key?

    This alg. doesn't actually do anything, does it?

    What's the decrypt process? (same?)
  • You may wish to see how FreeBSD hacks MD5 to be an excellent slow password hash, or how OpenBSD makes an efficient hash out of Blowfish.

    - Sam

  • wmnetselect [] is an applet that sits in your dock/panel/whatever. Select text in any X app and middle-click on the wmnetselect icon, and Netscape will go to that URL. It's designed for WindowMaker, but works in other environments as well.
  • The thing is, transparency does not have to be bad. Transparency makes people accountable for their actions. It means that people will once again have to take responsibility for what they do. In my opinion, adults not taking responsibility for the effects of their own actions (be it through the American legal system or European welfare states) is one of the biggest problems with today's society.

    Too bad transparency really only works on those of us *in* the system. The people who commit the most crimes don't have credit cards, don't have checking accounts, or a permanent address. So how am I going to be protected in this new world of transparency?

    Will the cops pull all the video footage from store cameras when my car gets stolen to find where it goes? Nope.

    If there is an arrest warrant out for me will my location be tracked via my Credit Card purchases? Yep. You don't have to do anything bad to have a warrant out either. Forgetting to pay a speeding ticket or having the Police forget you paid a speeding ticket will get you arrested.

    Transparency only helps keep the sheeple in line; too bad we are the sheeple.

  • To prevent a precomputation attack all you need is a small salt value per user.
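    For illustration, here is a toy salted hash in Python (the helper name is made up, and a real system would also add a work factor as discussed elsewhere in this thread):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Prepending a per-user random salt means a precomputed table of
    # unsalted hashes is useless: the attacker must redo the work for
    # every distinct salt value.
    return hashlib.sha256(salt + password.encode("utf-8")).digest()

# Two users with the same password still get different stored hashes.
salt_a, salt_b = os.urandom(8), os.urandom(8)
h_a = hash_password("hunter2", salt_a)
h_b = hash_password("hunter2", salt_b)
```

    The salt need not be secret; it only has to be unique per user to defeat precomputation.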

  • How should people publish them to the world? Just post them to /.? Ah heck, here is one to play with...

    Let me start with a really bad algorithm. Take a fixed buffer of random data, and xor it with your data stream, reusing data as you go. What is wrong with this? Well, if you xor the result with itself shifted by the period, your key drops out and you just have plaintext xored with plaintext, which can easily be attacked (try xoring sections with common words), and with each piece you attack you can extract some key, extracting other text, and before long... voila! (There are techniques to identify the length...)

    OK, so let us be a bit smarter. Have 2 buffers of different lengths! Well OK, a little harder since you have made the effective period in the above longer, but you can attack it as before or you can use the non-random nature of your "longer" key to attack it as well. 2 implementations of a bad idea is still bad.

    But wait! Look at the stream. It is quite easy to have 1 stream of data encode 3. All that you do is have a protocol that sends a header saying which stream is getting the next x characters, then send the block, then send the next header. So the sender can send 3 streams of data, one of which is information on how to replace your first cipher, the second containing information on how to replace the other, and the third being your actual data. The actual placement of headers and choice of sizes of data is random, so an attacker has no idea what is data and what are actual keys.

    (Clearly this is only usable for an ongoing stream of information where you have large amounts of random data available, RAM is not an issue, and bandwidth is not very important to you.)

    So my question is how you can attack this algorithm? Where would you start?

    And if it is hard to break, how do the replacement rates in the two buffers correlate with the minimum amount of plaintext sent to crack it?
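    The shift-by-the-period attack on the first (single-buffer) scheme is easy to demonstrate in a few lines of Python. This is a sketch of the attacker's starting point, assuming the period has already been guessed:

```python
import itertools

def xor_stream(data: bytes, keybuf: bytes) -> bytes:
    # The "really bad algorithm": xor with a fixed buffer, reused cyclically.
    return bytes(d ^ k for d, k in zip(data, itertools.cycle(keybuf)))

plaintext = b"the quick brown fox jumps over the lazy dog again and again"
keybuf = b"RANDOMKEYBUF"
ciphertext = xor_stream(plaintext, keybuf)

# Xor the ciphertext with itself shifted by the period: the key cancels,
# leaving plaintext xored with shifted plaintext - no key material left.
period = len(keybuf)
leaked = bytes(a ^ b for a, b in zip(ciphertext, ciphertext[period:]))
assert leaked == bytes(a ^ b for a, b in zip(plaintext, plaintext[period:]))
```

    From there the crib-dragging attack described above (xoring sections with common words) recovers text directly, with no reference to the key at all.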

  • After reading this post on Cryptography, combined with the earlier posts on the Best hacks, where someone placed a backdoor into the login.c in Unix, I am wondering about the security of Linux and whether there could be backdoors floating around inside the kernel code, init.d or whatever.

    The answer is, strictly speaking, "yes, there could be backdoors in init.d, login.c, ftpd, etc."

    However, one of the traditionally-acknowledged strengths of open source software over proprietary software is that bugs or backdoors can more easily be detected, reported, and fixed -- be it by the code maintainer, a third-party vendor (Red Hat, Caldera, etc.) or one of the end users themselves. With proprietary software, you are on the vendor's schedule as to when (or even if) a bug is corrected or a backdoor closed.

    The rest of your question is, therefore, moot; even if Alan Cox or Linus Torvalds had a secret plan to backdoor or booby-trap the Linux kernel source, there are too many people banging away at it on a regular basis for it to go undetected.

    Jay (=
  • It's Bruce's copyrighted material, unless he made some sort of arrangement. As the bottom of the page says:

    All trademarks and copyrights on this page are owned by their respective owners. Comments are owned by the Poster. The Rest © 1997-99 Andover.Net


    "You can't shake the Devil's hand and say you're only kidding."

  • by ryanr ( 30917 ) <> on Friday October 29, 1999 @11:30AM (#1577027) Homepage Journal
    Well, uhh, those of us who are posting, obviously. (not you.)

    Bruce is fairly famous, in the appropriate circles. The general public doesn't know even the most famous cryptographers. If you're concerned about Bruce's fame, wait a little while and watch his new company. If it takes off like I expect it might, he'll be famous enough.

    There's a worse problem with your assertion, though. Bruce is a very interesting guy, and a good speaker. He rarely gives an answer (in this type of situation) with any math or technical details in it. In other words, he does a good job of giving answers that everyone can read and understand. Did you read his answers?

    I suspect the relatively small number of replies so far (for Slashdot) are due to the fact that Bruce often leaves little to refute, and that he answers very matter-of-factly, leaving little room for "religious" debate.
  • by hey! ( 33014 ) on Friday October 29, 1999 @11:32AM (#1577028) Homepage Journal
    Actually, the most advanced ground based telescopes will probably beat the Hubble in resolution, if they haven't already. They combine larger apertures (resolution is a linear function of aperture) and adaptive optics.

    There's a guy at Boston's Museum of Science who can take pictures of the Shuttle [] or satellites in broad daylight with a slightly modified closed circuit security camera on a 12 inch reflector.

    Throw in a billion bucks for better CCDs, larger and adaptive optics and better image processing software and if they can't read your watch from space they can probably read your sundial.
  • by ryanr ( 30917 ) <> on Friday October 29, 1999 @11:44AM (#1577032) Homepage Journal
    Occasionally, I browse sci.crypt, and I see a fair number of algorithms posted there. I also see some "breaks" for them posted. So, there's one possible place.

    I've seen a number of cryptographers say they don't have a lot of time, and no, they won't break your algorithm for you. Well, not unless you pay them. Bruce is one of those.

    However, I've seen posts where they've broken someone's alg. in the time it took to read it, and they sometimes reply.

    Here's something I've had in mind for some time: An Internet-based crypto club, where rank amateurs could post their own algs., and break those of others. Sci.crypt does that to some degree. Something that is more explicitly for creating & breaking would be appropriate, though. I don't have time to run such a thing, but I'd like to participate casually. I imagine that there could be a built-in prestige system that tracks how long particular algs. take to break, etc..

    It could also possibly attract better cryptographers as well. If there is a ranking system, then that could act as filter for the real cryptographers. For example, if a particular alg. has "survived" for 6 months, it could be ranked as "good" for the crypto club, and then a real cryptographer could come along and dash the author's hopes.

  • It's possible in the US, right now, to collect money from all of the parents who owe child support and alimony, but we don't. Owe the IRS 5 bucks and you will have your checking account frozen until the IRS is paid.

    It's not going to change. I guess I'm just a pessimistic American. Will moving to Sweden restore my faith in Gov't? If so, lemme know. :>

  • A professor of mine for my signals and systems class at Purdue University, told us about this. The basic idea (if I mess it up blame me not my professor or Purdue) is to build a black box that undoes the atmospheric distortion.

    In order to do this one needs to know what the image "should" look like without atmospheric distortion. Then through some sort of magic build a filter that when applied to the dirty, distorted image returns it to the original. The basic premise is that the atmospheric distortion is an invertible system and that we can build a second system that does the inverting.

    The biggest problem with this approach was how the heck do you figure out what things are supposed to really look like? If I remember correctly, there were a few stars that they could use as points of data to build the "black box." The breakthrough for this came when some department of defense guys got involved with their work. The final system (being built? already built?) was to shine a laser at one point in the sky that would excite cesium atoms. There is a layer of cesium atoms around the earth's atmosphere. We know what the excited cesium ions are supposed to look like. Move the laser around the sky, and voila, you can build a black box that will "undo" the atmospheric distortion across the whole sky.

    I would love to hear from anyone who can add more to this or correct me where I was mistaken. Does anyone know how well this works? It seems like the system is not completely invertible. Some data must be lost, but how much? Also I believe they were applying this technology to a telescope in Hawaii.
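    The "invertible system" idea can be sketched with a toy one-dimensional example in Python. The filter taps here are made up, and real atmospheric blur is neither known this precisely nor perfectly invertible - this only shows why a known distortion can, in principle, be undone:

```python
def blur(signal, taps=(0.6, 0.3, 0.1)):
    # A known, causal FIR "distortion": each output mixes recent inputs.
    out = []
    for n in range(len(signal)):
        out.append(sum(t * signal[n - k] for k, t in enumerate(taps) if n - k >= 0))
    return out

def deblur(blurred, taps=(0.6, 0.3, 0.1)):
    # Because the distortion is known and invertible, we can solve for
    # the original sample by sample - the "black box" run in reverse.
    x = []
    for n in range(len(blurred)):
        acc = blurred[n] - sum(t * x[n - k]
                               for k, t in enumerate(taps)
                               if k > 0 and n - k >= 0)
        x.append(acc / taps[0])
    return x

original = [0.0, 1.0, 0.5, -0.2, 0.8, 0.3]
restored = deblur(blur(original))
```

    The reference laser plays the role of revealing the taps: once you know what a point source should look like, you know the distortion, and hence its inverse. In practice noise and lost high frequencies keep the inversion from being exact.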
  • Some readers of your post may not live in America (I don't) and may not have the luxury of a government they feel they can trust, nor the luxury of free elections. A significant number of your countrymen do not share your confidence that the government works in their best interests.

    For such people, particularly those battling an oppressive regime from within (Iran? East Timor until recently, etc. etc.), privacy may be literally a matter of life and death.

    You are privileged to live in America. Just try to avoid assuming everyone else on the Internet does too.
  • by Anonymous Coward
    "If you don't *know* in your heart of hearts that the Executive is the right wing branch of our three branch system (where the judicial is more left considering societal goods, and the legislative being centrist) then you do not have enough of an understanding of our system."

    If you don't *know* that "right", "left", and "centrist" are meaningless political terms, then you do not have enough understanding of the English language as it relates to contemporary politics. This is more a slam on the last part of your comment: "then you do not have enough of an understanding of our system".

    At best, "right" is a pro-establishment term applying to both religious and non-religious support of republics, democracies, monarchies, dictatorships, oligarchies, etc, - whichever is in power. OTOH, "left" tends to be an anti-establishment term applicable to communists and libertarians (among many, many, other groups as well). AFAIK, "centrist" means that one will do and say anything to keep or get power - although best implemented by not actually appearing do such unless you do it so well *cough* Clinton *cough* that nobody gives a shit.

    Bottomline, no one term - left, right, centrist - meaningfully conveys any information that is not relevant to a specific issue at a specific time and in a specific place (even then it does so poorly). Just as cryptography obscures data, fuzzy political terms make rational political discourse more obscure. It is at the point that 95%+ of all people shouldn't even watch/read political news or vote, until properly educated in all political schools of thought and terminology. People shouldn't vote anyways - nobody has a right to tell me what to do, not in whole, not in part, not alone, and not in groups.

    Anonymous because I'm lazy, cowardly because I'm smart. Posting twice 'cause I'm careless.
  • For those wondering what this is about: in the
    original posting Schneier was spelled Schneir.
    Roblimo seems to have corrected that already,

  • In the interview where Bruce Schneier lists the various hardware and software requirements for AES, that it both work in very small 8bit low RAM environments along with higher end equipment; that it stream well for desktop video on demand; that it parallelize well across many devices; that it fit in specialized embedded cryptographic hardware in few gates, etc etc etc... I ask:

    Why do we need one algorithm for all these functions? Beyond political constraints of the AES selection process, wouldn't it make more sense to choose two, or possibly several, candidates each suited for different purposes?

    Can anyone answer this rationally? Thanks....
  • by Hobbex ( 41473 ) on Friday October 29, 1999 @09:20AM (#1577042)
    The only real problem here is one of denial. I don't think that the loss of privacy is a problem, because there is nothing we can do about it. If anything, we need to face up to it.

    Transparency is coming - like it, hate it, deny it, or embrace it. I don't think that the Sun dude is right, because in saying that we have already lost the war he makes it sound like it won't get much ~worse~. It will.

    The thing is, transparency does not have to be bad. Transparency makes people accountable for their actions. It means that people will once again have to take responsibility for what they do. In my opinion, adults not taking responsibility for the effects of their own actions (be it through the American legal system or European welfare states) is one of the biggest problems with today's society.

    If you are constantly broadcast on the Net, you are never alone. This has its vices, but it also has amazing virtues. Schneier notes that telephones giving away their positions is done for security reasons. And if you consider it, it does mean an amazing boost for safety to have a phone that can help rescue services find you. Store cameras, street cameras, home cameras all make your world safer. If you are walking around constantly webcasting everything you see, then you are never going to be alone in a dark alley again. Sure, there are people dumb enough to attack you even if it means putting their face on the web: but a hell of a lot fewer than if it doesn't.

    The loss of privacy is a double-edged sword: it could tip us toward the true idealized global village, or toward an Orwellian inferno. What we have to do is make sure we choose the right side.

    Passing more laws giving more power to the state to restrict people's freedoms is not the right side. I don't care if the intentions are good; it's a misguided idea.

    If you ask me, what makes the Orwellian world a hell is not the lack of privacy, but the combination of POWER and transparency. Fighting transparency is as futile as fighting piracy, drugs, or gene technology. You can't kill ideas. But we can fight the power. For too long we have imagined that we found the perfect balance between totalitarian power and freedom, and have stood still at it. Well, now technology is catching up with us. Transparency makes the governments that were designed to protect us our greatest enemies, while at the same time it weakens all the threats that power was established to protect us from. It's time to start moving.

    I'm sick and I'm rambling. This probably made no sense. Heh :-)

    /. is like a steer's horns, a point here, a point there and a lot of bull in between.
  • Sometime in the far future, space-based telescopes in the visual spectrum range will be able to perform very long baseline interferometry.

    I didn't think about that, thanks for bringing it up. Very exciting, and not that far in the future. Might even be relevant to cryptography (which is what we're supposed to be talking about, sorry moderators, my fault).

    There is nothing in the laws of physics that prevents the Hollywood fantasy of spy sats from eventually becoming true.

    No, of course not, but that is like saying celestial mechanics is an exact science -- all you have to do is compute the effect of all the mass in the universe and its exact location at any time...

    Atmospheric haze can be fixed by interferometry, maximum-likelihood-estimation style techniques, and the super-resolution methods.

    Ok, you lost me there. I follow what you're saying about interferometry, but MLE techniques? Do you mean something like principal component or EOF analysis? And just what are the super-resolution methods? Spatial resolution, or spectral? Could be a translation thing, but these terms are not familiar to me in any context related to atmospheric optics for the purpose of remote sensing (can you guess what I do for a living?). So my curiosity is piqued (and my BS detection bit is set). I'd like to hear more. Cheers

  • No one misspells Torvalds because Swedish is such a lovely, logical language that it leaves no room for misspelling. Except when I'm writing.

    /me waits patiently for the first american to say "You mean Finish"...

    /. is like a steer's horns, a point here, a point there and a lot of bull in between.
  • You want your 8-bit smartcard to be able to communicate with your 64-bit desktop, don't you? If they're not using the same algorithm and protocol, that won't work.
  • If the NSA spies on thousands of "innocent" people, they should be held directly accountable for that. The question is... how?

    I think it is interesting that you think the NSA should be held directly accountable but that software pirates shouldn't. You are in essence claiming that government should be held to a higher moral standard than the people it governs. Yet ours is a government of the people and by the people - making it difficult to reconcile a moral government with an immoral people. Not to mention that many people have a tough time swallowing it when double standards of morality are applied.

    Think how outraged your average person feels when Senator So-And-So says that adultery is immoral and then is found to have had a 12-year affair. Aren't the people at the NSA going to feel essentially the same way when the people of the US say "you have to play by the rules but we don't"?

    I'm not saying that government shouldn't be held to a higher standard - it has more power than the average person. But I think you are going to have a hell of a time implementing it. I don't think true direct democracy would solve anything - especially with the historically low voter turnouts in most republican nations; I think republican governments tend to moderate the power of swing voting blocs, which is a good thing.
  • by Anonymous Coward
    From recent disclosures in the media, it appears that the NSA, the National Reconnaissance Office, et al. do not spy on Americans. They get our allies to do it for us - probably from downloads from an American satellite. E.g., it appears the Canadians take the satellite intercepts from North America and analyze them. They may or may not spy on their own, but information on US citizens they exchange with the US government and vice versa. A Canadian with a conscience broke this story, and he had credible credentials (this was on one of the US network news magazines recently). There are suggestions that this goes on with our NATO allies as well: we tell the Germans "We heard Hans Schmidt plotting something that may be bad and you should know about it" and they reply "Danke schön, and by the way we overheard John Smith talking about something suspicious." Nobody has to break their own laws against spying on their own citizens. And it all seems to come from sharing US satellite data; i.e., the Canadians probably don't have any KH-13's in orbit. Ditto the Brits and the Germans (two other nations that have been mentioned).

    Echelon is probably just the information exchange agreement underlying all this - it's a cutout mechanism to prevent charges of domestic spying as much as it is an effort to monitor every emission on the planet (to the limit of available resources). We spy on our friends and they do the same favor for us.
  • 1. XOR each byte with the next, and the last one with the first.

    2. For each byte A, compute the Norman Transformation with all other bytes in the message and save the results in a vector.

    3. Join the vectors you got for each byte into a matrix M and invert M using rational integer arithmetic.

    4. Create a vector K filled with the key repeated, run through a Gery-Sinner Transformation.

    5. Compute C = M * K.

    6. XOR each byte of C with the next, and the last one with the first, two times.

    7. Repeat steps 2 to 6 seven more times.
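    The "Norman" and "Gery-Sinner" Transformations above don't appear in the literature (this reads like a parody of home-brew cipher design), so only the concrete XOR-chaining step (steps 1 and 6) can be sketched. A minimal Python illustration:

```python
def xor_chain(data: bytes) -> bytes:
    """XOR each byte with the next; the last byte wraps around to the first.

    This is steps 1 and 6 of the recipe above (step 6 simply applies it
    twice). Note that the XOR of all output bytes is always zero, so this
    step loses information and is not even invertible on its own - a hint
    at why home-brew constructions need real cryptanalysis.
    """
    n = len(data)
    return bytes(data[i] ^ data[(i + 1) % n] for i in range(n))

print(list(xor_chain(bytes([1, 2, 3]))))  # [1^2, 2^3, 3^1] -> [3, 1, 2]
```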

  • I don't know if satellites can read your watch, but I have worked with optics that could pick up the dashboard lights of an aircraft six miles out, and that was pre-1993.

    If it can't read your watch, it might be able to see your house from there!

  • You are assuming a single image. Many blurry images of the same object can be combined to make a single clear image. The information is all there, it is just scattered across the set of images. Anyway, reading a wristwatch still strikes me as a bit of an exaggeration.
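    The combine-many-frames idea is real: in the simplest case, pixel-wise averaging of repeated noisy exposures of the same scene recovers the underlying signal, since independent noise shrinks roughly as 1/sqrt(N). A toy sketch (plain frame stacking, a much simpler cousin of true super-resolution; the scene values are made up):

```python
import random

def average_frames(frames):
    """Pixel-wise mean of several noisy observations of the same scene."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Toy demo: one "scene", many noisy captures of it.
random.seed(0)
scene = [10.0, 50.0, 200.0, 120.0]
frames = [[p + random.gauss(0, 20) for p in scene] for _ in range(400)]
stacked = average_frames(frames)
# Averaging 400 frames cuts the noise standard deviation by ~20x,
# so the stacked image is far closer to the true scene than any frame.
```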


  • A lot of people think this way; in a way even Schneier is guilty of it when he says we can learn a 30-character passcode but most people can't be expected to.

    Here is a clue:

    We are the future.

    We may be the early adopters of the technology, but you bet your ass everybody will be doing it soon enough. Contrary to what one might think when watching daytime TV, society is not getting dumber as a whole. Here in Sweden the guy who mugs you is likely to have a mobile phone on him. In two years he is likely to have a WAP phone/PDA. In five years it is likely to be permanently connected to the net. Etc.

    /. is like a steer's horns, a point here, a point there and a lot of bull in between.

  • True, but the fact is that 99% of the time the police are not there to actively watch over you either. As individuals criminals are irrational, but as a whole crime is pretty rational. The reason I can walk my dog at night here without getting mugged and killed is not that there are police everywhere, but that you're sure enough to get caught that taking my $4 of pocket change isn't worth it. In some areas this is obviously not the case (well, my dog is rather large, so...)

    Violence will obviously be needed to fight violence. But as the technology protecting me gets smarter, the authority of the violence protecting me can, and will, get weaker.

    /. is like a steer's horns, a point here, a point there and a lot of bull in between.
  • Part of it has to do with open source.
    If you don't trust it look at the source.

    There are also many other people looking at the source. This is similar to what was said in the interview: don't trust anything until it has been thoroughly reviewed by people you put some faith in.

    On the other hand, hiding the true functionality of sections of the source via subtle, repeated manipulations from vastly differing sections of the code may be very difficult to discover. I have no idea as to the feasibility of designing such a system. (Although I believe Microsoft has.)

  • Doubtful.

    One time pad. See "Applied Cryptography" for more detail.
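    For the curious, the one-time pad mentioned here fits in a few lines. A toy sketch (using Python's `secrets` module for the random pad):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR the message with a random pad of at least the same length.

    Information-theoretically secure only if the pad is truly random,
    at least as long as the message, kept secret, and NEVER reused.
    """
    assert len(pad) >= len(plaintext), "pad must cover the whole message"
    return bytes(m ^ k for m, k in zip(plaintext, pad))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, pad)
assert otp_decrypt(ct, pad) == msg
```

    The "never reuse" caveat is the whole game: XORing two ciphertexts made with the same pad cancels the pad and leaks the XOR of the two messages.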
  • NIST appears to have opened itself up to the possibility of having more than one algorithm under the AES umbrella. See their web site at for their current views. While still not saying that there WILL be more than one algorithm as part of the AES standard, they no longer make a blanket statement about "the" AES algorithm.

    Previously the web site was pretty clear that AES was going to be a single algorithm. As Bruce notes, a family of algorithms might be more practical, though in the past NIST was not open to that suggestion.


  • And with enough qubits, the problem of breaking a public key goes from being effectively intractable to being soluble in polynomial time. Real, large-scale quantum computers could well kill public-key cryptography forever, forcing us instead to concentrate on secure systems for distributing private keys.
  • Don't forget that much simpler methods of invading privacy exist. The best crypto, OS, etc. amount to very little if there was a camera recording as you typed your passphrase, or you were being otherwise monitored. Government agencies have vast powers to exert when they really want to know something. I have little doubt that in general your friend is correct.

    Just don't think that every attack is computational.

    My name is not spam, it's patrick
  • You can cut-n-paste URLs into any Netscape window just like any other app -- as long as you don't middle-click a hyperlink :-)
  • The problem with this idea is that it doesn't actually enlarge the effective keyspace. It could be effective against someone trying to attack *your* entropy, but becomes less so when someone is trying to attack *everyone's* entropy. To put it another way, you can do a dictionary attack against a password, and that may be effective. With a workload scheme, you feed your dictionary into the workload function *once* and create a new dictionary, which may be a bit larger than your original, but has the same odds of success.

    The more people who use the same function, the more economical it is to do this.
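    The precomputation argument can be sketched concretely. Assuming a shared, unsalted workload function (PBKDF2 stands in here; the word list is made up), the attacker pays the expensive step once per dictionary word and reuses the result against every user - which is why a per-user random salt is the standard countermeasure:

```python
import hashlib

def slow_hash(password: str) -> bytes:
    # An unsalted "workload function": deliberately expensive, but
    # identical for every user, so its output can be precomputed.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"", 100_000)

dictionary = ["password", "letmein", "hunter2", "qwerty"]

# The attacker pays the workload ONCE to build a derived dictionary...
derived = {slow_hash(word): word for word in dictionary}

# ...then cracking any user's stored hash is a cheap table lookup.
stored = slow_hash("hunter2")   # some victim's password hash
print(derived.get(stored))      # -> "hunter2"
```

    With a per-user salt mixed into `slow_hash`, the derived dictionary would have to be rebuilt for every user, restoring the full cost of the workload per attack.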
  • If you are constantly broadcast on the Net, you are never alone. This has its vices, but it also has amazing virtues. ... Store cameras, street cameras, home cameras all make your world safer. If you are walking around constantly webcasting everything you see, then you are never going to be alone in a dark alley again. Sure, there are people dumb enough to attack you even if it means putting their face on the web: but a hell of a lot fewer than if it doesn't.


    This reminds me of an especially poignant scene in a book I read (Moonwar, I think it was) where the leader of the lunar revolution witnessed the rape/murder of his female friend on Earth via VR. She wasn't alone, but it didn't save her either.

  • After reading this post on Cryptography, combined with the earlier posts on the Best hacks, where someone placed a backdoor into the login.c in Unix, I am wondering about the security of Linux and whether there could be backdoors floating around inside the kernel code, init.d or whatever.

    Assuming there are some pretty intelligent computer scientists working on the low-level code for Linux, I am wondering if someone has come up with a way of using GCC and source code to create a security breach. I mean, who the heck *are* Linus Torvalds and Alan Cox? Sure, they are now internet personalities, but could they also be somebody more insidious?

    Just curious - not meant to be flamebait.
