Feature: Security Through Obscurity
from the its-a-bad-idea dept.
Bruce Perens has sent us another writeup, and this one hopefully won't cause a flamewar that brings the server to its knees! This one is on Security Through Obscurity, and why it just doesn't work. Specifically, Bruce talks about cryptography and why open source is necessary to produce truly secure internet applications.
The following is a feature written by Slashdot Reader Bruce Perens
Why Security-Through-Obscurity Won't Work
Bruce Perens email@example.com
With all of the threats of new cryptography export laws, and the copyright bill in Congress that would criminalize the circumvention of a copyright-protection system, it's time for us to go over the concept of security through obscurity, and why it's a bad idea. I try to explain the concepts as simply as possible in this article, so that non-programmers can understand them.
The encryption feature of a popular commercial spreadsheet program was broken several years ago, when a programmer realized that the spreadsheet always stored the same character sequences at fixed locations in its encrypted file. Because he knew what those characters were when decoded, and because the spreadsheet used a rather unsophisticated code, he could look at the encoded file and work out the code key. That programmer wrote an application that would automatically work out the code key and decode the entire spreadsheet. At that point, the programmer notified the spreadsheet manufacturer that their encryption function had a security bug, and should not be used. He expected the manufacturer to pass this information on to their customers and issue a revision of the program with a less-easily-broken encryption feature.
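The attack described above is a classic known-plaintext attack. As a rough illustration (the actual spreadsheet format and cipher are not public, so the file header, key, and function names below are hypothetical), here is how knowing a few bytes at a fixed location defeats a simple repeating-key XOR scheme of the kind many early applications used:

```python
# Sketch of a known-plaintext attack on a hypothetical repeating-key
# XOR cipher. The header, key, and data are illustrative assumptions,
# not the real spreadsheet's format.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key (same operation encrypts and decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def recover_key(ciphertext: bytes, known: bytes, offset: int, key_len: int) -> bytes:
    """If we know the plaintext bytes at a fixed offset, XORing them
    against the ciphertext at that offset reveals the key bytes."""
    key = bytearray(key_len)
    for i in range(len(known)):
        pos = offset + i
        key[pos % key_len] = ciphertext[pos] ^ known[i]
    return bytes(key)

# The file format always stores this header at a fixed location.
header = b"SPREADSHEET v1.0"
secret = header + b" Q3 salaries: confidential"
key = b"weak"

ciphertext = xor_crypt(secret, key)
recovered = recover_key(ciphertext, header, offset=0, key_len=len(key))
print(xor_crypt(ciphertext, recovered))  # decodes the entire file
```

Because the known header is longer than the key, every key byte is exposed, and the whole file falls at once. This is exactly why fixed, predictable content at known offsets is fatal to a weak cipher.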
The spreadsheet manufacturer tried to solve this problem by security through obscurity. They threatened the programmer with a lawsuit or criminal action if he revealed the method of breaking the code. Because another programmer might figure out the method given only the clue that the code had been broken, the spreadsheet manufacturer also threatened lawsuit or criminal action if the programmer even told anyone else that their encryption function was breakable.
Security through obscurity is always a bad idea. In this case, the manufacturer assumed that nobody as smart as that particular programmer would come along for a while, and thus their customers would be secure if this one programmer could be dealt with. However, they did not consider that someone else might have already broken the spreadsheet code without telling the manufacturer, and might already be using the technique to eavesdrop on some rich corporation's secrets.
Many software manufacturers hide bugs that impair the security of programs, or even entire operating systems, without knowing whether some outsider has already found and exploited these bugs. The only proper course for a software manufacturer is to issue a software update as soon as possible after a problem is found, and to inform all customers that the update must be installed to correct an existing security problem. Until more manufacturers understand that security through obscurity is a fallacy, you should consider that popular computer operating systems, applications, and cryptography programs are presently compromised. Do not rely on the security features of these systems.
One exception to the above is Open Source operating systems such as Linux and FreeBSD, and cryptography programs such as GNU Privacy Guard. Because the developers of these systems publish all of their source code for others to read, they can't rely on security through obscurity. The publication of source code actually improves security, because the program or operating system can be peer-reviewed by anyone who cares to read it. Many security bugs that are overlooked in other operating systems have been caught and repaired in Linux, because of its extensive peer-review process.
National governments attempt to use security through obscurity. For example, the United States Government forbids the export of strong encryption software, seemingly under the assumption that people in other nations aren't smart enough to write similar software! A copyright law now in the U.S. Congress attempts to legislate security through obscurity by banning code-breaking tools, and placing a half-million-dollar penalty on the act of code-breaking. This would severely cripple computer security, because the only way to tell a good code from a bad one is to attempt to break it.
Here's the test that should be applied to all of your cryptography software, the applications you use for privileged data, and the operating systems on which you run those programs. This test is already used by knowledgeable cryptography manufacturers like RSA Data Security. First, publish the source code to your program, or, in the case of a cryptography program, publish complete details of the encryption algorithm so that a programmer can understand exactly how the code works. Encourage programmers to study your system and to attempt to break it. Only when a program has been publicly reviewed this way, and when people have tried to break it and have failed, can you be assured that it's useful for concealing your secrets.
Scientists who review codes will rarely call them unbreakable. There are only a few special circumstances in which an unbreakable code is even theoretically possible. Instead, they will tell you that a good code can't be broken in less than decades using the most powerful computers available today, and that is what makes it practical for you to use. Faster computers are constantly being developed, and we are learning more about computer security every day. To assure the continued integrity of your system, you must continue to encourage people to break it as the process becomes "easier". Eventually, computers become powerful enough to break a particular code, and that code must be retired to make way for a more difficult one.
Some time ago, programmers hit upon the fact that they could couple thousands of momentarily idle workstations together over the Internet and make them all work on the same problem simultaneously. By doing this, they could create "virtual" supercomputers, at low or no cost, more powerful than the billion-dollar supercomputers in government code-breaking agencies like the U.S. National Security Agency. These networks, run by thousands of amateurs guided by a few cryptography scientists, are now able to make short work of breaking codes like the old 56-bit DES, once recommended for business use by the U.S. Government. As the news of these codes being broken is made public, their retirement is forced in favor of more difficult-to-break versions. This is exactly the work that the circumvention law would ban, and the result would be that people would continue to use obsolete codes long after they became breakable.
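The arithmetic behind such distributed key searches is simple. As a back-of-the-envelope sketch (the machine count and per-machine rate below are illustrative assumptions, roughly in line with late-1990s hardware, not figures from any particular project):

```python
# Rough estimate of how long a distributed network of idle workstations
# needs to exhaust the 56-bit DES keyspace. All rates are assumptions
# for illustration.

KEYSPACE = 2 ** 56            # total number of 56-bit DES keys
MACHINES = 100_000            # idle workstations on the network (assumed)
KEYS_PER_SEC = 1_000_000      # keys tried per second per machine (assumed)

total_rate = MACHINES * KEYS_PER_SEC          # combined keys per second
worst_case_days = KEYSPACE / total_rate / 86_400
average_days = worst_case_days / 2            # key found halfway, on average

print(f"Worst case: {worst_case_days:.1f} days")
print(f"Average:    {average_days:.1f} days")
```

Under these assumptions the entire keyspace falls in under ten days, and the right key turns up in about half that time on average. Doubling the key length to 57 bits doubles the work; each additional bit doubles it again, which is why retiring a code means moving to substantially longer keys, not slightly longer ones.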
Copyright 1998 Bruce Perens. Contact firstname.lastname@example.org for use permission. Trademarks are the property of their respective owners. This editorial is not Open Source. All of the software I write for Linux is Open Source. With editorials, the contents are mostly opinion and I want to have more control over them. For example, I might say something stupid, and then I'd want to take it back and not have it copied and passed around forever! So don't give me a hard time about my editorials not being Open Source. - Bruce