Computer Security, The Next 50 Years

bariswheel writes "Alan Cox, a fellow at Red Hat, gives a short-and-sweet talk at the European OSCON on The Next 50 Years of Computer Security. Implementations of modularity, Trusted Computing hardware, 'separation of secrets,' and overcoming the challenge of users not reading dialog boxes will be crucial milestones as we head into the future. He states: 'As security improves, we need to keep building things which are usable, which are turned on by default, which means understanding users is the target for the next 50 years. You don't buy a car with optional bumpers. You can have a steering wheel fitted if you like, but it comes with a spike by default.' All of this has to be shipped in a way that doesn't stop the user from doing things."
  • Haskell. (Score:2, Informative)

    by Anonymous Coward
    We will likely see software security improve once languages like Haskell and Erlang are more often used.

    There are, of course, some security issues that are independent of the language used. Some are inherent to protocols, for instance. However, buffer overflows and so forth are a thing of the past when using a language with proper memory management.

    Security glitches caused by basic concurrency errors are also avoided when using a language such as Haskell, which can automatically parallelize computations.
    • Re:Haskell. (Score:5, Informative)

      by PsychicX ( 866028 ) on Monday May 08, 2006 @02:41AM (#15283806)
      More importantly, the security models currently used in the kernel are broken, and we can formally prove that they are inadequate. Academic research in this area has been extremely productive, but there are major barriers to entry in the commercial world for the obvious reasons.

      At the moment it looks like microkernel architectures (real ones, none of this hybrid stuff) coupled with capability-based security systems should be able to provide real, formally verifiable security. As with most things there are a handful of practical barriers to overcome (primarily performance related), but another 5-10 years and those problems should be sorted out.

      For a more in-depth discussion of capability systems, see the wiki page [wikipedia.org], and this essay by Dr. Jonathan Shapiro [eros-os.org]. (And to be perfectly honest, he's a professor of mine and my views are colored as such.)
      • As with most things there are a handful of practical barriers to overcome (primarily performance related), but another 5-10 years and those problems should be sorted out.

        I would hardly call more powerful hardware being made available 'sorting out' the problem.
      • There is more than one OS.

    • Re:Haskell. (Score:3, Insightful)

      by Detritus ( 11846 )
      We will likely see software security improve once languages like Haskell and Erlang are more often used.

      How long are you willing to wait? Plenty of people still use FORTRAN and LISP, and C/C++ will probably outlive many of us. Short of government regulation, I'm pessimistic about the chances of any major migration to a fundamentally new language. The economic factors strongly favor more of the same.

    • Re:Haskell. (Score:3, Insightful)

      by ajs318 ( 655362 )
      Oh, please. If you have to rely on a programming language to keep you from doing "dangerous" things, you have already lost.

      If the language really doesn't enable you to do "dangerous" things, then it's in all probability computationally incomplete. {Of course, any computer with finite memory and hard drive space is technically computationally incomplete ..... the question is, to what extent, and is it likely to have a detrimental effect on real-life applications? Does the newest version of ADA allow a
  • Educating users (Score:5, Insightful)

    by reldruH ( 956292 ) on Monday May 08, 2006 @01:31AM (#15283657) Journal
    What the article is basically saying is that we have to teach people how to use their computers. >85% of all the computer problems I encounter are PEBKAC (Problem Exists Between Keyboard And Chair). It's like the old saying goes: make something idiot-proof and the world will make a better idiot. If people just learn how to use their computers (you shouldn't download exe's from people you don't know, a firewall is a good thing to have, ActiveX controls aren't safe and your default response shouldn't be to install them no matter what IE says) a huge number of problems would be eliminated. Like it or not, users are the biggest computer problem today. The problem shouldn't be usability, it should be user-ability.
    • Re:Educating users (Score:3, Interesting)

      by jkrise ( 535370 )
      If people just learn how to use their computers (you shouldn't download exe's from people you don't know, a firewall is a good thing to have, ActiveX controls aren't safe and your default response shouldn't be to install them no matter what IE says) a huge number of problems would be eliminated.

      I can see many practically feasible solutions, if the above is true:

      1. Eliminate all people - that would guarantee security.
      2. Eliminate ActiveX controls and IE - can't see that happening even 50 years from now - D
    • by Beryllium Sphere(tm) ( 193358 ) on Monday May 08, 2006 @02:15AM (#15283751) Journal
      Aviation went through this phase a long time ago. Accidents were called "pilot error" unless the airplane broke up in midair.

      The field of "human factors" recognized that controls and displays need to be designed so that it's possible for a well trained human to get things right even in a hurry. Controls with opposite effects should not be right next to each other. Controls should give meaningful feedback. Important controls should be out in the open where someone can see them.

      The aviation world fixed up the cockpit and many "pilot errors" disappeared.

      You can't apply these lessons too directly to computer security because bad guys are actively trying to trick computer users. Nobody sends pilots email in flight saying "You must pull the red lever immediately to avoid running out of fuel!". But at least it should be easy enough to secure a computer that an employee from a security firm can do it. We're not there yet -- a recent security conference had vendors running open WiFi access points without firewalls.
      • Controls with opposite effects should not be right next to each other.

        So we'll be seeing a great reduction in 'driver error' when the brake pedal is moved away from the accelerator? Actually, this isn't a joke: a New Scientist article [newscientist.com] discusses the possibility of combining the brake and accelerator into one pedal, with completely different foot actions required to trigger the appropriate response. They do mention that accidents are sometimes caused by drivers applying the incorrect pedal.

          What happens if one of the test drivers who drove a car with one of these convenient single pedals gets back into a normal car with two pedals? One moment of forgetfulness, and you've got an instant crash.

          At least, if you go back from an automatic to a stick shift and forget to shift gears or push the clutch, all you get is a stalled engine, but usually no crash.

      • You forget one major difference: Not everyone gets to be a pilot. And if someone wants to be a pilot, he goes through extensive training and tests. This ensures that only people who are mentally and physically able to fly a plane get to do it. This basically eliminates all error sources between console and seat (if you don't count failures due to tiring etc.). It's no coincidence that most plane crashes happen with private and chartered planes. It's simply because those people don't get that high standar
        • " And if someone wants to be a pilot, he goes trhrough extensive training and tests. This ensures that only people who are mentally and physically able to to fly a plane get to do it. This basically extincts all error sources between console and seat (if you don't count failures due to tiring etc.)."

          If anything, this comment supports his point. Despite all of this training, pilot error still occurs. A few years ago I saw a TV show regarding plane crashes. They showed one example of a commercial airliner
          • Yes, but the interface still isn't the solution to the problem. Can you imagine a simpler interface than a pencil and a paper? If it was all about the interface, then everyone would be a DaVinci. Or do you know a simpler interface than a knife? If it was all about the interface, people would never cut themselves and could dice 2 pounds of onions in a minute and prepare Sushi and Fugu. The only thing interfaces can do is make the first steps easier. Everything that comes after that is the capability and will
            • "Can you imagine a simpler interface than a pencil and a paper? If it was all about the interface, then everyone would be a DaVinci."

              That example swings both ways. Pencils typically have erasers.

              "The only thing interfaces can do is making the first steps easier."

              Only thing? No. Interfaces also make common mistakes easier to recover from. This is where things like "Undo", "Recycle Bin", and "[Yes] [No] [Cancel]" come from. That's the point. Shit happens. People make mistakes. Regardless of training,
              • Only thing? No. Interfaces also make common mistakes easier to recover from

                However, some mistakes cannot be recovered from - for example, if you click the "yes" button on the "would you like to install this malware" dialogue. In this case you might be able to use journalling features of the filesystem to undo the damage, but if you've done other things since then you probably couldn't selectively roll back the filesystem changes associated with the malware without rolling back everything else too.

                In this case the UI has to be designed to make unrecoverable mistakes difficult or impossible to do in the first place so the "how do I recover?" problem (almost) never comes up. This is a very hard thing to do unless you want to turn computers into appliances (most people wouldn't like appliance computers since they wouldn't be able to install their favorite software) and becomes even harder when the people who want you to make mistakes (malware writers) are actively trying to trick you into making them.

                One possibility that has been suggested is kind of a halfway-house between computers as we know them now and appliance computers - the OS would require all executable code to be signed by a "trusted party". However, this brings up some serious problems:
                1. Who can be a "trusted party"? Let's say it's the OS vendor; why should I trust Microsoft to guarantee that the signed software is malware-free (especially since they are probably getting paid by the software vendor)? There will certainly need to be stiff penalties for signing software which turns out to be malware.
                2. The inability to run unsigned software could be used to lock out the competition - for example, Microsoft could refuse to sign OpenOffice.
                3. How much would this "signing service" cost? You can bet that thoroughly inspecting the software to ensure it really isn't malware is going to be very expensive, so you've just locked out all the small vendors who can't afford it.
                4. How are you going to run code you compiled yourself since it won't be signed by the trusted party? This could either be FOSS code that you choose to compile yourself, or your own personal code.

                These are certainly not easy problems. I do, however, feel that the ISPs need to take more action against people running malware infected machines. It seems all too common these days for ISPs to ignore abuse reports, let alone run monitoring software to proactively drop the connection to infected machines.
                The ISPs should monitor people's connections for malware signatures and upon finding an infected host they should drop the entire internet connection until it's fixed (probably redirecting all web requests at a server containing patches and instructions to fix the problem).

                Part of the problem is definitely that most of the malware doesn't actually cause a problem for the owner of the infected machine - they don't know or care that their machine is actively being a spambot. Cause hassle for the owners of infected machines and they might actually pay attention to the security of their own systems (viruses were considered a much bigger deal back in the days when their payload often trashed your data).
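
A minimal sketch of the "only run executables signed by a trusted party" idea discussed above, using an Ed25519 signature check in Python. The names (signing service, OS loader) are hypothetical stand-ins; real code-signing schemes add certificate chains, revocation and policy on top of this.

```python
# Sketch of an OS that only runs executables vouched for by a trusted signer.
# Hypothetical names; real code signing is far more involved.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signer_key = Ed25519PrivateKey.generate()   # held by the "trusted party"
os_trusted_key = signer_key.public_key()    # shipped with the OS

def sign_executable(binary: bytes) -> bytes:
    """Signing service: vouch for a binary after (hopefully) inspecting it."""
    return signer_key.sign(binary)

def os_may_run(binary: bytes, signature: bytes) -> bool:
    """OS loader: refuse anything whose signature does not verify."""
    try:
        os_trusted_key.verify(signature, binary)
        return True
    except InvalidSignature:
        return False

program = b"\x7fELF...pretend this is an executable"
sig = sign_executable(program)
print(os_may_run(program, sig))                        # True
print(os_may_run(program + b"injected payload", sig))  # False: tampered binary
```
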
      • The field of "human factors" recognized that controls and displays need to be designed so that it's possible for a well trained human to get things right even in a hurry.

        Agreed 100%. The problem is that a lot of people aren't ready to admit that secure computing is going to require learning... An example: E-mail applications with working crypto can (and hopefully will) be a lot easier to use than they are now, but they will always require some understanding of the underlying system -- otherwise social eng

      • by Tom ( 822 ) on Monday May 08, 2006 @03:36AM (#15283912) Homepage Journal
        The field of "human factors" recognized that controls and displays need to be designed so that it's possible for a well trained human to get things right even in a hurry.

        And there's your problem right there.

        a) Most computer users are not "well trained", even by the widest possible stretching of the definition
        b) For a pilot, flying the thing is his main concern at the time. He might be in a hurry, but he wants to do things, and do them right. For a computer user, security is a nuisance, a distraction from his actual work. He doesn't care or bother, and if you popped up a dialog box saying "do you want the system to stop bothering you with security warnings and just allow anything no matter the risk?", I'd say 80% or so of users would click "yes".
      • The aviation world fixed up the cockpit and many "pilot errors" disappeared.

        Pilots are also very well trained individuals with a certain personality type.

        Not reading dialog boxes? Anyone who has ever used an OS like Windows knows the reason they don't read them: they are bombarded with stupid ones all the time.

        Although it's almost a historical part of psychology, like Jung and Freud, I'm a big fan of "signal detection theory".

        It comes (maybe not directly) from Descartes' notion of "clear and distinct".

        I
      • by cquark ( 246669 )
        Usability is a growing area of research within computer security. The SOUPS [cmu.edu] conference focuses on that subject. The SOUPS Blog [usablesecurity.com] discusses user interface changes that would help computer users realize that bad guys are attempting to trick them, like using per-user labels or backgrounds, so that phishers can't emulate a site since it differs for each user in ways the phisher can't predict.

        If you design user interfaces to secure applications, I highly recommend the O'Reilly book Security and Usability. It'
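
A rough sketch of the per-user label idea mentioned above: the genuine site echoes back a phrase the user chose at enrollment before asking for a password, so a phishing page that doesn't know the phrase is easier to spot. The names and the in-memory store are hypothetical; a deployed system would also have to avoid leaking the phrase to anyone who merely types a username.

```python
# Sketch of a per-user anti-phishing label; deliberately simplified.
import secrets

user_labels = {}  # username -> phrase chosen at enrollment

def enroll(username: str, phrase: str) -> None:
    user_labels[username] = phrase

def login_step_one(username: str) -> str:
    # Return a random phrase for unknown usernames so attackers can't probe
    # which accounts exist.
    return user_labels.get(username, secrets.token_hex(4))

enroll("alice", "purple llama eating toast")
print("Your phrase:", login_step_one("alice"))
print("Enter your password only if the phrase above is the one you chose.")
```
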

    • Re:Educating users (Score:3, Insightful)

      by mcrbids ( 148650 )
      If people just learn how to use their computers (you shouldn't download exe's from people you don't know, a firewall is a good thing to have, ActiveX controls aren't safe and your default response shouldn't be to install them no matter what IE says)

      You write these things as though they were long-established rules of convention that could be written down and shared, and accepted because of their ubiquity and long standing as good rules.

      But go back just 10 years. The Internet was fresh and new. A firewall
      • Re:Educating users (Score:2, Insightful)

        by reldruH ( 956292 )
        The issue I have with your comment is that knowing that you're not going to be having these same problems in 2, 5, or 10 years doesn't relieve you of the responsibility to solve them today. If nobody worries about solving today's problems because they're not tomorrow's problems, we never get to the next set of problems. Windows, IE, and ActiveX all still have a huge market share. Just because you (a Linux-using technophile) don't have those problems doesn't mean the rest of the world still doesn't and still won'
    • Re:Educating users (Score:5, Insightful)

      by R3d M3rcury ( 871886 ) on Monday May 08, 2006 @02:28AM (#15283778) Journal
      Well, I'd make the argument that the problem exists between the keyboard and chair of the software developer--not the user.

      Comments like yours remind me of the automobile industry of the 1960s. The problem, they insisted, was not with the cars but with the people who drove them. There was no way to make cars safe and the only hope was better driver education. Of course, the reality is more that they didn't want to devote the time, effort, and money to making cars safer because they'd see no real benefit in regards to sales. And to a certain degree, they were right. It actually took the government to come in and mandate safety standards for cars.

      To me, blaming the user is a typical programmer cop-out. "Well, if the user was as smart as me, they wouldn't have these problems." Yeah, I too have seen users do the stupidest things with my software. The difference is that I try to find out what they were thinking when they did this and then work to make sure that others aren't inspired to do the same thing.
      • Re:Educating users (Score:2, Interesting)

        by reldruH ( 956292 )
        Good point, and I might have to plead guilty to jumping the gun. I've written software that was definitely too difficult for anybody who wasn't me to figure out on their own; I think most software developers have. Responsibility is two-fold, it falls on both users and programmers. Programmers have to take the time to make sure their software is intuitive and not confusing, but users have to learn the basics. I can't tell you how many of my friends (really smart people) can't download a file, then find it la
        • Re:Educating users (Score:5, Insightful)

          by R3d M3rcury ( 871886 ) on Monday May 08, 2006 @03:27AM (#15283891) Journal
          "I can't tell you how many of my friends (really smart people) can't download a file, then find it later. They just click OK, they don't know what a file extension is."

          Exactly. How many tech support stories have we all heard that started with customers who claimed to be very smart and know all about this stuff, and then made some stupid mistake? Heck, I can plead guilty to that (Oops! The firewall is blocking FTP--that's why I can't get to your FTP site...).

          But some of it comes from the fact that there are things that we don't need to know, but the computer insists that we do know. File extensions are a great example. What does the extension "jpg" tell me? That it is an image encoded in JPEG. What is JPEG? Why do I care what it's encoded in? Why is that different from an image with the extension "tif"? They both look like the same image to me. Why do I need to know whether it's JPEG encoded or TIFF encoded? Why can't it just be a picture?

          Well, because some programmer decided it would be easier to detect what kind of file something was if we gave the computer a hint. Thus, if the file extension is "jpg", the program uses a JPEG algorithm to extract the image. If the file extension is "tif", the program uses a TIFF algorithm. This is a lot easier for the programmer and faster for the computer than reading, say, the first four bytes and looking for FFD8FFE0 and saying, "Ah! It's JPEG," or looking for "II" or "MM" in the first two bytes and saying, "Ah! It's TIFF!" So the file extensions "jpg" or "tif" really aren't there for the user's benefit at all--they're there to make the programmer's life easier.

          But what about all these other three letter extensions, like "gif", "pgm", "psd", "bmp"? How is the user supposed to remember this alphabet soup of extensions and what they all mean? Why can't they just hide them? Because then the user won't see the "exe" that denotes a program and may inadvertently run a program which does nasty things.

          See? File extensions seem basic to us, but they're pretty superfluous to most people.
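
For illustration, a small sketch of the alternative the parent describes: sniffing an image's type from its leading "magic" bytes rather than from the extension. The signatures listed are the common ones (the comment's FFD8FFE0 is the usual JFIF flavour of JPEG); this is a toy detector, not a complete list.

```python
# Toy detector: identify an image format from its leading "magic" bytes instead
# of trusting the file extension.
MAGIC = [
    (b"\xff\xd8\xff", "JPEG"),  # FFD8FF...; FFD8FFE0 is the common JFIF variant
    (b"II*\x00", "TIFF"),       # little-endian TIFF
    (b"MM\x00*", "TIFF"),       # big-endian TIFF
    (b"\x89PNG", "PNG"),
    (b"GIF8", "GIF"),
]

def sniff_image_type(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, name in MAGIC:
        if head.startswith(magic):
            return name
    return "unknown"

# sniff_image_type("holiday_photo") works no matter what the file is named.
```
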
          • File extensions are primarily there for program associations. 99% of programs DO check that the file format is correct before processing; many programs, such as image viewers, media players and archivers, can automatically use the correct decoding algorithm, even for files with incorrect extensions.

            File associations benefit the user greatly, i.e., they do not have to guess which program will open which file. We simply cannot abolish file extensions (or whatever metadata is used for association).

            With 'unsafe' fi
          • Ok, so how do you propose a user sees the difference between PNG, GIF and JPG?

            Thing is, the file format IS important:

            PNG: Photos would be awfully large.
            GIF: Photos would look like crap.
            JPG: Text, webcomics, etc would look like crap.

            Programs are perfectly capable of handling the difference without the extension. Now tell me: without an extension, how does the user figure out which format it is, without using some specific tool or opening each image?

            Sure, grandma doesn't think she cares whether it's JPG or PNG
          • Why do I need to know whether it's JPEG encoded or TIFF encoded? Why can't it just be a picture?

            I've been saying this for years. Hopefully, this is the beginning of the end of the codec/container alphabet soup.

            I know this stuff pretty well, and it's difficult for me to convince my Mac to play a movie from time to time.

            There must be at least 30+ combinations of audio/video codecs and container options out there, yet they all do the same damn thing: display audio and video on my computer.

            Even standard mpegs
      • Well, I'd make the argument that the problem exists between the keyboard and chair of the software developer--not the user.

        I agree this is (often) the case. A lot of really crappy buggy software exists that users will (and arguably should) trust - like, say, the software drivers for your new scanner/printer combo. The fact that it may hog your resources for no good reason is not something the user would expect. Perhaps this example is not a security threat, but if that software had bugs or some other way p
      • by tehshen ( 794722 ) <tehshen@gmail.com> on Monday May 08, 2006 @04:14AM (#15283965)
        I'm with you here. My sibling post (correct term?) and you make nice points about lazy programmers, so I'm going to go and bash some bad designers, too.

        I've found that Windows and its applications are really, really stupid with the way they handle dialog boxes. Kind of off-topic, I know, but since most security issues are luser error, I can guess that most of those are caused by blind click-click-clicking Yes to dialog boxes.

        I get a dialog box when I try to delete a file. I get several dialog boxes whenever a program crashes - something about an error report. At my school, they've managed to set up Word so you get three dialog boxes when you open it: one asking you to disable macros (to which the average user goes What?), another telling you that macros have been disabled (yes, that's why I clicked that button) and another telling you that there's a window open.

        With so many dialog boxes around, most of them unnecessary, I don't blame the average user for ignoring the important ones. If you press Yes, the nasty evil dialog box will go away. Sooner or later the time comes when you install some spyware trying to get rid of the dialog box.

        And what has Vista done? Put even more of them in. Quoth even Paul Thurrott [winsupersite.com]: The problem with UAP is that it throws up an unbelievable number of warning dialogs for even the simplest of tasks. That these dialogs pop up repeatedly for the same action would be comical if it weren't so amazingly frustrating. It would be hilarious if it weren't going to affect hundreds of millions of people in a few short months. It is, in fact, almost criminal in its insidiousness. Gah, showering the user with more dialog boxes is useless, as they ignore them all anyway!

        I'm on a roll here. What else?

        When I want to Save a document, I go to the button marked Save. At least, I do on Gnome and OS X: Windows likes to have buttons called "Yes", "No" and "Cancel" instead. So instead of doing what I want (Saving), I have to read the dialog to find out which button Saves my document. And most people wouldn't even try to read it; they'd just click Yes and hope it was the right one. Oh, and the dialog text is often in a small font with no discernible main point about what it does.

        Windows dialog boxes are obtrusive enough that people would rather make them go away (think: click Yes) than working out what they do. Here's [xvsxp.com] an example of a Mac one - I can tell what each button does before reading, and even if I have to read, there's some nice bold text so I don't have to read it all. Here's [xvsxp.com] the worst example of a Windows one I could find. Note none of the above things that the Mac does right. This isn't the best example, I know, but it points out where Windows fails best.

        I reckon you could've eliminated a fair few spyware installs if the "Yes" button was labelled "Install Software", or the "Next" button was labelled "Accept this Licence", or whatever it is. No more "Let's click Yes to make the nasty evil dialog box go away"; instead some people will think "Do I really want to install this software?" or "Do I really want to run this program?". It makes people think, and thinking is good when you're trying to make decisions.

        Oh, and:

        "How dare you try to type at another window when I am here, infidel scum!"

        "And Vista dyes the rest of the screen black, just in case you didn't notice me the first time. See? [winsupersite.com]"

        Where was I? Oh yes, computer security. I don't think it's fair to blame any and all spyware installations on user error. Windows places you on a path above a crevasse with a bicycle, and expects you to pedal to the other side. Sure, you might get blown off by wind (read: security holes in the OS). Many people
        • I always loved the Safe Mode dialog box [umd.edu]. They couldn't just have a button for "Safe Mode" and one for System Restore.

      • The lack of security _is_ a market decision. People still buy Windows, regardless of security problems. The market is not willing to pay for security.
    • (you shouldn't download exe's from people you don't know, a firewall is a good thing to have, ActiveX controls aren't safe and your default response shouldn't be to install them no matter what IE says)

      And what braindead operating system will let normal users install software or tell it that executing those ActiveX controls is safe?
    • It's not users that are the problem, it is inferior technology. Executables from unknown people would be harmless if they were executed under a properly privileged environment; firewalls would not be needed if network resources had a proper security system just like other resources; ActiveX controls are executables, so the properly privileged execution environment is also valid for ActiveX...etc.

      The reasons for security problems are:

      1) inferior programming languages: C and C++ more specifically. The open na
  • by ericdano ( 113424 ) on Monday May 08, 2006 @01:33AM (#15283662) Homepage
    Oh, but we know that Microsoft will be on top of the game. For sure. Absolutely. Windows 2050 will be THE safest, THE most secure version of Windows yet.
    • Available in 2075. No, really this time. We're serious.
    • by Council ( 514577 ) <rmunroe@gFREEBSDmail.com minus bsd> on Monday May 08, 2006 @01:53AM (#15283708) Homepage
      Oh, but we know that Microsoft will be on top of the game. For sure. Absolutely. Windows 2050 will be THE safest, THE most secure version of Windows yet.


      I was really surprised to see someone arguing that Windows does kernel security really well [informit.com], and that the problem is that people don't want a detailed permissions control system so at all levels they enable everything. But they've provided a good security architecture as far as thread control goes -- it's just that coders down the line are ignoring it.

      Of course, how many of those 'down-the-line coders' are at Microsoft itself?
      • If it was really done well, coders down the line wouldn't have to worry about ignoring or using it. If your security depends on various pieces of software implementing a specific security model, your security is already doomed.

        Security configuration should depend entirely on the OS and user configuration, with security settings for each application created by the user or a reasonably safe default chosen by the OS.
      • Although I agree that Windows provides a security model which offers very fine-grained control, the complexity of the system is what drives programmers away from it. Even Microsoft falls victim to its own complexity most of the time.

        What I would like to see from operating systems is the concept of 'ring protection' as in CPUs: each executable shall belong to a specific privilege ring, and the higher the ring number, the less privileges the executable shall have. Most problems would go away with this mechanis
    • As long as you only compare a system with prior generations, it would be a VERY big sign of incompetence if a successive generation sucks even more than anything that came before.

      Then again, there's WinXP...
  • Interesting points (Score:5, Insightful)

    by mikesd81 ( 518581 ) <<mikesd1> <at> <verizon.net>> on Monday May 08, 2006 @01:35AM (#15283666) Homepage
    and overcoming the challenge of users not reading dialog boxes,

    That's true. So true. Tons of times I just clicked yes without reading or reading fully and then later on down the road...oops.

    I updated Outlook Express for my mom one time and it automatically blocked attachments, confusing her. And me, until I found where to uncheck that.

    The computer can be taught to enforce security policies that the users themselves are unlikely to uphold, given their propensity to ignore advisories and software dialog boxes. Software engineers must build in security that is active by default, and they must understand the user so that security tools are actually used.

    But also keep in mind who the user will be. Some advanced users would probably not need/want the security by default. New users or non-advanced ones would need it. Security would need to be adaptable.

    In a comical way maybe the system can say "well you hosed /etc once, do you wanna do it again?"
    • by Verloc ( 119412 )
      Some advanced users would probably not need/want the security by default.

      I think that advanced users will be able to change their settings until they find a sweet spot. Default protection protects against my mother, who may not look dangerous but is involved in multiple DDoS attacks across the eastern United States.

      I personally think the solution is some sort of PSA about the horrors of viruses before opening the file. You know, hospital equipment going down, people going crazy, real 'reefer madness' he
    • how to solve this:

      have 5 buttons at the bottom of the dialog box, labeled one to five (in words, not numbers). In the dialog text, state "to continue press button XXXX, or any other button to cancel".

      which means you actually have to read the text to continue. So long as dialogs are suitably verbose, the "button to press" text will be in a different location each time.

      an alternative option is to tie it in with sudo permissions, a dialog could pop up explaining that admin rights are needed to proceed and show a p
      • If you want to type something, make them have to type "yes" or "no" to continue.

        Not just "y" or "n" but "yes" or "no".

        It's the same thing, and better than entering "bananas". And no, I'm not being a jerk, I got what you meant. I was just adding something real world to it :)
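
A toy sketch combining the two suggestions above: the confirmation word is randomised and spelled out in the dialog text, so the user has to read (and type) it before anything risky proceeds. Function and wording are made up for illustration.

```python
# Toy confirmation prompt: the required word is random and only appears in the
# dialog text, so it cannot be clicked (or typed) through without reading.
import random

WORDS = ["one", "two", "three", "four", "five"]

def confirm(action: str) -> bool:
    required = random.choice(WORDS)
    print(f"About to: {action}")
    print(f"To continue, type the word '{required}'. Anything else cancels.")
    return input("> ").strip().lower() == required

if confirm("install unsigned software from an unknown publisher"):
    print("Proceeding...")
else:
    print("Cancelled.")
```
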
    • I HATE dialog boxes. Get rid of them I say - find a different standard way to present the information.

      Like yourself, I mindlessly click through dialog boxes, occasionally missing important information because 90% of the time, Windows dialog boxes offer me nothing important or new so I automatically "Ok" them.

      I got one of those Mac Mini's when they first came out - my first experience with a Mac and it was only 3 months later that I realised the main reason I found it much friendlier to use was that it
    • I find the `advanced users' claim rather hollow. I know one end of a computer from another --- I've been using Unix boxes since 6th edition, and I was running TCP in a wide-area environment in the mid eighties. And I know one end of computer security from another --- I've managed a BS7799 certification programme, and I handle security evaluation for telecoms products in major networks. But I wouldn't trust myself to pick up a random release of Solaris (hey, I've _used_ Sun 2s) or Windows or OSX or Linux
    • There are a couple of things that could be done to improve dialog boxes (which, incidentally, are also good ways of improving communication with management):
      1. Instead of a paragraph of text, provide a short bulleted list of important key points, i.e.:
        • The sender of this E-mail is not in your address book
        • This type of attachment is dangerous
        • Running this attachment may result in viral infection
        • Running this attachment is not recommended

        Something about the organization of bulleted lists makes them infinitely more read

  • by Umbral Blot ( 737704 ) on Monday May 08, 2006 @01:37AM (#15283670) Homepage
    This article seems to focus more on security by design, which is of course important. However, security can also be implemented at the language level, for example Java's sandbox. I predict that over the next 50 years languages will improve to prevent programmers from making "stupid" mistakes such as copying user input directly into a buffer that will become an HTML document. Tainting already solves some of these problems, but there is still work to be done. (For example, to discourage programmers from creating empty "de-tainting" routines when they don't have time to do it properly, de-tainting should really be done by libraries and by the language alone, but I digress.)
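
As a small illustration of the "de-tainting belongs in the library" point, a sketch where user input is escaped by a library call at the moment it is rendered into HTML, rather than by ad-hoc application code; Python's html.escape stands in for whatever a given language or framework would provide.

```python
# Escaping handled by the library at render time, not by ad-hoc "de-tainting"
# scattered through the application.
import html

def render_comment(author: str, body: str) -> str:
    # The library call neutralises <, >, & and quotes, so a forgotten manual
    # step cannot reintroduce script injection.
    return f"<p><b>{html.escape(author)}</b>: {html.escape(body)}</p>"

print(render_comment("mallory", '<script>alert("pwned")</script>'))
# <p><b>mallory</b>: &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;</p>
```
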
    • Safer languages are great if you want to write something with pace, but they're really just hiding the underlying problem: that developers produce these buffer overflow problems when under pressure to deliver (either pressured by a deadline or by pure enthusiasm).

      I personally do not feel that using bigger safety nets makes for a better circus act.
  • by caitsith01 ( 606117 ) on Monday May 08, 2006 @01:42AM (#15283682) Journal
    Am I alone in finding this kind of topic - "The state of X in the year 2050" - really, incredibly pointless?

    Given that no-one has been able to make accurate predictions about computer technology over a 5-year horizon, what possible basis is there for thinking that anyone can predict what the state of technology will be in 50 years time? By then we may be keeping our data secure by storing it in a hidden pocket of space-time in a parallel universe 10,000,000 years back in time and retrieving it through a wormhole when required. Or civilization may have collapsed, leaving us with the 'pointy rock tied to a stick' device as our best form of security.

    My point is: no-one knows. It's pointless to predict this far into the future.

    I would prefer people stick to making these kinds of predictions about large, relatively predictable fields (e.g.: the climate; oil supplies; population; tectonic plate movement) and leave their prognostications about ridiculous things like 'computer security' to something like a 2-10 year window.

    Or we could, you know, read some *news* instead of some random predictions.
    • Good points caitsith. Though if you listened to the speech I posted today, Alan is just bringing up issues that'll be important in the future; he's not making any predictions like Cringely does. The next 50 years...sure, that's a bit of a stretch, but the issues he brings up are fundamental, across the board and theoretically significant. Thanks for the comment -baris
    • Am I alone in finding this kind of topic - "The state of X in the year 2050" - really, incredibly pointless?

      Probably not, but I disagree. It isn't pointless. It isn't immediately practical, yes. But there are many good reasons to dare a guess into the far future. And that doesn't mean you can't at the same time plan for next year.

      Also, there are things that will change a lot and in ways we can not yet imagine within the next 50 years. And there are things that will very probably not change very much, like t
    • A computer security conference in 1956 would have been mainly about guarding the building the thing was sitting in! Actually, you STILL need physical security, so maybe that's not so dumb....
    • Excellent point, but I thought I'd point out that the fields you listed as "relatively predictable" don't seem all that predictable to me.

      Climate - We can't really predict the weather a week from now. I'm not all that convinced that we have any idea what's going to happen in 1, 10, or 100 years. (Preparing for flames from the global warming crowd...)

      Oil Supplies - You'd stand to make quite a lot of money if you actually knew what would happen to oil supplies in the future. There's too many variables, though
  • by bblboy54 ( 926265 ) on Monday May 08, 2006 @01:47AM (#15283690) Homepage
    ....and overcoming the challenge of users not reading dialog boxes....


    I have to agree that this is a serious concern, and as a tech I often want to blame the stupid user since I deal with them frequently, but on the other hand, can you really blame them? In any given day, an end user sees countless dialog boxes, and our minds are designed to filter out things that are annoying. Instead of "Hey, your email wasn't sent" you get 3 dialog boxes first that have no meaning. Of course, there is the next-next-finish epidemic as well. Does anyone really read any options anymore? We all just go for the next button until it turns into a finish button. There are 2 huge problems with this. The first is that mixed in with all these stupid notices, there are important messages that go unnoticed. The second issue is that this is something that spyware companies thrive on for legalities.... in the middle of those next-next-finish games is the little line that signs your computer over to the dark side.
    • Firefox handles this problem with its plugin installer by having a timer on the Install button; I think they should use a similar technique for everything important.
      • I disagree. It's a hack that doesn't fix a problem, but does create another one.
        • I'm willing to bet it won't prevent a significant amount of malware installations -- it's still just another OK button to press. People become illiterate when shown a security notice; showing it longer won't help.
        • Although it might help a few people, it is an annoyance for everyone -- another example of a program "knowing" what I want better than I do...
  • by Quirk ( 36086 ) on Monday May 08, 2006 @01:50AM (#15283699) Homepage Journal
    "In the long run we'll all be dead."

    J.M.Keynes [wikipedia.org]

  • by gmuslera ( 3436 ) on Monday May 08, 2006 @01:52AM (#15283705) Homepage Journal
    The future is not what it used to be. All the "next 50 years" predictions made 50 years ago (about almost everything) got some things wrong and some things right, but if you read them you don't feel like you're living in that world (we have some sort of flying cars in a way, or civilians in space, to give two examples, but not for everyone, or for every day).

    I wonder how many of these issues will become obsolete in only 10 years, not because the problems stop existing, but because the terms of the problems change until they mean little to normal people. Today computer security is a tangible problem; even normal users have to worry about viruses, trojans, worms, spyware, not having trivial keys, etc. But how much of that will remain a problem for users 20-50 years from now, and how will it be perceived?

    We could be here discussing war strategy with sticks and stones while in 50 years (to exaggerate a bit :) they use rayguns, but some of the things discussed now could remain valid then, some could work if we must fall back to something similar to sticks and stones, and other things could have no meaning anymore.

  • by Opportunist ( 166417 ) on Monday May 08, 2006 @01:53AM (#15283707)
    As a responsible parent, you don't give your kids alcohol. As a responsible driver, you don't drive 100mph near a school. And there are actually laws such that, if you happen to be careless and negligent, you get fined or worse.

    Only when it comes to computers and the 'net can you be as irresponsible as you want without getting any negative feedback from the feds. You may click on every "please click here to become a spambot" message. You may install every kind of adware, while at the same time ignoring or even blocking updates for your system (and thus becoming the primary target for exploits like the recent WMF disaster). Nobody will hold you accountable for it. Even if you manage to fall for some cheap "please insert all your personal, bank and credit card info, and send us a copy of your passport" scam, more often than not your bank will cover for you.

    Why is ignorance and irresponsibility an excuse when it comes to computers and the 'net? Because judges and legislators can't make sense outta it? At least, given some laws I'd get that impression.

    Security starts with teaching the users, and most of all teaching them responsibility. Not better tech. You can have the best high-security door; if you fall for the cheapest con job and let anyone in, you'll still have some things missing after every visit.
    • Why is ignorance and irresponsibility an excuse when it comes to computers and the 'net?

      As a general rule ignorance and irresponsibility in the use of a computer is unlikely to kill someone.
      • True. Note to self: Use some DDoS sheep to DDoS the next hospital.

        Quite seriously now. It is not YET possible to kill directly with a computer. But the way is paved. You can commit very serious crimes that outrank any damage ever done to the *AAs by magnitudes.

        But since the victims are usually "normal" people, it just doesn't hit so close to home for those who could change something there. The threat is here, though. While not killing, it can definitely ruin your life.
  • Dialog boxes (Score:2, Insightful)

    I always found the term "dialog" box to be an amusing misnomer. If they were really dialogs, I suspect the user would rarely have constructive things to say to their computer. On the other hand, monologue boxes would be far too dramatic, with the spotlight and all.
  • by Anonymous Coward on Monday May 08, 2006 @02:32AM (#15283782)
    Remember - if you are going to extend the analogy:

    1) You can't drive a car unless you have proven that you possess a minimum level of competency.
    2) The car has to meet certain standards to be roadworthy
    3) People by and large don't expect others to maintain their car for free
    4) You have to pay the government regularly to be allowed to drive it on the road

    It's either a bad analogy, or a very good one - you pick.
    • The problem is that most of that does not apply if you stay off public roads. At what point does a computer present a hazard to the public?
      • As soon as you connect it somehow to the (public) Internet.

        A virus-infected computer is a danger to other connected computers just as a drunken driver is a danger to other users of the road.

        Markus

    • 1) You can't drive a car unless you have proven that you possess a minimum level of competency.

      Minimum level is right! 40,000+ deaths/year in the US.
      Similar 'licensing' for computers would start with 'This is the mouse', and end with 'Here's how to save a document in MS Word.'

      2) The car has to meet certain standards to be roadworthy

      Ok...you only get your virus updates once a year at inspection time...:)

      3) People by and large don't expect others to maintain their car for free

      I take it you don't hav

  • Did anybody click on the article (yeah, yeah, I know) and actually look at that guy? I respect him, really I do - and the first thing I thought was, "Buddy, you really need to shave."

    He's got to do something about the scrag before someone misidentifies him and his hoary mug ends up on Coast-to-Coast AM's web site. Or worse, someone mistakes him for Saint IGNUcious [stallman.org].
  • And it should also be easy to use.
    Making security easy to use CAN be done.

    Email encryption for example, when you install the mail client, it could generate a public/private keypair automatically and submit the public key to public key servers automatically.

    Then when you send an email, it can automatically look up the public key of the person you are emailing and encrypt the email (unless you tell it not to).

    When explaining it all to the user, don't call it "Encryption", just tell them that if they use this f
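
A rough sketch of the "encrypt by default" flow the parent describes: generate a keypair when the client is installed, publish the public half, and look up the recipient's key automatically when sending. The key server and mail transport are stubbed out as dictionaries, and raw RSA-OAEP stands in for real mail crypto (OpenPGP/S-MIME), which adds hybrid encryption, signing and key trust.

```python
# Sketch of encrypt-by-default mail: keypair generated at install time, public
# half "published", recipient key looked up automatically when sending.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

key_server = {}  # address -> public key (stand-in for a real key server)

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def install_mail_client(address: str):
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    key_server[address] = private_key.public_key()   # "submit to key servers"
    return private_key

def send_mail(to: str, body: str) -> bytes:
    recipient_key = key_server[to]    # automatic lookup, no user interaction
    return recipient_key.encrypt(body.encode(), OAEP)

bob_key = install_mail_client("bob@example.org")
ciphertext = send_mail("bob@example.org", "lunch at noon?")
print(bob_key.decrypt(ciphertext, OAEP))  # b'lunch at noon?'
```
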
  • by Eideewt ( 603267 ) on Monday May 08, 2006 @03:33AM (#15283903)
    I think a less complex interface would do wonders for the PEBKAC angle of computer security. It seems to me that computers try to do much more than the average user wants or needs, which just creates more opportunities to screw up, and also makes the computer seem a lot more intimidating.

    If we were to hide most of what the computer can do, then users could focus on what they really need it to do. As it is, non-technical folk just learn to tune stuff out, which isn't exactly good when we want them to pay attention to security (like just where that attachment came from, and whether that wonderful program they see is going to screw their computer). A normal user doesn't hope to comprehend everything that their computer is doing, so they don't think about the effects of their actions so hard. The computer is a wily and unpredictable beast. How will they know whether it was something that they did that messed the computer up, or whether it did it on its own? Users need to be able to get comfortable with the machine before they'll really worry about it. User interfaces these days are just too much for anyone without an affinity for machines (like many of us here) to come to grips with. They just learn the tasks they need to do and hope the thing doesn't break.

    Most users need to be able to use a word processor, a web browser, and maybe an IM client and music player. Why do computers give them lengthy lists of programs which can be run, windows that can obscure each other and take on funny proportions (I hate those things), zillions of little icons in the tray and even more on the desktop, and why do they sprinkle system settings in with all that? That's a lot of stuff to tune out.

    If I were designing an interface for noobs, I'd get rid of all that stuff.

    I'd have just one menu bar, which would contain at minimum the four essential applications that I mentioned. There would probably also be a couple of popup menus for less frequently used programs (less commonly used office apps, games). Programs would be sorted by function, and the guys writing installers would absolutely not get to create a new submenu for their company, to prevent the mess that any Start Menu will turn itself into after a while.

    Programs would always run full screen. I know there are plenty of slashdotters here who are very upset by that, but this interface wouldn't be aimed at you. You can do whatever you like with your giant monitors. On a screen only a thousand pixels across, overlapping resizable windows are just a complicated waste of time. Most any program will require all the screen real-estate to be useful, so it makes sense to just let them have it.

    My four main apps would not only be launched by clicking their icons; the same icons would also give them focus. There's no reason to duplicate them (I realize that this means those four would have to be MDI apps. Tabs seem like a good solution.). When users want a web browser they'll be able to always click in the same place. Additional apps launched from the menu would just hop into the bar next to them. (This sounds a little like OS X's dock, but I'm not too familiar with it, so I'm not sure how close it is.)

    I might also put in a file manager. It wouldn't display system files, or even hint to the user that they exist. I think it would be search based, but it's way too late at night for me to put serious thought into it. A file manager might not be the best idea any way. If users can just start up their apps and let them handle the file types they know about, then the old "porn.jpg.exe" attack gets pretty much foiled.

    That's about it, really. I think that would accomplish most everything that needs doing for most users. Naturally an admin mode of some kind would be required. I envision a simple one that would allow users to tweak the OS's look and install software from repositories (either online or from CD). Real admins could go yet further. Maybe just a CLI. It doesn't matter much. Anyone with the will and the know-how to muck around with the system's guts will figure out whatever you throw at them.

    Oh, and mouse cursors would be big, because I like them.
    • I think a less complex interface would do wonders for the PEBKAC angle ... If we were to hide most of what the computer can do, then users could focus

      MS tried that. So Outlook Express didn't ask you if you wanted to display HTML in an email, to "present a less complex interface"; it just did it. And ran any scripts or exes. People are suckers for features that look good in the demo, or give it more checkmarks in the magazine surveys. People voted with their wallets for more features, ignoring security or

    • I agree with you. The real problem with computers is that they are not information management systems, but binary data processors. Computers should be elevated above binary data processors, at least from the common user's perspective.

      I will take the filesystem as an example: a user sees millions of files on his computer, with 99% of files having a funny name and icon that tells the user nothing about its use...that is because users are exposed to the details of the computer file system.

      What should the user
  • by PixieDust ( 971386 ) on Monday May 08, 2006 @03:34AM (#15283905)
    The problem with IT security, historically, has been a "Default Allow" approach. This is getting better, but still has a LOOOONG way to go. Things should not be automatically allowed; they should have to be turned on.

    Consider Windows 98/98SE. File sharing is off. And the OS itself was more or less a fairly secure (for its time) OS on a DEFAULT install. Compare to Win2k/WinXP. Default admin shares are open, and in upgrade cases we often have Administrative accounts with NO password, which (with the exception of XP) could log on remotely. XP at least was intelligent enough in its design so as not to allow remote logins with blank passwords for Administrative accounts (UNLESS ENABLED). THAT, my friends, is the correct approach to security. Default = NO!

    Once this has been accomplished, and the general mindset of programmers (and Admins, etc.) when considering security is to assume the user knows NOTHING, and that things just should NOT be permitted without full warning of the consequences (this is where figuring out how to get users to read dialogue boxes comes in handy), security will be much tighter. And let's not forget about vendors and programmers just ignoring security glitches. It's sad to see a Buffer Overflow attack remain a vulnerability in a program beyond a single patch release, once identified. Even sadder is when further program releases STILL have not addressed the issue (see Medal of Honor voting). The 'solution' is disabling a bona fide FEATURE. This type of nonchalant approach to security will always land the general populace in the grips of security vulnerabilities, with no clear end in sight.

    My thoughts.
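
A tiny sketch of the "default = NO" stance argued above: every capability is denied unless it has been explicitly enabled, so a fresh install exposes nothing. The capability names are invented for illustration.

```python
# Default deny: a capability is refused unless it has been explicitly enabled.
ENABLED = set()  # capabilities the user has deliberately turned on

def allowed(capability: str) -> bool:
    return capability in ENABLED  # unknown == refused

print(allowed("remote_admin_share"))  # False on a fresh install
ENABLED.add("file_sharing")           # user opted in
print(allowed("file_sharing"))        # True
```
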
  • by patio11 ( 857072 ) on Monday May 08, 2006 @05:16AM (#15284046)
    1) Topic: Securing your mainframes from insects and rodents.
    2) Wasted CPU cycles and how you can prevent them.
    3) Proper punch card disposal protocols.

    The point? We have *no clue* what the computer will look like fifty years from now, to say nothing of the security environment. Today's threats will be laughable in light of the technology and practices of tomorrow (many of the threats we spend a lot of time worrying about, such as spyware, are features not of computing in general, nor even of a particular application class, but of one particular implementation of an application which just happens to have a majority share of the market today -- who can say whether a security researcher in 2056 will even remember the words "Internet Explorer" from his history class, or whether browsing any analogue to the Internet will be a common activity?). Prognosticating the threat environment that far out is a waste of time. Look to the near term (next 5 years: spam, viruses, malware) and address the perennials (dumb users, men on the inside, etc.) rather than trying to prognosticate what year we'll have the computer equivalent of flying cars.

  • Without RTFA, this is completely stupid. If someone in the 50s had thought to lecture me about computer security, in a world even without networking, it would be totally irrelevant to today's environment. I would think we can extrapolate maybe the next twenty years. At most. Already WebOSs are coming out, as are apps for those.

    I could be wrong, without reading the article and all, but 50 years is a little long to be speaking authoritatively. After all, my much respected pedagogue, Mr Tanenbaum, said Linux
  • by Maljin Jolt ( 746064 ) on Monday May 08, 2006 @07:13AM (#15284313) Journal
    Yes, let's hope someone will actually invent some in that period. For I am afraid my graveyard identity could be stolen...
  • ...and tell me that you could've predicted where computer security was going for the next 50 years then.

    DDoS attacks? Botnets? Spam zombies? "Old school" viruses (and by old school, I mean it seems like these kind of viruses have become less-common than they were in the early-mid 1990s) that wipe your whole HDD? Mail clients that auto-execute a scripting language that a maliciously-minded high schooler can understand? Exploit-discovery tools like Metasploit? (or heck, even the very concept of an "expl
