DieHard, the Software

Roland Piquepaille writes "No, it's not another movie sequel. DieHard is a piece of software which helps programs to run correctly and protects them from a range of security vulnerabilities. It has been developed by computer scientists from the University of Massachusetts Amherst — and Microsoft. DieHard prevents crashes and hacker attacks by focusing on memory. Our computers have thousands of times more memory than 20 years ago. Still, programmers are privileging speed and efficiency over security, which leads to the famous "buffer overflows" which are exploited by hackers."
This discussion has been archived. No new comments can be posted.

  • by PurifyYourMind ( 776223 ) on Monday January 01, 2007 @09:27PM (#17427454) Homepage
Along the same lines anyway... a new feature in Vista: Address space layout randomization (ASLR) is a computer security technique which involves arranging the positions of key data areas, usually including the base of the executable and position of libraries, heap, and stack, randomly in a process' address space. http://en.wikipedia.org/wiki/Address_space_layout_randomization [wikipedia.org]
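A quick way to see ASLR in action (an editorial sketch, not part of the comment above): on an ASLR-enabled system, running this C program twice prints different addresses each run.

    /* Build normally and run twice; under ASLR the addresses change per run. */
    #include <stdio.h>
    #include <stdlib.h>

    int global;                      /* data segment */

    int main(void) {
        int local;                   /* stack */
        void *heap = malloc(16);     /* heap */
        printf("stack:  %p\n", (void *)&local);
        printf("heap:   %p\n", heap);
        printf("code:   %p\n", (void *)main);
        printf("global: %p\n", (void *)&global);
        free(heap);
        return 0;
    }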
    • by Anonymous Coward on Monday January 01, 2007 @09:39PM (#17427552)
      This came out in OpenBSD 3.3 [openbsd.org] over three years ago. Nice to see Microsoft keeping up with the times.
       
      • Doh! I should have known. :-)
      • by Jeremi ( 14640 )
        This came out in OpenBSD 3.3 over three years ago. Nice to see Microsoft keeping up with the times.


        Is this feature standard in Linux yet? I'd hate to see us OSS guys get shown up by Bill... ;^)

      • by Ristretto ( 79399 ) <emery@c[ ]mass.edu ['s.u' in gap]> on Tuesday January 02, 2007 @12:23AM (#17428728) Homepage
        Hi Slashdot readers,

DieHard's randomization is very different from what OpenBSD does, not to mention Vista's address-space randomization. I've added a note to the FAQs that explains the difference in some detail, and answers several other questions, but in short: "address-space randomization" randomizes the base address of the heap and also mmapped chunks of memory, leaving the relative position of objects intact. By contrast, DieHard randomizes the location of every single object across the entire heap. It also goes further in that it prevents a wide range of memory errors automatically, like double frees and illegal frees, and effectively eliminates heap corruption.

        -- Emery Berger
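A toy sketch of the per-object randomization described above (an editorial illustration, not DieHard's actual code; the slot size and bookkeeping layout are invented for brevity): every allocation probes a random slot in an over-provisioned heap, and the metadata lives outside the slots themselves.

    #include <stdlib.h>

    #define SLOTS     1024            /* heap over-provisioned relative to live data */
    #define SLOT_SIZE 64

    static unsigned char heap[SLOTS][SLOT_SIZE];
    static unsigned char used[SLOTS]; /* metadata segregated from object space */

    void *rand_alloc(void) {
        for (;;) {                    /* heap kept under half full, so this ends quickly */
            int i = rand() % SLOTS;   /* uniformly random probe */
            if (!used[i]) {
                used[i] = 1;
                return heap[i];
            }
        }
    }

    void rand_free(void *p) {
        long i = (unsigned char (*)[SLOT_SIZE])p - heap;
        if (i >= 0 && i < SLOTS && used[i])   /* double/invalid frees are simply ignored */
            used[i] = 0;
    }

Because any two objects' relative positions are now unpredictable, an overflow past one object hits a random victim, which is what defeats the adjacency tricks exploits rely on.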
        • by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Tuesday January 02, 2007 @03:32AM (#17429596) Journal
          Seems like OpenBSD's implementation does what DieHard claims, or at least some of it. See this interview from August 2005 for information:

          http://kerneltrap.org/node/5584 [kerneltrap.org]

          Any thoughts?
           
          • by Ristretto ( 79399 ) <emery@c[ ]mass.edu ['s.u' in gap]> on Tuesday January 02, 2007 @09:59AM (#17431034) Homepage
            Hi,

            Here's a more detailed answer -- I'll add it to the FAQ.

OpenBSD's allocator (a variant of PHKmalloc) does some of what DieHard's allocator does, but DieHard does much more. On the security side, DieHard adds much more "entropy"; on the reliability side, it mathematically reduces the risk that a programmer bug will have any impact on program execution.

            OpenBSD randomly locates pages of memory and allocates small objects from these pages. It improves security by avoiding the effect of certain errors. Like DieHard, it is resilient to double and invalid frees. It places guard pages around large chunks and frees such large chunks back to the OS (causing later references through dangling pointers to fail unless the chunk is reused). It attempts to block some buffer overflows by using page protection. Finally, it shuffles some allocated objects around on a page, randomizing their location within a page.

DieHard goes much further. First, it completely segregates heap metadata from the heap, making heap corruption (and hijack attacks) nearly impossible. On OpenBSD, a large-enough underflow can overwrite the page directory or local page info struct (at the beginning of each page), hijacking the allocator. This presentation [ruxcon.org.au] describes several ways OpenBSD's allocator can be attacked. By contrast, none of DieHard's metadata is located in the allocated object space.

            Second, DieHard randomizes the placement of objects across the entire heap. This has numerous advantages. On the security side, it makes brute-force attempts to locate adjacent objects nearly impossible -- in OpenBSD, knowing the allocation sequence determines which pages objects will land on (see the presentation pointed to above).

            DieHard's complete randomization is key to provably avoiding a range of errors with high probability. It reduces the worst-case odds that a buffer overflow has any impact to 50%. The actual likelihood is even lower when the heap is not full. DieHard also avoids dangling pointer errors with very high probability (e.g., 99.999%), making it nearly impervious to such mistakes. You can read our PLDI paper for more details and formulae.

            -- Emery Berger
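The 50% worst-case figure above can be sanity-checked with a crude simulation (an editorial sketch that flattens the paper's analysis down to one assumption: under full randomization, the slot an overflow lands on is effectively uniform over the heap).

    #include <stdio.h>
    #include <stdlib.h>

    #define SLOTS  1024
    #define LIVE   (SLOTS / 2)   /* heap exactly half full: the worst case */
    #define TRIALS 1000000

    int main(void) {
        int hits = 0;
        for (int t = 0; t < TRIALS; t++)
            if (rand() % SLOTS < LIVE)   /* did the overflow land on a live slot? */
                hits++;
        printf("overflow corrupted live data in %.1f%% of trials\n",
               100.0 * hits / TRIALS);   /* prints roughly 50.0% */
        return 0;
    }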

        • Re: (Score:3, Informative)

          by Alioth ( 221270 )
OpenBSD prevents double frees and illegal frees and heap corruption too, and has been doing so for at least a couple of years. The code is BSD licensed too, so you can use it in closed source products like Windows. OpenBSD also has had something called W^X (write XOR execute) for several years now, even on CPU architectures that lack hardware support for marking pages non-executable.
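For readers unfamiliar with W^X, the discipline looks roughly like this (an editorial, Unix-specific sketch; the six bytes of machine code are a stand-in for real generated code): a page is writable or executable, never both at once.

    #include <string.h>
    #include <sys/mman.h>

    /* x86 machine code for "mov eax, 42; ret" -- illustration only */
    static const unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    int main(void) {
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,  /* writable, not executable */
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(page, code, sizeof code);
        mprotect(page, 4096, PROT_READ | PROT_EXEC);           /* executable, no longer writable */
        return ((int (*)(void))page)();                        /* returns 42 */
    }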
      • by jnf ( 846084 )
Yeah, and it was in PaX about 6 years ago; so much for being proactively secure.
      • by Tim C ( 15259 ) on Tuesday January 02, 2007 @03:34AM (#17429610)
        Vista has been in development for around 5 years; unless you were expecting this to be released as a service pack for XP or Server 2003, what's your point? It's in MS's latest release, what more do you want? (Yeah, a shorter release cycle would be nice - except that then people would bitch about the upgrade treadmill...)
    • by Salvance ( 1014001 ) * on Monday January 01, 2007 @10:35PM (#17428020) Homepage Journal
Sure, but wouldn't it be better if everything ran in its own virtual session (or within a virtual secure space)? This was Microsoft's original plan with its Palladium component of Longhorn [com.com], but my understanding is that this was almost entirely scrapped to get Vista out the door.

      Part of the other problem is that most home users expect secure data, but they aren't willing to do anything about it (e.g. set up non-admin users, install virus checkers/firewalls/etc).
      • by iamacat ( 583406 ) on Tuesday January 02, 2007 @05:03AM (#17429908)
No, it wouldn't. Programs need to interoperate - you might want to explicitly upload a photo to Shutterfly using your web browser, but you don't want a rogue website to just siphon off all your private photos by exploiting a memory bug in one of the endless plugins.

        The real solution is programming in a language with secure memory management, such as .Net, Java or even LISP. I suspect that overhead is far smaller than running 3 copies of the program at once like DieHard does.
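The "3 copies" refers to DieHard's replicated mode. A bare-bones sketch of the voting idea (an editorial illustration; DieHard's real mechanism differs in the details, and compute() here is a hypothetical stand-in for the actual program):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int compute(void) { return 42; }     /* stand-in for the real work */

    int main(void) {
        int results[3];
        for (int r = 0; r < 3; r++) {
            int fd[2];
            pipe(fd);
            if (fork() == 0) {                  /* replica: would get its own randomized heap */
                int v = compute();
                write(fd[1], &v, sizeof v);
                _exit(0);
            }
            read(fd[0], &results[r], sizeof results[r]);
            close(fd[0]); close(fd[1]);
            wait(NULL);
        }
        /* majority vote: a value seen at least twice wins (ties broken arbitrarily) */
        int out = (results[0] == results[1] || results[0] == results[2])
                  ? results[0] : results[1];
        printf("agreed result: %d\n", out);
        return 0;
    }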
      • by DrSkwid ( 118965 )
        Virtualisation adds another attack vector, suddenly ALL of your programs could be vulnerable.

        If your OS can only be trusted in a virtual environment, what's the point of using an OS at all?
    • by Alien54 ( 180860 )
      So of course, the two systems will conflict with each other, and lock up the system tighter than the improper use of superglue in NSFW situations

      or randomly locate virtual memory around the HD without regard to pre-existing magnetic conditions.

      ;-)
  • Different program? (Score:2, Informative)

    by Anonymous Coward
I thought DieHard was a random number generator test suite. It is annoying when people don't even look around for other programs with the same name that do similar things.
  • Correction (Score:5, Insightful)

    by realmolo ( 574068 ) on Monday January 01, 2007 @09:29PM (#17427476)
    "Still, programmers are privileging speed and efficiency over security..."

    Speed and efficiency of *development*, maybe.

    Which is the problem. Modern software is so dependent on toolkits and compiler optimizations and various other "pre-made" pieces, that any program of even moderate complexity is doing things that the programmer isn't really aware of.

    • Re:Correction (Score:5, Insightful)

      by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Monday January 01, 2007 @09:46PM (#17427604) Homepage
This is one of the arguments for a language running on a VM, like Java, C#, or Python: they can do runtime checking of array bounds and such, and throw an exception or crash instead of silently overwriting some other variable, which may or may not cause a crash or some other noticeable side effect later.
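The failure mode being contrasted (an editorial example, not from the comment): in C the out-of-bounds store below silently lands in the neighboring field, where a bounds-checked runtime would raise an exception at the faulting index instead.

    #include <stdio.h>

    int main(void) {
        struct { int buf[4]; int other; } s = { {0}, 7 };
        for (int i = 0; i <= 4; i++)      /* off-by-one: i == 4 writes past buf */
            s.buf[i] = 1;
        printf("other = %d\n", s.other);  /* on common layouts prints 1, not 7 */
        return 0;
    }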
    • Re:Correction (Score:5, Insightful)

      by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Monday January 01, 2007 @09:48PM (#17427618) Homepage Journal
      "Still, programmers are privileging speed and efficiency over security..."

      Speed and efficiency of *development*, maybe.

No, it was right the first time. Java is several orders of magnitude more secure by default than any random C or C++ program. Yet mention Java on a forum like, say, Slashdot, and you'll hear no end to how much Java sucks because "it's slow". (Usually ignoring the massive speedups that have happened since they last tried it in 1996.) It doesn't matter that the tradeoff for that speed is flexibility, security, and portability. They want things to be fast for some undefined quantity of fast.

      In fact, I predict that someone will be along to argue just how slow Java is in 3... 2... 1...
      • by dodobh ( 65811 )
        As a user, my concern isn't about development time at all. It's about how the application consumes my resources. Java is great on the server side (one app, long run times, lots of memory). On the desktop side? Not yet (less memory, lots of concurrent apps, short run times for most apps).
      • Re:Correction (Score:4, Interesting)

        by evilviper ( 135110 ) on Tuesday January 02, 2007 @01:32AM (#17429114) Journal
        It doesn't matter that the tradeoff for that speed is flexibility, security, and portability. They want things to be fast for some undefined quantity of fast.

        I've got to call you on the "portability" crap.

Java is about as portable as Flash... Sure, the major platforms are supported, but that's it. Third parties spent a lot of time trying to implement Java, but never did get everything 100%. Licensing issues, above all else, made it a real hassle to get Java on platforms like FreeBSD.

Meanwhile, C and C++ compilers are installed in the base system by default.

        The only "portability" advantage Java has is perhaps in GUI apps, and that's at the expense of a program that doesn't look or work remotely similar to any other app on the system...

There are a great many reasons people don't use Java. Performance is only a minor one.
        • Licensing issues, above all else, made it a real hassle to get Java on platforms like FreeBSD.
          Sun just formally announced that they'll release Java under the GPL.
      • Re:Correction (Score:5, Insightful)

        by Anonymous Coward on Tuesday January 02, 2007 @01:41AM (#17429154)
        "Java is slow" is the stated reason. As you noted, it is not the actual reason. To tell the actual reason is difficult, but in short Java reminds us too much of what it should have been.

        The basic complaints I have heard are these:

        Complaint 1: Java is slow.
          As you stated, this is not a meaningful complaint.

        Complaint 2: Garbage Collection stinks
          GC is an obvious requirement of a "safe" language. As implemented in Java, it is downright stupid. When doing something CPU intensive, the GC never runs, leading to gobbling up memory until there is no more and thrashing to death. I'm sure that somebody is going to dig up that paging-free GC paper, but pay attention: that is a kernel-level GC.

        Complaint 3: Swing is ugly/leaks memory
          The first is a matter of opinion. The second is well-known. Swing keeps references to long-dead components hidden in internal collections leading to massive memory leaks. These memory leaks can be propagated to the parent application if it is also written in Java.

        Complaint 4: Bad build system
          Java cannot do incremental builds if class files have circular references. In a small project of about ten classes I was working on, the only way to build it was "rm *.class ; javac *.java"

        Complaint 5: Tied class hierarchy to filesystem hierarchy
          This was just stupid and interacts badly with Windows (and anything else with a case insensitive filesystem). It is even worse for someone who is first learning the language. It also makes renaming classes have a very bad effect on source control.

        Complaint 6: Lack of C++ templates
          C++ has some of its own faults. Fortunately its template system can be leveraged to fix quite a few of them. Java's generics have insufficient power to do the same thing.

        Complaint 7: Lack of unsigned integer
These are oh-so-necessary when doing all kinds of things with binary formats. Too bad Java and all its descendants don't have them.

        Complaint 8: Verbosity without a point
It has gotten so bad in places that I am strongly tempted to pass Java through the C preprocessor first, but I can't do that very well because of complaint 4.
        • Corrections (Score:3, Informative)

          by SuperKendall ( 25149 )
Basically almost every point you raised can be addressed simply by saying "get your head out of five years in the past". Modern GC has little overhead and will run when needed, even while the CPU is being consumed.

Swing does not really have the problems you speak of any longer, if you are using it right... heck, it didn't really have those problems to any great degree about seven years ago, when I was building a large custom client app all in Swing for desktop-only deployment.

          Complaining about the build sys
        • by sgtrock ( 191182 )

          Complaint 4: Bad build system
          Java cannot do incremental builds if class files have circular references. In a small project of about ten classes I was working on, the only way to build it was "rm *.class ; javac *.java"

          emphasis added

          I'm very surprised that no one else jumped on this one. I've never seen a well designed app that had circular references of any sort. I'll stipulate that such probably do exist, as there seems to be a case for doing things that would otherwise be dumb for just ab

        • by julesh ( 229690 )
          GC is an obvious requirement of a "safe" language.

No, it isn't. Memory safety can be achieved without the use of garbage collection, by avoiding reallocation of freed memory to a differently-typed object. Access to freed pointers can be caught by page table manipulation. Sure, there's an overhead to these techniques, but then there's a non-trivial overhead to GC as well.

          I'm sure that somebody is going to dig up that paging-free GC paper, but pay attention: that is a kernel-level GC.

          Which just indicates tha
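The page-table technique described above can be sketched like this (an editorial illustration; the function names are invented, and one page per object is deliberately wasteful, in the style of ElectricFence):

    #include <stddef.h>
    #include <sys/mman.h>

    void *page_alloc(size_t n) {
        size_t sz = (n + 4095) & ~(size_t)4095;   /* round up to whole pages */
        void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    void page_free(void *p, size_t n) {
        size_t sz = (n + 4095) & ~(size_t)4095;
        /* revoke access but never unmap: the range is never reused for a
           differently-typed object, and any dangling access now faults */
        mprotect(p, sz, PROT_NONE);
    }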
      • Re: (Score:2, Insightful)

        by tulrich ( 737161 )

        Java is several orders of magnitude more secure by default than any random C or C++ program.

        Do you know what "several orders of magnitude" means? For variety, next time you should write "... exponentially more secure ..." or "... takes security to the next level!"

BTW, it's funny you should mention Java performance in this thread -- one of the DieHard authors published this fascinating paper on Java GC performance: http://citeseer.ist.psu.edu/hertz05quantifying.html [psu.edu] -- executive summary: GC can theore

It does not have to be *as* fast as malloc/free. It needs to be sufficiently fast to run applications with. Explicit malloc/free is very problematic: you need to keep track of each object, and that is not always easy to do. At least the default GC of Java is already much faster than smart pointers (reference counting), which I suppose is the GC's direct competitor.

And I agree, speaking of "several orders of magnitude" is taking it a bit far. It's much more secure, but since security is *very
OK, I will bite. I am not a programmer, but a user, and yes, Java apps are slow and tend to consume lots of memory and not let it go. I don't know if that is due to the language or the developers' use of the language, but I do know that whatever the cause, the net effect is slow apps.

Other issues, let's see: for some reason, Firefox hangs when more than one Java app is run in the browser. I see a lot of Java apps where dialogs are non-modal (you can access/view one window at a time). Java apps often *requi
      • by DrSkwid ( 118965 )
        Java sucks because it's Java
    • by TopSpin ( 753 ) *
      Speed and efficiency of *development*, maybe. Which is the problem. Modern software is so dependent on toolkits and compiler optimizations and...

      I wondered where all those vulnerabilities were coming from. It's not humans misusing memory references and overrunning ad hoc fixed length buffers, etc. It's the toolkits, libraries and compilers! Glad we got that figured out.

      From the post:

Our computers have thousands of times more memory than 20 years ago. Still, programmers are privileging speed and efficiency
      • Re: (Score:3, Insightful)

        by smallfries ( 601545 )

This implies that because memory is larger, less attention can be paid to efficiency, but the hapless programmers don't know better. I used to use quicksort when I had 640 KiB of RAM, but now that I have 8 GiB, I'll just use bubble sort. Brilliant.

You are really misrepresenting his point here. We both know that bubble sort would run much slower than quicksort on an 8 GiB dataset. The real comparison is "should we write some really tricky and nasty code for this particular function, or should it be a giant lookup table?" When memory is (relatively) cheaper than processor time, the set of tradeoffs changes. Some of these tradeoffs then mean that code can be written more correctly (securely) at the expense of higher memory usage. These tradeoffs are intuitively
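A concrete instance of that lookup-table tradeoff (an editorial example, the classic byte-wise population count): spend 256 bytes of memory to replace per-bit arithmetic with four table lookups.

    #include <stdint.h>

    static uint8_t popcount_table[256];

    void init_table(void) {                     /* table[i] = number of set bits in i */
        for (int i = 1; i < 256; i++)
            popcount_table[i] = (uint8_t)((i & 1) + popcount_table[i / 2]);
    }

    int popcount32(uint32_t x) {                /* 4 lookups instead of 32 bit tests */
        return popcount_table[x & 0xff]
             + popcount_table[(x >> 8)  & 0xff]
             + popcount_table[(x >> 16) & 0xff]
             + popcount_table[x >> 24];
    }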

  • No, putting arrays on the stack causes buffer overflows. Which is trivial to not do, and trivial to check for.

    The fact that Microsoft doesn't HAVE a security model, IE/Outlook are jokes, and users run as admin has a bit more to do with it.

    • by codegen ( 103601 )
Buffer overflows can also happen in the data segment (both global variables and heap), and they are almost as easy to exploit. Instead of overflowing onto the return address, you overflow onto the nearest vptr (if C++ is being used), the nearest function pointer, or the nearest green bit.
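Sketched out, that heap/data-segment variant looks like this (an editorial illustration that relies on a common struct layout; a real exploit would feed the pointer bytes in via crafted input rather than taking the function's address directly):

    #include <stdio.h>
    #include <string.h>

    void expected(void) { puts("expected call"); }
    void hijacked(void) { puts("attacker-chosen call"); }

    struct handler {
        char name[8];
        void (*fn)(void);    /* sits right after the buffer */
    };

    int main(void) {
        struct handler h = { "ok", expected };
        unsigned char payload[8 + sizeof(void *)];
        void (*evil)(void) = hijacked;
        memset(payload, 'A', 8);
        memcpy(payload + 8, &evil, sizeof evil);   /* "attacker-controlled" bytes */
        memcpy(h.name, payload, sizeof payload);   /* overflow of h.name into h.fn */
        h.fn();                                    /* calls hijacked; no stack smashing involved */
        return 0;
    }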
    • There is nothing wrong with putting an array on the stack. I once had the need to copy a function into a local int[50] and run it from there - no issues (embedded system, the function needed to run from RAM). The problem is when people write code that can blow right past the end of an array. They don't stop to think that the functions they call to dump data in there don't know where the end of the available space is. Oh right, the data told me how much space to allocate and I just allocated that much and re
      • "Anyway arrays on the stack are not inherently bad."

        Maybe not "inherently bad", but certainly inefficient as regards speed and space since the entire array must be copied to the stack rather than just an array pointer.
I don't see why you necessarily have to copy an entire array.

void foo() {
    int arr1[52];
    /* a bunch of code that stores values into arr1 and
       then manipulates them and reads them back */
}

The only overhead that had to be done here was moving the stack pointer down an extra 52*4 bytes, which is no more work than what it was doing already, assuming you are in a language that doesn't initialize every element of an array when you declare it. Arrays on the stack are not inherently inefficient, although they certainly c
    • There are pointers to the code segment in every section of memory. (heap, stack, you name it) Do you honestly think that the only time a pointer to code gets followed is when you are bouncing around in between stack frames?

      I am amazed that you would so arrogantly declare that simply doing a bit of static analysis would be sufficient to fix all (or even most) buffer overflows in complex programs with hundreds of thousands or even millions of lines of code. It sounds like you just looked at one tutorial of
  • by Wilson_6500 ( 896824 ) on Monday January 01, 2007 @09:51PM (#17427644)
    If you were somehow to install DieHard software on a DieBold machine, does the universe collapse in on itself? This is one of those pasta plus antipasto situations, I think.
    • Quiet, you! The last thing we need is an El Queso terrorist strolling into a crowded airport with a voting machine strapped to his chest...
  • You should never program thinking about security issues.
    Write the algorithms correctly and there won't BE any buffer-overflows.

    What's so hard about this?

    • by jd ( 1658 ) <imipak@ y a hoo.com> on Monday January 01, 2007 @10:28PM (#17427948) Homepage Journal
...the number of programmers like ourselves who learned how to code correctly is vanishingly small in comparison to the number of coders who assume that if it doesn't crash, it's good enough. Whether you validate the inputs against the constraints, engineer the program such that the constraints must always be met, or force a module to crash when something is invalid so that you can trap and handle it by controlled means - the method is irrelevant. What matters is less which method you use than that you remember to use one.

      Even assuming nobody wants to go to all that trouble, there are solutions. ElectricFence and dmalloc are hardly new and far from obscure. If a developer can't be bothered to link against a debugging malloc before testing then you can't expect their software to be immune to such absurd defects. A few runs whilst using memprof isn't a bad idea, either.

      This assumes you're using a language like C, which is not a trivial language to write correct software in. For many programs, you are better off with a language like Occam (provided for Unix/Linux/Windows via KROC) where the combination of language and compiler heavily limits the errors you can introduce. Yes, languages this strict are a pain to write in, but the increase in the initial pain is vastly outweighed by the incredible reduction in agony when debugging - if there's any debugging at all.

      I do not expect anyone to re-write glibc in Occam or any other nearly bug-proof language. It would be helpful, but it's not going to happen.

That's quite bold of you to claim that you are in an elite group that can churn out large programs in C with zero bugs.

Your claim that smart programmers using dmalloc, ElectricFence, or some other bounds checker will find all buffer overflows seems misguided to me. Those tools are great for catching buffer overflows that are actually being caused by your test suite. But aren't most buffer overflow security holes caused by weird corner cases no one thought of? I mean, in the real world it's never caused by som
      • I would go farther than that and say that writing a correct program in any language is not trivial. While it is true that certain languages (like Java) limit the kinds of errors a programmer can make, they do nothing to limit the number of errors a programmer can make.

        While I understand there are some languages more appropriate for solving certain types of problems, making a language programmer-proof is never a worthy goal. Usually, the attempt to make a "better COBOL" ends up evolving like this:

        1. Re
    • by Jeremi ( 14640 )
      Write the algorithms correctly and there won't BE any buffer-overflows.

      What's so hard about this?


      The "write the algorithms correctly" part. The demand for programs is much larger than the supply of sufficiently trained/disciplined/talented programmers. Therefore, we need a solution that gives acceptable results even when the programmer isn't a guru (and preferably when the programmer is a trained monkey, because he often will be)

Wouldn't using languages like Lisp do basically the same job? I mention Lisp, besides it being a favorite language of mine, because I know the end product can be coded/compiled fast and efficiently while maintaining security in many cases. Other more popular languages like Python, while getting more Lispish, seem to have an inherent speed penalty that cannot be reduced as easily at compile time, though I am not sure; I say this as more of a spectator to that language.

    Note: I'm sure other functional lang
    • Wouldn't using languages like Lisp do basically the same job?

      Yes.

But it's not a practical solution for about 185 different reasons, starting with the fact that very few commercial apps are written in any kind of dynamic language, let alone LISP, and they're not likely to be rewritten anytime soon for such an intangible reason as security. rpg was right that worse is better, and the last language will be C. He wrote that before Java, Ruby, etc., but I think it's still right. Like it or hate it.

Lisp isn't all that safe a language, and can be somewhat strange: I'm a little confused by the specification's discussion of safe versus unsafe operations, but as I recall, you can specifically instruct the Lisp compiler to skip array bounds checking for the sake of speed. You'd have to be insane to do this, but it is possible. Consider this function:

(defun bar (array i x)
  "Set the Ith element of array ARRAY to X"
  (declare (type fixnum i)
           ;; the comment was cut off here; a plausible completion declares
           ;; the array's type and turns safety (bounds checks) off
           (type (simple-array fixnum (*)) array)
           (optimize (speed 3) (safety 0)))
  (setf (aref array i) x))
  • Buggy (Score:3, Interesting)

    by The MAZZTer ( 911996 ) <(megazzt) (at) (gmail.com)> on Monday January 01, 2007 @10:20PM (#17427894) Homepage
    Firefox 2 crashed for the first time ever (I've used it since beta 1 came out) for me today... suspiciously, less than five minutes after I turned DieHard on. Hrm.
    • I should probably clarify that I can't be 100% sure DieHard was the problem, but I still think it's possible. Sadly I can't reproduce the error (not surprisingly, given DieHard's random nature). Although given the sheer volume of drivers and apps interacting on this comp, which still manages to stay stable normally, it's surprising DieHard didn't bring my whole house of cards down instantly. :)
  • by NorbrookC ( 674063 ) on Monday January 01, 2007 @11:03PM (#17428196) Journal

In reading this article, I started to wonder a lot about this. Writing to conserve memory is a bad thing? I will say that I haven't noticed that in most software, regardless of whether it's OSS or closed-source. If anything, there seems to be a variation of Parkinson's Law in effect. Yes, computers these days have a lot more memory available; however, the number of applications and the size demands of each application have grown almost in lock-step with that. 15 or so years ago, yes, you had one OS and one application running - maybe, if you were lucky or were running TSR apps, two or three. These days, the OS takes up a hefty chunk, and it's not uncommon to see 8 or 9 (if not more) applications running at once. What they all seem to have in common is that they assume they have access to all the RAM, or as much of it as they can grab.

    I have to wonder if he's actually looked at things these days. I don't see where programming (properly done) to conserve memory is a bad thing. If anything, it seems that few are actually doing it.

    • by Jeremi ( 14640 )
What they all seem to have in common is that they assume they have access to all the RAM, or as much of it as they can grab.

      They don't assume "access to as much RAM as they can grab", they assume "access to as much RAM as they need". Given the presence of gigabyte RAM modules, virtual memory, and near-terabyte hard drives, this is usually a reasonable assumption.

      I have to wonder if he's actually looked at things these days. I don't see where programming (properly done) to conserve memory is a bad thing. If

      • Memory is dirt-cheap these days, and if you've got it you might as well put it to use.

        You have an interesting definition of "dirt cheap". Doing a quick check, 1GB RAM is running around $175-$200. Admittedly, that's a lot cheaper than 12 to 15 years ago, when it was averaging $25 a MB, but I don't consider that "dirt cheap." The problem, as you pointed out, is that they grab as "much as they need", or more correctly, as the developer(s) think it needs. That's fine in an isolated system, where it's t

Don't know where you buy memory, but it looks like it's $100 for a GB of RAM (at newegg.com).

          Look, if you want to run RAM hungry apps, you need to either purchase more memory, or open fewer apps at once. Or, I guess you could go back to using the apps that you were using a few years ago. I'm sure they'll run with the same, small memory footprint that you want them to.
        • by Jekler ( 626699 )
I think you're rightfully fed up with throwing hardware at a problem. Hardware isn't as cheap and easy to come by as some developers believe. Waste can be seen if you look at things like the ATI Catalyst Control Center (CCC), which occupies 60-70 MB of memory persistently. Am I to believe that the CCC is several times more complex and contains several times more data than the entirety of the Windows 3.1 operating system? (obvious cracks about Windows aside)
    • by laffer1 ( 701823 )
      As an open source developer and college student, I can clarify the problem. We are taught that memory is cheap and there will always be enough. Of course that is stupid. Anyone who uses Firefox, Gnome, KDE or Vista can tell you that modern software is using way too much RAM. Granted three of those products are trying to work on the problem a bit. In the case of Microsoft, it helps them sell new PCs which then ship with new versions of their software.

      If you want to solve this problem, professors need to
  • by istartedi ( 132515 ) on Tuesday January 02, 2007 @01:14AM (#17429014) Journal

The worst bugs are the ones that are hard to reproduce. In fact, when faced with a bug that's difficult to reproduce, I've been known to quip "yet another unintentional random number generator". The suggestion that they're going to apply a pseudo-fix that involves random allocations raises all kinds of red flags. I'd much rather have fine-grained control over which sections of code are allowed to access which sections of memory, and be able to track which sections of code are accessing a chunk of memory. I'd much rather have strict enforcement of a non-execute bit on memory that's only supposed to contain data (there is some support for this already). Introducing randomness into memory allocation? Worst. Idea. Ever. It's like throwing in the towel, and if they put that in at low levels in system libs and things like that, we're screwed in terms of ever being able to *really* fix the problem. If their compiler is going to link against an allocator that has this capability, I hope they provide the ability to disable it.

    • by DrSkwid ( 118965 )
      A rare voice of sanity, thank you.

From the Plan 9 fortune file:

      Almost all good computer programs contain at least one random-number generator.
These techniques are old hat: several malloc implementations offer randomization, and ElectricFence finds pointer errors by spreading out and aligning allocations across virtual memory.

    In practice, however, a decent set of test cases together with valgrind will make any of those runtime gymnastics unnecessary.
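For instance (an editorial example), valgrind flags this one-byte heap overrun on the first run, even though the program usually appears to work:

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(5);
        strcpy(name, "hello");   /* needs 6 bytes: five chars plus '\0' */
        free(name);
        return 0;
    }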
  • Do NOT INSTALL THIS (Score:2, Informative)

    by cowholio4 ( 1045862 )
BTW, do not install this crap... I do software development, and this program has made it impossible to compile/run my programs (even after I uninstalled it); also, while it was running it would not allow Eclipse to run. So basically this program screwed me over big time. I am writing a database that is to be deployed tomorrow morning. **&**&**&*&* ..... Note to self: do this crap in a virtual machine and not while developing a program.
I don't think it's typically the programmer's fault that there are security issues with the software. If the programmer was taught how to do things properly, then they would do things properly. Also, if they weren't so rushed to get a product out the door, they would be able to do a proper review and test of the code and find a majority of the bugs before the product hits the streets (or the server room, in the case of custom software).

Typically, a programmer is doing their job. The programmer's manager is d
    • by DrSkwid ( 118965 )
      > If the programmer was taught how to do things properly, then they would do things properly.

It is a programmer's responsibility to find out what properly is.
  • by BagOCrap ( 980854 ) on Tuesday January 02, 2007 @08:15AM (#17430558) Homepage

    Still, programmers are privileging speed and efficiency over security, which leads to the famous "buffer overflows"...
Am I the only one who finds the above sentence just strange? In my book and experience, speed optimisations most certainly don't result in buffer overflows. Recklessness does, however.
    • by Lxy ( 80823 )
He's talking about what aspects the programmer is focusing on. If you're concentrating on making a program fast and efficient, you may not be looking at writing it securely. Speed doesn't create buffer overflows, but lack of attention on the programmer's part certainly does.
  • the film "live free or die hard" is on it's way. The synopsis alone makes me wanna gag..

    http://www.imdb.com/title/tt0337978/ [imdb.com]
    When a criminal plot is in place to take down the entire computer and technological structure that supports the economy of the United States (and the world), it's up to a decidedly "old school" hero, police detective John McClane (Willis), to take down the conspiracy, aided by a young hacker (Long).

Why not just license under the GPL, LGPL or some other open source license? This business of being "free for non-commercial use" restricts users who use open source software for commercial purposes. This software is really "non-free" according to any definition from the FSF or the Open Source Initiative, both of which explicitly forbid discrimination against fields of endeavor. Perhaps you should say "non-free, but gratis for non-commercial use."

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...