AMD Could Profit from Buffer-Overflow Protection

spin2cool writes "New Scientist has an article about how AMD and Intel are planning on releasing new consumer chips with built-in buffer-overflow protection. Apparently AMD's chips will make it to market first, though, which some analysts think could give AMD an advantage as the next round of chips are released. The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe."
  • by Anonymous Coward on Monday February 23, 2004 @03:35PM (#8364999)
    They are protecting the pages marked as code from the data pages. Code could still overflow, but not use that to execute arbitrary code in the pages marked as data (or non-executable).
  • I'd buy (Score:3, Informative)

    by valkraider ( 611225 ) on Monday February 23, 2004 @03:38PM (#8365049) Journal
    I'm not Joe, but if all other factors were equal - this would be enough to sway me to them... But of course, it's almost moot - since I use Apple OSX... But I do have some Linux boxes that could run on them...

    However - they WILL have to spin it well enough, or better than the "Megahertz Myth", because that didn't work too well for average folks. BestBuy salesmen don't know how to explain "AMD Athlon 289456++XL 3400 MP SSE4 +-7200 BufferXTreme" so they just push Intel...
  • Re:Linux support (Score:5, Informative)

    by TheRealFoxFire ( 523782 ) on Monday February 23, 2004 @03:39PM (#8365072)
    It will likely be in their architecture manual. The summary of the protection is that it allows the OS to mark pages of virtual memory with a No Execute (NX) bit. Attempting to execute any instructions from such a page would cause a trap to the OS.

    An OS would then use this to mark pure data pages and areas like the stack as NX, so that overflowing a data structure doesn't allow arbitrary malicious code to be run.
  • What does it do? (Score:3, Informative)

    by slamb ( 119285 ) on Monday February 23, 2004 @03:42PM (#8365125) Homepage
    From the article:
    Until now, Intel-compatible processors have not been able to distinguish between sections of memory that contain data and those that contain program instructions. This has allowed hackers to insert malicious program instructions in sections of memory that are supposed to contain data only, and use buffer overflow to overwrite the "pointer" data that tells the processor which instruction to execute next. Hackers use this to force the computer to start executing their own code (see graphic).

    The new AMD chips prevent this. They separate memory into instruction-only and data-only sections. If hackers attempt to execute code from the data section of memory, they will fail. Windows will then detect the attempt and close the application.

    I've seen patches [ogi.edu] to Linux that provide a non-executable stack. There's also the mprotect(2) [wlug.org.nz] system call to change memory protection from user programs. And I believe OpenBSD has had a non-executable stack in the mainline for at least a couple releases.

    So what they're advertising here seems to have already existed. If not, how are the things above possible?

  • Re:the Chipmaker??? (Score:4, Informative)

    by normal_guy ( 676813 ) on Monday February 23, 2004 @03:43PM (#8365139)
    I assure you it's not just Microsoft who's to blame [securityfocus.com].
  • by dtjohnson ( 102237 ) on Monday February 23, 2004 @03:44PM (#8365153)


    The AMD Opteron and Athlon 64 chips already [computerworld.com] have the buffer-overflow protection in their hardware, and the feature is already supported by both Linux and Windows XP 64-bit Edition. AMD calls this "Execution Protection"; the basic idea is that the processor will not allow code that arrives at the system via a buffer overflow to be marked as executable. The Slashdot story says "will have" for both Intel and AMD when it should read "AMD already has and Intel will have..."

  • Re:No silver bullet. (Score:3, Informative)

    by m0rph3us0 ( 549631 ) on Monday February 23, 2004 @03:44PM (#8365167)
    All you need to rewrite is the executable loader, and the memory allocator.
  • by Vancorps ( 746090 ) on Monday February 23, 2004 @03:44PM (#8365168)
    AMD processors have both of those features. AMD has done well at matching Intel feature for feature. Take a look at the Opteron for servers. It doesn't help right now that a lot of Intel boards shipped defective; I was replacing backplanes for a solid month just before the New Year. The latest Xeons really aren't that impressive either. There was a time the Xeon was an incredible processor worthy of running a NOC, but now they run hot enough that the Opteron and other players look real nice again.
  • Re:Pathetic (Score:2, Informative)

    by paranode ( 671698 ) on Monday February 23, 2004 @03:52PM (#8365277)
    Wraaaag! Why does everyone keep calling this a Microsoft bug?

    From the article:
    "AMD's Athlon-64 (for PCs) and Opteron (for servers) will protect against buffer overflows when used with a new version of Windows XP."

    So either these chips will only work to protect against Microsoft bugs (in conjunction with software), or we'll have to wait until Linux can figure out how to use this feature.
  • Re:Odd... (Score:1, Informative)

    by Anonymous Coward on Monday February 23, 2004 @03:52PM (#8365287)
    You can't mark a given page in the current Intel x86 offerings as both execute and read-only. That is the problem.
  • AMD chipsets (Score:2, Informative)

    by Anonymous Coward on Monday February 23, 2004 @03:55PM (#8365319)
    AMD did make a chipset for the dual processor Athlon motherboards, and it really wasn't anything to brag about. On the other hand, the third party nVidia nForce2 chipset has had rave reviews and a number of great motherboard implementations.
  • Re:what a drag (Score:5, Informative)

    by paranode ( 671698 ) on Monday February 23, 2004 @03:55PM (#8365321)
    Exactly. OpenBSD 3.3 [openbsd.org] already came with this feature in May 2003.

    "W^X (pronounced: "W xor X") on architectures capable of pure execute-bit support in the MMU (sparc, sparc64, alpha, hppa). This is a fine-grained memory permissions layout, ensuring that memory which can be written to by application programs can not be executable at the same time and vice versa. This raises the bar on potential buffer overflows and other attacks: as a result, an attacker is unable to write code anywhere in memory where it can be executed. (NOTE: i386 and powerpc do not support W^X in 3.3; however, 3.3-current already supports it on i386, and both these processors are expected to support this change in 3.4). "
  • Re:the Chipmaker??? (Score:5, Informative)

    by kfg ( 145172 ) on Monday February 23, 2004 @03:56PM (#8365343)
    This has nothing to do with Microsoft, and everything to do with architecture and programming languages.

    If you program in C on Intel you are going to have problems without almost fanatical devotion to the Po^H^H management of your memory resources.

    That goes for Linux as well, as any check at Bugtraq can confirm.

    Yes, people should be very careful when coding in languages and on architectures which allow buffer overflow, but the real solution is at a level lower than the coder's.

    KFG
  • by Vancorps ( 746090 ) on Monday February 23, 2004 @03:56PM (#8365348)
    Well, just last night my AMD-based laptop shut off on me because it got too hot; something was stuck in the fan.

    As for the other features you mention: you are comparing desktop processors and server processors. You might note the lack of the Opteron processor in the third-party tests you linked to.

    'Bout two months ago someone came to me with a motherboard and processor, an Athlon XP 2600+. They couldn't get it to boot. I took one look at it and realized the heatsink was on backwards; it shut itself down as soon as it got hot enough. I put the heatsink on correctly and the thing booted right up.

    As for the PCI locking, it's a bit harder to vouch for since I don't see a whole lot of information about it, but I sure do recall seeing tests involving the Opteron. If I could find it right now I would, except I'm on dialup now for the first time in six years and it's annoying the hell out of me.

  • by ortcutt ( 711694 ) on Monday February 23, 2004 @03:57PM (#8365350)
    The article is thin on the details of how they are redesigning the chips to prevent overflow exploits, but Stack Protection (which as far as I know is what matters when it comes to buffer overflows) is available today. IBM's SSP project extends GCC to provide stack protection.

    http://www.trl.ibm.com/projects/security/ssp/ [ibm.com]

    Gentoo-specific info here.

    http://www.gentoo.org/proj/en/hardened/propolice.xml [gentoo.org]

  • by kawika ( 87069 ) on Monday February 23, 2004 @03:57PM (#8365356)
    Any application that creates code in stack-based memory such as a local (auto) variable, or in one of the standard heaps (from which malloc and "new" memory come) will be affected. This memory is no longer executable and cannot be made executable by an application. Some existing JIT compilers are affected and will need rework.

    To work with memory protection enabled, applications will need to allocate memory using VirtualAlloc [microsoft.com] and specify the memory options [microsoft.com] to make it executable. Then they can generate and run the code there.

    I am assuming that Linux could incorporate some similar functionality, anybody know if someone is working on it?
  • by willy_me ( 212994 ) on Monday February 23, 2004 @03:58PM (#8365364)
    Self-modifying code won't run on most modern architectures. Take PPC for example: there are two separate caches, one for code and one for data. The code cannot be changed during runtime. There are of course lots of programming languages that utilize "self-modifying" code, but they typically run with an interpreter whose code does not change. I'm not sure how gdb gets around this problem, but if anyone wants to enlighten me, I'm listening.

    Apps that use self-modifying code probably only run on one architecture - most likely the 8x86. There are far better ways to protect one's code.

    Self-modifying code is prone to bugs, hard to develop, hard to maintain, hardware dependent, and to top it all off - not that effective at providing security.
  • Re:Awesome (Score:2, Informative)

    by Unoti ( 731964 ) on Monday February 23, 2004 @03:59PM (#8365378) Journal
    Why didn't they think of this in the first place? Another take on answering that question... originally, processors were conceived to have data and programs separate. It wasn't until Von Neumann [eingang.org] that someone proposed putting the data and programs in the same place.
  • by Anonymous Coward on Monday February 23, 2004 @04:00PM (#8365387)
    This is not a non-executable stack. It is a per-page execute bit. Please do not confuse the two.

    Linux has not yet looked into technology like this. The PaX project, however, has these features available via a kernel patch that uses software techniques to achieve what will be quite simple now that hardware can support marking pages non-executable.

    Yes people, this does mean that as far as buffer overflows go, Windows on amd64 will soon be more resilient than a stock Linux kernel.
  • A bunch of things (Score:4, Informative)

    by Groo Wanderer ( 180806 ) <charlieNO@SPAMsemiaccurate.com> on Monday February 23, 2004 @04:00PM (#8365388) Homepage
    1) It is also in Prescott
    2) It needs OS support, specifically XP SP2, which isn't out yet.
    3) It doesn't really do what it is meant to, I have seen several 'theoretical' discussions on how to circumvent it. Think of it as another hoop to jump through for the black hats.
    4) You need to be in 64-bit mode to use it
    5) Since 4) requires a recompilation anyway, why not do it right with the right tools when you recompile?
    6) I know of at least one vendor using it to bid against Intel on contracts now.
    7) Oh yeah, this will do a lot of good. Really. When has a white paper ever lied?
    8) The more you know about things like this, the more you want to move into a cabin in Montana and live off the land.

    -Charlie
  • Re:Old news (Score:5, Informative)

    by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Monday February 23, 2004 @04:01PM (#8365400) Homepage
    Except you could write/read from the CODE segment and you could far jump into the data/extra/stack segment registers.

    What's better is that CS==DS was a common mode [known as a .COM or TINY model program].

    So there goes your theory.

    Tom
  • by Anonymous Coward on Monday February 23, 2004 @04:05PM (#8365442)
    I think you're confused :) (Though the mods think you are insightful, I guess that says as much about them as you.)

    There is no way to do this at the motherboard level. It needs to happen in the MMU, which is part of the processor. And it isn't specific to buffer overflows. In fact, it does not prevent them. What it does is allow the operating system to mark memory as readable without making it executable. Other processors can already do this. And there are known kludgy workarounds, implemented in OpenBSD and Solar Designer's Linux kernel patches, which make it work even on broken Intel CPUs. AMD and Intel are simply implementing the page table protection bits the way they should have been implemented back when Intel created the 386.
  • by Anonymous Coward on Monday February 23, 2004 @04:05PM (#8365447)
    BUT you have had to implement segmented memory management to get it.

    The problem has always been that the execute flag has only existed in the segmentation registers.

    This change puts the execute flag on the PAGE rather than the SEGMENT.

    The segmented mode is SLOW when you combine it with paging activity: you get a fault on a page, and you have to modify both the page tables AND the segment descriptors for things to work. Going through both mapping registers (page and segment) causes a significant speed reduction in memory access. It also gives you a larger physical memory range... (cache collision penalties too)
  • by cjellibebi ( 645568 ) on Monday February 23, 2004 @04:07PM (#8365463)
    > I would entertain listening to cases where self-mod'ing code has its place.

    The Intel x86 architecture has few registers, so if you want to keep lots of values handy, you're going to have to keep swapping values in and out of memory. Alternatively, immediate-value constants that do not change during a long loop (or a loop with many layers of nesting) can be hard-coded in the code. Just before the loop is executed, these hard-coded constants are modified by re-writing the immediate values in the code. An example of this is some code that draws a scaled translucent sprite. Throughout the code, the scale will remain constant, and if the translucency is uniform, that will remain constant too. The code that does the translucent blitting will use the registers only for values that change during the sprite-drawing.

    On an 80386, using this technique will cause a significant speed increase in the code, but on 80486s and above, where there are on-board L1 caches on the CPUs, the code modification may cause cache misses that slow down the system - especially if it is run on an even newer x86 CPU that has a separate program and data cache in the L1 cache. To make things worse, nowadays most code runs in a multi-tasking environment, so whether self-modifying code causes a slowdown or a speed increase is almost impossible to predict.

    Of course, nowadays most drawing is done by hardware-accelerated graphics cards, so this isn't a good example, but there could still be some use for hard-coding values that do not change in a loop.

  • Re:what a drag (Score:1, Informative)

    by Anonymous Coward on Monday February 23, 2004 @04:08PM (#8365479)
    It was, at least the interface was. And the RISC processors (SPARC, Alpha, etc.) all support non-executable pages. But this isn't really a complete solution. This won't stop buffer overflows, only the ability to execute the code in the overflowed buffer. That means you can still do DoS attacks using buffer overflow and if you can find a piece of code to do the equivalent of your exploit in the binary or linked libraries you can just use it instead.
  • by alanw ( 1822 ) * <alan@wylie.me.uk> on Monday February 23, 2004 @04:14PM (#8365544) Homepage
    Several architectures (sparc, sparc64, alpha, hppa, m88k) have had per-page execute permissions for years.
    See This BugTraq posting by Theo de Raadt [securityfocus.com]
  • by Monkelectric ( 546685 ) <[moc.cirtceleknom] [ta] [todhsals]> on Monday February 23, 2004 @04:14PM (#8365553)
    You really can't do that even now. IIRC, writes to code segments aren't allowed for the following reason - when paging to disk, the OS doesn't write code segments out to disk; it just discards the pages and re-reads them from the executable on the HD when it needs them again. This saves a lot of time, but has the side-effect that anything that was modified in the code segment is clobbered.

    I think some architectures even disallow writing to code segments altogether - the L1 or L2 caches won't maintain coherency (this is again an optimization, as writing to a code segment is rare).

  • by cybergrue ( 696844 ) on Monday February 23, 2004 @04:14PM (#8365557)
    Outside of some academic/research applications, you don't want your code to be self-modifying, as a self-modification is much more likely to be a bug than what was intended. Self-modifying code is one of the reasons why Turing's halting problem is unsolvable. It is very easy for self-modifying code to get into an infinite loop.

    That said, if you want to create self-modifying code for some reason (hey, it's actually an interesting field), then you should probably do it in an interpreted language, for several reasons. First, you don't screw up the OS; second, the interpreter has a fair chance of detecting when the program has entered an infinite loop. (Hmm, it's been in that loop for a few million cycles and its state does not appear to be changing.) This can be done with stacks or counters. I've seen it in some Smalltalk interpreters.
    A third advantage is that the virtual machine you run this self-modifying code on doesn't even need to exist in reality, so you can do a lot of weird and wonderful stuff with it, and it will still run.

    Protected memory has existed for a long time, and the x86 architecture is just about the last major processor architecture that does not have support for separate user and OS protected memory. I am taking an OS course right now, and I was very surprised at this, as it solves so many malicious-code problems like buffer overruns.

  • by beerman2k ( 521609 ) on Monday February 23, 2004 @04:15PM (#8365568) Homepage
    That's not true at all. Only the OS needs a new version. The OS simply marks pages allocated to the stack as "No Execute", and voila, programs can't use a buffer overrun to execute code.
  • by beerman2k ( 521609 ) on Monday February 23, 2004 @04:19PM (#8365607) Homepage
    All modern architectures have separate caches for code and data. Simply flushing the i-cache will allow you to update your code on the fly.
  • Re:Linux support (Score:3, Informative)

    by nester ( 14407 ) on Monday February 23, 2004 @04:21PM (#8365633)
    Wait, are you saying that pages don't *already* have an execution-enable bit?

    yes, this is one of the wonderful misfeatures of x86. i don't know what this article is all about. amd64 ALREADY has an execute bit in each pte, when it's in long (64-bit) mode. this is nothing new; it's been in amd's manuals for a while. i'd bet it was one of the first x86 problems they planned to fix.

  • I suspect it's even older than that... Burroughs seems to have had something like this, called "tagged words" or "tagged memory", even longer ago...

    http://www.ajwm.net/amayer/papers/B5000.html
  • by addaon ( 41825 ) <addaon+slashdot.gmail@com> on Monday February 23, 2004 @04:31PM (#8365731)
    Take PPC for example, and, um, do an isync?
  • One and only one: JIT compilation. For example, when Sun's JVM executes Java bytecode, certain portions of it may get compiled to native machine language and then run. In fact, Sun's compiler has a technology called HotSpot which is supposed to dynamically optimize some of this machine code as it runs. Certainly JIT compilation has big benefits. I believe that perl/parrot will be doing that. How much benefit HotSpot has in the real world, I'm not sure, but it is a cool trick.

    And all of these are cases of an executing piece of code dynamically creating and executing another piece of code which is exactly what happens in a buffer overflow situation.

    However, the number of programs that have a legitimate need to do this is tiny. I'm not sure how this chip will accommodate those. There may need to be some kind of OS-layer thing with code that is trusted. Maybe the JVM itself could switch modes, so that only when it is actively attempting to write code would that feature be allowed. There are definitely work-arounds to allow JIT to continue working.

    As for copy protection, given a choice between having a system which is secure for me and a system which is secure for them, I'll take the system which is secure for me. What about you?

    -------
    Create a WAP hosting [chiralsoftware.net] service

  • by ktulu1115 ( 567549 ) on Monday February 23, 2004 @04:45PM (#8365881)
    Call me stupid, but AFAIK x86 chips have full segmentation support [x86.org] (in protected mode, obviously) - the ability to define different segment types (read-only, r/w, execute-only, etc.)... For those of you not familiar with it, it allows the programmer to define different types of memory segments, which lets you do some pretty interesting things such as defining read-only code segments (so the machine instructions can't be modified in memory) and non-executable data segments (to prevent the OS from trying to run code stored in program data/buffers). This would solve the problem, at least as they addressed it in the article.

    If current operating systems actually used this in addition to paging (which is all most of them use now), why would they need to create a new chip? Linux does not fully utilize segmentation, mostly only paging [clemson.edu]. I don't have any resources on MS OS design right now so I can't comment on it... (although maybe looking at the recent source would help some ;)
  • by Von Helmet ( 727753 ) on Monday February 23, 2004 @04:49PM (#8365916)

    Scalable Processor ARChitecture.

  • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Monday February 23, 2004 @05:22PM (#8366323) Homepage
    Mozilla doesn't work in Linux with the NX protection on either... it's doing some really dodgy self-modifying stuff when it starts up, I think.

    SP2 will break a *lot* of code and it's well worth downloading the beta and testing your stuff with it - your boss will thank you for it :)

  • by Barlo_Mung_42 ( 411228 ) on Monday February 23, 2004 @05:28PM (#8366407) Homepage
    "It's not a cure-all solution."

    Nor will there ever be IMO. But this combined with good practices like not running as admin when we don't need to (reading email, web browsing, game playing for example) will be a huge leap forward.

  • Re:Awesome (Score:3, Informative)

    by xmath ( 90486 ) on Monday February 23, 2004 @05:28PM (#8366409)
    Please correct me if I'm mistaken

    You are. A buffer overflow works by overflowing a stack-allocated buffer, causing other stack-allocated data to be overwritten. The usual method of exploiting this is by overwriting the return address with a value that points back into the buffer, so that the function will return straight into the buffer data, where the cracker will have put executable code of course.

    A way to provide some protection against this is by disabling the ability to execute code that is located on stack.

    Note that:
    1. there are already linux kernel patches to do this on x86 hardware, but they incur a slight performance penalty because they're implemented by abusing page-table caches (there are separate ones for code and data, and you can deliberately make 'em inconsistent so that the table entry for data says access is allowed, while the one for code says it's disallowed)
    2. this does not prevent buffer overflow exploits entirely, it just makes 'em a lot harder. There are tricks you can still use sometimes like putting the known address of some useful library function into the return address

    hope this helps to clear it up a bit

  • Nope. (Score:2, Informative)

    by Anonymous Coward on Monday February 23, 2004 @05:29PM (#8366425)
    Buffer overflows occur because the call-tree is in the same, writable stack as the call-frames (local variables). Furthermore, languages like C make it very easy to define "buffers" that are located in the local stackframe, instead of forcibly allocating memory in the heap.

    So, the problem is that somehow (e.g. not limiting the amount of input you read into a fixed sized buffer) you overrun the end of your buffer on the local stackframe with the return address at some point just after it.

    You have to know how far the return address is from the start of the overflowed buffer, and where you'd like to put the instruction pointer now. As you don't generally have anywhere else, you point the instruction pointer at some point on the stack where you overflowed it. Make sure your overflow includes exploit code, and enough NOPs first to allow for any call depth.

    Barricades against this technique:
    - random address for start of stack
    - non-executable stacks
    - forwards-growing stacks
  • Re:Awesome (Score:4, Informative)

    by dustman ( 34626 ) <dleary.ttlc@net> on Monday February 23, 2004 @05:36PM (#8366501)
    Do they overflow the current process's virtual address space?

    No. On the stack itself, in addition to the local data for a function (and the saved registers), is the return address that you are going to jump back to after the function is complete. Buffer overflow exploits write past the end of the buffer. So you are overflowing the function's local data, not the entire stack segment. As the previous poster mentioned, because the stack grows downward, your overflow can write over the return address, which is where all the nastiness starts.

    In addition to this is the fact that the binaries are always the same for each machine, and the process's memory all logically maps to the same location (Windows user code maps to 0x10000000).

    So, say someone writes a program and somewhere has a static buffer for input which is 256 bytes, and doesn't check bounds on input data. You can construct an input which is more than 256 bytes, and your data will overwrite stuff which is outside of the input buffer, perhaps the return address. So, with the proper input, you can make the program jump to an arbitrary point.

    Usually, whenever a function is called, it will be called at the same depth of recursion. Like, I might make a function, "authenticate", which asks for your username and password (storing them without checking in my 256 byte buffers), then checks credentials and either proceeds or returns an error code.

    This function will probably only be called once, and it will always be called at the same time in program execution, relatively early. The stack will always be the same size when it is called. (Like, your call stack at this point might look like: main() -> initialize() -> authenticate()) or whatever).

    Sometimes, a function might be called from multiple places... Maybe there is something like "getAddress()", which does pretty much the same thing, it grabs an address input by the user, but it might be called from many places in the executable. Each call will have its own characteristic call stack, and offset within the stack segment. The stack frames of all functions leading down to it will be present. (You can usually examine the current call stack in a debugger).

    If you know "where" the function will be called from in this manner, you will know the exact stack layout at this point, including the absolute addresses and everything (which you know because the binaries are always the same and the executable always maps to the same logical place in memory).

    So, you can overwrite the return address so that it returns to inside the input buffer. Then, you have 256 bytes (in this example) to work with for constructing your little exploit. Often, the exploit will be just a stub which downloads another malware program and launches it, or whatever.

    There is a little bit more to it. Like, you usually need to construct your input so that you don't have any 0 bytes within it, because that will signify the end of a string. The input, even though it's not bounds checked, might still be validated in some fashion. (I think I remember reading about someone who had made a "codec", so that the input data could be composed of valid alphanumeric characters. So, even the unpacker was alphanumeric, which is pretty cool).
  • Re:Partial solution (Score:2, Informative)

    by Maddog_D97 ( 743377 ) on Monday February 23, 2004 @05:47PM (#8366625)
    Because it's faster and doesn't suck up a lot of memory. It wasn't until recently that John Carmack switched to writing in C++ instead of C for his new DooM ]|[ engine.
  • by ^BR ( 37824 ) on Monday February 23, 2004 @05:49PM (#8366660)

    ...already use mprotect() [openbsd.org] to set the execute permission on the area of memory where they generate the code... On Unix that's it...

    By the way... What is (or is there) the Windows equivalent?

  • by Ayanami Rei ( 621112 ) * <rayanami&gmail,com> on Monday February 23, 2004 @06:05PM (#8366871) Journal
    Read on [x86-64.org]. There is specific support for the NX flag on pages. If you boot with noexec=on, then stack/heap/data is automatically protected. If the page-fault handler sees your thread fault because of an NX violation, the process is killed.
    Caveats: you can't mprotect it back to execute status, and it breaks some software, especially Mozilla/Java/Ada (just like exec-shield...)

  • by Ben Hutchings ( 4651 ) on Monday February 23, 2004 @06:06PM (#8366879) Homepage
    VirtualProtect() [microsoft.com]
  • by Luminous Coward ( 445673 ) on Monday February 23, 2004 @06:26PM (#8367079)
    Wait just a second here. Do you mean to tell me that Intel and AMD still don't have no-execute flags for their page tables?
    You need to pay attention. AMD's Opteron has been available for 10 months now. Processors that support the AMD64 instruction set architecture (e.g. Opteron) do have a per-page no-execute bit.

    See AMD64 Architecture Programmer's Manual Volume 2: System Programming [amd.com]
    5.6.1 No Execute (NX) Bit (page 173)

  • Re:Good or Bad idea? (Score:3, Informative)

    by Hoser McMoose ( 202552 ) on Monday February 23, 2004 @06:34PM (#8367126)
    First off, if they port it to damn near any other architecture out there except for PowerPC, then that other platform WILL have this no-execute bit implemented already. x86 is REALLY late to the game with this technology, it already exists in SPARC, HPPA, IA64, Alpha, etc. etc. It's really just x86 and PowerPC that are the two notable exceptions to the rule; they enforced this sort of thing through segmentation (ie it can and has been implemented in regular x86 code, it's just a bit sloppier and requires some kind of ass-backwards hacks to get it to work with the operating system).

    Also this no-execute bit is NOT a sure-fire fix. In fact, it's not a fix at all: buffer overflows can definitely still occur, it's just a whole heck of a lot harder to do anything too malicious once that buffer has been overflowed. Basically you just end up with a DoS attack instead of a remote access vulnerability. Still something that should be fixed though.

    So, is this a good idea? Hell yeah! An extra line of defense is ALWAYS a good thing!
  • by Anonymous Coward on Monday February 23, 2004 @06:52PM (#8367345)
    They could not but implied that it was common knowledge.

    It's not like they never [tomshardware.com] had [sysopt.com] problems [com.com] in the past [com.com].
  • by edwdig ( 47888 ) on Monday February 23, 2004 @06:58PM (#8367441)
    32-bit x86 chips have the write and execute flags combined in the page tables. The segment descriptors have separate bits for them. Intel basically expected people to use segmentation on the 386 rather than paging, so the original paging implementation was a little subpar.

    Segmentation offers much finer control over memory (allocations can be sized to the exact byte, with a fault generated on any out of bounds access) and a larger virtual address space (48 bits, accessed in segments up to 4 GB). The problem with segmentation is the kernel memory management becomes a lot more complicated, so OS developers have avoided using the segmentation. x86 chips are the only ones to provide segmentation support, so developers of portable OS's avoided the feature as well.

    When AMD designed the x86-64 architecture, they had to design new page tables to deal with 64-bit addresses. While making that change, they also separated the write & execute permission bits.
  • by Anonymous Coward on Monday February 23, 2004 @07:22PM (#8367652)
    Um, folks, Sun UltraSPARC chips and Solaris have been doing overflow protection for years. This is nothing new. All you have to do is enable it.
  • by Hoser McMoose ( 202552 ) on Monday February 23, 2004 @07:29PM (#8367740)

    Do you remember when the "Intel Inside" logo came out?

    1991, according to Intel themselves [intel.com]

    There was no real competition. (it was the Pentium days) There were other processors, but the Pentium pretty much blew them away.

    The Intel Inside marketing program started two years before the Pentium came out. At that time AMD was competing very effectively with the 486. So much so that Intel wanted a new marketing campaign to try to bring people back. Even in the early Pentium days AMD continued to compete effectively. Their 5x86 120MHz chips were very competitive with the Pentium 60 and Pentium 66, and even the 75MHz Pentium chips. It wasn't really until '94 or '95 that Intel really started leaving AMD in the dust, mainly because AMD was WAY late at releasing their K5 processor and when it did come out they had so many problems manufacturing it that it was clocked much lower than initially hoped for. Cyrix continued to offer some competition for Intel during this time, but they were plagued by crappy motherboards which gave them a poor reputation (it was a bit of a self-fulfilling prophecy thing: reputation for being cheap crap meant that they were put on cheap crap motherboards which resulted in a poor quality system).

    it will be [better] because it is cheaper

    And that is somehow an invalid reason for a product to be better?

  • by Anonymous Coward on Monday February 23, 2004 @07:38PM (#8367825)

    The problem with AMD was never the AMD chip, but the stability of the 3rd party motherboard manufacturers. Since AMD chips were in the "bargain" category, "bargain" motherboard makers cut corners and caused stability issues. So vendors think AMD = cheap and worse quality. There were also early issues with AMD being able to fill orders, which made vendors hesitant to purchase from them. One of the problems with the boom-bust cycle of chips is that even if AMD makes the greatest chip out there, they don't have the manufacturing support to fill all the orders and significantly increase market share, and they don't have sufficient financials to support increasing their production capacity. They just FINALLY made a profit last quarter, when Intel was profitable even in the worst of times.
  • by Anonymous Coward on Monday February 23, 2004 @07:58PM (#8368026)
    This "new feature" for marking pages (such as the stack) as non-executable is *already* part of the Athlon 64 chips. The New Scientist article was talking about how a new version of XP will begin using it soon--not that the hardware support isn't yet released.

    AMD has already made Intel look bad by getting their 64-bit CPU into the mass-market first, and this feature was implemented partly to provide a facility that some other platforms (e.g. Solaris on Sparc) have had for quite some time.
  • That is incorrect (Score:1, Informative)

    by Anonymous Coward on Monday February 23, 2004 @08:12PM (#8368132)
    The software has to be compiled to take advantage of this (hence the new version of XP)

    Actually, it only has to work in 64-bit mode in most of the cases. That is because by default heap allocs request the pages to have PROT_READ, which on x86-32 implies PROT_EXEC as well - not so in AMD64 64-bit mode. So, for instance, unless it's explicitly unset in the kernel boot parameters, AMD64 Linux kernels give buffer overflow protection even for 32-bit apps, no recompile needed.

    Of course, in the Windows world you need to wait for the 64-bit kernel, hence the new version of XP.
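    A quick, Linux-specific way to see this for yourself (a sketch, not anything authoritative) is to dump the permission flags on your own process's heap from /proc/self/maps. Under an NX-enforcing kernel the heap shows up as "rw-p"; on legacy x86-32, where read implies execute, the same mapping is effectively executable too:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* touch the heap so a [heap] segment exists */
        char *p = malloc(4096);
        if (!p) return 1;
        p[0] = 'x';

        FILE *maps = fopen("/proc/self/maps", "r");
        if (!maps) { perror("fopen"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, maps)) {
            if (strstr(line, "[heap]"))
                fputs(line, stdout);  /* e.g. "...-... rw-p ... [heap]" */
        }
        fclose(maps);
        free(p);
        return 0;
    }
    ```

    Note the permission column: "rw-p" means readable, writable, private, and crucially no "x".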
  • by Loki_1929 ( 550940 ) on Monday February 23, 2004 @10:39PM (#8369629) Journal
    "I for one had trouble for a while remembering" ... remembering a lot of things.

    Like the PIII Coppermine CPUs that wouldn't even boot [bbc.co.uk] sometimes.

    Or the randomly rebooting [cw.com.hk] PII Xeons.

    Or the voltage problems [com.com] with certain PIII Xeons.

    Or the memory request system hang bug in the PIII/Xeon [hardwarecentral.com].

    Or the PIII's SSE bug [zdnet.co.uk] whose 'fix' killed i810 compatibility.

    Or the MTH [com.com] bug in the PIII CPUs that forced Intel customers to replace boards and RAM.

    Or the recalled [com.com], that's right, recalled [com.com] PIII chips at 1.13GHz.

    Or the recalled [com.com] (there's that word again) Xeon SERVER chips at 800 and 900MHz.

    Or the recalled [techweb.com] (that word, AGAIN?!) cc820 "cape cod" Intel motherboards.

    Or the data overwriting [zdnet.co.uk] bug in the P4 CPUs.

    Or the P4 chipset [com.com] bug that killed video performance.

    Or the Sun/Oracle P4 bug [indiana.edu].

    Or the Itanium [theinquirer.net] bug that was severe enough to make Compaq halt Itanium shipments.

    Or the Itanium 2 bug [infoworld.com] that "can cause systems to behave unpredictably or shut down".

    Or the numerous other P4/Xeon/XeonMP bugs [theinquirer.net] that have been hanging around.

    Yes, I did consider the possibility that there might just be some basis for the belief that Intel's products are superior. Having considered that, in light of the mountains of evidence to the contrary, I shall now proceed to laugh at you.

    Ha ha ha.

    Now go away, or I shall mock you again.

  • by Loki_1929 ( 550940 ) on Monday February 23, 2004 @11:12PM (#8369857) Journal
    "Granted you could be nervous about this since 3dfx went the way of the dodo, but since AMD doesn't make POS video cards that double the weight of your box...they should be safe ;-)"

    3DFX's problem had nothing to do with their products. Their problem had to do with the fact that they got greedy - extremely greedy. After their first few successful graphics chips were launched, they basically shut their board makers out in the European market with the purchase of STB. They began producing their own boards, and had production capacity sufficient to supply the European market, and that's about it. Thus, other board makers were still necessary for other markets, such as the US. Having been bent over by 3DFX in the European market, board makers essentially told 3DFX to take their chips and stuff them.

    Thus, 3DFX was left with the choice of abandoning every market but the European (you're joking, right?), or dipping into (read: draining) their R&D budget. Noting that option 1 was suicidal, 3DFX chose the latter. Thus, production was bumped, the new Voodoo 3 graphics cards were an outstanding bunch, and virtually no R&D was accomplished for a few years. Wait; did I say they didn't do any R&D for a few years?! Yes - yes I did.

    Thus, the thus far sub-standard (where 3DFX was the standard) 3D graphics card/chip makers were able to catch up to, and surpass 3DFX in both performance and features. Glide, 3DFX's baby, was eclipsed by the more open, if less fully-featured, OpenGL in game support. By the time 3DFX had enough production capability to start working on new cards, the writing was on the wall.

    Ati, Matrox, and nVidia were already too far ahead for 3DFX to have a chance competing against. 3DFX dumped the last of their cash into creating an extraordinarily powerful, goofy as hell looking, wildly expensive set of cards, which saw almost no time whatsoever in the market before 3DFX was forced to sell all IP rights to nVidia. 3DFX, nothing more than a shell of a company with no IP, then collapsed about a month later.

    The last good card from 3DFX? The Voodoo 3 3500. Their last great card? The Voodoo 3 3000, whose overclocking ability was absolutely beyond anything anyone had ever before imagined possible. With stock cooling, one could achieve gains that would be thought of as ridiculous (percentage-wise) today. My own V3 3000, whose default memory clock speed was 166MHz, hit 220MHz with the stock cooler with no artifacts. I recall pushing it a bit higher with a rigged cooling system before finally replacing the card (it was getting OLD). 200MHz was common for the memory speed on those, and values as high as 240 - 250MHz had been reported, though often not without some artifacts. The quality of components was second to none from 3DFX. It was not their product, but their arrogance that was their undoing.

  • by kryptkpr ( 180196 ) on Monday February 23, 2004 @11:18PM (#8369907) Homepage
    Spend the few extra dollars on a good motherboard with the nForce2 chipset. I run an Asus A7N8X-E Deluxe and in my experience it's very speedy (compared to my old ECS K7S5A, bleh) and packed to the tits with features (FireWire, SATA+RAID, USB2.0, etc..).

    Also, good memory (we're talking at least the lifetime warranty kind here) is totally necessary if you want your system to be stable at high frequencies, it seems AMD CPUs are more sensitive to bad/cheap memory (particularly in ECS boards, they're cheap, but avoid them if you at all can).

    On a side note, AIDA32 shows the chipset bus on this board as being 8-bit HyperTransport v1.0 .. totally cool :)
  • by Anonymous Coward on Tuesday February 24, 2004 @12:32AM (#8370421)
    AMD rocks. I've been running AMD's since my K5-133 that replaced my last Intel chip, a dx/75. (or was it 100? dunno). Even back then, the only problem I had was one game (aces over the pacific) needed a small DOS patch to make it work right with the AMD chip. I suspect it was something intel had done, vice something AMD hadn't done, WRT programming.

    I just bought an AMD 2400 and put it in my mobo that is only rated (officially, anyway) for an XP 1800.

    Well, needless to say, it runs smooth and quiet, right out of the box. I used the crappy fan that came with it.... I mean, this thing is an abomination. But it was free! So I figured I'd test it out noise / heat wise compared to my volcano 9 turbo fan.

    Anyone wanna buy a volcano 9? Cheap?

    Buy AMD man. My Intel desktop at work (2.4 GHz) runs slower than my AMD 1 GHz (clocked to 1.4). Intel lengthened the pipeline, so their clock numbers are really the lie.... AMD's PR ratings are pretty close to accurate in real-world testing.

    The only other person I have heard having any problems lately with an AMD, (and he was quite vociferous in his disdain for the chips) sounded like a classic BAD DIMM with random lockups and bsods etc on his system. I've seen that with Intel chips.... it's the RAM not the chip.

    AMD rocks! go AMD! gogogogo buy one now!

    (I yearn for a 64, but I'll wait till they are faster/cheaper)
  • Re:stupid (Score:2, Informative)

    by SuperFrink ( 640505 ) on Tuesday February 24, 2004 @03:18AM (#8371254) Homepage
    The real problem is ultimately the use of C, and the real solution is not to use a new CPU or add instructions, but to use a language without C's quirks.

    Is the problem the C language or is it that people write code that doesn't check its input well enough?

    The man page for gets() (from Slackware 9.1) reads:
    Never use gets(). Because it is impossible to tell without knowing the data in advance how many characters gets() will read, and because gets() will continue to store characters past the end of the buffer, it is extremely dangerous to use. It has been used to break computer security. Use fgets() instead.

    It's true people write buggy code. Look at PHP. I think it checks bounds on arrays (actually I think they behave like hash tables or list objects but that's beside the point.) Have you never heard of a security bug in something written in PHP?

    Is a bug that lets people run arbitrary commands on your webserver less dangerous than a bug that allows them to run arbitrary executable code? Do you keep wget, gcc and as on your webserver?

    Some languages make it easier to check input (e.g. regexes in Perl) but it's still possible to let something through.
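    The man page's advice boils down to a bounded read. A minimal sketch (using POSIX fmemopen() to stand in for hostile input that is longer than the buffer):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* 32 A's - far longer than the 8-byte buffer below */
        char attack[] = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\n";
        FILE *in = fmemopen(attack, strlen(attack), "r");
        assert(in != NULL);

        char buf[8];
        /* gets(buf) would copy all 32 bytes plus a NUL into the 8-byte
         * buffer, trampling whatever sits after it on the stack.
         * fgets() stops at sizeof buf - 1 characters. */
        assert(fgets(buf, sizeof buf, in) != NULL);

        assert(strlen(buf) == 7);  /* 7 chars + terminating NUL fit in buf */
        printf("read %zu bytes safely\n", strlen(buf));

        fclose(in);
        return 0;
    }
    ```

    Of course this only bounds the copy; whether those 7 bytes are *valid* input is still the programmer's problem, which is the point about PHP and Perl above.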
