AMD Could Profit from Buffer-Overflow Protection 631
spin2cool writes "New Scientist has an article about how AMD and Intel are planning on releasing new consumer chips with built-in buffer-overflow protection. Apparently AMD's chips will make it to market first, though, which some analysts think could give AMD an advantage as the next round of chips are released. The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe."
They are NOT protecting against overflows (Score:5, Informative)
I'd buy (Score:3, Informative)
However, they WILL have to spin it well, better than the "Megahertz Myth," because that didn't work too well for average folks. Best Buy salesmen don't know how to explain "AMD Athlon 289456++XL 3400 MP SSE4 +-7200 BufferXTreme", so they just push Intel...
Re:Linux support (Score:5, Informative)
An OS would then use this to mark pure data pages and areas like the stack as NX, so that overflowing a data structure doesn't allow arbitrary malicious code to be run.
What does it do? (Score:3, Informative)
I've seen patches [ogi.edu] to Linux that provide a non-executable stack. There's also the mprotect(2) [wlug.org.nz] system call to change memory protection from user programs. And I believe OpenBSD has had a non-executable stack in the mainline for at least a couple releases.
So what they're advertising here seems to have already existed. If not, how are the things above possible?
Re:the Chipmaker??? (Score:4, Informative)
AMD Opteron and Athlon 64 already have this (Score:4, Informative)
The AMD Opteron and Athlon 64 chips already [computerworld.com] have the buffer-overflow protection in their hardware, and the feature is already supported by both Linux and Windows XP 64-bit Edition. AMD calls this "Execution Protection"; the basic idea is that the processor will not allow code that arrives on the system via a buffer overflow to be marked as executable. The Slashdot story says "will have" for both Intel and AMD when it should read "AMD already has and Intel will have..."
Re:No silver bullet. (Score:3, Informative)
Re:AMD needs better marketing (Score:5, Informative)
Re:Pathetic (Score:2, Informative)
From the article:
"AMD's Athlon-64 (for PCs) and Opteron (for servers) will protect against buffer overflows when used with a new version of Windows XP."
So either these chips will only work to protect against Microsoft bugs (in conjunction with software), or we'll have to wait until Linux can figure out how to use this feature.
Re:Odd... (Score:1, Informative)
AMD chipsets (Score:2, Informative)
Re:what a drag (Score:5, Informative)
"W^X (pronounced: "W xor X") on architectures capable of pure execute-bit support in the MMU (sparc, sparc64, alpha, hppa). This is a fine-grained memory permissions layout, ensuring that memory which can be written to by application programs can not be executable at the same time and vice versa. This raises the bar on potential buffer overflows and other attacks: as a result, an attacker is unable to write code anywhere in memory where it can be executed. (NOTE: i386 and powerpc do not support W^X in 3.3; however, 3.3-current already supports it on i386, and both these processors are expected to support this change in 3.4). "
Re:the Chipmaker??? (Score:5, Informative)
If you program in C on Intel you are going to have problems without almost fanatical devotion to the Po^H^H management of your memory resources.
That goes for Linux as well, as any check at Bugtraq can confirm.
Yes, people should be very careful when coding in languages and on architectures which allow buffer overflows, but the real solution is at a level lower than the coder's.
KFG
Re:AMD needs better marketing (Score:5, Informative)
As for the other features you mention: you are comparing desktop processors with server processors. You might note the lack of the Opteron in the third-party tests you linked to.
About two months ago someone came to me with a motherboard and an Athlon XP 2600+ processor. They couldn't get it to boot. I took one look at it and realized the heatsink was on backwards; it shut itself down as soon as it got hot enough. I put the heatsink on correctly and the thing booted right up. As for the PCI locking, it's a bit harder to vouch for since I don't see a whole lot of information about it, but I sure do recall seeing tests involving the Opteron. If I could find it right now I would, except I'm on dialup for the first time in six years and it's annoying the hell out of me.
Stack Protection Today (Score:2, Informative)
http://www.trl.ibm.com/projects/security/ssp/ [ibm.com]
Gentoo-specific info here.
http://www.gentoo.org/proj/en/hardened/propolice.xml [gentoo.org]
Re:Code rewrites going to be needed? (Score:5, Informative)
To work with memory protection enabled, applications will need to allocate memory using VirtualAlloc [microsoft.com] and specify the memory options [microsoft.com] to make it executable. Then they can generate and run the code there.
I am assuming that Linux could incorporate some similar functionality, anybody know if someone is working on it?
Re:Code rewrites going to be needed? (Score:3, Informative)
Apps that use self-modifying code probably only run on one architecture - most likely the x86. There are far better ways to protect one's code.
Self-modifying code is prone to bugs, hard to develop, hard to maintain, hardware dependent, and to top it all off - not that effective at providing security.
Re:Awesome (Score:2, Informative)
Re:This is not a future thing - AMD does it today (Score:2, Informative)
Mainline Linux has not yet incorporated technology like this. The PaX project, however, has these features available via a kernel patch that uses software techniques to achieve what will be quite simple now that the hardware can mark pages non-executable.
Yes people, this does mean that as far as buffer overflows go, Windows on amd64 will soon be more resilient than a stock Linux kernel.
A bunch of things (Score:4, Informative)
2) It needs OS support, specifically XP SP2, which isn't out yet.
3) It doesn't really do what it is meant to, I have seen several 'theoretical' discussions on how to circumvent it. Think of it as another hoop to jump through for the black hats.
4) You need to be in 64-bit mode to use it
5) 4) requires a recompilation anyway, why not do it right with the right tools when you recompile?
6) I know of at least one vendor using it to bid against Intel on contracts now.
7) Oh yeah, this will do a lot of good. Really. When has a white paper ever lied?
8) The more you know about things like this, the more you want to move into a cabin in Montana and live off the land.
-Charlie
Re:Old news (Score:5, Informative)
What's better is that CS==DS was a common mode [known as a
So there goes your theory.
Tom
Buffer overflows are in applications, not the CPU (Score:1, Informative)
There is no way to do this at the motherboard level. It needs to happen in the MMU, which is part of the processor. And it isn't specific to buffer overflows; in fact, it does not prevent them. What it does is allow the operating system to mark memory as readable without making it executable. Other processors can already do this, and there are known kludgy workarounds, implemented in OpenBSD and Solar Designer's Linux kernel patches, which make it work even on broken Intel CPUs. AMD and Intel are simply implementing the page-table protection bits the way they should have been implemented back when Intel created the 386.
It's been available on intel for a LONG time...BUT (Score:1, Informative)
The problem has always been that the execute flag has only existed in the segmentation registers.
This change puts the execute flag on the PAGE rather than the SEGMENT.
The segmented mode is SLOW when you combine it with paging (you get a fault on a page, and you have to modify both the page tables AND the segment descriptors for things to work). Going through both mapping structures (page and segment) causes a significant speed reduction in memory access. It also gives you a larger physical memory range... (cache collision penalties too)
Re:Code rewrites going to be needed? (Score:4, Informative)
The Intel x86 architecture has few registers, so if you want to keep lots of values handy, you have to keep swapping values in and out of memory. Alternatively, constants that do not change during a long loop (or a deeply nested one) can be hard-coded into the instruction stream as immediate values. Just before the loop is executed, these hard-coded constants are patched by rewriting the immediate values in the code. An example is code that draws a scaled, translucent sprite: throughout the code the scale remains constant, and if the translucency is uniform, that remains constant too. The code that does the translucent blitting then uses the registers only for values that change during the sprite drawing.
On an 80386 this technique gives a significant speed increase, but on 80486s and above, which have on-chip L1 caches, the code modification may cause cache misses that slow the system down, especially on even newer x86 CPUs with separate instruction and data caches in L1. To make things worse, nowadays most code runs in a multitasking environment, so whether self-modifying code causes a slowdown or a speedup is almost impossible to predict.
Of course, nowadays most drawing is done by hardware-accelerated graphics cards, so this isn't a great example, but there could still be some use for hard-coding values that do not change in a loop.
Re:what a drag (Score:1, Informative)
At last, consumer CPUs catch up with the Alpha (Score:4, Informative)
See This BugTraq posting by Theo de Raadt [securityfocus.com]
Re:Code rewrites going to be needed? (Score:3, Informative)
I think some architectures even disallow writing to code segments altogether -- the L1 or L2 caches won't maintain coherency (again an optimization, as writing to a code segment is rare).
Re:They are NOT protecting against overflows (Score:3, Informative)
That said, if you want to create self-modifying code for some reason (hey, it's actually an interesting field), then you should probably do it in an interpreted language, for several reasons. The first is that you don't screw up the OS; the second is that the interpreter has a fair chance of detecting when the program has entered an infinite loop. (Hmm, it's been in that loop for a few million cycles and its state does not appear to be changing.) This can be done with stacks or counters. I've seen it in some Smalltalk interpreters.
A third advantage is that the virtual machine you run this self-modifying code on doesn't even need to exist in reality, so you can do a lot of weird and wonderful stuff with it, and it will still run.
Protected memory has existed for a long time, and the x86 is just about the last major processor architecture without support for protected memory separating user and OS. I am taking an OS course right now, and I was very surprised at this, as it solves so many malicious-code problems like buffer overruns.
Re:AMD needs better marketing (Score:3, Informative)
Re:Code rewrites going to be needed? (Score:4, Informative)
Re:Linux support (Score:3, Informative)
yes, this is one of the wonderful misfeatures of x86. i don't know what this article is all about. amd64 ALREADY has an execute bit in each pte when it's in long (64-bit) mode. this is nothing new; it's been in amd's manuals for a while. i'd bet it was one of the first x86 problems they planned to fix.
Re:At last, consumer CPUs catch up with the Alpha (Score:2, Informative)
http://www.ajwm.net/amayer/papers/B5000.html
Re:Code rewrites going to be needed? (Score:3, Informative)
There's one legitimate place for rewriting code (Score:2, Informative)
And all of these are cases of an executing piece of code dynamically creating and executing another piece of code which is exactly what happens in a buffer overflow situation.
However, the number of programs that have a legitimate need to do this is tiny. I'm not sure how this chip will accommodate them. There may need to be some kind of OS-level mechanism for code that is trusted. Maybe the JVM itself could switch modes, so that the feature is relaxed only when it is actively writing code. There are definitely workarounds to allow JIT to continue working.
As for copy protection, given a choice between having a system which is secure for me and a system which is secure for them, I'll take the system which is secure for me. What about you?
-------
Create a WAP hosting [chiralsoftware.net] service
Isn't this already possible with segmentation? (Score:5, Informative)
If current operating systems actually used this in addition to paging (which is mostly all they use now), why would they need to create a new chip? Linux does not fully utilize segmentation, mostly only paging [clemson.edu]. I don't have any resources on MS OS design right now so I can't comment on it... (although maybe looking at the recent source would help some...)
Re:What are "SPARC standards"? Thanks. [nt] (Score:2, Informative)
Scalable Processor ARChitecture.
Re:AMD needs better marketing (Score:3, Informative)
SP2 will break a *lot* of code and it's well worth downloading the beta and testing your stuff with it - your boss will thank you for it
Re:AMD needs better marketing (Score:4, Informative)
Nor will there ever be IMO. But this combined with good practices like not running as admin when we don't need to (reading email, web browsing, game playing for example) will be a huge leap forward.
Re:Awesome (Score:3, Informative)
You are. A buffer overflow works by overflowing a stack-allocated buffer, causing other stack-allocated data to be overwritten. The usual method of exploiting this is by overwriting the return address with a value that points back into the buffer, so that the function will return straight into the buffer data, where the cracker will have put executable code of course.
A way to provide some protection against this is by disabling the ability to execute code that is located on stack.
Note that:
1. there are already linux kernel patches to do this on x86 hardware, but they incur a slight performance penalty because they're implemented by abusing the split translation caches (there are separate ones for code and data, and you can deliberately make 'em inconsistent so that the data entry says access is allowed while the code entry says it's disallowed)
2. this does not prevent buffer overflow exploits entirely, it just makes 'em a lot harder. There are tricks you can still use sometimes like putting the known address of some useful library function into the return address
hope this helps to clear it up a bit
Nope. (Score:2, Informative)
So, the problem is that somehow (e.g. by not limiting the amount of input you read into a fixed-size buffer) you overrun the end of a buffer on the local stack frame, with the return address sitting at some point just after it.
You have to know at what offset the return address sits from the start of the overflowed buffer, and where you'd like to point the instruction pointer. As you don't generally have anywhere else, you point the instruction pointer at some spot on the stack where you overflowed it. Make sure your overflow includes exploit code, and enough NOPs first to allow for any call depth.
Barricades against this technique:
- random address for start of stack
- non-executable stacks
- forwards-growing stacks
Re:Awesome (Score:4, Informative)
No. On the stack itself, in addition to the local data for a function (and the saved registers), is the return address that you are going to jump back to after the function is complete. Buffer overflow exploits write past the end of the buffer. So you are overflowing the function's local data, not the entire stack segment. As the previous poster mentioned, because the stack grows downward, your overflow can write over the return address, which is where all the nastiness starts.
In addition to this, is the fact that the binaries are always the same for each machine, and the process's memory all logically maps to the same location (windows user code maps to 0x10000000).
So, say someone writes a program that somewhere has a fixed 256-byte buffer for input, and doesn't check bounds on the input data. You can construct an input of more than 256 bytes, and your data will overwrite things outside the input buffer, perhaps the return address. So, with the proper input, you can make the program jump to an arbitrary point.
Usually, whenever a function is called, it will be called at the same depth of recursion. Like, I might make a function, "authenticate", which asks for your username and password (storing them without checking in my 256 byte buffers), then checks credentials and either proceeds or returns an error code.
This function will probably only be called once, and it will always be called at the same time in program execution, relatively early. The stack will always be the same size when it is called. (Like, your call stack at this point might look like: main() -> initialize() -> authenticate()) or whatever).
Sometimes, a function might be called from multiple places... Maybe there is something like "getAddress()", which does pretty much the same thing, it grabs an address input by the user, but it might be called from many places in the executable. Each call will have its own characteristic call stack, and offset within the stack segment. The stack frames of all functions leading down to it will be present. (You can usually examine the current call stack in a debugger).
If you know "where" the function will be called from in this manner, you will know the exact stack layout at this point, including the absolute addresses and everything (which you know because the binaries are always the same and the executable always maps to the same logical place in memory).
So, you can overwrite the return address so that it returns to inside the input buffer. Then, you have 256 bytes (in this example) to work with for constructing your little exploit. Often, the exploit will be just a stub which downloads another malware program and launches it, or whatever.
There is a little bit more to it. Like, you usually need to construct your input so that you don't have any 0 bytes within it, because that will signify the end of a string. The input, even though it's not bounds checked, might still be validated in some fashion. (I think I remember reading about someone who had made a "codec", so that the input data could be composed of valid alphanumeric characters. So, even the unpacker was alphanumeric, which is pretty cool).
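To tie the walkthrough above to actual code, here's a sketch of the kind of function being described (the names are made up for illustration): a fixed 256-byte stack buffer copied into with no bounds check, next to the bounded version that closes the hole.

```c
#include <string.h>

/* The pattern described above: a fixed 256-byte stack buffer filled
 * with no length check. Anything past byte 255 spills into adjacent
 * stack data and, eventually, the saved return address.
 * Returns the length stored, for demonstration. */
int vulnerable_copy(const char *input)
{
    char buf[256];
    strcpy(buf, input);              /* no bounds check: classic overflow */
    return (int)strlen(buf);
}

/* The boring fix: bound the copy to the buffer size. */
int safe_copy(const char *input)
{
    char buf[256];
    strncpy(buf, input, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';      /* strncpy may not terminate */
    return (int)strlen(buf);
}
```

Feed vulnerable_copy() 300 bytes and you've trashed the frame; safe_copy() silently truncates at 255 instead.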
Re:Partial solution (Score:2, Informative)
Well written applications... (Score:3, Informative)
By the way... What is (or is there) the Windows equivalent?
Wrong. NX is in the 2.4.x kernels, at least. (Score:4, Informative)
Caveats: you can't mprotect it back to execute status, and it breaks some software, especially Mozilla/Java/Ada (just like exec-shield...)
Re:Well written applications... (Score:3, Informative)
Re:AMD needs better marketing (Score:2, Informative)
See AMD64 Architecture Programmer's Manual Volume 2: System Programming [amd.com]
5.6.1 No Execute (NX) Bit (page 173)
Re:Good or Bad idea? (Score:3, Informative)
Also, this no-execute bit is NOT a sure-fire fix. In fact, it's not a fix at all; buffer overflows can definitely still occur, it's just a whole heck of a lot harder to do anything too malicious once the buffer has been overflowed. Basically you end up with a DoS attack instead of a remote-access vulnerability. Still something that should be fixed, though.
So, is this a good idea? Hell yeah! An extra line of defense is ALWAYS a good thing!
Re:AMD needs better marketing (Score:1, Informative)
It's not like they never [tomshardware.com] had [sysopt.com] problems [com.com] in the past [com.com].
Re:AMD needs better marketing (Score:5, Informative)
Segmentation offers much finer control over memory (allocations can be sized to the exact byte, with a fault generated on any out-of-bounds access) and a larger virtual address space (48 bits, accessed in segments of up to 4 GB). The problem with segmentation is that kernel memory management becomes a lot more complicated, so OS developers have avoided using it. x86 chips are the only ones to provide segmentation support, so developers of portable OSes avoided the feature as well.
When AMD designed the x86-64 architecture, they had to design new page tables to deal with 64-bit addresses. While making that change, they also separated the write and execute permission bits.
SUN has done this for years (Score:2, Informative)
Re:AMD is doing just fine (Score:5, Informative)
Do you remember when the "Intel Inside" logo came out?
1991, according to Intel themselves [intel.com]
There was no real competition. (it was the Pentium days) There were other processors, but the Pentium pretty much blew them away.
The Intel Inside marketing program started two years before the Pentium came out. At that time AMD was competing very effectively with the 486. So much so that Intel wanted a new marketing campaign to try to bring people back. Even in the early Pentium days AMD continued to compete effectively. Their 5x86 120MHz chips were very competitive with the Pentium 60 and Pentium 66, and even the 75MHz Pentium chips. It wasn't really until '94 or '95 that Intel really started leaving AMD in the dust, mainly because AMD was WAY late at releasing their K5 processor and when it did come out they had so many problems manufacturing it that it was clocked much lower than initially hoped for. Cyrix continued to offer some competition for Intel during this time, but they were plagued by crappy motherboards which gave them a poor reputation (it was a bit of a self-fulfilling prophecy thing: reputation for being cheap crap meant that they were put on cheap crap motherboards which resulted in a poor quality system).
it will be [better] because it is cheaper
And that is somehow an invalid reason for a product to be better?
Re:AMD needs better marketing (Score:1, Informative)
The problem with AMD was never the AMD chip, but the stability of the third-party motherboards. Since AMD chips were in the "bargain" category, "bargain" motherboard makers cut corners and caused stability issues. So vendors think AMD = cheap and worse quality. There were also early issues with AMD being able to fill orders, which makes vendors hesitant to purchase from them. One of the problems with the boom-bust cycle of chips is that even if AMD makes the greatest chip out there, they don't have the manufacturing capacity to fill all the orders and significantly increase market share, and they don't have sufficient financials to support increasing their production capacity. They just FINALLY made a profit last quarter, when Intel was profitable even in the worst of times.
Once again, the reporter should read carefully... (Score:3, Informative)
AMD has already made Intel look bad by getting their 64-bit CPU into the mass-market first, and this feature was implemented partly to provide a facility that some other platforms (e.g. Solaris on Sparc) have had for quite some time.
That is incorrect (Score:1, Informative)
actually, only the kernel has to be in 64-bit mode in most cases. That is because by default heap allocs request pages with PROT_READ, which on x86-32 implies PROT_EXEC as well - not so in AMD64 64-bit mode. so for instance, unless it is explicitly disabled in the kernel boot parameters, an AMD64 Linux kernel gives buffer-overflow protection even to 32-bit apps, no recompile needed
of course, in windows world you need to wait for the 64-bit kernel, hence the new version of XP.
Re:AMD needs better marketing (Score:5, Informative)
Like the PIII Coppermine CPUs that wouldn't even boot [bbc.co.uk] sometimes.
Or the randomly rebooting [cw.com.hk] PII Xeons.
Or the voltage problems [com.com] with certain PIII Xeons.
Or the memory request system hang bug in the PIII/Xeon [hardwarecentral.com].
Or the PIII's SSE bug [zdnet.co.uk] whose 'fix' killed i810 compatability.
Or the MTH [com.com] bug in the PIII CPUs that forced Intel customers to replace boards and RAM.
Or the recalled [com.com], that's right, recalled [com.com] PIII chips at 1.13GHz.
Or the recalled [com.com] (there's that word again) Xeon SERVER chips at 800 and 900MHz.
Or the recalled [techweb.com] (that word, AGAIN?!) cc820 "cape cod" Intel motherboards.
Or the data overwriting [zdnet.co.uk] bug in the P4 CPUs.
Or the P4 chipset [com.com] bug that killed video performance.
Or the Sun/Oracle P4 bug [indiana.edu].
Or the Itanium [theinquirer.net] bug that was severe enough to make Compaq halt Itanium shipments.
Or the Itanium 2 bug [infoworld.com] that "can cause systems to behave unpredictably or shut down".
Or the numerous other P4/Xeon/XeonMP bugs [theinquirer.net] that have been hanging around.
Yes, I did consider the possibility that there might just be some basis for the belief that Intel's products are superior. Having considered that, in light of the mountains of evidence to the contrary, I shall now proceed to laugh at you.
Ha ha ha.
Now go away, or I shall mock you again.
Re:You know what AMD Needs? (Score:4, Informative)
3DFX's problem had nothing to do with their products. Their problem had to do with the fact that they got greedy - extremely greedy. After their first few successful graphics chips were launched, they basically shut their board makers out of the European market with the purchase of STB. They began producing their own boards, and had production capacity sufficient to supply the European market, and that's about it. Thus, other board makers were still necessary for other markets, such as the US. Having been bent over by 3DFX in the European market, board makers essentially told 3DFX to take their chips and stuff them.
Thus, 3DFX was left with the choice of abandoning every market but the European one (you're joking, right?), or dipping into (read: draining) their R&D budget. Noting that option 1 was suicidal, 3DFX chose the latter. Production was bumped, the new Voodoo 3 graphics cards were an outstanding bunch, and virtually no R&D was accomplished for a few years. Wait; did I say they didn't do any R&D for a few years?! Yes - yes I did. Thus, the thus-far sub-standard (where 3DFX was the standard) 3D graphics card/chip makers were able to catch up to, and surpass, 3DFX in both performance and features. Glide, 3DFX's baby, was eclipsed in game support by the more open, if less fully-featured, OpenGL.
By the time 3DFX had enough production capability to start working on new cards, the writing was on the wall. ATI, Matrox, and nVidia were already too far ahead for 3DFX to have a chance of competing. 3DFX dumped the last of their cash into creating an extraordinarily powerful, goofy-as-hell-looking, wildly expensive set of cards, which saw almost no time whatsoever on the market before 3DFX was forced to sell all IP rights to nVidia. 3DFX, nothing more than a shell of a company with no IP, then collapsed about a month later.
The last good card from 3DFX? The Voodoo 3 3500. Their last great card? The Voodoo 3 3000, whose overclocking ability was absolutely beyond anything anyone had previously imagined possible. With stock cooling, one could achieve gains that would be thought of as ridiculous (percentage-wise) today. My own V3 3000, whose default memory clock was 166MHz, hit 220MHz with the stock cooler with no artifacts. I recall pushing it a bit higher with a rigged cooling system before finally replacing the card (it was getting OLD). 200MHz was common for the memory speed on those, and values as high as 240 - 250MHz had been reported, though often not without some artifacts. The quality of 3DFX's components was second to none. It was not their product, but their arrogance, that was their undoing.
Re:AMD needs better marketing (Score:5, Informative)
Also, good memory (we're talking at least the lifetime-warranty kind here) is totally necessary if you want your system to be stable at high frequencies; it seems AMD CPUs are more sensitive to bad/cheap memory (particularly in ECS boards; they're cheap, but avoid them if you at all can).
On a side note, AIDA32 shows the chipset bus on this board as being 8-bit HyperTransport v1.0
Re:AMD needs better marketing (Score:1, Informative)
I just bought an AMD 2400 and put it in my mobo that is only rated (officially, anyway) for an XP 1800.
Well, needless to say, it runs smooth and quiet, right out of the box. I used the crappy fan that came with it... I mean, this thing is an abomination. But it was free! So I figured I'd test it out noise- and heat-wise compared to my Volcano 9 turbo fan.
Anyone wanna buy a volcano 9? Cheap?
Buy AMD, man. My Intel desktop at work (2.4 GHz) runs slower than my AMD 1 GHz (clocked to 1.4). Intel lengthened the pipeline, so their clock numbers are really the lie... AMD's PR ratings are pretty close to accurate in real-world testing.
The only other person I have heard having any problems lately with an AMD, (and he was quite vociferous in his disdain for the chips) sounded like a classic BAD DIMM with random lockups and bsods etc on his system. I've seen that with Intel chips.... it's the RAM not the chip.
AMD rocks! go AMD! gogogogo buy one now!
(I yearn for a 64, but I'll wait till they are faster/cheaper)
Re:stupid (Score:2, Informative)
Is the problem the C language, or is it that people write code that doesn't check its input well enough?
The man page for gets() (from Slackware 9.1) reads:
It's true people write buggy code. Look at PHP. I think it checks bounds on arrays (actually I think they behave like hash tables or list objects but that's beside the point.) Have you never heard of a security bug in something written in PHP?
Is a bug that lets people run arbitrary commands on your webserver less dangerous than a bug that allows them to run arbitrary executable code? Do you keep wget, gcc and as on your webserver?
Some languages make it easier to check input (e.g. regexes in Perl), but it's still possible to let something through.