AMD Could Profit from Buffer-Overflow Protection
spin2cool writes "New Scientist has an article about how AMD and Intel are planning on releasing new consumer chips with built-in buffer-overflow protection. Apparently AMD's chips will make it to market first, though, which some analysts think could give AMD an advantage as the next round of chips are released. The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe."
They could profit from overflows too... (Score:5, Funny)
Re:They could profit from overflows too... (Score:5, Funny)
Your account might profit too, if it's next in sequence after AMD's...
Re:They could profit from overflows too... (Score:4, Funny)
AMD needs better marketing (Score:5, Insightful)
Re:AMD needs better marketing (Score:5, Insightful)
Nobody knows if Intel is better, but they don't want a computer that "lacks" Intel inside. They simply guess that if it's inside, it's better than not having it inside.
It is brilliant. It can't be copied or AMD looks like a "me too!" player. It can't be contested because it's just vague enough not to claim that the machine is any better for having Intel inside, but it implies that anything else is somehow inferior.
Re:AMD needs better marketing (Score:5, Funny)
I always thought "Intel Inside" was a warning label.
Re:AMD needs better marketing (Score:5, Insightful)
To this day, the legacy of Cyrix shadows AMD, whose marketing uses a supposed clock speed rather than the actual one.
Fact of the matter is that Intel has so much branding that even being behind AMD on a few releases isn't going to be enough to displace Intel from being #1. What AMD is good for is the consumer: it keeps Intel from being a monopoly, and competition leads to innovation - otherwise Intel wouldn't have brought x86-64 to the general consumer for years. Not that I blame their logic, but then there wasn't a need to jump to the Pentium either - the 486 still had a lot to offer at the time.
Athlon was NOT the first AMD CPU (Score:4, Insightful)
The AMD K5, K6, K6-II, and K6-III were all decent chips, but were nothing more than the "bargain" chip. What gave Intel the real lead over AMD was the combination of several years of the fastest chips being only available from Intel and the public knowing who made their chip.
Re:AMD needs better marketing (Score:5, Interesting)
Re:AMD needs better marketing (Score:5, Insightful)
I bet it had more to do with ensuring their profit margin.
Re:AMD needs better marketing (Score:5, Insightful)
Also, I suspect AMD suffers from the poor reputations of previous Intel competitors who truly did have unreliable, inferior products. I for one had trouble for a while remembering which of AMD and Cyrix was the one to avoid, so for the average consumer, choosing the always-reliable Intel makes some sense.
AMD still needs some time to build up the reputation Intel has. If they can continue building reliable products without cutting too many corners as they have done in the past to keep up in the race against the giant, they may eventually obtain such a reputation, but such things take time.
Re:AMD needs better marketing (Score:5, Insightful)
I've always been of the opinion that if you're in the habit of removing your heatsink from a running processor, you have deeper problems than worrying about whether or not it will melt. Tom's sure managed to keep a lot of people I know from buying AMD, which is pretty funny considering how much cooler AMD chips run these days compared to Intel.
Re:AMD needs better marketing (Score:5, Insightful)
Also, AMD's much larger problem was motherboards. VIA chipsets used to suck hard, and AMD's own were almost as bad. I remember when the Athlons were fairly new it was time for me to upgrade, so I decided to get one based on price. I got a 700MHz slot Athlon and a top-of-the-line Abit board with a VIA chipset. I then proceeded to fight with my system for two weeks. I could not make it work in either 98 or 2000. It just would not play nice with my GeForce or my pro audio card. I finally sent it back, got a 440BX and an Intel P3 700, which I then used for about two years.
Now I know the situation is completely different today, but that sort of thing sticks with many companies and OEMs. Trust is easy to lose, hard to regain. Not fair, but that's how the world works.
Only recently have I started recommending ATI video cards. Why? Because I supported ATIs in many situations and their drivers were trash. 2D was fine, but try any 3D and you were asking for BSODs. That's now changed: their drivers are in every way as solid as nVidia's and their hardware is better. But it took time for me to trust that. I had to use the cards and see them used in a number of different environments before I was ready to declare them stable enough for use in production systems.
Also, the PR numbers aren't helping. Many people see them as dishonest, especially since they haven't been consistent (some of the more recent chips haven't performed at the level their PR rating would imply). This again hurts credibility in the eyes of some people.
It's not fair per se, but it is the way of the world. You burn me, it takes time for me to trust you won't do it again.
Re:AMD needs better marketing (Score:5, Insightful)
And, why, exactly, would you remove the heatsink from a CPU while it is running?
Moreover, this was not a flaw in the Athlon. The Athlon, since Athlon XP, has contained a thermal diode to enable safe thermal shutdown. The motherboard that Tom's Hardware used did not have the thermal protection circuitry.
Losing a CPU to "thermal death" was a rare occurrence. Most CPUs that experienced "thermal death" had improperly installed thermal solutions (e.g. the clip was not installed properly). A fan failure or failure to use thermal compound (e.g. a pad or grease) would likely not cause damage to the CPU, even without thermal protection. Only a lack of die-to-heatsink contact (e.g. with an improperly installed shim or a poorly installed heatsink that detached during movement) would likely cause the Athlon to experience "thermal death" as shown in the Tom's Hardware video.
"whereas Intels even back then would simply slow down"
The Tom's Hardware Guide video was a fake. The CPU temperature never exceeded 30C (look at the thermal probe). Thermal throttle-down on the P4 occurs when the CPU hits 85C. And, yes, the system will crash or simply become completely unusable if the heatsink is removed.
"without cutting too many corners as they have done in the past"
Right. Intel has never cut corners, particularly not with major logic bugs in the Pentium, PII, PIII, P4, and Itanium.
Look, CPUs are not flawless. But the CPU thermal issue you speak of really is not a huge issue. With a properly installed heatsink (like the heatsinks on a computer you would buy from HP or eMachines), it never was an issue. And today every new motherboard has thermal protection.
Tom's Hardware did a disservice to the community and to AMD by taking a relatively minor issue that affected a small number of people and blowing it out of proportion to a huge flaw.
If you read Tom's Hardware for as long as I have, you begin to notice a pattern: Tom is an egotistical nut. He posted one editorial stating that the performance war between Intel and AMD was bad for consumers (hmmm... my $90 Athlon XP 2600+ would seem to refute that, as would sub-$200 P4 3.0GHz CPUs). He also says that people buying AMD64 systems are giving AMD a "no interest loan" because of the lack of availability of AMD64 operating systems and applications. Apparently, no one told Tom that the Athlon 64 3000+ is *cheaper* than its similarly performing P4 counterpart (in IA-32 applications). And, apparently, no one told Tom that Intel has adopted the same instruction set for its Pentium 4 based 64-bit systems.
I have lost respect for Tom and his publication. Between his hate-filled articles full of vague statements and mistruths, his constant bashing of AMD (he compared the Athlon XP 3400+, a $450 CPU, to the P4 Extreme Edition, a $900 CPU, and decreed the P4EE the victor because it was marginally faster in 3/4 of the tests), and his suing of other tech websites, Tom has struck out. I only hope that [H]ardOCP doesn't suffer the same fate.
Re:AMD needs better marketing (Score:5, Informative)
Also, good memory (we're talking at least the lifetime-warranty kind here) is absolutely necessary if you want your system to be stable at high frequencies. It seems AMD CPUs are more sensitive to bad/cheap memory (particularly in ECS boards; they're cheap, but avoid them if you at all can).
On a side note, AIDA32 shows the chipset bus on this board as being 8-bit HyperTransport v1.0
Re:AMD needs better marketing (Score:5, Informative)
Like the PIII Coppermine CPUs that wouldn't even boot [bbc.co.uk] sometimes.
Or the randomly rebooting [cw.com.hk] PII Xeons.
Or the voltage problems [com.com] with certain PIII Xeons.
Or the memory request system hang bug in the PIII/Xeon [hardwarecentral.com].
Or the PIII's SSE bug [zdnet.co.uk] whose 'fix' killed i810 compatibility.
Or the MTH [com.com] bug in the PIII CPUs that forced Intel customers to replace boards and RAM.
Or the recalled [com.com], that's right, recalled [com.com] PIII chips at 1.13GHz.
Or the recalled [com.com] (there's that word again) Xeon SERVER chips at 800 and 900MHz.
Or the recalled [techweb.com] (that word, AGAIN?!) cc820 "cape cod" Intel motherboards.
Or the data overwriting [zdnet.co.uk] bug in the P4 CPUs.
Or the P4 chipset [com.com] bug that killed video performance.
Or the Sun/Oracle P4 bug [indiana.edu].
Or the Itanium [theinquirer.net] bug that was severe enough to make Compaq halt Itanium shipments.
Or the Itanium 2 bug [infoworld.com] that "can cause systems to behave unpredictably or shut down".
Or the numerous other P4/Xeon/XeonMP bugs [theinquirer.net] that have been hanging around.
Yes, I did consider the possibility that there might just be some basis for the belief that Intel's products are superior. Having considered that, in light of the mountains of evidence to the contrary, I shall now proceed to laugh at you.
Ha ha ha.
Now go away, or I shall mock you again.
Re:You know what AMD Needs? (Score:4, Informative)
3DFX's problem had nothing to do with their products. Their problem was that they got greedy - extremely greedy. After their first few successful graphics chips were launched, they basically shut their board makers out of the European market with the purchase of STB. They began producing their own boards, and had production capacity sufficient to supply the European market - and that's about it. Thus, other board makers were still necessary for other markets, such as the US. Having been bent over by 3DFX in the European market, board makers essentially told 3DFX to take their chips and stuff them.

Thus, 3DFX was left with the choice of abandoning every market but the European one (you're joking, right?), or dipping into (read: draining) their R&D budget. Noting that option 1 was suicidal, 3DFX chose the latter. Production was bumped, the new Voodoo 3 graphics cards were an outstanding bunch, and virtually no R&D was accomplished for a few years. Wait; did I say they didn't do any R&D for a few years?! Yes - yes I did. Thus, the thus-far sub-standard (where 3DFX was the standard) 3D graphics card/chip makers were able to catch up to, and surpass, 3DFX in both performance and features. Glide, 3DFX's baby, was eclipsed in game support by the more open, if less fully-featured, OpenGL.

By the time 3DFX had enough production capability to start working on new cards, the writing was on the wall. ATI, Matrox, and nVidia were already too far ahead for 3DFX to have a chance of competing. 3DFX dumped the last of their cash into creating an extraordinarily powerful, goofy-as-hell-looking, wildly expensive set of cards, which saw almost no time whatsoever on the market before 3DFX was forced to sell all IP rights to nVidia. 3DFX, by then nothing more than a shell of a company with no IP, collapsed about a month later.
The last good card from 3DFX? The Voodoo 3 3500. Their last great card? The Voodoo 3 3000, whose overclocking ability was absolutely beyond anything anyone had ever before imagined possible. With stock cooling, one could achieve gains that would be thought of as ridiculous (percentage-wise) today. My own V3 3000, whose default memory clock speed was 166MHz, hit 220MHz with the stock cooler with no artifacts. I recall pushing it a bit higher with a rigged cooling system before finally replacing the card (it was getting OLD). 200MHz was common for the memory speed on those, and values as high as 240-250MHz had been reported, though often not without some artifacts. The quality of components from 3DFX was second to none. It was not their product, but their arrogance, that was their undoing.
Re:AMD needs better marketing (Score:4, Insightful)
"I asked them to cite proof that AMD systems were unstable. They could not but implied that it was common knowledge."
You can take this one step further - simply go through the articles I found and posted [slashdot.org] over a year ago. Show them the articles and then tell them that you cannot accept anything other than AMD quotes, in the interest of 'ensuring stability'.
AMD is doing just fine (Score:5, Insightful)
Do you remember when the "Intel Inside" logo came out? There was no real competition. (It was the Pentium days.) There were other processors, but the Pentium pretty much blew them away. Intel didn't succeed on that logo alone; they do have a little bit of technology behind it.
I think it is funny when people say AMD is better. When they say that, ask them why - 99% of the time it will be because it is cheaper (bang for the buck). The other 1% might do overclocking, or read AnandTech on a daily basis, or have some highly technical reason - which is essentially irrelevant to the argument. For AMD to be where they are in the processor market is nearly a miracle. The only reason is that Intel was comfortable in its position. AMD came on the scene with a comparable product at a cheaper price, and it woke Intel up real fast. They catered more to the "home enthusiast" market at just the right time.
I have a buddy who has worked at Intel for 7 years now, and I always kid him about AMD. He works on the thermal solutions, and has access to the fab floor. There may be some advantages that Intel has over AMD in some areas (and vice versa) but if you have two well put together systems of each sitting side-by-side, the processor is pretty much a non-issue.
Re:AMD is doing just fine (Score:5, Informative)
Do you remember when the "Intel Inside" logo came out?
1991, according to Intel themselves [intel.com]
There was no real competition. (it was the Pentium days) There were other processors, but the Pentium pretty much blew them away.
The Intel Inside marketing program started two years before the Pentium came out. At that time AMD was competing very effectively with the 486. So much so that Intel wanted a new marketing campaign to try to bring people back. Even in the early Pentium days AMD continued to compete effectively. Their 5x86 120MHz chips were very competitive with the Pentium 60 and Pentium 66, and even the 75MHz Pentium chips. It wasn't really until '94 or '95 that Intel really started leaving AMD in the dust, mainly because AMD was WAY late at releasing their K5 processor and when it did come out they had so many problems manufacturing it that it was clocked much lower than initially hoped for. Cyrix continued to offer some competition for Intel during this time, but they were plagued by crappy motherboards which gave them a poor reputation (it was a bit of a self-fulfilling prophecy thing: reputation for being cheap crap meant that they were put on cheap crap motherboards which resulted in a poor quality system).
it will be [better] because it is cheaper
And that is somehow an invalid reason for a product to be better?
Re:AMD needs better marketing (Score:5, Insightful)
Re:AMD needs better marketing (Score:3, Funny)
Re:AMD needs better marketing (Score:4, Insightful)
There are other trampolines available. Merely making stack pages non-executable doesn't prevent return-into-libc exploits, for example, where you jump into arbitrary existing code by overwriting a global offset table entry for a library call like printf(3).
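A minimal sketch of the point, with the overwrite simulated directly instead of via an actual overflow - note that no injected code ever runs, control just flows into code that is already legitimately executable, so a non-executable stack has nothing to catch:

    #include <stdio.h>
    #include <stdlib.h>

    static int harmless(const char *s) { return printf("logging: %s\n", s); }

    struct handler {
        char buf[16];               /* overflowing buf can reach fn, the same  */
        int (*fn)(const char *);    /* way it can reach a GOT slot or a saved  */
    };                              /* return address                          */

    int main(void) {
        struct handler h = { "", harmless };

        /* Simulate the effect of the overwrite: */
        h.fn = system;              /* point at existing, executable libc code */
        h.fn("echo owned");         /* runs despite a non-executable stack     */
        return 0;
    }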
Re:AMD needs better marketing (Score:4, Informative)
Nor will there ever be IMO. But this combined with good practices like not running as admin when we don't need to (reading email, web browsing, game playing for example) will be a huge leap forward.
Re:AMD needs better marketing (Score:5, Insightful)
I'm actually surprised that there are chips out there that don't have such a feature. In a perverse way, I hope IBM has a patent on it.... :-)
Re:AMD needs better marketing (Score:5, Informative)
Segmentation offers much finer control over memory (allocations can be sized to the exact byte, with a fault generated on any out-of-bounds access) and a larger virtual address space (48 bits, accessed in segments of up to 4 GB). The problem with segmentation is that kernel memory management becomes a lot more complicated, so OS developers have avoided using it. x86 chips are the only ones to provide segmentation support, so developers of portable OSes avoided the feature as well.
When AMD designed the x86-64 architecture, they had to design new page tables to deal with 64-bit addresses. While making that change, they also separated the write and execute permission bits.
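Concretely (a sketch of the bit layout, not anyone's kernel code): an x86-64 page-table entry carries the writable bit at bit 1 and the new no-execute bit at bit 63, so write and execute permission can finally be granted independently:

    #include <stdint.h>
    #include <stdio.h>

    /* Relevant x86-64 page-table entry bits. */
    #define PTE_PRESENT  (1ULL << 0)
    #define PTE_WRITABLE (1ULL << 1)
    #define PTE_NX       (1ULL << 63)   /* no-execute, new with x86-64 */

    int main(void) {
        uint64_t stack_pte = PTE_PRESENT | PTE_WRITABLE | PTE_NX; /* data: writable, not executable */
        uint64_t code_pte  = PTE_PRESENT;                         /* code: executable, not writable */

        printf("stack executable? %s\n", (stack_pte & PTE_NX) ? "no" : "yes");
        printf("code writable?    %s\n", (code_pte & PTE_WRITABLE) ? "yes" : "no");
        return 0;
    }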
Re:AMD needs better marketing (Score:5, Informative)
Those are MOTHERBOARDS (Score:4, Insightful)
Re:AMD needs better marketing (Score:5, Informative)
As for the other features you mention: you are comparing desktop processors and server processors. You might note the lack of the Opteron processor in the third-party tests you linked to.
About two months ago someone came to me with a motherboard and processor, an Athlon XP 2600+. They couldn't get it to boot. I took one look at it and realized the heatsink was on backwards; it shut itself down as soon as it got hot enough. I put the heatsink on correctly and the thing booted right up. As for the PCI locking, it's a bit harder to vouch for since I don't see a whole lot of information about it, but I sure do recall seeing tests involving the Opteron. If I could find them right now I would, except I'm on dialup for the first time in six years and it's annoying the hell out of me.
Re:AMD needs better marketing (Score:5, Interesting)
Re:AMD needs better marketing (Score:3, Interesting)
PCI clock locking is to prevent the bus speed from drifting. It seems that most current AMD systems are running a 33MHz PCI bus +/- 3.3MHz. That's enough to cause crashes. True, you see this in overclocked systems more than anything else. However, over time the PCI bus speed can change in non-locked systems due to thermal expansion of the circuitry, etc. These are not big issues.
Re:AMD needs better marketing (Score:5, Insightful)
Yeah, Ok [slashdot.org].
How easily we all forget just how many times Intel's chips and boards have been junk-in-a-box. What good is a feature when you can't even keep the machine up and running? What kind of uptime does your server farm have when you're sending recalled CPUs back to the manufacturer? Or perhaps, in the case of Compaq's Itanium customers, the server simply doesn't arrive because it's determined to be defective from the get-go?
Whoops.
Awesome (Score:3, Funny)
Re:Awesome (Score:5, Insightful)
Some of today's problems are really just side-effects of the x86 legacy. If you're willing to break binary compatibility, fixing problems is really, really easy. For example, there's no law that stacks have to stupidly grow downwards in memory so that an overflow ends up overwriting older stuff on the stack space, instead of overwriting in the direction where the unallocated space is. And indeed, on many architectures, it works more sensibly. So even if you don't protect against overflows, their damage doesn't need to be so severe.
But by the time it became popular for personal computers to be connected to the internet (and thus, overflow protection started to become really important), it was far too late to fix the problem, because too many people were locked into x86.
Re:Awesome (Score:4, Informative)
No. On the stack itself, in addition to the local data for a function (and the saved registers), is the return address that you are going to jump back to after the function is complete. Buffer overflow exploits write past the end of the buffer. So you are overflowing the function's local data, not the entire stack segment. As the previous poster mentioned, because the stack grows downward, your overflow can write over the return address, which is where all the nastiness starts.
In addition to this is the fact that the binaries are always the same for each machine, and the process's memory all logically maps to the same locations (Windows DLLs, for example, default to a base of 0x10000000).
So, say someone writes a program and somewhere has a static buffer for input which is 256 bytes, and doesn't check bounds on input data. You can construct an input which is more than 256 bytes, and your data will overwrite stuff which is outside of the input buffer, perhaps the return address. So, with the proper input, you can make the program jump to an arbitrary point.
Usually, whenever a function is called, it will be called at the same depth of recursion. Like, I might make a function, "authenticate", which asks for your username and password (storing them without checking in my 256 byte buffers), then checks credentials and either proceeds or returns an error code.
This function will probably only be called once, and it will always be called at the same time in program execution, relatively early. The stack will always be the same size when it is called. (Like, your call stack at this point might look like: main() -> initialize() -> authenticate()) or whatever).
Sometimes, a function might be called from multiple places... Maybe there is something like "getAddress()", which does pretty much the same thing, it grabs an address input by the user, but it might be called from many places in the executable. Each call will have its own characteristic call stack, and offset within the stack segment. The stack frames of all functions leading down to it will be present. (You can usually examine the current call stack in a debugger).
If you know "where" the function will be called from in this manner, you will know the exact stack layout at this point, including the absolute addresses and everything (which you know because the binaries are always the same and the executable always maps to the same logical place in memory).
So, you can overwrite the return address so that it returns to inside the input buffer. Then, you have 256 bytes (in this example) to work with for constructing your little exploit. Often, the exploit will be just a stub which downloads another malware program and launches it, or whatever.
There is a little bit more to it. Like, you usually need to construct your input so that you don't have any 0 bytes within it, because that will signify the end of a string. The input, even though it's not bounds checked, might still be validated in some fashion. (I think I remember reading about someone who had made a "codec", so that the input data could be composed of valid alphanumeric characters. So, even the unpacker was alphanumeric, which is pretty cool).
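To tie the above together, here's a minimal sketch of the vulnerable pattern (a made-up program; modern compilers and kernels add stack canaries and ASLR precisely to frustrate this):

    #include <stdio.h>
    #include <string.h>

    void authenticate(const char *input) {
        char buffer[256];        /* fixed-size local buffer on the stack */
        strcpy(buffer, input);   /* no bounds check: more than 256 bytes keeps
                                    writing upward, over the saved return address */
        printf("hello, %s\n", buffer);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            authenticate(argv[1]);   /* attacker-controlled input */
        return 0;
    }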
Re:Awesome (Score:5, Insightful)
They did. Mainframes and the like have had protection from this sort of hack for ages. AS/400s have object orientation support built into the hardware, and a data object (which is what a stack or buffer would be implemented as) cannot be executed as code, no matter what. The hardware will not allow it. Nor would the buffer be allowed to grow into a code location.
We're living with hardware and software architecture decisions made in the 1980s, when PCs were still considered toys.
Code rewrites going to be needed? (Score:5, Interesting)
Re:Code rewrites going to be needed? (Score:3, Insightful)
Re:Code rewrites going to be needed? (Score:3, Insightful)
Re:Code rewrites going to be needed? (Score:5, Interesting)
In the early '90s Motorola released the 68040 with a code cache that made programs that used self-modifying code crash and burn. Apple had been telling people for years not to write self-modifying code because this was going to happen. When Apple started building prototype Macs with 68040s and started testing for compatibility who do you suppose was one of the biggest offenders? Microsoft. I am not making this up.
Re:Code rewrites going to be needed? (Score:5, Interesting)
Microsoft has historically had very bad coding practices. From all accounts I have heard this has markedly improved, but it was pretty bad.
Re:Code rewrites going to be needed? (Score:4, Informative)
Re:Code rewrites going to be needed? (Score:5, Interesting)
Writing self-modifying code was the first thing my assembler instructor put his foot down about: "Bad idea, don't even think about it." I could see how easily you could do it with assembler.
I would entertain listening to cases where self-mod'ing code has its place.
Re:Code rewrites going to be needed? (Score:3, Insightful)
Re:Code rewrites going to be needed? (Score:4, Informative)
The Intel x86 architecture has few registers, so if you want to keep lots of values handy, you have to keep swapping values in and out of memory. Alternatively, immediate-value constants can be hard-coded into the code if they do not change during a long loop or a deeply nested loop. Just before the loop is executed, these hard-coded constants are modified by rewriting the immediate values in the code. An example of this is code that draws a scaled translucent sprite. Throughout the code, the scale will remain constant, and if the translucency is uniform, that will remain constant too. The code that does the translucent blitting then uses the registers only for values that change during the sprite drawing.
On an 80386, using this technique will cause a significant speed increase, but on 80486s and above, where there are on-board L1 caches on the CPUs, the code modification may cause cache misses that slow down the system - especially if it is run on an even newer x86 CPU that has separate program and data caches in L1. To make things worse, nowadays most code runs in a multi-tasking environment, so whether self-modifying code causes a slowdown or a speedup is almost impossible to predict.
Of course, nowadays most drawing is done by hardware-accelerated graphics cards, so this isn't a good example, but there could still be some use for hard-coding values that do not change in a loop.
Re:Code rewrites going to be needed? (Score:4, Interesting)
Re:Code rewrites going to be needed? (Score:5, Informative)
To work with memory protection enabled, applications will need to allocate memory using VirtualAlloc [microsoft.com] and specify the memory options [microsoft.com] to make it executable. Then they can generate and run the code there.
I am assuming that Linux could incorporate some similar functionality, anybody know if someone is working on it?
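For what it's worth, Linux already exposes the analogous user-space controls via mmap(2) and mprotect(2). A minimal sketch of a JIT-style allocation that stays friendly to a W^X policy (x86 machine code assumed):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86: mov eax, 42; ret */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        /* Ask for a writable (not executable) page first... */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        memcpy(p, code, sizeof code);

        /* ...then flip it to executable, never both at once. */
        if (mprotect(p, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

        int (*fn)(void) = (int (*)(void))p;
        printf("%d\n", fn());   /* prints 42 */
        munmap(p, 4096);
        return 0;
    }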
what a drag (Score:4, Insightful)
Re:what a drag (Score:5, Insightful)
Re:what a drag (Score:5, Informative)
"W^X (pronounced: "W xor X") on architectures capable of pure execute-bit support in the MMU (sparc, sparc64, alpha, hppa). This is a fine-grained memory permissions layout, ensuring that memory which can be written to by application programs can not be executable at the same time and vice versa. This raises the bar on potential buffer overflows and other attacks: as a result, an attacker is unable to write code anywhere in memory where it can be executed. (NOTE: i386 and powerpc do not support W^X in 3.3; however, 3.3-current already supports it on i386, and both these processors are expected to support this change in 3.4). "
They are NOT protecting against overflows (Score:5, Informative)
screw average joe (Score:3, Insightful)
Linux support (Score:5, Insightful)
This does require some interaction from the operating system in order to work. Hopefully AMD will release enough information to allow this feature to be implemented in Linux.
Re:Linux support (Score:5, Informative)
An OS would then use this to mark pure data pages and areas like the stack as NX, so that overflowing data structures doesn't allow arbitrary malicious code to be run.
Re:Linux support (Score:5, Interesting)
http://www.x86-64.org/lists/discuss/msg03469.ht
"Average Joe" Vs. buffer overflows (Score:3, Insightful)
but can "Average Joe" understand the implications of buffer overflows?
Try explaining to Homer Simpson why he should upgrade his computer based on buffer-overflow protection.
Re:"Average Joe" Vs. buffer overflows (Score:5, Insightful)
Nope (Score:5, Interesting)
They want fast and reliable, not techspeak. I can barely get my clients to understand why they need SSL (and how it works).
Re:Nope (Score:5, Interesting)
Good or Bad idea? (Score:3, Insightful)
For example, let's say people wrote insecure x86 code, then someone decides to port the code to another platform. There'll be software vulnerabilities that will be around because of the flawed code in the first place.
Securing C++ through hardware (Score:5, Insightful)
(Note: Although many people come down on C++, it's also a matter of which functions you use. For instance, while fgets() is considered "safe" because you provide a buffer boundary, gets() is considered unsafe. This drives me nuts! We knew how to program to prevent buffer overruns years ago, and they're still a problem!)
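The contrast in one sketch - the same read, with and without a bound:

    #include <stdio.h>

    int main(void) {
        char buf[64];

        /* gets(buf);  unsafe: reads until newline, however long the line is */

        if (fgets(buf, sizeof buf, stdin) != NULL)   /* safe: at most 63 chars + NUL */
            printf("read: %s", buf);
        return 0;
    }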
Re:Securing C++ through hardware (Score:5, Insightful)
Back in college I would defend C/C++ against one of my professors who thought it was the spawn of Satan (and oddly thought Pascal was/is the greatest language ever), for the simple fact that it gives you the ability to do so many things with few limits.
A hammer can not only be used to drive in nails or bang a dent out of your car hood... it can also be used to break your neighbor's windows or beat someone to death. Just because a tool CAN be used for ill doesn't mean the tool is to blame. After all... guns don't kill people... murderers/soldiers/hunters/etc. do!
Re:Securing C++ through hardware (Score:4, Insightful)
If we ignore for the sake of argument the specific "high-level assembler" design goal for C, and look instead at philosophy which was carried into C++, there was this fundamental hacking philosophy that said that, because you occasionally needed to do something a bit bizarre, it should be EASY to do that bizarre thing. Further, the entire C/C++ philosophy was that the programmer was solely responsible for the consequences of his actions.
We contrast this with Ada. Ada's philosophy was that you only occasionally need to do bizarre things, that 95-99% of the time, you are doing perfectly straightforward things, that the effort should be distributed accordingly, and that the language should be helping the programmer to do the routine things correctly. This implies that, when the programmer attempts to do something bizarre, 95-99% of the time it is because he screwed something up, and he DIDN'T mean to do what he typed, and the compiler barfs.
At that point, it becomes the programmer's responsibility to tell the compiler, and NOT INCIDENTALLY everyone who will ever do maintenance on his code, that "Yea verily I DID intend to shoot myself in the foot here!". Idioms are provided for doing that. If the programmer really intended to take that floating-point number and treat it as a bitmask, he has to tell the compiler that this was indeed his intention.
Ada did not provide a "back door" array reference mechanism comparable to the C/C++ pointer hacking, for the reason that it is impossible to do proper bounds checking in that case. Ada does provide a mechanism for suppressing bounds checking, but it is NOT the default and it is explicitly forbidden by the standard from it being the default in any conforming implementation. If the programmer has a good reason for suppressing bounds checking, he has to do it EXPLICITLY, at some level.
Your analogy with hammers is OK, but it breaks down with guns. Guns have trigger guards and safety catches, PRECISELY to prevent naive users from shooting themselves in the foot, or from shooting someone else that they didn't intend to shoot. At the same time, those safety mechanisms do not prevent the gun from being used to shoot someone that the user most fervently WANTS shot right then.
In my view, if I utter a sequence of instructions that will dance a fandango on core, it is almost certainly the case that I have made an error, and I would prefer the toolset to ask me "Are you sure? (Y/N)". If I am certain that I intended to dance that fandango, I am also certain I want to warn the next guy in line that I am now lacing up my dancing wafflestompers, and the language should support that.
Of course they can (Score:5, Funny)
Sure, AMD just has to write a buffer-overflow exploit into a worm that carries the pop-up window message, "If you had an AMD processor, your hard drive wouldn't be erasing right now."
Look closer... (Score:5, Funny)
MOV AX,DS:OSID[BX]
CMP AX,2 ; 2=Windows 3.x
JE PANIC
CMP AX,3 ; 3=Windows 9x
JE PANIC
CMP AX,4 ; 4=Windows 2K/ME/XP
JE PANIC
CMP AX,10 ; 10=Minix
JE OKAY
CMP AX,11 ; 11=...
ISSUE 'CPU BUFFER OVERFLOW ACTIVATED'
JMP PANIC
I'd buy (Score:3, Informative)
However - they WILL have to spin it well, or better than the "Megahertz Myth," because that didn't work too well for average folks. BestBuy salesmen don't know how to explain "AMD Athlon 289456++XL 3400 MP SSE4 +-7200 BufferXTreme," so they just push Intel...
Rob Enderle Strikes Again! (Score:4, Funny)
Ahem... (Score:5, Insightful)
Granted, yes, this is a good thing, but "buffer-overflow protection when used with a new version of Windows XP?" We now have to rely on Microsoft to set the X flag properly...
This has been talked about on Slashdot a lot in the past; the OpenBSD guys in particular are hot on the Opteron because it, like SPARC, provides this protection. Fortunately, this isn't some Windows-specific voodoo; we all stand to benefit from this fundamental fix to the broken Intel VM architecture.
There's no excuse for buffer overflows (Score:5, Insightful)
Then the Japanese started making cars that didn't leak oil. Now, no one would accept a car that leaks oil. People have realized that cars don't have to leak and we shouldn't accept it.
It's the same thing with buffer overflows. People now have this attitude "well, there's nothing you can do. Just write code really carefully. Anyone who makes buffer overflows in his code is just a sloppy coder!"
Nothing could be further from the truth. There is no way anyone can code a large project in plain old C and not make buffer overflows. Look at OpenBSD, who are masters of secure C. They still have buffer problems.
And yet, there is absolutely no reason for code to have any buffer overflows! There are programmatic tools, such as virtual machines (think JVM) and safe libraries, which mean that programmers never have to manipulate buffers in unsafe ways.
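A tiny sketch of the "safe libraries" half of that point - the same copy, bounded and unbounded:

    #include <stdio.h>

    int main(void) {
        char dst[16];
        const char *src = "a string that is longer than sixteen bytes";

        /* strcpy(dst, src);  would overflow dst */
        snprintf(dst, sizeof dst, "%s", src);   /* truncates safely instead */
        printf("%s\n", dst);
        return 0;
    }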
Putting in hardware-level support for this would be fantastic. It is time for people to change their attitude about what they accept in computers. Crashes and security holes are not inherent aspects of software. Mistakes are inherent in writing code, but these mistakes don't always need to have such disastrous consequences.
Re:There's no excuse for buffer overflows (Score:5, Funny)
And they would just explode for no reason sometimes.
the Average Joe doesn't buy processors (Score:5, Insightful)
What does it do? (Score:3, Informative)
I've seen patches [ogi.edu] to Linux that provide a non-executable stack. There's also the mprotect(2) [wlug.org.nz] system call to change memory protection from user programs. And I believe OpenBSD has had a non-executable stack in the mainline for at least a couple releases.
So what they're advertising here seems to have already existed. If not, how are the things above possible?
Intel's advantage: the motherboards. (Score:3, Interesting)
That said, nForce and nForce2-based mobos have come a long ways in terms of stability and overall ease of use, but then again... no one ever got fired for buying Intel. AMD separating code from data (curiously, like Intel managed to do once upon a time) is lovely but proving that they've got the best solution out there is a battle that's not going to be won overnight by a single innovation.
Uptime will prove who's got the better solution.
AMD Opteron and Athlon 64 already have this (Score:4, Informative)
The AMD Opteron and Athlon 64 chips already [computerworld.com] have the buffer overflow protection in their hardware, and the feature is already supported by both Linux and Windows XP 64-bit edition. AMD calls this "Execution Protection," and the basic idea is that the processor will not allow code that arrives on the system via a buffer overflow to be marked as executable. The Slashdot story says "will have" for both Intel and AMD when it should read "AMD already has and Intel will have..."
Old news (Score:5, Interesting)
This existed in the 8086 and 8088 CPUs. You separate your program into code, data, and stack segments and load the appropriate segment registers. Code segments can't be read or written; data and stack segments can't be executed. But stupid programmers decided that that kept you from playing games with code-as-data and data-as-code, so they created flat addressing mode with all segment registers pointing at a single segment. Feh. Those who don't read history are doomed to repeat it. Badly.
Re:Old news (Score:5, Informative)
What's better is that CS==DS was a common mode [known as a
So there goes your theory.
Tom
Execution bit on MMU Pages (Score:5, Interesting)
The Average Joe? (Score:5, Interesting)
Wow! (Score:5, Funny)
Designed for Microsoft Windows (Score:5, Funny)
Added protection (Score:3, Funny)
A bunch of things (Score:4, Informative)
2) It needs OS support, specifically XP SP2, which isn't out yet.
3) It doesn't really do what it is meant to, I have seen several 'theoretical' discussions on how to circumvent it. Think of it as another hoop to jump through for the black hats.
4) You need to be in 64-bit mode to use it
5) 4) requires a recompilation anyway, why not do it right with the right tools when you recompile?
6) I know of at least one vendor using it to bid against Intel on contracts now.
7) Oh yeah, this will do a lot of good. Really. When has a white paper ever lied?
8) The more you know about things like this, the more you want to move into a cabin in Montana and live off the land.
-Charlie
At last, consumer CPUs catch up with the Alpha (Score:4, Informative)
See This BugTraq posting by Theo de Raadt [securityfocus.com]
Isn't this already possible with segmentation? (Score:5, Informative)
If current operating systems actually used this in addition to paging (which is most of what they use now), why would they need to create a new chip? Linux does not fully utilize segmentation, mostly only paging [clemson.edu]. I don't have any resources on MS OS design right now so I can't comment on it... (although maybe looking at the recent source would help some
stupid (Score:5, Interesting)
The correct way of dealing with buffer overflow problems is to make them not happen in the first place. That means that all pointers need to have bounds associated with them. Unfortunately, both the C mindset and some design quirks of the C programming language make that a little harder than it should be for UNIX/Linux and C-based systems.
The real problem is ultimately the use of C, and the real solution is not to use a new CPU or add instructions, but to use a language without C's quirks. In terms of performance, C's pointer semantics only hurt anyway.
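The "pointers carry bounds" idea can even be sketched in C itself - a toy fat pointer, nothing like a production implementation:

    #include <stdio.h>

    typedef struct {
        char  *base;   /* the raw pointer...          */
        size_t len;    /* ...plus the bounds it lacks */
    } bounded_buf;

    static int checked_write(bounded_buf b, size_t i, char c) {
        if (i >= b.len) return -1;   /* out of bounds: refuse, don't overflow */
        b.base[i] = c;
        return 0;
    }

    int main(void) {
        char storage[8];
        bounded_buf b = { storage, sizeof storage };
        printf("%d\n", checked_write(b, 3, 'x'));    /* 0: in bounds */
        printf("%d\n", checked_write(b, 42, 'x'));   /* -1: rejected */
        return 0;
    }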
Is this the right solution? (Score:4, Interesting)
This will be a good thing if it works out, but it will take years for these chips to penetrate the market to any significant degree, and once again we are seeing hardware vendors come to the rescue of software companies by creating hardware that has the capability, either in speed or safety features, to compensate for bad programming tools and bad programmers.
LISP machines had this and much more (Score:4, Interesting)
Since every object in LISP machine memory had a type tag, many useful operations could be parallelized, such as garbage collection and type dispatch for object oriented function calls.
The problem with languages like C is that they have no object semantics at all, so runtime bounds checking and other goodies don't work very well. The C weenies have everybody convinced that this is necessary to get the highest performance, but they don't realize that with a small amount of extra hardware, all these safety operations can be done in parallel. And since the C weenies influence the CPU designers, it is a vicious circle of bad machine architecture.
Re:the Chipmaker??? (Score:3, Insightful)
Re:the Chipmaker??? (Score:4, Informative)
Re:the Chipmaker??? (Score:3, Insightful)
Re:the Chipmaker??? (Score:5, Informative)
If you program in C on Intel you are going to have problems without almost fanatical devotion to the Po^H^H management of your memory resources.
That goes for Linux as well, as any check at Bugtraq can confirm.
Yes, people should be very careful when coding in languages and on architectures which allow buffer overflow, but the real solution is at a level lower than the coder's.
KFG
Re:the Chipmaker??? (Score:4, Insightful)
That's like saying "why do we need cops? Why can't people just not break the law, so no one needs to be around to enforce it?"
Accidents do happen, and it's not only Microsoft's own problem. It doesn't hurt to have another layer of security for bad programming...
Re:the Chipmaker??? (Score:5, Insightful)
for the same reason cooperative multitasking went out of style: humans.
Theoretically a cooperative multitasking operating system is much more efficient than pre-emptive multitasking. Coop multitasking systems (like Mac OS pre-X and Novell NetWare) require each application to voluntarily give up the CPU when appropriate. That means that every app gets the entire CPU to itself, yielding better cache performance and allowing the app to continue a thread until a good time to stop comes along (like waiting for input or disk or whatever). Unfortunately, that means all programs must be perfect; a bug in any one of the running programs will bring down the entire OS like a house of cards. Or if you don't release resources just right, your app will appear to hog the entire system and it will LOOK like you crashed everything.
Most programmers are not perfect.
Thus the rise of pre-emptive multitasking, where app programmers no longer get to decide when to give up the CPU; the operating system yanks your thread based on timeslices or some other mechanism outside the app's control. This means your various caches no longer have the "right" data most of the time, and maybe your thread gets yanked one instruction short of what would have been a better stopping place (maybe the next cycle was for a well-timed disk access). Some advanced chip features like memory streaming for SIMD ops also get trampled by pre-emptive multitasking, meaning you can no longer prefetch large chunks of data, since threading out stops all your streams (this is a problem for AltiVec programming).
But on the whole, by acknowledging that programmers are not perfect (it only takes one bad one to ruin your system) and moving to the "wrong" solution of pre-emptive multitasking, we get vastly improved stability and perceived performance. This is also why "wrong" solutions like hardware overflow protection are needed.
A scientist would say you are right, but an engineer would say you are wrong.
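If you've never seen the cooperative model in code, here's a toy sketch using the old POSIX ucontext API (obsolescent but still present on Linux); one task that stops yielding is all it takes to hang this "system":

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    /* A well-behaved cooperative task: a slice of work, then a voluntary yield. */
    static void polite_task(void) {
        for (int i = 0; i < 3; i++) {
            printf("task: slice %d, yielding\n", i);
            swapcontext(&task_ctx, &main_ctx);   /* voluntarily give up the CPU */
        }
    }

    int main(void) {
        static char stack[64 * 1024];
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = stack;
        task_ctx.uc_stack.ss_size = sizeof stack;
        task_ctx.uc_link = &main_ctx;            /* return here when task ends */
        makecontext(&task_ctx, polite_task, 0);

        for (int i = 0; i < 3; i++) {
            printf("scheduler: running task\n");
            swapcontext(&main_ctx, &task_ctx);   /* hand the CPU to the task */
        }
        /* Make polite_task loop forever without yielding and the "scheduler"
           above never runs again - the fragility described in the parent. */
        return 0;
    }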
Re:Pathetic (Score:5, Insightful)
Yes... the vast majority of buffer overflow exploits we read about are Microsoft-based; however, it's not too hard to find software from other providers, yes, even in Linux, which can suffer from this kind of flaw.
Re:Pathetic (Score:4, Funny)
Re:Pathetic (Score:5, Insightful)
This isn't insightful, it's flamebait and FUD.
Re:Pathetic (Score:5, Interesting)
How is this insightful? First of all, any post that uses the $ in Microsoft's name should be modded -1, 14-year-old poster.
As if buffer overflows really had much to do with the OS. It has a lot more to do with poor coding. Try the following searches for more info:
linux buffer overflow [google.com]
bsd buffer overflow [google.com]
OS X buffer overflow [google.com]
Solaris buffer overflow [google.com]
And yes, everyone's favorite:
windows buffer overflow [google.com]
Re:Pathetic (Score:5, Insightful)
Software can't do everything. In fact, some earlier architectures offered a choice of separating the data segment and code segment (DEC VAXes were the latest I used which had this feature), but because it carried some performance penalty, the hardware companies removed the feature. Now that we have more speed than needed, it is being put back.
Re:No silver bullet. (Score:3, Informative)
Wrong. NX is in the 2.4.x kernels, at least. (Score:4, Informative)
Caveats: you can't mprotect it back to execute status, and it breaks some software, especially Mozilla/Java/Ada (just like exec-shield...)