
AMD Could Profit from Buffer-Overflow Protection

spin2cool writes "New Scientist has an article about how AMD and Intel are planning on releasing new consumer chips with built-in buffer-overflow protection. Apparently AMD's chips will make it to market first, though, which some analysts think could give AMD an advantage as the next round of chips are released. The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe."
  • by ebuck ( 585470 ) on Monday February 23, 2004 @03:32PM (#8364969)
    Especially if the buffer is their bank account.
  • by Anonymous Coward on Monday February 23, 2004 @03:32PM (#8364972)
    Like IBM with OS/2, they have the better product. They now just need to convince ordinary consumers that this is the case. For some reason, people love that little Intel jingle.
    • by ebuck ( 585470 ) on Monday February 23, 2004 @03:37PM (#8365036)
      I think it was the Intel inside marketing campaign that really did the trick.

      Nobody knows if Intel is better, but they don't want a computer that "lacks" Intel inside. They simply guess that if it's inside, it's better than not having it inside.

      It is brilliant. It can't be copied, or AMD looks like a "me too!" player. It can't be contested, because it's just vague enough not to claim that the machine is any better for having Intel inside, while implying that anything else is somehow inferior.
      • by Anonymous Coward on Monday February 23, 2004 @04:00PM (#8365390)
        Nobody knows if Intel is better, but they don't want a computer that "lacks" Intel inside. They simply guess that if it's inside, it's better than not having it inside.

        I always thought "Intel Inside" was a warning label.

      • by Anonymous Coward on Monday February 23, 2004 @04:00PM (#8365396)
        Intel Inside is a minor part - what cemented Intel was Cyrix. People saw a low-cost CPU and got burned by it - then there was no alternative to Intel until the original Athlon, which meant that the Pentium and Pentium II were unchallenged.

        To this day, the legacy of Cyrix shadows AMD, with marketing using the supposed clock speed rather than the actual one.

        Fact of the matter is that Intel has so much branding, even being behind AMD on a few releases isn't going to do enough to displace Intel from being #1. All AMD is good for is the consumer so that there isn't a monopoly, and competition leads to innovation - otherwise Intel wouldn't have brought x86-64 to the general consumer for years. Not that I blame their logic, but then there wasn't a need to jump to Pentium either - the 486 had a lot still to offer at the time.
        • by Christopher Bibbs ( 14 ) on Monday February 23, 2004 @04:20PM (#8365618) Homepage Journal
          There were plenty of good AMD and Cyrix 486 CPUs being used when Intel switched to the Pentium and the successful "Intel Inside" badging. Bonus points to anyone who still has an "Intel Onboard" sticker from the earlier, failed marketing attempt. However, users at the time largely only knew they had a 386 or 486. Most of them couldn't tell you who made it without opening the case.

          The AMD K5, K6, K6-II, and K6-III were all decent chips, but they were never more than the "bargain" option. What gave Intel the real lead over AMD was the combination of several years of the fastest chips being available only from Intel and the public knowing who made their chip.
      • by Neil Watson ( 60859 ) on Monday February 23, 2004 @04:02PM (#8365409) Homepage
        It's frightening that even vendors believe the marketing. I met with a vendor one day to discuss supplying us with generic computers. I told them that most of our desktops were Durons. They gasped and stated they could not recommend such things, saying they would quote us Intel to "ensure stability". I asked them to cite proof that AMD systems were unstable. They could not, but implied that it was common knowledge.
        • by helzerr ( 232770 ) on Monday February 23, 2004 @04:37PM (#8365799) Homepage
          Stating that they would quote us Intel to "ensure stability".

          I bet it had more to do with ensuring their profit margin.

        • by mcbevin ( 450303 ) on Monday February 23, 2004 @05:29PM (#8366424) Homepage
          You never considered the possibility that there might just be some basis for that belief other than Intel marketing? While AMDs are not so bad now, older versions would, for example, melt if you removed the cooling (whereas Intels even back then would simply slow down).

          Also, I suspect AMD suffers from the poor reputations of previous Intel competitors who truly did have unreliable, inferior products. I for one had trouble for a while remembering which of AMD and Cyrix was the one to avoid; for the average consumer, choosing the always-reliable Intel thus makes some sense.

          AMD still needs some time to build up the reputation Intel has. If they can continue building reliable products without cutting too many corners as they have done in the past to keep up in the race against the giant, they may eventually obtain such a reputation, but such things take time.
          • by freeweed ( 309734 ) on Monday February 23, 2004 @07:35PM (#8367788)
            AMDs are not so bad now, older versions would for example melt if you removed the cooling

            I've always been of the opinion that if you're in the habit of removing your heatsink from a running processor, you have deeper problems than worrying about whether or not it will melt. Tom's sure managed to keep a lot of people I know from buying AMD, which is pretty funny considering how much cooler AMD chips run these days compared to Intel.
            • by Sycraft-fu ( 314770 ) on Monday February 23, 2004 @08:54PM (#8368605)
              Hey, they fall off sometimes, or a fan fails, or the AC fails. Shit happens. Happened to a friend of mine; fried one of his CPUs, though the board was OK, thankfully.

              Also, AMD's much larger problem was motherboards. VIA chipsets used to suck hard, and AMD's own were almost as bad. I remember when the Athlons were fairly new, it was time for me to upgrade, so I decided to get one based on price. I got a 700MHz slot Athlon and a top-of-the-line Abit board with a VIA chipset. I then proceeded to fight with my system for two weeks. I could not make it work in either 98 or 2000. It just would not play nice with my GeForce or my pro audio card. I finally sent it back, got a 440BX and an Intel P3 700, which I then used for like two years.

              Now I know the situation is completely different today, but that sort of thing sticks with many companies and OEMs. Trust is a thing that is easy to lose, hard to regain. Not fair, but that's how the world works.

              Only recently have I started recommending ATI video cards. Why? Because I supported ATIs in many situations and their drivers were trash. 2D was fine, but try any 3D and you were asking for BSODs. That's now changed: their drivers are in every way as solid as nVidia's, and their hardware is better. But it took time for me to trust that. I had to use the cards, and see them used in a number of different environments, before I was ready to declare them stable enough for use in production systems.

              Also, the PR numbers aren't helping. Many people see them as dishonest, especially since they haven't been consistent (some of the more recent chips haven't performed at the level their PR rating would imply). This again hurts credibility in the eyes of some people.

              It's not fair per se, but it is the way of the world. You burn me, it takes time for me to trust you won't do it again.
          • by RzUpAnmsCwrds ( 262647 ) on Monday February 23, 2004 @08:17PM (#8368183)
            "versions would for example melt if you removed the cooling"

            And, why, exactly, would you remove the heatsink from a CPU while it is running?

            Moreover, this was not a flaw in the Athlon. The Athlon, since Athlon XP, has contained a thermal diode to enable safe thermal shutdown. The motherboard that Tom's Hardware used did not have the thermal protection circuitry.

            Losing a CPU to "thermal death" was a rare occurrence. Most CPUs that experienced "thermal death" had improperly installed thermal solutions (e.g. the clip was not installed properly). A fan failure or failure to use thermal compound (e.g. a pad or grease) would likely not damage the CPU, even without thermal protection. Only a lack of die-to-heatsink contact (e.g. an improperly installed shim, or a poorly installed heatsink that detached during movement) would likely cause the Athlon to experience "thermal death" as shown in the Tom's Hardware video.

            "whereas Intels even back then would simply slow down"

            The Tom's Hardware Guide video was a fake. The CPU temperature never exceeded 30C (look at the thermal probe). Thermal throttle-down on the P4 occurs when the CPU hits 85C. And, yes, the system will crash or simply become completely unusable if the heatsink is removed.

            "without cutting too many corners as they have done in the past"

            Right. Intel has never cut corners, particularly not with major logic bugs in the Pentium, PII, PIII, P4, and Itanium.

            Look, CPUs are not flawless. But the CPU thermal issue you speak of really is not a huge issue. With a properly installed heatsink (like the heatsinks on a computer you would buy from HP or eMachines), it never was an issue. And today every new motherboard has thermal protection.

            Tom's Hardware did a disservice to the community and to AMD by taking a relatively minor issue that affected a small number of people and blowing it out of proportion to a huge flaw.

            If you read Tom's Hardware for as long as I have, you begin to notice a pattern: Tom is an egotistic nut. He posted one editorial stating that the performance war between Intel and AMD was bad for consumers (hmmm... my $90 Athlon XP 2600+ would seem to refute that, as would sub-$200 P4 3.0GHz CPUs). He also says that people buying AMD64 systems are giving AMD a "no-interest loan" because of the lack of availability of AMD64 operating systems and applications. Apparently, no one told Tom that the Athlon 64 3000+ is *cheaper* than its similarly performing P4 counterpart (in IA-32 applications). And, apparently, no one told Tom that Intel has adopted the same instruction set for its Pentium 4-based 64-bit systems.

            I have lost respect for Tom and his publication. Between his hate-filled articles filled with vague statements and mistruths, his constant bashing of AMD (he compared the Athlon XP 3400+, a $450 CPU, to the P4 Extreme Edition, a $900 CPU, and decreed the P4EE the victor because it was marginally faster in 3/4 of the tests), and his suing of other tech websites, Tom has struck out. I only hope that [H]ardOCP doesn't suffer the same fate.
          • by Loki_1929 ( 550940 ) on Monday February 23, 2004 @10:39PM (#8369629) Journal
            "I for one had trouble for a while remembering" ... remembering a lot of things.

            Like the PIII Coppermine CPUs that wouldn't even boot [bbc.co.uk] sometimes.

            Or the randomly rebooting [cw.com.hk] PII Xeons.

            Or the voltage problems [com.com] with certain PIII Xeons.

            Or the memory request system hang bug in the PIII/Xeon [hardwarecentral.com].

            Or the PIII's SSE bug [zdnet.co.uk] whose 'fix' killed i810 compatibility.

            Or the MTH [com.com] bug in the PIII CPUs that forced Intel customers to replace boards and RAM.

            Or the recalled [com.com], that's right, recalled [com.com] PIII chips at 1.13GHz.

            Or the recalled [com.com] (there's that word again) Xeon SERVER chips at 800 and 900MHz.

            Or the recalled [techweb.com] (that word, AGAIN?!) cc820 "cape cod" Intel motherboards.

            Or the data overwriting [zdnet.co.uk] bug in the P4 CPUs.

            Or the P4 chipset [com.com] bug that killed video performance.

            Or the Sun/Oracle P4 bug [indiana.edu].

            Or the Itanium [theinquirer.net] bug that was severe enough to make Compaq halt Itanium shipments.

            Or the Itanium 2 bug [infoworld.com] that "can cause systems to behave unpredictably or shut down".

            Or the numerous other P4/Xeon/XeonMP bugs [theinquirer.net] that have been hanging around.

            Yes, I did consider the possibility that there might just be some basis for the belief that Intel's products are superior. Having considered that, in light of the mountains of evidence to the contrary, I shall now proceed to laugh at you.

            Ha ha ha.

            Now go away, or I shall mock you again.

        • by Loki_1929 ( 550940 ) on Monday February 23, 2004 @05:48PM (#8366648) Journal
          "they would quote us Intel to "ensure stability"."

          "I asked them to cite proof that AMD systems were unstable. They could not but implied that it was common knowledge."

          You can take this one step further - simply go through the articles I found and posted [slashdot.org] over a year ago. Show them the articles and then tell them that you cannot accept anything other than AMD quotes, in the interest of 'ensuring stability'.

      • by gosand ( 234100 ) on Monday February 23, 2004 @05:02PM (#8366054)
        Nobody knows if Intel is better, but they don't want a computer that "lacks" Intel inside. They simply guess that if it's inside, it's better than not having it inside. It is brilliant. It can't be copied, or AMD looks like a "me too!" player. It can't be contested, because it's just vague enough not to claim that the machine is any better for having Intel inside, while implying that anything else is somehow inferior.

        Do you remember when the "Intel Inside" logo came out? There was no real competition. (It was the Pentium days.) There were other processors, but the Pentium pretty much blew them away. Intel didn't succeed on that logo alone; they do have a little bit of technology behind it.

        I think it is funny when people say AMD is better. When they say that, ask them why - 99% of the time it will be because it is cheaper (bang for the buck). The other 1% might do overclocking, or read AnandTech on a daily basis, or have some highly technical reason - which is essentially irrelevant to the argument. For AMD to be where they are in the processor market is nearly a miracle. The only reason is that Intel was comfortable in its position. AMD came on the scene with a comparable product at a cheaper price, and it woke Intel up real fast. They catered to the "home enthusiast" market at just the right time.

        I have a buddy who has worked at Intel for 7 years now, and I always kid him about AMD. He works on the thermal solutions, and has access to the fab floor. There may be some advantages that Intel has over AMD in some areas (and vice versa) but if you have two well put together systems of each sitting side-by-side, the processor is pretty much a non-issue.

        • by Hoser McMoose ( 202552 ) on Monday February 23, 2004 @07:29PM (#8367740)

          Do you remember when the "Intel Inside" logo came out?

          1991, according to Intel themselves [intel.com]

          There was no real competition. (it was the Pentium days) There were other processors, but the Pentium pretty much blew them away.

          The Intel Inside marketing program started two years before the Pentium came out. At that time AMD was competing very effectively with the 486 - so much so that Intel wanted a new marketing campaign to try to bring people back. Even in the early Pentium days AMD continued to compete effectively. Their 5x86 120MHz chips were very competitive with the Pentium 60 and Pentium 66, and even the 75MHz Pentium chips. It wasn't really until '94 or '95 that Intel started leaving AMD in the dust, mainly because AMD was WAY late releasing their K5 processor, and when it did come out they had so many problems manufacturing it that it was clocked much lower than initially hoped. Cyrix continued to offer some competition for Intel during this time, but they were plagued by crappy motherboards, which gave them a poor reputation (a bit of a self-fulfilling prophecy: a reputation for being cheap crap meant they were put on cheap crap motherboards, which resulted in poor-quality systems).

          it will be [better] because it is cheaper

          And that is somehow an invalid reason for a product to be better?

    • by ackthpt ( 218170 ) * on Monday February 23, 2004 @03:44PM (#8365158) Homepage Journal
      Don't overdo it. The software has to be compiled to take advantage of this (hence the new version of XP), so just buying a new PC with "WOW! BUFFER OVERFLOW PROTECTION" will generate negative press as people complain, "Hey! I've still got worms! er.. my computer does, not me!" Such gaffes are what competitors live for.
      • We don't support that... [slashdot.org] is the solution ;-)
  • Awesome (Score:3, Funny)

    by RedWolves2 ( 84305 ) on Monday February 23, 2004 @03:33PM (#8364980) Homepage Journal
    Put me down for one! This is exactly what we all need. Why didn't they think of this in the first place? It's always on Microsoft's shoulders to button the buffers up. This will make a huge difference in security.
    • Re:Awesome (Score:5, Insightful)

      by Sloppy ( 14984 ) * on Monday February 23, 2004 @03:44PM (#8365165) Homepage Journal
      Why didn't they think of this in the first place?
      Because it's hard to fix while keeping compatibility, and it was a different world in 1980.

      Some of today's problems are really just side-effects of the x86 legacy. If you're willing to break binary compatibility, fixing problems is really, really easy. For example, there's no law that stacks have to stupidly grow downwards in memory so that an overflow ends up overwriting older stuff on the stack space, instead of overwriting in the direction where the unallocated space is. And indeed, on many architectures, it works more sensibly. So even if you don't protect against overflows, their damage doesn't need to be so severe.

      But by the time it became popular for personal computers to be connected to the internet (and thus, overflow protection started to become really important), it was far too late to fix the problem, because too many people were locked into x86.
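
      A minimal C sketch of the downward-growth problem (names and sizes are purely illustrative): the buffer sits below the frame's saved return address, so writing past its end walks upward into control data.

        #include <string.h>

        /* Classic stack smash: strcpy() has no idea how big buf is.
           Anything past 16 bytes overwrites saved registers and,
           eventually, the return address stored above the buffer. */
        void vulnerable(const char *input)
        {
            char buf[16];
            strcpy(buf, input);
        }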

    • Re:Awesome (Score:5, Insightful)

      by phil reed ( 626 ) on Monday February 23, 2004 @04:20PM (#8365622) Homepage
      Why didn't they think of this in the first place?

      They did. Mainframes and the like have had protection from this sort of hack for ages. AS/400s have object orientation support built into the hardware, and a data object (which is what a stack or buffer would be implemented as) cannot be executed as code, no matter what. The hardware will not allow it. Nor would the buffer be allowed to grow into a code location.

      We're living with hardware and software architecture decisions made in the 1980s, when PCs were still considered toys.

  • by PornMaster ( 749461 ) on Monday February 23, 2004 @03:33PM (#8364983) Homepage
    I know that people using standard APIs might be fine, but I can't help but wonder how many applications will not work because of it. While there probably aren't many self-modifying code apps out there, there are surely some. Will they be affected?
    • Anytime you change the architecture of a chip there will be side effects. It is inevitable. I am interested to see what the repercussions might be in terms of code, performance, and even reliability. If they implemented this well, perhaps these side effects will be minimal and unnoticeable, in which case this could be a major development!
    • My guess is that many applications use self-modifying code as part of their anti-piracy/anti-reverse-engineering protection.
      • by imnoteddy ( 568836 ) on Monday February 23, 2004 @03:51PM (#8365263)
        My guess is that many applications use self-modifying code as part of their anti-piracy/anti-reverse-engineering protection.

        In the early '90s Motorola released the 68040 with a code cache that made programs that used self-modifying code crash and burn. Apple had been telling people for years not to write self-modifying code because this was going to happen. When Apple started building prototype Macs with 68040s and started testing for compatibility who do you suppose was one of the biggest offenders? Microsoft. I am not making this up.

        • by larkost ( 79011 ) on Monday February 23, 2004 @04:35PM (#8365773)
          On Apple platforms, Microsoft has a very long history of this. There was another major case in the Apple II era, where Apple developer documentation specifically reserved an address space for future expansion. Microsoft ignored this, and the IIe broke a good chunk of their software because of it.

          Microsoft has historically had very bad coding practices. From all accounts I have heard this has markedly improved, but it was pretty bad.
    • Self-modifying code apps would be affected. And I think that is a good thing, because you would want to ferret out such things in your systems.

      Writing self-modifying code was the first thing my assembler instructor put his foot down about: "Bad idea, don't even think about it." I could see how you could do it easily with assembler.

      I would entertain listening to cases where self-modifying code has its place.

      • Self-modifying code is one thing... but there are real apps that need to create code on the fly and execute it (great examples are Java JIT compilers, and the wonderful Valgrind)... On the other hand, a FAST, standard way to flip the page protections on some newly created code from RW- to R-X would be appropriate in these cases, as sketched below.
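
        A minimal sketch of that RW- to R-X flip on a POSIX system, using mmap(2) and mprotect(2) (error handling trimmed; the code bytes are whatever the JIT generated):

          #include <string.h>
          #include <sys/mman.h>

          /* JIT-style allocation under W^X: the page is writable OR
             executable, never both at once. */
          void *emit_code(const unsigned char *code, size_t len)
          {
              void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (p == MAP_FAILED)
                  return NULL;
              memcpy(p, code, len);                      /* generate */
              if (mprotect(p, len, PROT_READ | PROT_EXEC) != 0)
                  return NULL;                           /* then seal */
              return p;
          }
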
      • by cjellibebi ( 645568 ) on Monday February 23, 2004 @04:07PM (#8365463)
        > I would entertain listening to cases where self-mod'ing code has its place.

        The Intel x86 architecture has few registers, so if you want to keep lots of values handy, you're going to have to keep swapping values in and out of memory. Alternatively, immediate-value constants that do not change during a long loop (or a loop with many layers of nesting) can be hard-coded into the code. Just before the loop is executed, these hard-coded constants are modified by rewriting the immediate values in the code. An example is code that draws a scaled translucent sprite: throughout the code the scale remains constant, and if the translucency is uniform, that remains constant too. The code that does the translucent blitting then uses the registers only for values that change during the sprite-drawing.

        On an 80386, this technique causes a significant speed increase, but on 80486s and above, where the CPU has an on-board L1 cache, the code modification may cause cache misses that slow the system down - especially on even newer x86 CPUs whose L1 is split into separate instruction and data caches. To make things worse, nowadays most code runs in a multitasking environment, so whether self-modifying code causes a slowdown or a speedup is almost impossible to predict.

        Of course, nowadays most drawing is done by hardware-accelerated graphics cards, so this isn't a great example, but there could still be some use for hard-coding values that do not change in a loop.

    • by J-Worthington ( 745904 ) on Monday February 23, 2004 @03:50PM (#8365243) Homepage
      One type of application that would need to take this into account is JIT compilers, such as the one used in .NET. These create native code in memory to execute, with the objective of increasing performance. These apps simply need to state that they want the memory they allocate to be executable when they allocate it; then they can continue to work as before.
    • by kawika ( 87069 ) on Monday February 23, 2004 @03:57PM (#8365356)
      Any application that creates code in stack-based memory, such as a local (auto) variable, or in one of the standard heaps (where malloc and "new" memory come from), will be affected. This memory is no longer executable and cannot be made executable by an application. Some existing JIT compilers are affected and will need rework.

      To work with memory protection enabled, applications will need to allocate memory using VirtualAlloc [microsoft.com] and specify the memory options [microsoft.com] to make it executable. Then they can generate and run the code there.

      I am assuming that Linux could incorporate similar functionality; does anybody know if someone is working on it?
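
      For illustration, a minimal Win32 sketch of that pattern, using the VirtualAlloc flags from the pages linked above (error handling omitted; a stricter version would allocate PAGE_READWRITE and flip to PAGE_EXECUTE_READ with VirtualProtect once the code is generated). On Linux, mmap/mprotect provide the rough equivalent, as sketched earlier in the thread.

        #include <windows.h>
        #include <string.h>

        /* Memory explicitly requested as executable; ordinary heap
           and stack allocations no longer are. */
        void *emit_code_win32(const unsigned char *code, SIZE_T len)
        {
            void *p = VirtualAlloc(NULL, len,
                                   MEM_COMMIT | MEM_RESERVE,
                                   PAGE_EXECUTE_READWRITE);
            if (p != NULL)
                memcpy(p, code, len);
            return p;
        }
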
  • what a drag (Score:4, Insightful)

    by Wellmont ( 737226 ) on Monday February 23, 2004 @03:34PM (#8364990) Homepage
    Can anyone else say that it is ABOUT time that buffer-overflow protection was built into a processor or motherboard? The only thing I worry about is the performance drag that making up for everyone's programming mistakes can put on a processor.
    • Re:what a drag (Score:5, Insightful)

      by m0rph3us0 ( 549631 ) on Monday February 23, 2004 @03:40PM (#8365078)
      All it is is an extra bit in the page table that marks whether the memory region is W^X (write xor execute). This kind of thing usually requires a bit of operating-system magic to make it work. i386 already has W^X protection; it just isn't enabled by most OSes.
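
      For the curious: on AMD64 the bit in question is bit 63 of the page-table entry, the NX ("no execute") bit. A rough sketch of the policy an OS might apply when building page tables (illustrative, not real kernel code):

        #include <stdint.h>

        #define PTE_WRITABLE (1ULL << 1)   /* page may be written  */
        #define PTE_NX       (1ULL << 63)  /* page may NOT execute */

        /* W^X: any writable page loses its execute permission. */
        uint64_t enforce_wx(uint64_t pte)
        {
            if (pte & PTE_WRITABLE)
                pte |= PTE_NX;
            return pte;
        }
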
      • Re:what a drag (Score:5, Informative)

        by paranode ( 671698 ) on Monday February 23, 2004 @03:55PM (#8365321)
        Exactly. OpenBSD 3.3 [openbsd.org] already came with this feature in May 2003.

        "W^X (pronounced: "W xor X") on architectures capable of pure execute-bit support in the MMU (sparc, sparc64, alpha, hppa). This is a fine-grained memory permissions layout, ensuring that memory which can be written to by application programs can not be executable at the same time and vice versa. This raises the bar on potential buffer overflows and other attacks: as a result, an attacker is unable to write code anywhere in memory where it can be executed. (NOTE: i386 and powerpc do not support W^X in 3.3; however, 3.3-current already supports it on i386, and both these processors are expected to support this change in 3.4). "
  • by Anonymous Coward on Monday February 23, 2004 @03:35PM (#8364999)
    They are protecting the pages marked as code from the data pages. Code could still overflow, but it couldn't use that to execute arbitrary code in pages marked as data (or non-executable).
  • screw average joe (Score:3, Insightful)

    by jrexilius ( 520067 ) on Monday February 23, 2004 @03:35PM (#8365000) Homepage
    My company has 85,000 desktops and almost as many servers and we are just one large bank. I can see this being a rather great corporate standard.
  • Linux support (Score:5, Insightful)

    by nate1138 ( 325593 ) on Monday February 23, 2004 @03:35PM (#8365013)
    AMD's Athlon-64 (for PCs) and Opteron (for servers) will protect against buffer overflows when used with a new version of Windows XP.

    This does require some interaction from the operating system in order to work. Hopefully AMD will release enough information to allow this feature to be implemented in Linux.
  • by MySt1k ( 713767 ) on Monday February 23, 2004 @03:36PM (#8365017)
    The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe.
    but can "Average Joe" understand the implication of buffer overflows ?
    try to explain to Homer Simpson why he should upgrade his computer based on buffer overflows protections.
  • Nope (Score:5, Interesting)

    by lukewarmfusion ( 726141 ) on Monday February 23, 2004 @03:37PM (#8365029) Homepage Journal
    It would be a hell of a marketing and user education campaign to get users to understand this (or almost any hardware related details).

    They want fast and reliable, not techspeak. I can barely get my clients to understand why they need SSL (and how it works).
    • Re:Nope (Score:5, Interesting)

      by Inuchance ( 559556 ) <inu@i[ ]hance.net ['nuc' in gap]> on Monday February 23, 2004 @04:04PM (#8365438) Journal
      I think a good commercial would show hackers trying to break into a computer; then a big "ACCESS DENIED" error appears, and one of the hackers exclaims, "No good, they've got the latest AMD CPU!" And then some announcer says something like, "With the latest CPUs from AMD, your computer executes only what YOU want it to, not what THEY [flash over to image of frustrated hackers] want!"
  • Good or Bad idea? (Score:3, Insightful)

    by demonic-halo ( 652519 ) on Monday February 23, 2004 @03:37PM (#8365031)
    This is all cool and all, but will this mean people start writing sloppier code, which will come back to bite us in the ass later?

    For example, let's say people wrote insecure x86 code, and then someone decides to port it to another platform. There will be software vulnerabilities hanging around because the code was flawed in the first place.
  • by KingOfBLASH ( 620432 ) on Monday February 23, 2004 @03:37PM (#8365033) Journal
    I find it interesting that one of the reasons hardware protection from buffer overflows is needed is that many programs were created using functions, in certain languages, that don't properly check array bounds. Programmers really need to learn that either they use functions which provide bounds checking, if they insist on a language like C or C++, or they program in another language.

    (Note: Although many people come down on C and C++, it's also a question of which functions you use. For instance, fgets() is considered "safe" because you provide a buffer boundary, while gets() is considered unsafe. This drives me nuts! We knew how to prevent buffer overruns years ago, and they're still a problem!)
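
    The difference in one tiny sketch:

      #include <stdio.h>

      int main(void)
      {
          char line[64];
          /* gets(line);  -- no size argument: anything past 63
             characters tramples whatever follows line on the stack. */
          if (fgets(line, sizeof line, stdin) != NULL)  /* bounded */
              printf("read: %s", line);
          return 0;
      }
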
    • by DaHat ( 247651 ) on Monday February 23, 2004 @03:45PM (#8365171)
      I think you are forgetting something, though... C and C++ are the most powerful high-level languages that exist today... Why? Because with them... you can easily mess everything up!

      Back in college I would defend C/C++ against one of my professors who thought it was the spawn of Satan (and who, oddly, thought Pascal was/is the greatest language ever), for the simple fact that it gives you the ability to do so many things with few limits.

      A hammer can not only be used to drive in nails or bang a dent out of your car hood... it can also be used to break your neighbor's windows and beat someone to death. Just because a tool CAN be used for ill doesn't mean the tool is to blame. After all... guns don't kill people... murderers/soldiers/hunters/etc. do!
      • by john.r.strohm ( 586791 ) on Monday February 23, 2004 @04:45PM (#8365879)
        Back in college I would defend C/C++ against one of my professors who thought it was the spawn of satan (and oddly though Pascal was/is the greatest language ever) for the simple fact that it gives you the ability to do so many things with few limits.

        If we ignore for the sake of argument the specific "high-level assembler" design goal for C, and look instead at philosophy which was carried into C++, there was this fundamental hacking philosophy that said that, because you occasionally needed to do something a bit bizarre, it should be EASY to do that bizarre thing. Further, the entire C/C++ philosophy was that the programmer was solely responsible for the consequences of his actions.

        We contrast this with Ada. Ada's philosophy was that you only occasionally need to do bizarre things, that 95-99% of the time, you are doing perfectly straightforward things, that the effort should be distributed accordingly, and that the language should be helping the programmer to do the routine things correctly. This implies that, when the programmer attempts to do something bizarre, 95-99% of the time it is because he screwed something up, and he DIDN'T mean to do what he typed, and the compiler barfs.

        At that point, it becomes the programmer's responsibility to tell the compiler, and NOT INCIDENTALLY everyone who will ever do maintenance on his code, that "Yea verily I DID intend to shoot myself in the foot here!". Idioms are provided for doing that. If the programmer really intended to take that floating-point number and treat it as a bitmask, he has to tell the compiler that this was indeed his intention.

        Ada did not provide a "back door" array-reference mechanism comparable to C/C++ pointer hacking, for the reason that it is impossible to do proper bounds checking in that case. Ada does provide a mechanism for suppressing bounds checking, but it is NOT the default, and the standard explicitly forbids it from being the default in any conforming implementation. If the programmer has a good reason for suppressing bounds checking, he has to do it EXPLICITLY, at some level.

        Your analogy with hammers is OK, but it breaks down with guns. Guns have trigger guards and safety catches, PRECISELY to prevent naive users from shooting themselves in the foot, or from shooting someone else that they didn't intend to shoot. At the same time, those safety mechanisms do not prevent the gun from being used to shoot someone that the user most fervently WANTS shot right then.

        In my view, if I utter a sequence of instructions that will dance a fandango on core, it is almost certainly the case that I have made an error, and I would prefer the toolset to ask me "Are you sure? (Y/N)". If I am certain that I intended to dance that fandango, I am also certain I want to warn the next guy in line that I am now lacing up my dancing wafflestompers, and the language should support that.
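
        To make the contrast concrete, here is the float-as-bitmask trick mentioned above, in C: one cast and the compiler obliges, no questions asked (Ada would demand an explicit Unchecked_Conversion):

          #include <inttypes.h>
          #include <stdio.h>

          int main(void)
          {
              float f = 1.0f;
              /* Reinterpret the float's bytes as an integer bitmask.
                 (Strictly, memcpy is the well-defined way to do this;
                 the cast is the "easy" way C happily permits.) */
              uint32_t bits = *(uint32_t *)&f;
              printf("%08" PRIx32 "\n", bits);   /* prints 3f800000 */
              return 0;
          }
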
  • by mikeophile ( 647318 ) on Monday February 23, 2004 @03:37PM (#8365037)
    The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe.


    Sure, AMD just has to write a buffer-overflow exploit into a worm that carries the pop-up window message, "If you had an AMD processor, your hard drive wouldn't be erasing right now."

  • by heironymouscoward ( 683461 ) <heironymouscoward@yaho o . com> on Monday February 23, 2004 @03:37PM (#8365039) Journal
    ....

    MOV AX,DS:OSID[BX]
    CMP AX,2        ; 2=Windows 3.x
    JE PANIC
    CMP AX,3        ; 3=Windows 9x
    JE PANIC
    CMP AX,4        ; 4=Windows 2K/ME/XP
    JE PANIC
    CMP AX,10       ; 10=Minix
    JE OKAY
    CMP AX,11       ; 11=...
    PANIC:
    ISSUE 'CPU BUFFER OVERFLOW ACTIVATED'
    JMP PANIC

  • I'd buy (Score:3, Informative)

    by valkraider ( 611225 ) on Monday February 23, 2004 @03:38PM (#8365049) Journal
    I'm not Joe, but if all other factors were equal, this would be enough to sway me to them... But of course it's almost moot, since I use Apple OS X... But I do have some Linux boxes that could run on them...

    However, they WILL have to spin it well - better than the "Megahertz Myth", because that didn't work too well for average folks. Best Buy salesmen don't know how to explain "AMD Athlon 289456++XL 3400 MP SSE4 +-7200 BufferXTreme", so they just push Intel...
  • by Galuvian ( 755742 ) on Monday February 23, 2004 @03:38PM (#8365053)
    Although this is great for AMD I'm sure, I stopped reading the article when Enderle was the first 'analyst' quoted.
  • Ahem... (Score:5, Insightful)

    by cbiffle ( 211614 ) on Monday February 23, 2004 @03:38PM (#8365056)
    From my reading of the article, this sounds like it's just a new spin on the per-page eXec flag on the AMD64 architecture.

    Granted, yes, this is a good thing, but "buffer-overflow protection when used with a new version of Windows XP"? We now have to rely on Microsoft to set the X flag properly...

    This has been talked about on Slashdot a lot in the past; the OpenBSD guys in particular are hot on the Opteron because it, like SPARC, provides this protection. Fortunately, this isn't some Windows-specific voodoo; we all stand to benefit from this fundamental fix to the broken Intel VM architecture. :-)
  • by ChiralSoftware ( 743411 ) <info@chiralsoftware.net> on Monday February 23, 2004 @03:40PM (#8365080) Homepage
    Remember back in the '60s and before, when all cars leaked oil? People just accepted it: "Cars leak oil." They didn't realize that it didn't have to be that way.

    Then the Japanese started making cars that didn't leak oil. Now no one would accept a car that leaks oil. People have realized that cars don't have to leak, and we shouldn't accept it.

    It's the same thing with buffer overflows. People now have this attitude: "Well, there's nothing you can do. Just write code really carefully. Anyone who makes buffer overflows in his code is just a sloppy coder!"

    Nothing could be further from the truth. There is no way anyone can code a large project in plain old C and not create buffer overflows. Look at OpenBSD, who are masters of secure C. They still have buffer problems.

    And yet there is absolutely no reason for code to have any buffer overflows! There are programmatic tools, such as virtual machines (think JVM) and safe libraries, which mean that programmers never have to manipulate buffers in unsafe ways.

    Putting in hardware-level support for this would be fantastic. It is time for people to change their attitude about what they accept in computers. Crashes and security holes are not inherent aspects of software. Mistakes are inherent in writing code, but these mistakes don't always need to have such disastrous consequences.

    ---------
    Create a WAP [chiralsoftware.net] server

  • by funny-jack ( 741994 ) on Monday February 23, 2004 @03:40PM (#8365089) Homepage
    Average Joes buy computers, not chips. AMD doesn't need to sell the idea to the Average Joe; they need to sell it to the people making computers for the Average Joe.
  • What does it do? (Score:3, Informative)

    by slamb ( 119285 ) on Monday February 23, 2004 @03:42PM (#8365125) Homepage
    From the article:
    Until now, Intel-compatible processors have not been able to distinguish between sections of memory that contain data and those that contain program instructions. This has allowed hackers to insert malicious program instructions in sections of memory that are supposed to contain data only, and use buffer overflow to overwrite the "pointer" data that tells the processor which instruction to execute next. Hackers use this to force the computer to start executing their own code (see graphic).

    The new AMD chips prevent this. They separate memory into instruction-only and data-only sections. If hackers attempt to execute code from the data section of memory, they will fail. Windows will then detect the attempt and close the application.

    I've seen patches [ogi.edu] to Linux that provide a non-executable stack. There's also the mprotect(2) [wlug.org.nz] system call to change memory protection from user programs. And I believe OpenBSD has had a non-executable stack in the mainline for at least a couple releases.

    So what they're advertising here seems to have already existed. If not, how are the things above possible?
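
    A quick way to see the protection (or its absence) on your own machine: the sketch below copies a single x86 "ret" instruction into a stack buffer and jumps to it. The data-to-function-pointer cast is formally undefined behavior, which is rather the point; on a kernel with a non-executable stack, the call dies with SIGSEGV instead of returning.

      int main(void)
      {
          unsigned char buf[16];
          buf[0] = 0xc3;   /* x86 "ret" instruction */
          /* Jump into the stack; we only get back here (exit code 0)
             if the stack page is executable. */
          ((void (*)(void))buf)();
          return 0;
      }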

  • by dave-fu ( 86011 ) on Monday February 23, 2004 @03:43PM (#8365136) Homepage Journal
    You buy an Intel chip, you buy a reference mobo, and you get rock-solid stability. You buy AMD, and you end up rolling the dice on VIA, SiS or NVIDIA and what feels like filthy voodoo trying to get everything to play nicely together.
    That said, nForce and nForce2-based mobos have come a long way in terms of stability and overall ease of use, but then again... no one ever got fired for buying Intel. AMD separating code from data (something, curiously, Intel managed to do once upon a time) is lovely, but proving they've got the best solution out there is a battle that won't be won overnight by a single innovation.
    Uptime will prove who's got the better solution.
  • by dtjohnson ( 102237 ) on Monday February 23, 2004 @03:44PM (#8365153)


    The AMD Opteron and Athlon 64 chips already [computerworld.com] have the buffer-overflow protection in their hardware, and the feature is already supported by both Linux and Windows XP 64-bit Edition. AMD calls this "Execution Protection"; the basic idea is that the processor will not allow code that arrives on the system via a buffer overflow to be marked as executable. The Slashdot story says "will have" for both Intel and AMD when it should read "AMD already has and Intel will have..."

  • Old news (Score:5, Interesting)

    by Todd Knarr ( 15451 ) on Monday February 23, 2004 @03:47PM (#8365196) Homepage

    This existed in the 8086 and 8088 CPUs. You separate your program into code, data and stack segments and load the appropriate segment registers. Code segments can't be read or written; data and stack segments can't be executed. But stupid programmers decided that that kept them from playing games with code-as-data and data-as-code, so they created a flat addressing mode with all segment registers pointing at a single segment. Feh. Those who don't read history are doomed to repeat it. Badly.

  • by adisakp ( 705706 ) on Monday February 23, 2004 @03:49PM (#8365227) Journal
    For what it's worth... many processors, like the PowerPC series, have had this "buffer overflow protection" feature for years. The idea is to mark program code pages, once they are loaded, as executable and read-only; no other pages are marked executable. It kills clever little hacks like self-modifying code, but at the same time it makes it impossible for buffer overflows to introduce new code into a program's executable pages.
  • The Average Joe? (Score:5, Interesting)

    by SpaceRook ( 630389 ) on Monday February 23, 2004 @03:50PM (#8365239)
    The average Joe can't even figure out that he shouldn't open email attachments from people he doesn't know (Exhibit A: MyDoom). You really think he knows what the fuck a buffer overflow is? "No buffer overflow? But what if I *want* overflow! More is better!" I applaud this security feature, but don't think of it as a selling point for typical users.
  • Wow! (Score:5, Funny)

    by El ( 94934 ) on Monday February 23, 2004 @03:50PM (#8365240)
    Separation of programs into separate code and data segments -- what a novel idea! I hope they got a patent on this technology!
  • by Hamster Lover ( 558288 ) on Monday February 23, 2004 @03:50PM (#8365246) Journal
    Now those stickers on the front of the computer really mean something...
  • by Mick Ohrberg ( 744441 ) <{moc.liamg} {ta} {grebrho.kcim}> on Monday February 23, 2004 @03:57PM (#8365362) Homepage Journal
    Excellent! Now they just need to develop a chip that protects against id10t [catb.org] and PEBCAK problems.
  • A bunch of things (Score:4, Informative)

    by Groo Wanderer ( 180806 ) <charlie.semiaccurate@com> on Monday February 23, 2004 @04:00PM (#8365388) Homepage
    1) It is also in Prescott.
    2) It needs OS support, specifically XP SP2, which isn't out yet.
    3) It doesn't really do what it is meant to; I have seen several 'theoretical' discussions on how to circumvent it. Think of it as another hoop for the black hats to jump through.
    4) You need to be in 64-bit mode to use it.
    5) Since 4) requires a recompilation anyway, why not do it right with the right tools when you recompile?
    6) I know of at least one vendor using it to bid against Intel on contracts now.
    7) Oh yeah, this will do a lot of good. Really. When has a white paper ever lied?
    8) The more you know about things like this, the more you want to move into a cabin in Montana and live off the land.

    -Charlie
  • by alanw ( 1822 ) * <alan@wylie.me.uk> on Monday February 23, 2004 @04:14PM (#8365544) Homepage
    Several architectures (sparc, sparc64, alpha, hppa, m88k) have had per-page execute permissions for years.
    See this BugTraq posting by Theo de Raadt [securityfocus.com].
  • by ktulu1115 ( 567549 ) on Monday February 23, 2004 @04:45PM (#8365881)
    Call me stupid, but AFAIK x86 chips have had full segmentation support [x86.org] (in protected mode, obviously) for years - the ability to define different segment types (read-only, r/w, execute-only, etc.). For those of you not familiar with it, it allows the programmer to define different types of memory segments, which would let you do some pretty interesting things, such as defining read-only code segments (so the machine instructions can't be modified in memory) and non-executable data segments (to prevent the OS from trying to run code stored in program data/buffers). This would solve the problem, at least as the article frames it.

    If current operating systems actually used this in addition to paging (which is all most of them use now), why would they need to create a new chip? Linux does not fully utilize segmentation, mostly only paging [clemson.edu]. I don't have any resources on MS OS design right now, so I can't comment on it... (although maybe looking at the recent source would help some ;)
  • stupid (Score:5, Interesting)

    by ajagci ( 737734 ) on Monday February 23, 2004 @05:10PM (#8366162)
    Marking pages as executable/non-executable is old, and it's not the way to deal with buffer overflows. Many buffer-overflow exploits, in fact, only modify data (like the saved PC pointer).

    The correct way of dealing with buffer-overflow problems is to make them not happen in the first place. That means that all pointers need to have bounds associated with them. Unfortunately, both the C mindset and some design quirks of the C programming language make that a little harder than it should be for UNIX/Linux and C-based systems.

    The real problem is ultimately the use of C, and the real solution is not a new CPU or new instructions, but a language without C's quirks. In terms of performance, C's pointer semantics only hurt anyway.
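
    For what "bounds associated with them" could mean in practice, here is a sketch of a fat pointer in plain C (names are mine, purely illustrative):

      #include <stddef.h>

      typedef struct {
          char  *base;   /* start of the buffer         */
          size_t len;    /* number of addressable bytes */
      } fat_ptr;

      /* Checked store: refuses to write out of bounds instead of
         silently corrupting whatever lies beyond the buffer. */
      int checked_store(fat_ptr p, size_t i, char v)
      {
          if (i >= p.len)
              return -1;
          p.base[i] = v;
          return 0;
      }
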
  • by multiplexo ( 27356 ) on Monday February 23, 2004 @05:38PM (#8366517) Journal
    Years ago I went to a presentation on RISC vs. CISC architectures. The presenter pointed out that RISC didn't really stand for "Reduced Instruction Set Computing"; rather, it stood for "Relegate the Important Stuff to Compilers". Why hasn't Microsoft released C and C++ compilers that implement bounds checking? Hell, Ada had this years ago, and say what you will about the language, it's a damned handy thing to have.
    This will be a good thing if it works out, but it will take years for these chips to penetrate the market to any significant degree, and once again we are seeing hardware vendors come to the rescue of software companies by creating hardware with the capability, either in speed or in safety features, to compensate for bad programming tools and bad programmers.
  • by hqm ( 49964 ) on Monday February 23, 2004 @05:41PM (#8366560)
    The LISP machines built at the MIT AI Lab had hardware which worked in parallel with the main CPU that checked things like array bounds and also did other types of tag checking, such as automatic runtime coercion of ints to floats and other things that are helpful to a high level language.

    Since every object in LISP machine memory had a type tag, many useful operations could be parallelized, such as garbage collection and type dispatch for object oriented function calls.

    The problem with languages like C is that they have no object semantics at all, so runtime bounds checking and other goodies don't work very well. The C weenies have everybody convinced that this is necessary to get the highest performance, but they don't realize that with a small amount of extra hardware, all these safety operations can be done in parallel. And since the C weenies influence the CPU designers, it is a vicious circle of bad machine architecture.
