AMD Could Profit from Buffer-Overflow Protection 631

spin2cool writes "New Scientist has an article about how AMD and Intel are planning on releasing new consumer chips with built-in buffer-overflow protection. Apparently AMD's chips will make it to market first, though, which some analysts think could give AMD an advantage as the next round of chips are released. The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe."
  • by Anonymous Coward on Monday February 23, 2004 @03:32PM (#8364972)
    Like IBM with OS/2, they have the better product. They now just need to convince ordinary consumers that this is the case. For some reason, people love that little Intel jingle.
  • what a drag (Score:4, Insightful)

    by Wellmont ( 737226 ) on Monday February 23, 2004 @03:34PM (#8364990) Homepage
    Can anyone else say that it is ABOUT time that buffer-overflow protection was built into a processor or motherboard? The only thing I worry about is the performance drag that making up for everyone's programming mistakes can put on a processor.
  • screw average joe (Score:3, Insightful)

    by jrexilius ( 520067 ) on Monday February 23, 2004 @03:35PM (#8365000) Homepage
    My company has 85,000 desktops and almost as many servers and we are just one large bank. I can see this being a rather great corporate standard.
  • Linux support (Score:5, Insightful)

    by nate1138 ( 325593 ) on Monday February 23, 2004 @03:35PM (#8365013)
    AMD's Athlon-64 (for PCs) and Opteron (for servers) will protect against buffer overflows when used with a new version of Windows XP.

    This does require some interaction from the operating system in order to work. Hopefully AMD will release enough information to allow this feature to be implemented in Linux.
  • by MySt1k ( 713767 ) on Monday February 23, 2004 @03:36PM (#8365017)
    The question will be whether their PR department can spin this into a big enough story to sell to the Average Joe.
    But can "Average Joe" understand the implications of buffer overflows? Try explaining to Homer Simpson why he should upgrade his computer based on buffer-overflow protection.
  • by Rockenreno ( 573442 ) <(rockenreno) (at) (gmail.com)> on Monday February 23, 2004 @03:37PM (#8365027)
    Anytime you change the architecture of a chip there will be side effects. It is inevitable. I am interested to see what the repercussions might be in terms of code, performance, and even reliability. If they implement this well, perhaps these side effects will be minimal and unnoticeable, in which case this could be a major development!
  • Good or Bad idea? (Score:3, Insightful)

    by demonic-halo ( 652519 ) on Monday February 23, 2004 @03:37PM (#8365031)
    This is all cool and all, but will this mean people start writing sloppier code, which will come back to bite us in the ass later?

    For example, let's say people wrote insecure x86 code, then someone decides to port the code to another platform. There'll be software vulnerabilities that will be around because of the flawed code in the first place.
  • by KingOfBLASH ( 620432 ) on Monday February 23, 2004 @03:37PM (#8365033) Journal
    I find it interesting that one of the reasons hardware protection from buffer overflows is needed is that many programs were created using functions, in languages that don't properly check array bounds. Programmers really need to learn either to use functions which provide bounds checking if they insist on using a language like C or C++, or to program in another language.

    (Note: Although many people come down on C++, it's also about which functions you use. For instance, fgets() is considered "safe" because you provide a buffer boundary, while gets() is considered unsafe. This drives me nuts! We knew how to prevent buffer overruns years ago, and they're still a problem!)
  • by ebuck ( 585470 ) on Monday February 23, 2004 @03:37PM (#8365036)
    I think it was the Intel inside marketing campaign that really did the trick.

    Nobody knows if Intel is better, but they don't want a computer that "lacks" Intel inside. They simply guess that if it's inside, it's better than not having it inside.

    It is brilliant. It can't be copied or AMD looks like a "me too!" player. It can't be contested because it's just vague enough not to claim that the machine is any better for having Intel inside, but implies that anything else is somehow inferior.
  • by DaHat ( 247651 ) on Monday February 23, 2004 @03:37PM (#8365042)
    Because by your logic, Microsoft has patented the technology behind causing them, and in this rare case decided to leave it up to someone else to fix.
  • Ahem... (Score:5, Insightful)

    by cbiffle ( 211614 ) on Monday February 23, 2004 @03:38PM (#8365056)
    From my reading of the article, this sounds like it's just a new spin on the per-page eXec flag on the AMD64 architecture.

    Granted, yes, this is a good thing, but "buffer-overflow protection when used with a new version of Windows XP?" We now have to rely on Microsoft to set the X flag properly...

    This has been talked about on Slashdot a lot in the past; the OpenBSD guys in particular are hot on the Opteron because it, like SPARC, provides this protection. Fortunately, this isn't some Windows-specific voodoo; we all stand to benefit from this fundamental fix to the broken Intel VM architecture. :-)
  • by codexus ( 538087 ) on Monday February 23, 2004 @03:39PM (#8365071)
    My guess is that many applications use self-modifying code as part of their anti-piracy/anti-reverse-engineering protection.
  • by iantri ( 687643 ) <iantri&gmx,net> on Monday February 23, 2004 @03:39PM (#8365074) Homepage
    Explain to Average Joe that his computer will be protected from (some) crashes and (some) computer viruses..
  • Re:Pathetic (Score:5, Insightful)

    by DaHat ( 247651 ) on Monday February 23, 2004 @03:40PM (#8365076)
    Wraaaag! Why does everyone keep calling this a Microsoft bug?

    Yes... the vast majority of buffer-overflow exploits we read about are Microsoft-based; however, it's not too hard to find software from other providers, yes, even in Linux, which can suffer from this kind of flaw.
  • Re:what a drag (Score:5, Insightful)

    by m0rph3us0 ( 549631 ) on Monday February 23, 2004 @03:40PM (#8365078)
    All it is is an extra bit in the page table that marks whether a memory region is writable or executable (W^X: write xor execute, never both). This kind of thing usually requires a bit of operating-system magic to make it work. i386 already has W^X protection at the segment level; it just isn't enabled by most OSes.
  • by ChiralSoftware ( 743411 ) <info@chiralsoftware.net> on Monday February 23, 2004 @03:40PM (#8365080) Homepage
    Remember back in the 60s and before, all cars leaked oil? People just accepted, "Cars leak oil." They didn't realize that it didn't have to be that way.

    Then the Japanese started making cars that didn't leak oil. Now, no one would accept a car that leaks oil. People have realized that cars don't have to leak and we shouldn't accept it.

    It's the same thing with buffer overflows. People now have this attitude "well, there's nothing you can do. Just write code really carefully. Anyone who makes buffer overflows in his code is just a sloppy coder!"

    Nothing could be further from the truth. There is no way anyone can code a large project in plain old C and not make buffer overflows. Look at OpenBSD, who are masters of secure C. They still have buffer problems.

    And yet, there is absolutely no reason for code to have any buffer overflows! There are programmatic tools, such as virtual machines (think of the JVM) and safe libraries, which mean that programmers never have to manipulate buffers in unsafe ways.

    Putting in hardware-level support for this would be fantastic. It is time for people to change their attitude about what they accept in computers. Crashes and security holes are not inherent aspects of software. Mistakes are inherent in writing code, but these mistakes don't always need to have such disastrous consequences.

    ---------
    Create a WAP [chiralsoftware.net] server

  • Re:Pathetic (Score:5, Insightful)

    by eht ( 8912 ) on Monday February 23, 2004 @03:40PM (#8365085)
    What about GNU/Linux's bugs, or NetBSD's, or Sendmail's? This is OS-agnostic.

    This isn't insightful, it's flamebait and FUD.
  • by funny-jack ( 741994 ) on Monday February 23, 2004 @03:40PM (#8365089) Homepage
    They buy computers. They don't need to sell the idea to the Average Joe, they need to sell the idea to the people making computers for the Average Joe.
  • by Conor6 ( 11138 ) on Monday February 23, 2004 @03:41PM (#8365106)
    ...when I was a wee programmer, I was taught that the solution to this problem was to write better code.
  • by Anonymous Coward on Monday February 23, 2004 @03:43PM (#8365138)
    Many UNIX versions already support this kind of protection and it's on by default. Good portable code deals with it. If I remember right, you have to use mmap to get a special section of memory that you can both write and execute.
  • by taniwha ( 70410 ) on Monday February 23, 2004 @03:43PM (#8365140) Homepage Journal
    Self-modifying code is one thing... but there are real apps that need to create code on the fly and execute it (great examples are Java JIT compilers, and the wonderful valgrind)... on the other hand, a FAST, standard way to flip the page protections on some newly created code from RW- to R-X would be appropriate in these cases.
  • by ackthpt ( 218170 ) * on Monday February 23, 2004 @03:44PM (#8365158) Homepage Journal
    Don't overdo it. The software has to be compiled to take advantage of this (hence the new version of XP), so just buying a new PC with "WOW! BUFFER OVERFLOW PROTECTION" will generate negative press as people complain, "Hey! I've still got worms! er.. my computer does, not me!" Such gaffes are what competitors live for.
  • Re:Awesome (Score:5, Insightful)

    by Sloppy ( 14984 ) * on Monday February 23, 2004 @03:44PM (#8365165) Homepage Journal
    Why didn't they think of this in the first place.
    Because it's hard to fix while keeping compatibility, and it was a different world in 1980.

    Some of today's problems are really just side-effects of the x86 legacy. If you're willing to break binary compatibility, fixing problems is really, really easy. For example, there's no law that stacks have to stupidly grow downwards in memory so that an overflow ends up overwriting older stuff on the stack space, instead of overwriting in the direction where the unallocated space is. And indeed, on many architectures, it works more sensibly. So even if you don't protect against overflows, their damage doesn't need to be so severe.

    But by the time it became popular for personal computers to be connected to the internet (and thus, overflow protection started to become really important), it was far too late to fix the problem, because too many people were locked into x86.

  • by DaHat ( 247651 ) on Monday February 23, 2004 @03:45PM (#8365171)
    I think you are forgetting something though... C and C++ are the most powerful higher level languages that exist today... Why? Because with them... you can easily mess everything up!

    Back in college I would defend C/C++ against one of my professors who thought it was the spawn of Satan (and oddly thought Pascal was/is the greatest language ever), for the simple fact that it gives you the ability to do so many things with few limits.

    A hammer can not only be used to drive in nails or bang a dent out of your car hood... it can also be used to break your neighbor's windows and beat someone to death. Just because a tool CAN be used for ill doesn't mean the tool is to blame. After all... guns don't kill people... murderers/soldiers/hunters/etc do!
  • Re:Linux support (Score:2, Insightful)

    by Sloppy ( 14984 ) * on Monday February 23, 2004 @03:47PM (#8365195) Homepage Journal
    The article is vague, but it's probably talking about the per-page permissions thing. OpenBSD already uses it, when compiled for x86-64. I'm sure the info is already quite available for Linux dudes to use it too.
  • Re:Pathetic (Score:5, Insightful)

    by Anonymous Coward on Monday February 23, 2004 @03:47PM (#8365206)
    Don't blame MS for everything. Unix too has a notorious history of contributions due to buffer overflows. Ever heard of sendmail? I believe the first Internet worm, in 1988, exploited buffer overflows in a number of Unix apps, including sendmail, finger, ...

    Software can't do everything. In fact, some earlier architectures offered a choice of separating the data segment and code segment (the DEC VAX was the latest machine I used that had this feature), but because it carried some performance penalty, the hardware companies removed it. Now that we have more speed than we need, it is being put back.
  • by Malc ( 1751 ) on Monday February 23, 2004 @03:48PM (#8365220)
    It's not just MSFT. It's everybody. You could make that statement about the Apache Foundation.
  • Re:what a drag (Score:2, Insightful)

    by phasm42 ( 588479 ) on Monday February 23, 2004 @03:49PM (#8365228)
    That's what I was wondering about... if a program is properly separated into code/data/stack segments, and the Execute bit is properly set on each segment's descriptor, then why is a new CPU feature needed? I never learned protected mode asm in depth (I learned asm in real mode), but it seems like all the necessary bits are there for the OS to protect against this. If someone knows why this isn't or can't be done, would you please post a response?
  • by denlin ( 733557 ) on Monday February 23, 2004 @03:50PM (#8365248) Journal
    Agreed; it seems we're spawning an even lazier bunch of programmers than ourselves.
  • by Anonymous Coward on Monday February 23, 2004 @03:53PM (#8365289)
    Not CPUs. AMD doesn't make those motherboards, so it's not their fault if they don't implement the features.
  • I predict that... (Score:2, Insightful)

    by Inuchance ( 559556 ) <inu@inuchan[ ]net ['ce.' in gap]> on Monday February 23, 2004 @03:55PM (#8365335) Journal
    This will be the year of sloppy coding.
  • Re:Pathetic (Score:2, Insightful)

    by DaHat ( 247651 ) on Monday February 23, 2004 @03:58PM (#8365367)
    Yes, the article talks about its use with newer versions of Windows (as early as SP2 of XP, if I'm not mistaken). I would remind you that this is a Windows-centric market right now; when a company like AMD or Intel designs a new processor or feature, the first place they talk to about it is Redmond, to get OS support in the most widespread OS. Once that is accomplished, they can look into secondary markets for support.

    I have little doubt that AMD will supply enough info to get this functionality working under Linux around the same time that these chips ship.
  • by Anonymous Coward on Monday February 23, 2004 @04:00PM (#8365396)
    Intel Inside is a minor part - what cemented Intel was Cyrix. People saw a low cost CPU and got burned for it - then there was no alternative to Intel until the original Athlon which meant that the Pentium and Pentium II were unchallenged.

    To this day, the legacy of Cyrix shadows AMD, with marketing using the supposed clock speed rather than the actual one.

    Fact of the matter is that Intel has so much branding, even being behind AMD on a few releases isn't going to do enough to displace Intel from being #1. All AMD is good for is the consumer so that there isn't a monopoly, and competition leads to innovation - otherwise Intel wouldn't have brought x86-64 to the general consumer for years. Not that I blame their logic, but then there wasn't a need to jump to Pentium either - the 486 had a lot still to offer at the time.
  • by Anonymous Coward on Monday February 23, 2004 @04:04PM (#8365435)
    you can spin it as a type of protection.
    Joes love protection... maybe include some sort of armament analogy, like it blows buffer overflows away like a nuke! Or something.
  • by Frac ( 27516 ) on Monday February 23, 2004 @04:06PM (#8365451)
    why does the chipmaker need to protect us from microsoft buffer overflow errors? why can't they just double check their code?

    That's like saying "why do we need cops? why can't people just not break the law, so no one needs to be around to enforce it?"

    Accidents do happen, and it's not only Microsoft's own problem. It doesn't hurt to have another layer of security for bad programming...
  • by ortcutt ( 711694 ) on Monday February 23, 2004 @04:06PM (#8365453)
    A DoS is better than a remote root exploit. When your machine goes down, you at least know about it.
  • by Helvick ( 657730 ) on Monday February 23, 2004 @04:08PM (#8365473) Homepage Journal
    As you say, this is already supported by an appropriately compiled Linux kernel or XP-64 on the A64 & Opteron. The wider benefit for all of us is that this is to be included in XP SP2, which will hopefully become ubiquitous sometime this year. See this eWeek article [eweek.com]. At that point this becomes an excellent marketing tactic for AMD. I haven't examined the IA32e documents for myself yet, but those who have seem to think Intel has left out support for the NX flag - see sandpile.org [sandpile.org]. If this is true, then Intel is handing AMD a real advantage as far as consumer marketing is concerned. Even I could spin that so that it looked like more of an advantage than 64-bit capability, which to be honest is a real hard sell as far as your average consumer is concerned.
  • by Christopher Bibbs ( 14 ) on Monday February 23, 2004 @04:20PM (#8365618) Homepage Journal
    There were plenty of good AMD and Cyrix 486 CPUs being used when Intel switched to the Pentium and the successful "Intel Inside" badging. Bonus points to anyone who still has an "Intel Onboard" sticker from the earlier failed marketing attempt. However, users at the time largely only knew they had a 386 or 486. Most of them couldn't tell you who made it without opening the case.

    The AMD K5, K6, K6-II, and K6-III were all decent chips, but were nothing more than the "bargain" chip. What gave Intel the real lead over AMD was the combination of several years of the fastest chips being only available from Intel and the public knowing who made their chip.
  • Re:Awesome (Score:5, Insightful)

    by phil reed ( 626 ) on Monday February 23, 2004 @04:20PM (#8365622) Homepage
    Why didn't they think of this in the first place.

    They did. Mainframes and the like have had protection from this sort of hack for ages. AS/400s have object orientation support built into the hardware, and a data object (which is what a stack or buffer would be implemented as) cannot be executed as code, no matter what. The hardware will not allow it. Nor would the buffer be allowed to grow into a code location.

    We're living with hardware and software architecture decisions made in the 1980s, when PCs were still considered toys.

  • Re:Pathetic (Score:3, Insightful)

    by strictnein ( 318940 ) * <{strictfoo-slashdot} {at} {yahoo.com}> on Monday February 23, 2004 @04:22PM (#8365635) Homepage Journal
    M$ to abbrev Microsoft as it seems to accurately describe their primary design goal.

    As opposed to all those other companies whose goal is to, what, make people happy?

    Why not:
    $u$e
    $aturn
    $tarbucks
    $un
    People$oft
    et c.

    Perhaps a little more understanding before you run rampant on your pathetic link attempts and criticisms.

    I understood in full. The original poster was at best misinformed, at worst a trolling idiot. As for pathetic link attempts, how better to illustrate a point than to provide evidence supporting that point? Should I have just thrown out some "leet" speak and swear words and verbally berated the original poster? Would that be better?
  • by Saberwind ( 50430 ) on Monday February 23, 2004 @04:22PM (#8365648)
    I honestly can't remember seeing an AMD advertisement since the DX4-100 was introduced.

    Does AMD even HAVE a marketing department?
  • by egomaniac ( 105476 ) on Monday February 23, 2004 @04:23PM (#8365656) Homepage
    ...when I was a wee programmer, I was taught that the solution to this problem was to write better code.

    And that strategy sure seems to be working well for the industry, doesn't it?

    Bugs are a fact of life. The programmer who can write everything perfectly on the first try and never make an innocent screwup does not exist. Even if such a programmer existed, he would not be allowed the time or development budget to actually write perfect code, as his code would be pushed out the door as soon as it looked reasonably complete.

    So, you can sit on your ass idly envisioning a world in which everybody writes perfect, bug-free code, or you can come back to the real world and try to make it more difficult to produce bugs and reduce the impact of the ones that inevitably occur.

    After years working exclusively in Java, I am horrified that C programmers still consider the lack of array bounds checking to be a natural, normal part of life. It isn't. It's disaster after disaster waiting to happen, and there is absolutely no excuse for it. Performance is not an excuse -- we have machines running at multiple gigahertz now. They can spare a few cycles to do bounds checking. This crap needs to be fixed.

    AMD can't make us all switch to sane programming languages, but they can at least ensure that code segments can't be modified. It's a good first step. The next step is to realize that C/C++ is horribly, unbelievably broken at a fundamental level and needs to be discarded.
  • UNIX Overflow (Score:3, Insightful)

    by severoon ( 536737 ) on Monday February 23, 2004 @04:30PM (#8365722) Journal

    I remember in college adminning a lab of HP-UX boxes when the "let's send more than 64K ping packets" trick caused a buffer overflow and a reboot. So it's definitely not exclusively a Windows problem. On the other hand, the article leaves it a little ambiguous as to whether this hardware fix will be useful exclusively to Windows (though I don't see how they could do that; there could be some fancy hoops that Windows jumps through that are necessary to exploit the fix? I doubt it, though).

    I can't believe people still write things like, "Why doesn't everyone just write better code?" This reminds me of a start-up I worked for during the dot-com boom. They were so hard up to find managers that they hired a guy to oversee the software department who'd never worked in the industry before. He had all sorts of unrealistic expectations, like "If you guys agree to double-check your code, we can save a lot of money by getting rid of the testing phase. We could release like three months early!" He was exasperated that professional coders couldn't write bug-free code on try #1.

    To everyone who says, let's write better code...why don't you write better code? No more bugs in your code ever again!

    Clearly, this is not the answer. What we need to do is take a step back and figure out the environment today, which we can do so much better than 25 years ago. We've seen a lot of the unintended consequences and now we know they exist. Intel or someone needs to develop a new processor from the ground up that addresses all the issues that we now know about through experience.

    One thing I've learned in this business is that you cannot achieve quality through gentleman's agreement, simply by getting someone to agree to write better code.

    sev

  • Intel Inside (Score:3, Insightful)

    by dpilot ( 134227 ) on Monday February 23, 2004 @04:32PM (#8365744) Homepage Journal
    Blue Man Group and those little notes are only part of the story of the Intel Inside campaign, the part that the public sees.

    The other part is based on the razor-thin profit margins in the PC arena. IIRC, Intel Inside is a co-marketing agreement. Co-market, play those little notes and display the Intel logo as part of your ad, and you get a nice co-marketing fee from Intel. With next-to-no profit margin, that co-marketing fee just might be your profit, or a large part thereof.

    Maybe the days of "You MUST use our CPUs in 100% of your products!" are gone, but I'll bet the days of, "You must use our CPUs in 100% of your products in order to participate in Intel Inside!" are still here.
  • by helzerr ( 232770 ) on Monday February 23, 2004 @04:37PM (#8365799) Homepage
    Stating that they would quote us Intel to "ensure stability".

    I bet it had more to do with ensuring their profit margin.

  • by john.r.strohm ( 586791 ) on Monday February 23, 2004 @04:45PM (#8365879)
    Back in college I would defend C/C++ against one of my professors who thought it was the spawn of satan (and oddly though Pascal was/is the greatest language ever) for the simple fact that it gives you the ability to do so many things with few limits.

    If we ignore for the sake of argument the specific "high-level assembler" design goal for C, and look instead at the philosophy which was carried into C++, there was this fundamental hacking philosophy that said that, because you occasionally needed to do something a bit bizarre, it should be EASY to do that bizarre thing. Further, the entire C/C++ philosophy was that the programmer was solely responsible for the consequences of his actions.

    We contrast this with Ada. Ada's philosophy was that you only occasionally need to do bizarre things, that 95-99% of the time, you are doing perfectly straightforward things, that the effort should be distributed accordingly, and that the language should be helping the programmer to do the routine things correctly. This implies that, when the programmer attempts to do something bizarre, 95-99% of the time it is because he screwed something up, and he DIDN'T mean to do what he typed, and the compiler barfs.

    At that point, it becomes the programmer's responsibility to tell the compiler, and NOT INCIDENTALLY everyone who will ever do maintenance on his code, that "Yea verily I DID intend to shoot myself in the foot here!". Idioms are provided for doing that. If the programmer really intended to take that floating-point number and treat it as a bitmask, he has to tell the compiler that this was indeed his intention.

    Ada did not provide a "back door" array reference mechanism comparable to the C/C++ pointer hacking, for the reason that it is impossible to do proper bounds checking in that case. Ada does provide a mechanism for suppressing bounds checking, but it is NOT the default, and the standard explicitly forbids it from being the default in any conforming implementation. If the programmer has a good reason for suppressing bounds checking, he has to do it EXPLICITLY, at some level.

    Your analogy with hammers is OK, but it breaks down with guns. Guns have trigger guards and safety catches, PRECISELY to prevent naive users from shooting themselves in the foot, or from shooting someone else that they didn't intend to shoot. At the same time, those safety mechanisms do not prevent the gun from being used to shoot someone that the user most fervently WANTS shot right then.

    In my view, if I utter a sequence of instructions that will dance a fandango on core, it is almost certainly the case that I have made an error, and I would prefer the toolset to ask me "Are you sure? (Y/N)". If I am certain that I intended to dance that fandango, I am also certain I want to warn the next guy in line that I am now lacing up my dancing wafflestompers, and the language should support that.
  • by Loki_1929 ( 550940 ) on Monday February 23, 2004 @04:46PM (#8365887) Journal
    " However when I need my server farm to be up 24/7 and dont realy NEED the extra speed the AMD chips just dont look to good."

    Yeah, Ok [slashdot.org].

    How easily we all forget just how many times Intel's chips and boards have been junk-in-a-box. What good is a feature when you can't even keep the machine up and running? What kind of uptime does your server farm have when you're sending recalled CPUs back to the manufacturer? Or perhaps, in the case of Compaq's Itanium customers, the server simply doesn't arrive because it's determined to be defective from the get-go?

    Whoops.

  • by Kenneth Parker ( 693976 ) on Monday February 23, 2004 @04:51PM (#8365945)
    Try to get your facts correct. Corrections:

    1) If it's in Prescott, Intel isn't saying so.
    3) It just makes things harder to exploit, but that's true of everything.
    4) BS. You have to switch to PAE mode, but that isn't 64-bit mode.
    5) It only requires the kernel change; no apps need recompiling.
    7) It will typically change exploits from allowing elevation of privilege to a DoS.

  • Re:Nope (Score:3, Insightful)

    by rtaylor ( 70602 ) on Monday February 23, 2004 @04:52PM (#8365952) Homepage
    "I can barely get my clients to understand why they need SSL"

    With SSL you get the lock picture. Without SSL you simply don't get it. Everyone wants to get it, but that takes SSL and not everybody can have SSL. Do you want brand name SSL for a low low price?
    Gotta have the lock (tm).
  • by nehril ( 115874 ) on Monday February 23, 2004 @04:52PM (#8365955)
    why can't they just double check their code?
    for the same reason cooperative multitasking went out of style: humans.

    Theoretically, a cooperative multitasking operating system is much more efficient than pre-emptive multitasking. Coop multitasking systems (like Mac OS pre-X and Novell NetWare) require each application to voluntarily give up the CPU when appropriate. That means every app gets the entire CPU to itself, yielding better cache performance and allowing the app to continue a thread until a good time to stop comes along (like waiting for input or disk or whatever). Unfortunately, that means all programs must be perfect; a bug in any one of the running programs will bring down the entire OS like a house of cards. Or if you didn't release resources just right, your app would appear to hog the entire system, and it would LOOK like you crashed everything.

    Most programmers are not perfect.

    Thus the rise of pre-emptive multitasking, where app programmers no longer get to decide when to give up the CPU; the operating system yanks your thread based on timeslices or some other mechanism outside the app's control. This means your various caches no longer have the "right" data most of the time, and maybe your thread gets yanked one instruction short of what would have been a better stopping place (maybe the next cycle was for a well-timed disk access). Some advanced chip features, like memory streaming for SIMD ops, also get trampled by pre-emptive multitasking, meaning you can no longer prefetch large chunks of data, since threading out stops all your streams (this is a problem for AltiVec programming).

    But on the whole, by acknowledging that programmers are not perfect (it only takes one bad one to ruin your system), and moving to the "wrong" solution of pre-empt multitasking, we get vastly improved stability and perceived performance. This is also why "wrong" solutions like hardware overflow protection are needed.

    A scientist would say you are right, but an engineer would say you are wrong.
  • by gosand ( 234100 ) on Monday February 23, 2004 @05:02PM (#8366054)
    Nobody knows if Intel is better, but they don't want a computer that "lacks" Intel inside. They simply guess that if it's inside, it's better than not having it inside. It is brilliant. It can't be copied or AMD looks like a "me too!" player. It can't be contested because it's just vauge enough to not claim that the machine is any better for having Intel inside, but implies that anything else is somehow inferior.

    Do you remember when the "Intel Inside" logo came out? There was no real competition. (It was the Pentium days.) There were other processors, but the Pentium pretty much blew them away. Intel didn't succeed on that logo alone; they do have a little bit of technology behind it.

    I think it is funny when people say AMD is better. When they say that, ask them why - 99% of the time it will be because it is cheaper (bang for the buck). The other 1% might do overclocking, or read Anandtech on a daily basis, or have some highly technical reason - which is essentially irrelevant to the argument. It's nearly a miracle that AMD is where they are in the processor market. The only reason is that Intel was comfortable in their position. AMD came on the scene with a comparable product at a cheaper price, and it woke Intel up real fast. They catered more to the "home enthusiast" market at just the right time.

    I have a buddy who has worked at Intel for 7 years now, and I always kid him about AMD. He works on the thermal solutions, and has access to the fab floor. There may be some advantages that Intel has over AMD in some areas (and vice versa) but if you have two well put together systems of each sitting side-by-side, the processor is pretty much a non-issue.

  • Re:what a drag (Score:2, Insightful)

    by hweimer ( 709734 ) on Monday February 23, 2004 @05:05PM (#8366095) Homepage
    All it is is an extra bit in the page table that marks whether the memory region is W^X (write xor execute). This kind of thing usually requires a bit of operating-system magic to make it work.

    I don't think it's that easy. For example, the RET instruction on x86, which is usually executed at the end of a function, reads the address of the next instruction directly from the stack. If this address has been overwritten by a buffer overflow, an attacker can jump to an arbitrary address in memory where the really nasty code is located.

    Even if the memory regions that the attacker can modify are marked as non-executable, it is still possible to call a function inside the C library. A system() call will lead to arbitrary code execution as well.
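    The vulnerable pattern being described looks roughly like this (an illustrative sketch, not code from the article; only the bounded variant is ever executed here, since actually triggering the overflow is undefined behavior):

    ```cpp
    #include <cstdio>
    #include <cstring>

    // The saved return address sits on the stack just past the locals,
    // so an unbounded copy into `buf` can run over it. This function
    // shows the bug but is deliberately never called.
    void vulnerable(const char *input) {
        char buf[16];
        strcpy(buf, input);  // no length check: input longer than 15
                             // chars spills past buf toward the saved
                             // return address that RET will load
        printf("%s\n", buf);
    }

    // Bounded variant: snprintf truncates instead of writing past `buf`,
    // so the saved return address can't be clobbered this way.
    void safer(const char *input) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", input);
        printf("%s\n", buf);
    }

    int main() {
        safer("definitely more than sixteen bytes of input");
    }
    ```

    The NX/W^X bit doesn't stop the overwrite itself; it only refuses to *execute* instructions fetched from the smashed stack, which is exactly why the return-into-libc caveat above still applies.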
  • by Yokaze ( 70883 ) on Monday February 23, 2004 @05:12PM (#8366200)
    > It's disaster after disaster waiting to happen, and there is absolutely no excuse for it. Performance is not an excuse -- we have machines running at multiple gigahertz.

    No, most desktop machines and servers run at several GHz; they can spare some cycles for most of their applications. Fine, use a bounds-checking version of the STL and don't meddle with pointers; it's not like you have to.

    But the point is, most processors aren't on the desktop and don't have the cycles or space to check every pointer. And some applications are real-time: you have just 1 ms to do X. So maybe you have to go down to assembler to get the work done.

    C++ gives you the possibility to work low-level when you need it, but lets you program high-level if you don't want to.

    > The next step is to realize that C/C++ is horribly, unbelievably broken at a fundamental level and needs to be discarded.

    The first step is to realise that a bad programmer produces broken code whatever language he/she uses. Bounds checking is no substitute for correct error checking and handling, code review, testing and debugging, whatever the language.
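    The "bounds-checking version of the STL" trade-off mentioned above is visible in the standard library itself: `vector::at()` pays for a range check on every access and throws on a bad index, while `operator[]` skips the check (out-of-range use is undefined behavior) for code that can't spare the cycles. A minimal sketch:

    ```cpp
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};
        try {
            // Checked access: throws std::out_of_range instead of
            // silently reading past the end of the buffer.
            std::cout << v.at(10) << "\n";
        } catch (const std::out_of_range &) {
            std::cout << "caught out_of_range\n";
        }
        // Unchecked access: no per-access cost, but only safe because
        // the index is known to be in range.
        std::cout << v[1] << "\n";
    }
    ```

    Neither form substitutes for the error handling and review the parent calls for; the checked form just converts a silent overflow into a loud, catchable failure.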

  • by CoolVibe ( 11466 ) on Monday February 23, 2004 @05:13PM (#8366215) Journal
    It's not a cure-all solution.

    There are other trampolines available. Merely making stack pages non-executable doesn't prevent return-into-libc exploits, for example, where you use the global offset table to jump into arbitrary code by overwriting the entry for a library call like printf(3).
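    The point can be simulated safely (invented names; a plain assignment stands in for the overwrite a real exploit would achieve via an adjacent overflow): with W^X the attacker can't run injected code, but can still redirect control into code that is *already* executable, such as libc's system() or a GOT/PLT entry.

    ```cpp
    #include <cstdio>

    int intended()  { return 0; }
    int dangerous() { return 42; }  // stands in for existing library code

    // Simulates control-flow redirection. A real attack would achieve
    // the same pointer overwrite via a buffer overflow next to `handler`.
    int dispatch(bool corrupted) {
        int (*handler)() = intended;
        if (corrupted)
            handler = dangerous;  // jump target is existing, already
                                  // executable code -- no new code pages
                                  // needed, so NX/W^X alone can't stop it
        return handler();
    }

    int main() {
        printf("%d %d\n", dispatch(false), dispatch(true));
    }
    ```

    This is why the hardware bit is a mitigation rather than a cure: it removes one trampoline (executing the stack) but leaves the others the parent lists intact.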

  • by nempo ( 325296 ) on Monday February 23, 2004 @05:17PM (#8366271)
    Well, if you have some brains up there you'd be smart enough to buy the cheaper of the two equally performing processors. If the processor is a 'non-issue' you'd be stupid to go with the more expensive alternative.

    Yes, AMD is cheaper; yes, AMD overclocks better. Personally I don't overclock, so that is a non-issue for me, but I do want maximum performance for my well-earned money and hence tend to stay away from Intel, i.e. most 'bang for the buck' sort of thing.
  • by mcbevin ( 450303 ) on Monday February 23, 2004 @05:29PM (#8366424) Homepage
    You never considered the possibility that there might just be some basis for that belief other than Intel marketing? While AMDs are not so bad now, older versions would, for example, melt if you removed the cooling (whereas Intels even back then would simply slow down).

    Also, I suspect AMD possibly suffers from the poor reputations of previous Intel competitors who truly did have unreliable, inferior products. I for one had trouble for a while remembering which of AMD and Cyrix was the one to avoid; thus, for the average consumer, choosing the always-reliable Intel makes some sense.

    AMD still needs some time to build up the reputation Intel has. If they can continue building reliable products without cutting too many corners as they have done in the past to keep up in the race against the giant, they may eventually obtain such a reputation, but such things take time.
  • by dgatwood ( 11270 ) on Monday February 23, 2004 @05:40PM (#8366542) Homepage Journal
    Wait just a second here. Do you mean to tell me that Intel and AMD still don't have no-execute flags for their page tables? Wow, I guess I should be really impressed by the foresight of Motorola and IBM, who put that feature in the PowerPC series of chips back in 1994 (beginning in the PowerPC 603).

    I'm actually surprised that there are chips out there that don't have such a feature. In a perverse way, I hope IBM has a patent on it.... :-)

  • by Loki_1929 ( 550940 ) on Monday February 23, 2004 @05:48PM (#8366648) Journal
    "they would quote us Intel to "ensure stability"."

    "I asked them to cite proof that AMD systems were unstable. They could not but implied that it was common knowledge."

    You can take this one step further - simply go through the articles I found and posted [slashdot.org] over a year ago. Show them the articles and then tell them that you cannot accept anything other than AMD quotes, in the interest of 'ensuring stability'.

  • by Anonymous Coward on Monday February 23, 2004 @06:06PM (#8366882)
    You have to understand that in the old days, these sorts of things weren't considered "bad coding practices", they were considered "Super Elite Hacker Tricks". Early PC programming was always to the metal, using every trick in the book.

    The hardware was ridiculously constrained -- compare the functionality of something like Word 4.0 (which ran on a 1MB Mac Plus) to modern word processors (which need 50MB of RAM and a Pentium II). Someone had to hack like crazy to get that stuff to work, and they were probably fully aware that it could blow up later.
  • by Anonymous Coward on Monday February 23, 2004 @07:12PM (#8367583)
    Like IBM with OS/2, they have the better product. They now just need to convince ordinary consumers that this is the case. For some reason, people love that little Intel jingle.

    That little jingle costs Intel billions of dollars to embed into your brain - in advertising and marketing dollars that AMD just does not have. Good branding is a huge advantage that is time tested and proven in business for decades.

    What kind of cola do you drink? Odds are it's Coke or Pepsi even though there are thousands of cheaper, but similar products.

    The other thing that AMD needs is to convince tier 1 box makers to support them - mostly they need to convince Dell. This won't happen any time soon for several obvious reasons.

    Delivering high volume CPUs is not AMD's strength. They should focus on their niche strengths of high performance in workstations and servers. Basically - their opteron strategy. Desktops are just too brutal a business for them to remain competitive given their limited resources.
  • by freeweed ( 309734 ) on Monday February 23, 2004 @07:35PM (#8367788)
    AMDs are not so bad now, older versions would for example melt if you removed the cooling

    I've always been of the opinion that if you're in the habit of removing your heatsink from a running processor, you have deeper problems than worrying about whether or not it will melt. Tom's sure managed to keep a lot of people I know from buying AMD, which is pretty funny considering how much cooler AMD chips run these days compared to Intel.
  • by RzUpAnmsCwrds ( 262647 ) on Monday February 23, 2004 @08:17PM (#8368183)
    "versions would for example melt if you removed the cooling"

    And, why, exactly, would you remove the heatsink from a CPU while it is running?

    Moreover, this was not a flaw in the Athlon. The Athlon, since Athlon XP, has contained a thermal diode to enable safe thermal shutdown. The motherboard that Tom's Hardware used did not have the thermal protection circuitry.

    Losing a CPU to "thermal death" was a rare occurrence. Most CPUs that experienced "thermal death" had improperly installed thermal solutions (e.g. the clip was not installed properly). A fan failure or failure to use thermal compound (e.g. a pad or grease) would likely not cause damage to the CPU, even without thermal protection. Only a lack of die-to-heatsink contact (e.g. with an improperly installed shim or a poorly installed heatsink that detached during movement) would likely cause the Athlon to experience "thermal death" as shown in the Tom's Hardware video.

    "whereas Intels even back then would simply slow down"

    The Tom's Hardware Guide video was a fake. The CPU temperature never exceeded 30C (look at the thermal probe). Thermal throttle-down on the P4 occurs when the CPU hits 85C. And, yes, the system will crash or simply become completely unusable if the heatsink is removed.

    "without cutting too many corners as they have done in the past"

    Right. Intel has never cut corners, particularly not with major logic bugs in the Pentium, PII, PIII, P4, and Itanium.

    Look, CPUs are not flawless. But the CPU thermal issue you speak of really is not a huge issue. With a properly installed heatsink (like the heatsinks on a computer you would buy from HP or eMachines), it never was an issue. And today every new motherboard has thermal protection.

    Tom's Hardware did a disservice to the community and to AMD by taking a relatively minor issue that affected a small number of people and blowing it out of proportion to a huge flaw.

    If you read Tom's Hardware for as long as I have, you begin to notice a pattern: Tom is an egotistic nut. He posted one editorial stating that the performance war between Intel and AMD was bad for consumers (hmmm... my $90 Athlon XP 2600+ would seem to refute that, as would sub-$200 P4 3.0GHz CPUs). He also says that people buying AMD64 systems are giving AMD a "no-interest loan" because of the lack of availability of AMD64 operating systems and applications. Apparently, no one told Tom that the Athlon 64 3000+ is *cheaper* than its similarly performing P4 counterpart (in IA-32 applications). And, apparently, no one told Tom that Intel has adopted the same instruction set for its Pentium 4 based 64-bit systems.

    I have lost respect for Tom and his publication. Between his hate-filled articles filled with vague statements and mistruths, his constant bashing of AMD (he compared the Athlon XP 3400+, a $450 CPU, to the P4 Extreme Edition, a $900 CPU, and decreed the P4EE the victor because it was marginally faster in 3/4 of the tests), and his suing of other tech websites, Tom has struck out. I only hope that [H]ardOCP doesn't suffer the same fate.
  • by Sycraft-fu ( 314770 ) on Monday February 23, 2004 @08:54PM (#8368605)
    Hey, they fall off sometimes, or a fan fails, or the AC fails. Shit happens. Happened to a friend of mine; fried one of his CPUs, though the board was OK, thankfully.

    Also, AMD's much larger problem was motherboards. VIA chipsets used to suck hard, and AMD's own were almost as bad. I remember when the Athlons were fairly new it was time for me to upgrade, so I decided to get one based on price. I got a 700MHz slot Athlon and a top-of-the-line Abit board with a VIA chipset. I then proceeded to fight with my system for two weeks. I could not make it work in either 98 or 2000. It just would not play nice with my GeForce or my pro audio card. I finally sent it back, got a 440BX and an Intel P3 700, which I then used for like 2 years.

    Now I know the situation is completely different today, but that sort of thing sticks with many companies and OEMs. Trust is a thing that is easy to lose, hard to regain. Not fair, but that's how the world works.

    Only recently have I started recommending ATI video cards. Why? Well, because I supported ATIs in many situations and their drivers were trash. 2D was fine, but try any 3D and you were asking for BSODs. That's now changed; their drivers are in every way as solid as nVidia's and their hardware is better. But it took time for me to trust that. I had to use the cards and see them used in a number of different environments before I was ready to declare them stable enough for use in production systems.

    Also, the PR numbers aren't helping. Many people see them as dishonest, especially since they haven't been consistent (some of the more recent chips haven't performed at the level their PR rating would imply). This again hurts credibility in the eyes of some people.

    It's not fair per se, but it is the way of the world. You burn me, it takes time for me to trust you won't do it again.
  • by Loki_1929 ( 550940 ) on Tuesday February 24, 2004 @01:47PM (#8375188) Journal
    "I have lost respect for Tom and his publication."

    I've never had much respect for Tom; he's an egomaniac. Also, the website sold out to Intel and a number of other advertisers a few years ago (roughly 3). You'll notice a fairly rapid change in the articles if you look in the archives - that is, if you can find articles that haven't been re-edited or removed. They also have a habit of removing the author title or changing it when they decide they don't like the original author anymore.

    Tom's is now a joke. It's a site offering dumbed-down, consumer-grade articles with rigged benchmarks and conclusions that are non sequiturs. Attempts to hide the utter bias have faded with time, with AMD supporters now being openly labeled as just a bunch of idiotic, delusional fanboys by some of the staff (Hi Omid!). The benchmarks are done with multiple driver versions, then vetted such that the best possible results for key advertisers can be shown. Anandtech still has a good deal of respect in the community, as well as from me (I'm more important! ;) ). Probably the best site I've run across is Ace's [aceshardware.com], which offers incredibly in-depth articles to those willing to learn a thing or two.

    Here's an excerpt from a recent column on Tom's [tomshardware.com]:

    "There is nothing finer than raising the hackles of delusional AMD lovers. However, today I do so with a heavy heart. This is no time to take aim at the pompous, self-righteous head-in-the-sand-ostriches of the alternative chip lifestyle. One must embrace them, hug them and wipe away their tears.

    They are the freaks of low-cost computing, the poor, downtrodden users of products that never seem to be able to match PR numbers to actual performance, now almost beaten into marginality for all time.

    Of course, they won't admit this. They will howl at the moon, scream obscenities at nice, unassuming columnists with no axe to grind"

