
AMD Athlon MP 1800+ Processor Review 214

Lars Olsen writes: " has posted a review of the new AMD Athlon MP 1800+ processor -- a big speed jump for the dual Athlon processor family, with the new processor running at 1.53GHz. There are also 1600+ and 1500+ Athlon MPs available right away at stores around the world. Dual AMD goodness is now running just as fast as its desktop counterpart! Here's a quote: 'Those of you who want to jump into the dual processing Athlon world will finally be able to do so with the knowledge that your processors are the top speed that the Athlon family has to offer. And for anyone who already has a Tyan Thunder or Tiger MP board and a pair of Athlon MP processors, you may just want to pop a couple of these new Athlon MP 1800+ CPUs in your system to boost performance.'" Some of the comments following yesterday's "dream system" article addressed dual-Athlon complications, so make sure you read before you buy. Update: 10/15 15:14 GMT by T: Check below for 's take on this chip, and Athlon MP systems in general as well.

Augustus writes " takes a look at the Athlon MP platform under Linux, and the newly released Athlon MP 1800+ is included. This article covers not only the technology and performance of the AMD-760 MP chipset and the Tyan Thunder K7 motherboard, but also why anyone would consider a multi-processor system."

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • fingers... (Score:2, Funny)

    by apathy21 ( 468771 )

    Well, at least I can still count on my fingers how many GHz we have achieved. I suppose when/if these quantum-based computers come about (on a large scale), I'll have to have an infinite number of fingers all representing the possible states of the processor :)
    • by Figaro ( 20471 )

      It's not worth it.

      Please stop chopping off your fingers for the decimal points.

    • Re:fingers... (Score:1, Informative)

      by Anonymous Coward
      You are probably better off reading from the site AMDMB copied, AMDZone. They have a review with more benchmarks and less fluff.
  • I'm running a PII 400MHz at home and it runs everything I could possibly need. My mother- and father-in-law are on a P166 and wondered if they should upgrade. I said no, as they really don't need it; they just do basic database, spreadsheet and word-processing work.

    All too often, developers use the increased memory and processor speed to write worse implementations, or to create pointless bloatware. I know this will continue no matter what I say, but at the end of the day, who really needs this much power? QuakeIV players? QuakeV? QuakeIII runs fine with my upgraded graphics card and top-of-the-line sound card; the processor does bugger all.

    Moore's law is great; it means computers can do more and more. But for the home market it's just silly: 90% of people would be fine not changing their machine for 4 years, yet they are forced to upgrade by market perception.

    Faster this, faster that.... but never ever actually "better", "more reliable" or "stable".

    Hardware is the excuse for bloatware. It's not the H/W engineers' fault, but it isn't an excuse to use....

    (and yes this is partly a dig at the huge swap requirements on the 2.4 kernel)
    • by cfriesen ( 256918 ) on Monday October 15, 2001 @10:23AM (#2430639)
      A few points:

      Have you ever
      a) done audio editing
      b) done video editing
      c) applied a filter to a 50MB+ image
      d) compiled X
      e) done any ray-tracing
      etc, etc.

      Any of these things can suck up vast amounts of horsepower and beg for more.

      Also, 2.4 is getting somewhat more sane in recent releases.

      • Yes, those things do beg for this kind of horsepower, but the problem is that CPUs like these (although not MP) are being marketed at the average user, who doesn't know better. Microsoft, in turn, sees this, adds more and more useless, cycle-eating functions to Word, and perpetuates the cycle.
      • You forgot "solving systems of 50,000 equations." People always bring that one up, as unrealistic as it is.
        • Don't you need something like this to work out what shower curtains do?

        • You forgot "solving systems of 50,000 equations." People always bring that one up, as unrealistic as it is.

          What is so unrealistic about it? Have you ever tried solving an eigenvalue problem for a 10000x10000 matrix in MATLAB? It takes about an hour on a dual PIII 800MHz with 1GB of RAM.

          Things like that come up a lot more often than one would expect.
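For a sense of scale, here is a minimal NumPy sketch of the same kind of dense symmetric eigenvalue solve (a Python stand-in for the MATLAB run described above; the 500x500 size is just to keep it quick, since the cost grows roughly as O(n^3), so 10000x10000 is on the order of 8000x this much work):

```python
import time
import numpy as np

n = 500  # small stand-in for the 10000x10000 case in the comment
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
a = (a + a.T) / 2  # symmetrize so eigh applies

t0 = time.perf_counter()
w = np.linalg.eigh(a)[0]  # eigenvalues only, returned in ascending order
elapsed = time.perf_counter() - t0

print(len(w), f"{elapsed:.3f}s")
```

Scaling the measured time by (10000/500)^3 gives a rough feel for why a dual PIII needed an hour.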

          • In short, no, nor will 99.99% of the world's serious computer users, ie. those for whom a computer is a tool to be used rather than an end in itself. The 0.01% who do will likely not be doing it on a regular basis (eg. a couple of semesters of maths lectures). The 0.01% of those who do it on a regular basis (eg. maths professors/postgrads) can go out and get a multi-processor mobo and umpty-GHz CPUs - or more likely, will get their uni department or company to buy it.

            In other words, no-one needs this unless they (a) need to compile mega-programs or (b) do heavy maths work. So no home user and most business users have no need.

            I speak as someone who switched from a P233 to a Duron 800 only bcos the mobo broke - I refused to spend £80 on a new Pentium mobo when £200 would get a complete new system!

            • It is in the best interests of science to have fast processors marketed to Joe Six-pack even if he doesn't need them. This causes fast processors to become cheaper sooner, which allows scientists to run better simulations.

              my bias:
              I keep sixteen 1GHz PIIIs going all the time at 100% running sims.

            • Uh, you seem to have no idea what numerical matrix operations are useful for. Maths professors rarely need to do such work, as they're more interested in theoretical aspects of maths (there are some exceptions where large numerical simulations are useful in getting an idea of how systems should behave). The vast majority of such problems are done by people who use the computer as a tool - meteorologists, engineers, etc.
        • You forgot "solving systems of 50,000 equations." People always bring that one up, as unrealistic as it is.

          (Note: I'm not talking about home use here.) Actually, 50,000 equations is a rather small system. Any idea what weather prediction looks like? Something like 10 equations per grid point, with a grid of something like 200x200x50 = 2,000,000 points. So you end up with a 20-million-equation system. Much CAD software (eg finite element simulations) also needs to solve *huge* systems. The faster the computer, the more precise the simulation (because you can afford more grid points).
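The arithmetic above can be checked in a few lines (Python, simply restating the comment's numbers; the dense-storage figure at the end is why such solvers are sparse and iterative in practice):

```python
# Back-of-envelope size of the weather-model system described above.
nx, ny, nz = 200, 200, 50        # grid points per dimension
eqs_per_point = 10               # equations per grid point
grid_points = nx * ny * nz
unknowns = grid_points * eqs_per_point

# A dense matrix of 64-bit floats for this system would need this many bytes:
dense_bytes = unknowns ** 2 * 8

print(grid_points, unknowns)     # 2000000 20000000
```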
          • Yeah, it's basically about solving partial differential equations defined over large areas. The finite element method is basically a method for solving PDEs. All the following areas use numerical methods for solving the PDEs and thus benefit from cheaper, faster computers:

            - Fluid mechanics
            - Stress analysis

            Well, those are the ones I know about, but there must be a lot more. And it's not just scientists who use these; there must be thousands of engineers working in these fields daily.
        • Engineers do this all the time. I am a mechanical engineer, and the analysis software packages I use to assist in design and analysis of my creations do just this: solve massive systems of equations.

          This is very common and very useful.

          Also, if I had a PC with 100 times the memory and speed, I could still bring it to its knees. As it is, I have to simplify and granulate my models to make them fit the computing power I have.

          How do you think they predict the weather? Design cars and planes? Do thermal analysis? Do vibration analysis? Do electromagnetic analysis? Do displacement/stress analysis? Do computational fluid dynamics? Do transient analysis of all the above?
      • a) Yup... the soundcard does large parts of this, the disk is fast

        b) see a) lots of this after snowboarding holidays. Mostly done directly from the video camera over the firewire connection

        c) Yup... now that is slow, but I've got lots of memory so it's not that bad... in fact, given that I've got 3/4GB of RAM, it's probably as fast as memory-limited machines with a fast CPU...

        d) Yup... hell done that on _much_ slower machines.

        e) Yup...

        The basic one here is that I don't work 100% of the time on a single task. Waiting for renders is fine (I've always tended to do them as overnight jobs anyway).

        All of the above are very, very possible on a PII 400MHz; just ensure it's got a good soundcard, a good graphics card, a fast disk, and lots of memory.

        Most of those things suck up memory rather than CPU, and it's the huge amount of swapping that causes them to slow down.
        • by Anonymous Coward
          That doesn't change the facts, though. If you do have enough memory, you will benefit from a faster CPU. And I don't know what type of video/audio editing you've done; maybe it's just assembling clips. But when you do much more than that, the processor needs to render new output frames from the inputs for stuff like transitions, color adjustments, overlays and combinations. Ideally, you would want to be able to do this faster than real-time, and to have the full power of digital computerized video editing for use on-the-air at reasonable cost. Similarly, the sound card doesn't do much other than play back the digitally encoded streams you have. If you want to change their contents, it's again a CPU job.

          I don't see why people disparage using faster processors for legitimate applications. I've done video editing, and no matter what CPU I do it on, I wish I had more. And no, it wasn't disk I/O bound, because I had no trouble playing the input and output videos at full-speed.
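As a sketch of why a transition is CPU work rather than sound-card or disk work, here is a minimal crossfade between two frames (Python/NumPy; the frame size and the linear blend are illustrative, not any particular editor's pipeline):

```python
import numpy as np

def crossfade(frame_a, frame_b, t):
    """Blend two video frames; t=0 gives frame_a, t=1 gives frame_b."""
    mixed = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mixed.astype(np.uint8)

# One 720x480 RGB frame pair; a real edit renders ~30 of these per second,
# which is why faster-than-real-time rendering eats CPU.
a = np.zeros((480, 720, 3), dtype=np.uint8)
b = np.full((480, 720, 3), 200, dtype=np.uint8)
mid = crossfade(a, b, 0.5)
print(mid[0, 0])  # [100 100 100]
```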
      • A few points:

        The fellow's point was that 99.98% of Joe Sixpacks out there *don't* need all the power that's being hyped. Just because you like running Emboss on your hi-res porn images doesn't mean that some college student in Albuquerque, a secretary in Toledo, or your Grandma needs to do it.

        If you need 2GHz, get 2GHz. If not, don't do it just because the salesman told you you need it.

      • I hear you.

        My grandmother was compiling X the other day on her P166 and she's like, "Goddammit! Git me one of those Amdy Altheron processors!"


    • I missed the part in this article that said everyone should have one in their home.

      High-speed CPUs are very useful to our clients who run large database implementations with voice-recognition data-entry systems, FYI.
    • Computer games are the ONLY applications that tax a home user's cutting-edge machine... At the moment, systems are a little ahead of gaming technology, but in a few months that won't be the case. Just because your parents don't play Dark Age of Camelot or AquaNox, don't assume Joe User doesn't want to.

        • Computer games are the ONLY applications that tax a home user's cutting-edge machine... At the moment, systems are a little ahead of gaming technology, but in a few months that won't be the case. Just because your parents don't play Dark Age of Camelot or AquaNox, don't assume Joe User doesn't want to.

        But the kicker is that these games really don't need such horsepower. I'm willing to bet that if there were any pressure to get any of these games running on a more resource constrained system, like a game console, then lots of unnecessary internal fat would be trimmed right away. But there's no pressure to do so otherwise. And even if a game that could run just fine on a PII 400 requires a 1GHz processor, certain people seem to _like_ the justification for upgrading.
        • by Edward Kmett ( 123105 ) on Monday October 15, 2001 @11:00AM (#2430855) Homepage
          <RANT>A console is a very different environment. You can tune exactly for the hardware because there will be no variances. A PC game has to allow for 30 different graphics cards, using APIs that supposedly make the different cards look the same to you but fail miserably. By the time you get done tweaking for the current morass of cards, a new generation of them is present with their own damn bugs. In the console world you deal with 2-3 environments, IF you are allowed to port/it is practical to port given the current state of exclusive games. Also, if you've ever developed for a console, it's very different: with a PC you have a lot of freedom to build how you want and what you want, while in the console world you pretty much have to build around the hardware. This means you are constrained to build the same kind of engine for most every game you build on that console. If you don't, you are just looking for different ways to cull the scene down to fit into the same minuscule space.

          The two environments are very different, and most of that fat can't be trimmed by wishing it away or blaming it on programmers.</RANT>

          As for bloatware, start modelling cloth, hair, IK, bump maps, and the hardware gets used again. The reason the games aren't doing it now is because they want the comfortable sales window.

          Honestly, pushing ultra-high-end features that cut your market to 4% of what it could be isn't a big selling point - good luck convincing your publisher to bring the game to market - and trying to build an engine that scales between low- and high-end aggravates the PC-vs-console bloat problem even more.
        • But the kicker is that these games really don't need such horsepower.

          You're right: they could cut their polygon count down to a quarter of what it is now, precache almost everything (quadrupling the amount of hard disk space used) and probably use 50% of the CPU they use now. Game developers really are into severely optimizing their code, especially those programmers dealing with graphics; they're usually trying to find ways to optimize every single action.

          On the other hand, as others have pointed out, the only way to really optimize the hell out of something is to write it in assembler. That makes any large codebase pretty much unusable.

          The biggest thing game developers could do right now to improve game performance is to use really excellent multi-res in a game. Multi-res is a process that, when used to its fullest, lets you start with very high-polygon models for everything; the game engine then reduces the polygon count one vertex at a time, in some cases all the way down to a single polygon. When done right, this lets you draw amazingly complex scenes without slowdown; the computer can tell more or less what you're looking at and decide what needs lots of polys.

          Unfortunately, even those games which are using multires are using a low-rent version where they pre-reduce the vertex count, so you still "pop" from model to model. It's getting better, though.

          The best thing about multi-res, of course, is that you don't have to precompute things like BSP-based schemes do, and that it will make the best use of your graphics hardware while still running well and looking good on lower-end hardware. On the other hand, your graphics card had better handle lighting pretty damned well. Since you can get a GeForce2 MX 400 card for less than $100 (or a GF2 for about $150), that's really not much of an issue these days.
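A sketch of the "low-rent" discrete-LOD version described above (Python; the distance thresholds and polygon counts are made-up numbers, and true continuous multi-res would collapse vertices one at a time instead of snapping between precomputed levels):

```python
def pick_lod(distance, lod_polycounts=(10000, 2500, 600, 150)):
    """Pick a precomputed detail level by view distance -- the discrete
    scheme where models visibly 'pop' from one level to the next."""
    thresholds = (10.0, 30.0, 80.0)  # hypothetical view distances
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return lod_polycounts[level]
    return lod_polycounts[-1]  # beyond the last threshold: cheapest model

print(pick_lod(5.0), pick_lod(50.0), pick_lod(200.0))  # 10000 600 150
```

Continuous multi-res replaces the fixed table with a per-vertex collapse ordering, so detail degrades smoothly instead of popping.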

          • A console is a very different environment. You can tune exactly for the hardware because there will be no variances.

            I agree in principle, but that's not it. It isn't polycount either, as someone else said. At the moment, the average PS2 game has more polygons than the average PC game (that's because if you assume hardware T&L on the PC then you have a severely limited market; lots and lots of mass market PCs still ship with the equivalent of a Voodoo 1 or worse, go to Dell's site if you don't believe me).

            I'm talking about much larger issues. For example, on the PC you come up with a file format for something, then just keep using it because it works. With a little work, it often turns out that a 20MB file of world geometry can be knocked down to 5MB, just because there's so much garbage in there and no one ever thought about removing it. Or maybe there are thousands of keyframes of animation that make no visual difference and can be removed. Or some trifling module allocates 8MB at load time and keeps it around, even though it isn't actually used. Or maybe there's poor collision detection code that does way too much work and could be made to run 4x faster. These kinds of things are _common_. I'm a game developer; I've been there.
      • Except that the high-end game curve has not been keeping up with the processor speed curve. Back in the day, the top-of-the-line processor was needed to run the game: you needed a 386/25 or a 386/33 to run games that came out within a month or so of the processor's shipping date. Now the specs require a PII 233 or a PII 400 or equivalent. I've yet to see a game whose box requires a PIII/1GHz. Have you? They do require some sort of 3D graphics card, but technology-wise, games aim for the greatest penetration of market share, which is at about 300MHz-600MHz. That is unlikely to change, since most of these people won't even think about upgrading any time soon, even if the newer computers make the internet "go faster!"
      • >Computer games are the ONLY applications that tax a home-users cutting-edge machine...

        Wrong. Video editing. Converting 20 minutes of video to MPEG2 takes 82 minutes with a 450MHz celery, and 49 minutes with an 850 celery. I still have to convert some 50 8mm video tapes to MPEG2.

    • Take a look at MS Word from 4 or 5 years ago. It ran perfectly well with all the formatting options, spell checker, inline clip art -- all on a P100 or even less. Now what do we have? Everything above, but with bloat like the Paperclip, menus that go 5 layers deep for commands that nobody wants or needs, functions that are duplicated in 3 different places, and a GUI that gobbles up as much RAM as my system throws at it, and doesn't let go.
    • by FortKnox ( 169099 ) on Monday October 15, 2001 @10:31AM (#2430691) Homepage Journal
      Wow, you're right.
      New, faster technology is being brought out just to make programmers dumber. It's an evil conspiracy against us all!

      Seriously, though, what is your definition of "bloatware"? Let's say I'm writing Quake4. I want to use C++ and lotsa nice OOD that's easier to write, easier to read, easier to expand, easier to debug, and easier to maintain.
      Is that "bloatware"?
      Sure, I coulda used assembly on the whole thing and it woulda been efficient and fast! You wouldn't need the super hardware!

      Hope you don't want to mod it, or me to fix any bugs, though.

      Maybe us developers like faster systems so we can implement software with better techniques to make technology grow? Sure it requires a little more hardware, but I wouldn't call it some evil conspiracy.

      It doesn't matter what technology is out there, there will always be crap (bloatware).

      BTW - You might want to buy this shirt.

      • Things like SOAP are a classic example. CORBA is a perfect way to get computers communicating: it uses IDL to describe the services, works on any platform, and uses a binary protocol which can be tunneled via HTTP if required.

        SOAP is an ASCII-based RPC mechanism; when was that ever a good idea? So you can _read_ computer-to-computer transactions? This is possible because we have cycles to burn, so doing two (or more) sets of textual conversion isn't seen as a bad thing(tm).

        Outlook, Netscape 6, .Net all manage to turn computers that previously did useful work into slow chugging behemoths. As another example consider this....

        XEmacs used to be considered the world's largest piece of bloatware... it's 4.2MB, and it's got email, news, a web browser, an editor, a Mayan calendar and the kitchen sink in there....

        Mozilla appears to be at least 16MB (IE was 100MB when I installed everything!). Is it 4 times as functional, 4 times as reliable? Nope.
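The ASCII-vs-binary overhead being complained about here is easy to illustrate (Python; the XML snippet is a made-up SOAP-ish payload, and `struct.pack` stands in for a CORBA-style binary encoding):

```python
import struct

# The same RPC argument -- a 32-bit int and a 32-bit float -- both ways.
value_id, price = 123456, 19.99

binary = struct.pack("!if", value_id, price)  # 8 bytes, network byte order
ascii_xml = (
    "<m:SetPrice xmlns:m='urn:shop'>"
    f"<id>{value_id}</id><price>{price}</price>"
    "</m:SetPrice>"
)

print(len(binary), len(ascii_xml))  # 8 bytes vs. roughly 10x that as XML
```

On top of the size difference, both ends must parse and serialize the text, which is where the "cycles to burn" go.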
        • Assembly was a perfectly good way to program, why did anyone need C??

          C was fast and efficient, why did anyone need C++?

          They just burn extra clock cycles!

          Ugh, under your rules, innovation would be at a standstill.

            • These steps have nothing to do with _now_. Yes, we needed machines to go from 1Hz to 400MHz or so; otherwise it was a pain in the arse. But the last 3 years have seen insanely powerful machines, and not the sort of increases in quality that could be expected.

              And no-one EVER needed C++; it's a HORRIBLE language :)

            LISP, Smalltalk now you're talking :)
            • I think we're seeing each other's points, so I'm not butting heads anymore.

              I, personally, have an Athlon 800, I'm a big gamer, and I'm perfectly happy with the machine, not upgrading it for at least a year...

              But I'm also a developer that believes in good design and good design and good desi.... etc... and good coding techniques. Even if it sacrifices memory and horsepower.

              And C++ has its ups and downs, as does any other language ;-P
      • This is just IMHO.

        If Quake4 is released w/ any bugs, runs slowly on decent hardware (I consider a 400MHz computer decent), and is fucking HUGE (the minimum req is ridiculously high), then I will be sorely disappointed.

        If you need to write the goddamn thing in assembly to make it run fast, do it. I am sick and tired of "great" games being released that are frickin' huge and slow and require a dual Athlon to run.

        I don't care if I can mod it, I don't care if you can debug it (there shouldn't be that many bugs in the first place for how much it costs), and I certainly don't care if you think it should be easy for you to program.

        Freeware is one thing. A seriously high-end game should run fast and not need a dual athlon.

        If Quake4 is released it better play like Q1, or there will be yet another version that I won't play ;)

        Just my worthless .02
        • Isn't this _always_ the case, though? I remember trying out Doom on a 386 or so. Forget this! Then I used a Pentium 75. A world of difference. I thought "how could software ever need more than this?" Then Quake came out. I could only play at 320x200 with my slow P75 and ATI Mach64 card. Once I got a K6-300, Quake ran awesome (even if a tad outdated). I missed the boat on Q2, but once I got Quake3 it happened again: I had purchased an nVidia TNT card which was too slow for Q3. Now I have a GeForce 2 GTS which is great (plus an AMD Athlon 650). All I can say is, w/ Quake4, Doom3 and Return to Castle Wolfenstein coming out... be prepared (not to mention whatever Valve Software or Epic Games comes out with).

        • But do you care when you get it? If they can use OOD tools to release the game in a year instead of in 10 (writing it all in assembly, tuning it for all possible hardware it may run on), then I'd rather they do that, and I'll upgrade my machine to run it.

          "But quake ran fine on my PII!" - then run Quake.

          "They should make this new game run fast on my 4 year old computer." No, you should buy (or write) games that run fast on your 4 year old computer (try 4 year old games). I want games that are released in my lifetime with lots of features and visual effects - so I get hardware that can run them.

          And if Quake4 played like Quake1, why would they make Quake4? Especially if it ran the same on the same hardware? I think you're a sales demographic ID can afford to lose.
        • Quake is about the only program I'll excuse for requiring a fast CPU and a ton of RAM. Every new version has had two to three times the number of polygons on-screen at once. How do you expect to do that much more without requiring a faster CPU?

          Until recently, video cards only drew the scene you describe; they couldn't even transform and light (T&L) the scene, so the CPU had to do it all. (Transform means taking the level, clipping out bits you can't see and bits that are occluded, and then transforming what's left to fit the screen in the proper perspective. Lighting then takes that plus a list of all the lights and calculates which walls are being lit.)

          If you want to do a few hundred thousand floating point calculations to draw the scene you're going to need a very fast CPU to do it many times in a second.

          Quake is a graphical game, it's doing exactly what it says on the box. Now, MS Office, that's bloat.
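The "transform" step described above can be sketched in a few lines (Python; a toy perspective divide only, not a real pipeline with matrices, clipping, or occlusion):

```python
def project(point, viewer_distance=2.0):
    """Crude perspective transform: scale a 3D point's x and y by its
    depth so that deeper points land closer to the screen's center."""
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)

# A point twice as deep projects proportionally smaller.
print(project((1.0, 1.0, 0.0)), project((1.0, 1.0, 2.0)))  # (1.0, 1.0) (0.5, 0.5)
```

Doing this (plus clipping and lighting) for hundreds of thousands of vertices, many times per second, is the floating-point load the comment describes.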
      • I want to use C++ and lotsa nice OOD that's easier to write, easier to read, easier to expand, easier to debug, and easier to maintain.

        In theory, you should be able to write such classes so that you can define one flag and the debug stuff will compile away to nothing (or just a few extra pointers).

        So the developers need good machines, everyone else doesn't.

        Except some companies are shipping debug builds as their final product. I'm not sure why. (Black & White, for example, includes the debug mfc & msvcrt dlls.)
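Python has a direct analogue of the "one flag and the debug stuff compiles away" idea: code guarded by `__debug__` is stripped entirely when the interpreter runs with `-O`. A minimal sketch (the `transfer` function and its checks are made up for illustration):

```python
def transfer(amount, balance):
    if __debug__:
        # These checks vanish completely under `python -O`, analogous to
        # compiling the debug build's assertions away with one flag.
        assert amount >= 0, "negative transfer"
        assert amount <= balance, "overdraft"
    return balance - amount

print(transfer(30, 100))  # 70
```

Shipping a "debug build" in Python terms would be running production code without `-O`: the checks still execute on every call.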

    • This lets traders value financial products faster so they can decide whether to act or not on a price.

      OTOH, if the traders bothered to get their option pricing models written in a decent computer language rather than VBA, then yes, maybe they could run on a 256MHz P2.

      Unfortunately, the banks are firing a lot of their IT staff because, frankly, throwing hardware at the problem is cheaper than writing the stuff properly.

    • Well, a pair of spanky new Athlon MP 1800+s might help when it gets slashdotted, right?

      shut up man
    • Hardware is the excuse for bloatware, it's not the H/W engineers' fault but it isn't an excuse to use....

      No. Good hardware is never an "excuse" for writing bloatware. Most of the time, when you refer to programs as bloatware, it's not that the programmers intentionally write "bad code"; it's the development environment and its associated overheads that cause the bloat. Besides, most bloatware has lots of features that you may not need, but others do.

      All said and done, I don't think we are doing too badly as far as bloat is concerned... and any bloat that exists is more a reflection of the programming methodologies being used and their limitations as we scale.

    • by Carnage4Life ( 106069 ) on Monday October 15, 2001 @11:20AM (#2430974) Homepage Journal
      I am constantly amazed by people who claim that faster hardware leads to bad code, as if we've been living in a Golden Age of quality code for the past few decades.

      With current hardware, people are still writing a lot of code in C and C++ for performance reasons, which has led to buffer overflows, segfaults, core dumps, general protection faults, and blue screens becoming generally accepted aspects of computer programming. Now that hardware is finally becoming fast enough, maybe we can wean ourselves off C & C++ and move over to writing apps in Java or even C#, instead of still dealing with the same issues that were solvable problems 20 years ago. Programmers have shown that it is practically impossible to deliver significantly problem-free C/C++ code in a decent timeframe, while programming environments like Java have shown the opposite. Once hardware creeps up enough, we can rid ourselves of the problems of C & C++, once the performance gains are not worth the number of bugs one has to deal with; this is already happening in lots of server applications.

      Also, once hardware creeps up enough, maybe some of the stuff that has been in research labs for the past 20 years can finally see some use. For instance, microkernels are generally seen as a superior way to design an OS but have had difficulty taking hold for performance reasons (although Windows NT is based on a µ-kernel architecture and MacOS X is also built on the Mach µ-kernel); that will change once hardware advances make the performance difference acceptable.

      A.I. built into applications as well as the OS is another place where hardware performance and memory availability would play a big part in helping things come to fruition.

      How about voice recognition and face recognition being built into the applications you use?

      How about bringing virtual reality to the masses?

      Or do you think that a 1 GHz CPU and 128 MBs of RAM is all the power a computer user will ever need?
      • by Grab ( 126025 )
        This is a problem unrelated to language. It's surely possible to write good code in C, or C++, or Java, or C#. The problem is that people don't, and won't, regardless of the language.

        Read any software QA textbook and you'll find they all agree (and experience tells you the same). How do you learn to code? It's not by being taught; it's by hacking away in a dark room somewhere. Individual coders/engineers may be incredibly skilled, but the experience doesn't get passed on, so the next generation of engineers makes the same mistakes as the last one! Personally, I split software developers into "hackers" and "engineers".

        The "hacker", when given a vague problem to solve, sits down on his own and bashes out a piece of code without reference to requirements clarification, design documents, etc. It may even work - but it will be an unmaintainable nightmare, and if it doesn't work first time (or if it works sporadically) then it's over to printf and the debugger for months. Documentation, where it exists, will be written post-facto, and you'll be lucky if it explains the code properly. No-one else will be able to rework the code, and the hacker himself may not remember how it worked 6 months later!

        An "engineer", OTOH, spends most of their time working in Word and a CASE package working out what they want to do and how they're going to achieve it, and runs his ideas past someone else to see whether a fresh pair of eyes can spot anything wrong. By the time the engineer goes for his favourite text editor, the problem's most of the way solved, and any bugs can be found by comparing design against code (ie. peer review). Any future changes are simple to include, as the design explains how everything works in sufficient clarity that anyone can pick it up and rework it.

        A really good engineer (and I'm not one, yet :-) can distance himself from his own work enough to review it himself to make sure that a new reader can follow it easily. This differs from a hacker in the same way that a solo round-the-world sailor differs from the nutter who sets off across the Atlantic on a boat he bought at a garage sale: the former starts off knowing that there are risks, but has the experience to avoid or minimise them; the latter sets off not knowing that there are any risks, and only finds out when he hits the rocks. :-) An engineer doing RAD may well have a few trial hacks at the problem to see what works - but the difference is that the final result will not be constrained by these, ie. the experiments will likely be thrown away so that the final version is not cluttered with legacy crap from when the problem wasn't understood properly.

        I've not run Netscape 6 for more than a few hours total, and it's already crashed on me more than once. Java is no magic bullet. Sure, there are some ways C will let you kill things that Java doesn't let you do. But coding standards such as MISRA define "safe" subsets of C, and by following them you will minimise the risks. Is it better to be coding in C, knowing how to avoid the problems, or coding in Java without knowing about any pitfalls? And as for timescales, Netscape are hardly a shining example, are they? :-) I'm not saying that Java is bad and C is the one true way; I'm just saying that more layers of indirection and "slower" code do not necessarily make it more reliable. What makes it more reliable is good design, and that is something you have to learn, not something you're born with.

        For a typical user running typical productivity software, a 300MHz CPU and 128MB of RAM is all they'll ever need. More power will only be required for a new "breed" of programmes - maybe the Metaverse, maybe not. But your typical home computer user will not require any more processing power until a new killer app comes along. OfficeXP is not that killer app.


    • >(and yes this is partly a dig at the huge swap requirements on the 2.4 kernel)

      Linux 2.4.10 doesn't have the huge swap requirements of the older kernels. I went from using 500MB of swap per node in my cluster to using 50MB of swap running a CFD code by upgrading the kernels to 2.4.10 (512MB of memory per node).

    • If I ever have to wait for my machine to do something, it's too slow. I don't care if it's idle the majority of the time, I want it to be insanely responsive when I'm in front of it.

      But your point is valid, if the current software weren't using all the new speed, we might already be there. (Well, for most things. Crypto cracking will still use an infinite amount of CPU...)
    • So what are you saying? That we should just drop all research and development and settle on what we have because it's good enough? If we did that, we'd probably still be stuck with 486s, because back then everybody thought, "What the hell else can we put in a computer? Why would we need anything else? 640K ought to be enough for anybody..."
  • Firingsquad has an excellent review [] comparing dual Durons to dual Thunderbirds using both Palomino and non-Palomino versions of both chips. They conclude that the Palomino Duron is the best bang for the buck.
  • Damn! Slashdotted! (Score:4, Interesting)

    by Arethan ( 223197 ) on Monday October 15, 2001 @10:25AM (#2430651) Journal
    Note to web programmers: MySQL doesn't like it when it runs out of connections. Try increasing the connection pool size. Also, instead of having the page try to open the connection just once and fall all over itself if the connection fails, try putting the connection request in a timed loop with a timeout of around 5 minutes and a sleep(5) in the middle to throttle a little. Your MySQL server will thank you, and your web page viewers will thank you.
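A retry loop along those lines might look like the following. This is a minimal sketch in Python; `connect_with_retry` and its parameter names are hypothetical, and the connect callable would wrap whatever driver call the site actually uses (e.g. a MySQL client's connect function):

```python
import time

def connect_with_retry(connect_fn, timeout_s=300, delay_s=5):
    """Call connect_fn() repeatedly until it succeeds or timeout_s elapses.

    connect_fn: any zero-argument callable that returns a connection or
    raises on failure (e.g. a lambda wrapping a MySQL driver's connect).
    The sleep between attempts throttles load on an overloaded server.
    """
    deadline = time.monotonic() + timeout_s
    last_err = None
    while time.monotonic() < deadline:
        try:
            return connect_fn()
        except Exception as err:  # refused, too many connections, ...
            last_err = err
            time.sleep(delay_s)
    raise last_err  # give up after the timeout, surfacing the last error
```

The page then fails only after the full timeout instead of on the first refused connection, which is exactly the throttling behavior the post describes.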
  • If you read Tom's Hardware [], you may have seen this fantastic article [] and brilliant video [], which shockingly demonstrates how AMD vs Pentium chips cope with heat emergencies. Considering the disastrous results with so many of the AMD chips, I'd be hesitant to buy anything OTHER than a Pentium until AMD can conclusively show that their chips are "smart" when faced with heat emergencies (heatsink fan stops, heatsink falls off?)
    • Losing the fan but not the heatsink is not going to cause flame-out. And losing the heatsink is really only an issue for people in earthquake zones and LAN partiers.

      If you're really really scared, get one of the heatsinks that bolts onto the motherboard instead of clipping onto the socket.
    • While it's true that AMD CPUs are, uh, sensitive to cooling, I don't see that as a show-stopper. When you buy the parts to build your own Athlon system, as I did recently, you get plenty of warning to NOT TURN THIS ON WITHOUT A HEATSINK (yes, they shout, as they should).

      Other CPUs are also very sensitive. What's rather surprising is how well Intel's P4 thermal shutdown works. I suspect AMD will get around to doing something similar. But in the meantime, I've attached a nice quiet (3800 RPM, not the 7200 RPM version) ThermoEngine to my Thunderbird, and it cruises at around 100 degrees F. Some newer/bigger heatsinks bolt to the motherboard, rather than clip on to the socket, which I suppose helps if you're really paranoid about its falling off. I use Motherboard Monitor to keep track of the temp via the Win98 system tray, and wish Linux distros would include similar capability out of the box (yeah, I know there's a way to build it in yourself...).

      But then I do admit to using a 1 GHz Tbird rather than a faster one because I don't want that excess heat or power consumption.
      • Other CPUs are also very sensitive. What's rather surprising is how well Intel's P4 thermal shutdown works. I suspect AMD will get around to doing something similar.

        The new Athlons (XP and MP) have thermal sensors on board according to AMD's site. I still can't find any information indicating whether/how they actually use these though.

    • The new Palomino chips (which these are) have the thermal diodes in them. I'm not 100% sure if the CPUs will auto-shutdown based on the diode's reading or if it requires BIOS intervention, but I doubt it would matter.

      Yes, AMDs will incinerate themselves if the heatsink falls off - but funny, you don't see many people saying this has happened. Yes, it has happened to a few, but honestly, I'd rather get the higher performance for my dollar and risk having to replace the CPU if the heatsink fell off - something very unlikely. But if it did, the replacement CPU would be pretty cheap given how prices on processors fall over just a few months! And the total cost would STILL probably be cheaper than an equivalent Pentium 4 system (not CPU, system). Hell, my 1GHz Athlon has been chugging along for months and the heatsink is still on solid!

    • by (H)elix1 ( 231155 ) <> on Monday October 15, 2001 @11:27AM (#2431011) Homepage Journal
      As a side note - an Intel motherboard will short out if you let the floppy drive slide onto the board with the power on. Pouring coffee into a laptop makes interesting smells. Put a CD-ROM in the microwave for 10 seconds if you want a real show.

      Seriously here, you are missing out if this kind of thing actually sways you away. The biggest flaw, IMHO, is that the AMD cores chip way too easily. I would really like a coating of nickel or copper like the Intel chips have. As an early adopter of the Chrome Orb (rev 1), the hard part was safely getting the heatsink on.

      I've found that an AMD CPU will give you warning signs like lockups, kernel panics, and other goofy things when you lose a fan. My mainboard will shut down 5 seconds after POST if the CPU fan is not spinning fast enough! Since they are good up to ~100C, using a motherboard monitor prog will go a long way toward making sure it runs safely and shuts down before it gets into deep weeds. A copper heatsink goes a long way toward passive heat removal as well in an emergency situation.

      This is like buying a car based on how well it runs without oil in the engine. I suspect my BMW would make for a fantastic video if I tried that too. DON'T DO THAT! I would not pay extra for an engine built to survive it - like using synthetic oil to give an extra two minutes of use.

      Buying a CPU that throttles back and paying extra for it -- that might be insurance, but I stopped buying retail boxed CPUs with the three-year warr.... It would cost me more to ship an old 400MHz CPU back to Intel than to just replace it these days. I paid $99 USD for a 1.4GHz CPU a couple weeks ago. At that price, these things are practically disposable.
      • I've found that an AMD CPU will give you warning signs like lockups, kernel panics, and other goofy things when you lose a fan. My mainboard will shut down 5 seconds after POST if the CPU fan is not spinning fast enough! Since they are good up to ~100C, using a motherboard monitor prog will go a long way toward making sure it runs safely and shuts down before it gets into deep weeds. A copper heatsink goes a long way toward passive heat removal as well in an emergency situation.

        IIRC, the main problem with the AMD processors was that they would burn out in around 3 seconds in the (unlikely, I know) event that the heatsink fell off. Another point was that the plastic tabs the heatsink was clipped to weren't particularly strong, so it perhaps wasn't as unlikely as one might think.


    • Most (if not all) motherboards let you specify a temperature above which your PC will shut down... or you can use a motherboard monitor to do the same thing within Windows/Linux/whatever and give you a warning if the temperature exceeds xx degrees.

      So, where's the problem?
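A software cutoff like the one described can be sketched in a few lines. This is a minimal sketch in Python, assuming a `read_temp_c` callable standing in for whatever sensor interface the board or monitoring tool exposes; the 70C limit is an arbitrary placeholder, not a recommendation:

```python
def check_temp(read_temp_c, limit_c=70.0):
    """Decide what to do given the current CPU temperature.

    read_temp_c: zero-argument callable returning degrees Celsius -- a
    stand-in for the motherboard's actual sensor interface.
    limit_c: hard cutoff; 10 degrees below it we start warning.
    """
    temp = read_temp_c()
    if temp >= limit_c:
        return "shutdown"   # a real monitor would power the box off here
    if temp >= limit_c - 10:
        return "warn"       # flag it before it gets critical
    return "ok"
```

A real monitor would poll this in a loop and invoke the OS shutdown command when it returns "shutdown".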
    • Is Intel paying you directly or through alternative sources? ;) Yeah, it was fantastic at E3 when they couldn't show me a Black and White demo on the P4 because it kept blue-screening. That was fantastic! Anyway, go ahead, pay twice as much, get 2/3 of the performance in a server. I'll happily serve a million-plus sessions a month on my little Athlon 900.
    • I have a dual-CPU 800 (Intel). The CPUs won't shut the system down if the heat gets too hot, but the motherboard will.

      The case fan died on my box, and because I run dual SETI at night, the machine heated up and started beeping. It woke me up, but I found that the box had shut itself off. I later discovered that the motherboard abandons ship after 75 degrees Celsius from either CPU.

  • Interesting, this. (Score:2, Interesting)

    by dave-fu ( 86011 )
    I like how the little guys are going to benchmarks to indicate how their product actually performs while the big boys (Oracle, I'm looking at you) are recusing themselves from it.
    Too bad that IT managers go with what they know (everyone else is using) and what's worked for them in the past.
    It may be confusing for Jane Consumer, but it's nice to see that AMD's finally gotten a marketroid with a clue as to what works. Now if only their stock would start working, too...
  • by MadCow42 ( 243108 ) on Monday October 15, 2001 @10:31AM (#2430693) Homepage
    It always happens... you jump in and build your dream system, and immediately it's out of date. Oh well, a duallie 1.2GHz MP isn't anything to laugh at! Glad to hear that the TigerMP supports the new chip speeds out of the box; anyone know how high it will go?

    A few notes on the TigerMP though: VERY picky on RAM, very picky on how it's seated (read: install memory before board is in your case, so you can wedge it in on a flat surface!), but since getting past that, it's been ROCK solid! Beautiful system I must say!

    MadCow... always 500mhz behind the curve.
    • I, too, am building a dual 1.2G system. If these 1800's run at 1.53 GHz, then each CPU is only 27.5% faster, so it's not like you're really missing out.

      Thanks for the tip on the Tiger MP. I bought high-quality memory, so it should work (when it arrives).

    • Unlike Intel, AMD does not like to screw you over by constantly changing sockets. AMD has stated that *all* the new CPUs they release should work on even the oldest Socket A boards, provided that the power requirements are met, and they will keep the same socket for all future CPUs. So in a year or two you'll be able to upgrade to something like 2x 2.0GHz or more. (Upgrading it right now to the 1800XP makes no sense, though; the performance increase is too small.) Also, AMD has stated that they'll try to keep the same socket for the ClawHammer/SledgeHammer series -- they'll change it only if necessary.

      Compare that to Intel. Over the past 3 years we've had Socket 7 (pentium), Socket 8 (pentium pro), Slot 1 (p2/p3/celeron), Slot 2 (xeon), Socket 370 (p3/celeron), Socket 423 (p4), Socket 478 (p4)...
      • Thanks for the info... however I'm still interested how "high" of a clock speed the motherboard itself will support (multipliers, etc).

        All Tyan lists is "supports two Athlon MP processors"... no frequency range, like most other motherboards out there. It'd be great if I could drop in two 4GHz processors next year when the next bloatware OS slows my system to a crawl!

        However, back in the real world, I'm now ripping MP3's (at 12x speed+), running Seti@home at full speed (realtime priority, just for fun), surfing the web, and running Komodo/Mozilla, and still only running at 70% CPU usage... it's not like I need more power right now! q:]

  • Roadmap (Score:3, Informative)

    by nilstar ( 412094 ) on Monday October 15, 2001 @10:32AM (#2430703) Homepage
    Why don't you take a look at the AMD Processor Roadmap to see more on their processors... though the site is in German... translate it with Babelfish:
  • About the naming (Score:2, Interesting)

    by TheMMaster ( 527904 )
    I've been an AMD fan ever since, erm, well, always actually - even my 486 was an AMD ;-)
    I really think AMD will have to expect some problems with this. Back in the good old days (r) of the Pentium and the Cyrix 6x86, I worked in a computer store and we also sold Cyrix computers to customers that didn't want to spend too much money (so sue me).
    Very often people came back because they saw that their Cyrix PR200+ wasn't actually running at 200MHz and demanded a refund (which they didn't get, of course). We had to explain the whole thing and it cost us a lot of time.
    That's why we stopped selling them back then.
    Another thing is that the semi-geeks (the dudes that THINK they are geeks but basically know nothing) won't buy them because "they are already overclocked".
  • A couple of the 1800's would be real nice here on a Tyan Thunder board, however, doesn't AMD have a record of potential heat death vulnerability []? I believe that article was even mentioned here, but I can't dig out the link.

    Tom's Hardware [] notes that the AMDs can cook really fast and beyond the ability of the motherboard sensor to flag. I guess these have on-die sensors but these were noted as being fairly ropey as well.

    Intel's P4 seemed to do quite well out of the test as the clock slows automatically as the die temperature increases (in effect the processor ignores the clocks until the temperature goes reasonable). This means that it will even run without a heatsink (but very slowly).

    I just get very nervous about having high-end silicon that is vulnerable to a SPOF. If a heatsink detaches or the processor fan fails - blam. If the chassis fan fails, at least there is some chance of a shutdown, but those processor heatsinks make me uncomfortable. Yes, I know I can buy quality, but MTBF is just that; a fan can still fail early.

    So I wait for AMD to get a bit more serious about thermal protection and stick with using cheaper processors as thermal fuses.

    • If you're seriously worried about a heatsink falling off, you could always try positioning your case so that your motherboard is horizontal.

      Frankly I think people are being just a little too paranoid about this whole issue. It's like monitor implosion. Possible != likely.
    • by Jeffrey Baker ( 6191 ) on Monday October 15, 2001 @11:17AM (#2430957)
      Why do you kiddies keep beating this particular drum? Your heatsink should never fall off! Why is it falling off? Because you don't know how to properly build a computer? Then buy a Dell and don't sweat it.

      For your convenience, here is a list of other things you should avoid buying because they have "fatal flaws":

      • Internal combustion engines (can seize if their oil pan suddenly falls off)
      • Airplanes (can crash if their engines suddenly fall off)
      • Nuclear power plants (may malfunction if all coolant pumps fail)
    • This probably is not comparable to the new Athlons, but my Duron ran for a while (perhaps an hour or so) without the heatsink fan, and it's fine. Of course the heatsink was scalding, and I let it cool for a long time, but it's still running strong.

      At nearly twice the clock speed, those Athlons could still run quite a bit hotter than my lowly Duron, I suppose. I would still expect that a hardware monitor set for fan RPMs or processor temp would catch a failure in time. Don't set it at 149 deg. F; if it's above 125, something is wrong.

      BTW, exactly what do you do to your computer that could detach the heatsink? Most heatsinks (unless you buy quality) can be a pain in the butt to detach even when you want to detach them.
    • So I wait for AMD to get a bit more serious about thermal protection and stick with using cheaper processors as thermal fuses.
      So I wait for slashdot posters to actually do a little research and discover that AMD has listened to their customers and put thermal diodes in the Athlon XP line...
  • by LazyDawg ( 519783 ) <lazydawg@h[ ] ['otm' in gap]> on Monday October 15, 2001 @10:39AM (#2430741) Homepage
    Hey, all you /. people with a fab, here's a fun idea to piss off Intel and AMD: make the clock/speed irrelationship totally obvious.

    Imagine an x86-compatible processor that runs at a clock speed of 50GHz. That's right, fifty BILLION hertz! Now, that clock only ever hits a counter that lets the 8086-compatible processor cycle once every half to full second. You could get a whopping 1-2 IPS :)

    You'd be able to make millions selling 8086's that use the first 640k of a bunch of 128 meg chips, and the first 40 megs of a 400 gig hard drive. Think of the possibilities!
  • I've heard lots of reports from reputable sources that cheaper Athlon XP's do work in multi-CPU systems. (Even the original Thunderbird supposedly works, although not at top speed due to some cache interactions). I've heard that the Athlon XP uses the same Palomino core as the Athlon MP, so there is really no difference at the hardware level.

    Can anyone confirm this? Is this new, higher-priced series of Athlon MPs simply a marketing gimmick, a la NVIDIA's Quadro cards? (Which are the same as a GeForce hardware-wise - save one tiny resistor that tells the driver to un-cripple certain optimizations - but cost 2-3 times as much as a GeForce.)
    • by Anonymous Coward
      The only difference between Athlon XP and Athlon MP is that the MP is tested to be sure it works in a multiprocessor system. There is no physical difference between the chips.

      The chance of a dual XP or dual Duron setup not working is infinitesimally small.
    • by whovian ( 107062 )
      The gist according to is that there was an initial batch of XPs that were
      SMP-enabled and mistakenly shipped. AMD supposedly will be disabling SMP in the XPs very soon.
    • Every single AMD CPU from Duron to Athlon MP supports SMP. There are reports that AMD will simply try to disable SMP on the XP line. But the core architecture is SMP-capable.

    They only compare against the 1.2GHz Athlon MP though... although they intend to do an expanded article soon.

  • by shut_up_man ( 450725 ) on Monday October 15, 2001 @11:22AM (#2430988) Homepage
    Before AMDMB went splat, I read enough to see that in some tests (most notably memory), the Athlon XP (yes, SINGLE) beat the dual Athlon MP setup soundly. This is because the XP was tested in a VIA KT266A motherboard, which has the edge in performance over the standard AMD 760MP.

    I think the Athlon MPs are awesome, but having a much cheaper, single-processor setup beat out a dually in some tests throws a bit of cold water on my upgrade lust.

    shut up man
    • Umm, because memory bandwidth is independent of the number of CPUs you have?

      If you're running tasks/benchmarks that aren't CPU bound, multiple CPUs won't do you any good. If you're running multithreaded apps or multiple single-thread apps, multiple CPUs are a Good Thing, and two AthlonMP 1800+ CPUs will outrun a single AthlonXP 1800+ on a KT266A motherboard. Linux kernel compiles, fr'instance.
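The distinction can be illustrated with a toy CPU-bound workload. This is only a sketch in Python (the prime-counting job and the two-way split are illustrative, standing in for something like a kernel compile split across both CPUs of an SMP box):

```python
from multiprocessing import Pool

def count_primes(rng):
    """CPU-bound work: count primes in range(lo, hi) by trial division."""
    lo, hi = rng
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def count_primes_parallel(limit, workers=2):
    """Split [0, limit) into one chunk per worker process.

    Independent CPU-bound chunks like these scale with CPU count;
    a single serial stream of the same work would not.
    """
    step = limit // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else limit)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(count_primes, chunks))
```

On a dual-CPU machine the parallel version finishes in roughly half the wall-clock time for large limits, while a memory-bandwidth-bound benchmark would see no such gain.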
        Actually, this is something that is slowly but surely changing. We are seeing new mobo chipsets which actually have two separate buses, one per processor. The only thing that is combined would be the bus that the RAM runs on. But this also is something that is in the works, and may not be a bottleneck for too much longer.

        The problem is, there will ALWAYS be a bottleneck, no matter what you are dealing with - whether it be the internet, the computer memory subsystem, or the traffic on the way to work. Once we make one thing faster, it shows that another isn't quite up to par. So that is the next thing that needs to be worked on, whether it be the mobo manufacturers, the processor manufacturers, the wonderful people that lay that precious fiber optic cable, or the road crews that interrupt my morning commute to work.

        Things like the nVidia nForce chipset are (at least IMHO) going to advance computer technology even more than a newer, slightly faster processor. Why? Because of bottlenecks such as this memory issue we are seeing with the Athlon MP SMP systems.

        Granted, we have to give them some credit. When the Via chipsets were first released, their memory bandwidth was HORRIBLE - even to the point that it was better to stay with the BX chipset over upgrading to the newer Via 133 chipset. But that has been fixed for the most part through things as simple as BIOS updates.

        There is a lot to a computer system, and there is a lot that makes it function properly. And if I had time I would get into the bandwidth limitations between the northbridge and the southbridge, the interactions within an SMP system and the different caches available to each processor, and their bandwidth/latencies, etc.

        /pointless blabbering

        - Ice_Hole
  • AMD (Score:1, Redundant)

    I have to say, I'm quite pleased with AMD's processors. For the price to performance ratio, you usually get about 10-20% more performance for about 1/2 the cost of a comparable Intel processor.

    I do think they should provide a more accurate "instructions per second" rating rather than relying on Intel as the benchmark for their rating.
  • Gee, the ALU of a 2GHz Intel PIV is double-pumped, meaning it runs at 4GHz!! They should call it the Pentium 4000!!

  • by Brento ( 26177 ) <brento AT brentozar DOT com> on Monday October 15, 2001 @11:50AM (#2431156) Homepage
    Thresh's Firing Squad has a review of the Tyan Tiger with dual AMD Duron MPs [], which is probably of equal or more interest to us geeks. For those of you who weren't aware, AMD Durons work in multiprocessor mode as well, and they're very, very close to Athlons in terms of performance (and obviously cheaper.)
  • Get it? Get it! (Score:4, Insightful)

    by bill.sheehan ( 93856 ) on Monday October 15, 2001 @11:58AM (#2431213) Homepage
    Is the latest Athlon processor overkill for any normal computer user? Yup.

    Is there any software currently available that requires this kind of speed? Nope.

    Is there any sensible reason to upgrade your CPU? Nope.

    Is my rational, analytical mind paying the slightest bit of attention to this argument? Nope.

    It's all about the megahertz, baby! In an earlier generation, we were the people tinkering under the hoods of our Fords, trying to get a little more oomph out of a carburetor. Most of us don't need it, most of us have no idea what to do with it, but since when has that ever stopped us? More speed! More storage! More bandwidth! I want more!!!

    Good job, AMD. Keep 'em coming.

    My id is sneaking up behind my superego with a rock...
  • by ruiner5000 ( 241452 ) on Monday October 15, 2001 @12:23PM (#2431358) Homepage
    Here's the low down on the dual Athlon. [] It is incredibly fast for any server or workstation application. Of course, the app has to be SMP-capable, which is why you're seeing the new KT266A-chipset single-CPU system beat it out in some apps - but those are only non-SMP-capable apps. It is apples and oranges. Yes, I would like to see some chipset improvements to the 760MP; the latency is too high. Perhaps the 760MPX will address some of this. I would very much like VIA to commit to their dual Athlon chipset, but they have not as of yet. Another issue is heat. While they do use the cooler-running Palomino core, they are still quite hot for, say, a 1-2U rack. The shrink to .13 micron early next year will eliminate that issue and should hasten adoption by larger computer makers. For the time being, though, it is a relatively cheap solution for those who need it, and is a blazingly fast web server for those who know how to set it up. Check out my review [] for more; my site is still up, and we didn't copy anyone's site idea.
  • I didn't realize that people took press releases as gospel. What's all this crap about a "big speed boost?"

    We're talking about a clock-speed increase of less than 10% -- in how many months? It's been over a year since the T-birds were introduced.

    Yeah, they have a new core. Whoopee. It's not a dramatically new improvement, and apparently AMD has decided that if its chips, in name, are as fast as P4s, they should cost as much too.

    I like AMD stuff, but the MHz Myth shit hasn't worked for Apple, ever, and it won't work for AMD. Apple tried the MHz Myth stuff back when the PPC 601 came out, and despite 6 or 7 years of PR bunko, it's not caught on.
  • From over at Firing Squad []...

    "The initial batch of Athlon XP chips shipped out to distribution were unlocked, and this was not supposed to happen. Within a week or two, these unlocked CPUs will be phased out, or recalled. I'm not sure what will happen, but AMD has confirmed that the Athlon XPs will be locked very, very soon.

    Some of you are lucky, to have snagged a few Athlon XPs that were unlocked."

  • Anandtech's review (Score:2, Interesting)

    by acm ( 107375 )
    Anandtech has a good review that compares all the latest p4's and athlon xp's. Check it out here [].
  • Hammer [] is going to be unveiled today as well.
  • by WillSeattle ( 239206 ) on Monday October 15, 2001 @02:40PM (#2432175) Homepage
    The only thing that matters is as follows (in rank order):

    1. Bandwidth - face it, email and the web are king. Unless you're a gamer.

    2. Video Card - if you're a gamer, you're better off spending your money on this and making sure it has tons of cache.

    3. Sound Card - if you're a gamer, you're better off spending the rest of your money on this. The rest of us don't care, so skip this.

    4. Memory - more, more, more. Yes, even more.

    5. Bus speed - more channels so those CPUs can actually send more data.

    6. Hard disk - you really should have more RAM, but once that's crammed, get better seek and access times here.

    7. Chip speed - WAY DOWN HERE! - yes, if you maxed out on all the above, then you MIGHT notice the difference between a 1GHz and a 1.8GHz system. Otherwise, unless you're a graphics artist, YOU SHOULDN'T WASTE YOUR MONEY!

    Naturally, when people review systems, they compare older systems with slower bus speed, less RAM, slower HD, and cheaper cards to new systems with faster H/W. Buy the motherboard and cards yourself and pop in a slower chip and spend the extra money on RAM - you will get way more bang for your buck that way.

    Aside - I own AMD shares, so sure, go buy these speed demons! But don't do it because you have to, do it because you know you just like BIG NUMBERS.

  • I was under the impression that AMD changed to this new naming scheme to avoid the public's concentration on MHz. Why, then, do I read an article on Slashdot in which AMD's new naming scheme is broken down into MHz equivalents?

    Seriously, I think we all agree here that AMD is making a bold and necessary move to diminish the importance of MHz. Unless we follow suit and stop using MHz as our measure of performance, the public will never catch on. I think the importance of attaching a "model number" to a chip name is that we will eventually forget about MHz altogether and focus on pure chip performance. Let's start that now.

    The MHz equivalents for each of these new processors had no place in this article.
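For what it's worth, the rule of thumb reviewers derived for these model numbers is roughly rating = 1.5 x MHz - 500, which reproduces the figures quoted in the article (1.53GHz -> 1800+, 1.4GHz -> 1600+, 1.33GHz -> 1500+). A quick sketch in Python, with the caveat that this formula is reviewer-derived, not anything AMD published:

```python
def xp_rating(mhz):
    """Approximate Athlon XP/MP model number from clock speed in MHz.

    Uses the reviewer-derived rule of thumb rating = 1.5 * MHz - 500,
    rounded to the nearest 100 -- NOT an official AMD formula.
    """
    return round((1.5 * mhz - 500) / 100) * 100
```

So the ratings track clock speed by a fixed linear formula either way, which is part of why the MHz equivalents keep getting quoted.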


To do two things at once is to do neither. -- Publilius Syrus