AMD

AMD's Athlon XP 2700+

Posted by CmdrTaco
from the coming-soon-to-a-nwn-box-near-you dept.
kraven_73 writes "According to some Taiwanese sources, AMD will officially reveal its Athlon XP 2700+ processor on the 7th of October. Most interesting is that this CPU will have a 333 MHz FSB, the first implementation of this increased FSB on the Athlon platform. It is expected that the new chip will be based on the latest Thoroughbred core stepping 1, just like the current Athlon XP 2400+ and 2600+, and will run at 2.17GHz."
  • To bring down the price of slower chips to reasonable levels - that's the point.

    Expensive bleeding edge crap.
  • by Winnipenguin (603571) on Thursday August 29, 2002 @10:21AM (#4163616)
    TH has this info and advice:

    Once again, Intel wages war on AMD, fighting to attain the fastest desktop CPU. AMD is sure to launch the Athlon XP 2800+ soon (in October at the latest), so that it will be able to keep close on the heels of its arch-rival. Intel has also made preparations of its own, with the P4/3066 up its sleeve.

    At any rate, the real winner is the ambitious end user, who will be able to choose between the P4/3066 and the Athlon XP 3000+ by the time Christmas rolls around. Neither the successor to the P4 nor the AMD Hammer will be available until next year.

    As always, price-conscious buyers who are interested in getting the best price/performance ratio are a bit better off with an AMD Athlon XP than with a P4.

    Link here:
    http://www17.tomshardware.com/cpu/02q3/020826/p4_2800-16.html
    • they have the FSB running at 400MHz and call it 800MHz

      that's supposed to come out with the chipset and Opteron (or whatever marketing calls it; the K8)

      Intel should be worried, very worried

      regards

      John Jones

      • The Clawhammer and Opterons don't have a front
        side bus anymore; the memory controller is
        on the chip
        • The FSB of a Hammer chip is the bus linking the chip to the North Bridge, which no longer includes the memory: it still has the AGP port, and needs to get to the PCI bus somehow. In the Hammer system, the FSB is actually a 32-bit HyperTransport link, running at 400 MHz DDR, so 800MHz effective, for a combined bandwidth of 6.4 GB/s. So yes, Hammers still have an FSB.
          • No, the northbridge is on the Hammer chip itself. The FSB is also on-chip, between the core and the switch fabric that connects up the CPU core, the memory controller(s) and the HyperTransport interface(s).

            HT is just a system level interconnect.

            Running at 800MHz DDR (1600MTransfers/s), 16-bits in each direction for 3.2GB/s in each direction in the case of Hammer.

            So no, Hammers do not have a FSB as such.
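The two bandwidth figures quoted in this subthread are consistent once the link width and transfer rate are spelled out: per-direction bandwidth is width in bytes times transfers per second. A quick sketch (the function name is mine, purely illustrative):

```python
def ht_bandwidth_gbs(width_bits, base_clock_mhz, pump=2):
    """Per-direction bandwidth of a double-pumped link, in GB/s (1 GB = 1e9 bytes)."""
    transfers_per_sec = base_clock_mhz * 1e6 * pump  # DDR: 2 transfers per clock
    return (width_bits / 8) * transfers_per_sec / 1e9

# 32-bit link at 400 MHz DDR (800 MT/s): 3.2 GB/s each way, 6.4 GB/s combined
print(ht_bandwidth_gbs(32, 400))  # 3.2
# 16-bit link at 800 MHz DDR (1600 MT/s): also 3.2 GB/s each way
print(ht_bandwidth_gbs(16, 800))  # 3.2
```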
            • by barawn (25691) on Thursday August 29, 2002 @12:07PM (#4164463) Homepage
              Depends what you define as a northbridge, and what you define as a FSB. The bus type (EV6, HyperTransport, whatever) is just a name for the signaling and protocol - the name of the bus itself can still be "Front Side Bus".

              The "traditional" northbridge had a memory controller and an AGP controller, as well as a PCI controller. The PCI controller got moved completely off the North Bridge to the South Bridge and replaced with a proprietary interconnect in a lot of modern chips. The memory controller was moved on die, but the AGP controller is still off-die, and thus needs a chip for it. This chip could be called the "north bridge". It's just a name - AMD calls it the "HyperTransport AGP 3.0 Graphics Tunnel" (which doesn't really make much sense, as it also has a HyperTransport link to a south bridge - how does THAT relate to graphics?) but it's still a North Bridge, just without the memory controller.

              There are two HT links on the system, which is why it makes sense to call it a "north bridge" and a "south bridge": there's a HT link from the CPU to the North Bridge (the AMD 8151) and a HT link from the North Bridge to the South Bridge (the AMD 8111).

              So, yes, they do have a FSB, unless you want to call it something else: "highspeed HT link" and "lowspeed HT link" (for the North Bridge-South Bridge interconnect) maybe? Got me. It doesn't matter. The FSB has always been the high speed link out of the processor to a bridge chip, which then has a low speed link to another bridge chip which has all the PCI, LPC, ethernet, all that crap. Hammer doesn't change that, it just removes the memory controller from the North Bridge.
              • It is called an AGP tunnel because "tunnel" is a HT term for a device which has an upstream HT connection, and a downstream HT connection.

                The FSB has traditionally carried CPU signals to the device that has the memory controller. As the memory controller is on the die of the Hammer, there is no FSB off the chip, just a high speed interconnect to connect up further processors or I/O devices.

                The Hammer core does have a FSB. It runs at core speed and connects to the on-die switch that connects up the core, HT links and memory controllers.

                HyperTransport is a point-to-point link, not a bus. Maybe you could call it a Front Side Interconnect, or how about Processor Interconnect, because Opterons will have 3 HT links - and 3 FSBs on a processor sounds a bit silly...
                • It'd be more appropriate to call it a "North Bridge with AGP8X and HT Tunnel", as there's definitely no requirement that the north bridge connects to the south bridge using an HT link. I think one of VIA's chipsets is doing that so they can use an old south bridge... Anyway, it'll be most appropriate to ditch the "North/South bridge" concepts if they ever switch to one die for the PCI bus+AGP port+everything else. Then it'll just be a HyperTransport system hub. Of course they'll also have about 1000+ pins on one chip, but who's counting? :)

                  However, it really all comes down to what you define as the "FSB", which is tough because Intel just made up the term back with the PII. You can call it the core/memory/I/O interface, in which case, yes, it's internal now. You can also call it the processor's external data bus, in which case it was replaced by the HT link. One block diagram I saw from AMD called it the "system interlink" or something like that.

                  It's difficult to try to use old acronyms on a new design, especially because Hammer is really quite a striking difference from the old designs. I'm sure AMD will use something like "system interlink" for the main HyperTransport link. I doubt it'll catch on, though - people use "bus" for just about any topology now. Wasn't the EV6 bus for the Athlon really a point-to-point link, anyway?

                  Well, if there's one thing we can agree on, it's that no one will ever claim that a Hammer-based processor is limited by its "system interlink" or whatever. 6.4GB/s is way more than enough for now, especially when all you're doing is shoving data at the PCI bus and the AGP port.
                  • The nVidia Hammer chipset is a single chip design. On-board AGP controller, PCI controller, all the USB, Firewire, IDE and other gizmos as well.

                    Some SiS Athlon chipsets are single chip as well. Pretty stable, well featured, and cheap.

                    A HT device that only has a HT uplink is known as a "HT Cave".

                    Old southbridges used to be PCI devices. E.g., VIA 686A/686B as used on the KT133, etc.

                    1000 pins doesn't seem to be a real problem for BGA devices like chipsets at the moment. AMD's 8131 is around 800-900 pins IIRC.
          • Both the terms "front side bus" and "northbridge" are rather outdated and really don't apply to the Hammer architecture at all.

            "Front Side Bus" stemmed from the PPro, which had two memory data buses, one on the "front side", which connected to the memory controller and everything else, and one on the "back side" which connected to the cache. With the Hammer, the cache is integrated so the "back side" bus never leaves the die (same as with the P4 and the AthlonXP) while the memory controller is also integrated, so part of its "front side" never leaves the chip either, it just has a memory bus. Hypertransport is a chip-to-chip interconnect that is used for the rest of the system. You can call it a "front side bus" if you like, though the term really doesn't make any sense in this context.

            "Northbridge" and "southbridge" also no longer make any sense. These terms originated because they were the "north" and the "south" end of a PCI bridge, which gave the processor a way to talk to PCI devices. Of course, the functions that these chips now perform no longer have any PCI at all in them, and the bridge is entirely in the I/O chip (what some people are still calling the "south bridge"). Intel is using their Accelerated Hub Architecture instead of PCI for interconnects, while AMD will be moving to using Hypertransport again as a chip-to-chip interconnect. AMD calls their chips "hypertransport tunnels", which is a somewhat more accurate title than a PCI bridge.

            FWIW the only chipsets that I'm aware of which still use actual PCI north and south bridges are the AMD 760MP(X) and the ATI Radeon chipsets. AMD uses 66MHz/64-bit PCI to interconnect their chipsets, which gives them as much bandwidth as any competing technologies. ATI, meanwhile, is using 32-bit/33MHz, which is part of the reason why their Radeon chipset will likely really stink in the desktop market (nice for laptops, useless for desktops... but I digress)
            • You're not going to get any argument from me here: "FSB" was a term that stuck around far past when all chips had moved cache on die, and should've just been called "system bus" for a while now, especially when the system bus speed and the memory bus speed were independent, which means that the "system bus" wasn't even really a "memory bus" - it was just a bus to a chip that contained a memory controller. NVIDIA's nForce chipset really showcased that, with the DASP integrated into the chipset. They were really trying to be a secondary processor. Now, with a HyperTransport link out, the best term would be "system interconnect" to satisfy those who can't stand using "bus" for a point to point protocol.

              As per the North Bridge/South Bridge distinction, I'll agree that the original idea of the word no longer applies, but no one really has a good set of words for them yet. The "I/O" chip isn't really a pure I/O chip - AGP is an I/O port as well, and it also contains system monitoring information. Yes, it's input and output to the processor, but, well, everything is by a strict definition.

              I dunno. "AGP tunnel" and "I/O chip" don't sit well with me. The first name stresses AGP too much, and while it's the main reason for the chip right now, it may not stay that way - in addition, the AGP chip doesn't need to be a tunnel (I think one of VIA's Hammer chipsets is still using V.link, since they're using an old southbridge - I think). I think I'd prefer "High Speed Peripheral Interface Chip" and "Low Speed Peripheral Interface Chip" - that's pretty much dead on for the differences between the two chips, and the reason for the separate chips. Yes, there are those who merge the two chips (thus creating a combined Peripheral Interface chip) but many motherboard vendors will want to keep the two chips separate for reuse in multiple platforms.
  • by McCart42 (207315) on Thursday August 29, 2002 @10:22AM (#4163623) Homepage
    Most hardware review sites I've read commented that upping the FSB speed was the best way AMD could reclaim the speed crown...Intel regularly uses much higher FSB clocks with their chips (in the neighborhood of 533 MHz). I may be missing some crucial aspect of AMD's strategy but that seems to be what is holding them back right now, from a high-level standpoint.
    • I'm sorry, but Intel does NOT use a 533MHz FSB speed any more than AMD uses a 333MHz one. The "533" refers to 4x133MHz (it's a 133MHz bus with QDR tech) whereas the "333" refers to 2x167MHz (it's a 167MHz bus with DDR tech). Incidentally, I think that Apple is (unbelievably) the first company to have implemented a 167MHz FSB, in their new "DDR" G4 designs. Shame the G4 chip isn't up to using the (otherwise fast) bus in DDR mode. Oh well, roll on the fabled MPC 7470!
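The arithmetic in the comment above can be made explicit: the marketing number is just the base clock times the number of transfers per cycle. A throwaway check:

```python
def effective_rate_mhz(base_mhz, transfers_per_clock):
    """Marketing "FSB speed": base clock times transfers per cycle."""
    return base_mhz * transfers_per_clock

print(round(effective_rate_mhz(133.33, 4)))  # Intel P4: "533" MHz (quad-pumped)
print(round(effective_rate_mhz(166.67, 2)))  # Athlon XP: "333" MHz (double-pumped)
```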
    • Intel uses a quad-pumped architecture, that is, in a given cycle, the flip-flops trigger four times (beginning rise, ending rise, beginning fall, ending fall).

      AMD only has a 'double-pumped' architecture, where the flip-flops trigger on both the rising and falling edges of the clock signal.

      Unless AMD licenses Intel's technology, they really can't compete in that arena for a while. There are other strengths to the AMD platform that help bridge the gap, however.

      • They have two clocks, ninety degrees out of phase, and they latch data on the rising and falling edges of both clocks. This provides the four transitions needed for quad-pumping.

        As opposed to the other answers, which made no sense.

        Have a nice day.
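The two-clock scheme described above is easy to visualize: shift a copy of the clock by a quarter period and take the edges of both. A small sketch (pure illustration):

```python
def strobe_times(period=1.0):
    """Edge times within one cycle when latching on both edges of two
    clocks that are 90 degrees out of phase."""
    clk_a = [0.0, period / 2]                # rising and falling edge of clock A
    clk_b = [t + period / 4 for t in clk_a]  # clock B lags A by 90 degrees
    return sorted(clk_a + clk_b)

print(strobe_times())  # [0.0, 0.25, 0.5, 0.75] -> four transfers per cycle
```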

    • AFAIK - and this is based on earlier P4 systems - although Intel's bus has a higher throughput, it also has a higher latency. Only applications that demand throughput gain an advantage, while many applications can actually run slower on these "quad pumped" busses. DDR memory has the same problem when compared to SDR memory, but nowhere near the degree of RAMBUS memory. AMD seems to have found a balance. For the future we can only hope that QDR will not have the same latency issues. However, I share a similar concern and I wonder why AMD didn't go to 400MHz DDR. You wouldn't have the latency of the Intel platform, but you'd have great throughput. I can only speculate that because AMD runs at a much lower clock rate, the extra bus speed would do little for real world performance.
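The throughput-versus-latency tradeoff in the comment above can be captured with the usual first-order model, transfer time = latency + size/bandwidth: a high-latency, high-bandwidth bus loses on small transfers and wins on large ones. The numbers below are hypothetical, chosen only to illustrate the crossover:

```python
def transfer_ns(size_bytes, latency_ns, bw_bytes_per_ns):
    """First-order transfer time: fixed latency plus size over bandwidth.
    1 GB/s is 1 byte/ns, so bandwidth is given in bytes per nanosecond."""
    return latency_ns + size_bytes / bw_bytes_per_ns

# Hypothetical buses: "wide" has more bandwidth but higher latency.
wide = lambda n: transfer_ns(n, latency_ns=120, bw_bytes_per_ns=4.2)
narrow = lambda n: transfer_ns(n, latency_ns=60, bw_bytes_per_ns=2.1)

print(wide(64) > narrow(64))      # True: small transfer, latency dominates
print(wide(4096) < narrow(4096))  # True: large transfer, bandwidth dominates
```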
  • by DeafDumbBlind (264205) on Thursday August 29, 2002 @10:23AM (#4163628)
    Might as well wait for the Hammer.
    The built in memory controller should do wonders for latency. Of course the 64 bit stuff will be a nice future feature to have.

    • by ackthpt (218170) on Thursday August 29, 2002 @10:29AM (#4163682) Homepage Journal
      Might as well wait for the Hammer.

      This, of course, is the risk of having a really sexy new item coming down the pipeline. At some point those Xtreme gamers/programmers/modelers/or just people who like to have the latest and most expensive thing on their desk to play solitaire with, begin to hold back on purchases and wait for that new item.

      I'm on the fence, but at the rate I've actually done anything to build my next system (hey, I did buy a cabinet! :-) the wait for the Hammer shouldn't be much longer (why does this name summon the memory of the artwork inside PF:The Wall, hmm, something there, but what...)

      Fortunately for AMD, not everyone is holding off, and all these really spifftacular improvements of what will eventually be $60 processors in a couple years are pretty damn exciting.

      • I'm on the fence, but at the rate I've actually done anything to build my next system (hey, I did buy a cabinet! :-) the wait for the Hammer shouldn't be much longer (why does this name summon the memory of the artwork inside PF:The Wall, hmm, something there, but what...)

        I'm waiting too, but that's because I don't think I'll have to upgrade for at least a year.

        Remember how long it took for Athlon DDR chipsets to stabilize, and for the prices to drop. I'm not expecting a reliable, affordable Hammer/Opteron system until at least mid-2003.

        Now's as good a time to buy as any (just not for the top-of-the-line models, with the speed war going on).
    • Might as well wait for the Hammer. The built in memory controller should do wonders for latency. Of course the 64 bit stuff will be a nice future feature to have.

      I thought for a while that I'd do that, but I started getting tired of 12-hour SVCD encoding jobs (which is what you get with a 1.0-GHz Athlon when you use TMPGEnc at some of its highest-quality settings). Besides, a single-processor Hammer setup looked like it was going to be more expensive than the dual-processor Athlon MP that I just put together. With 12-hour jobs cut down to just 3 hours, life is good. :-)

      (Whether a single Hammer would be faster than a dual Athlon MP is still an open question, especially with 32-bit apps. I've heard Hammer is supposed to be 10-25% faster at the same clock speed when running 32-bit apps, but one processor would still need to be damn fast (probably 3500+ or better) to keep up with a pair of Athlon MP 2100+s.)

    • Might as well wait for the Hammer. The built in memory controller should do wonders for latency. Of course the 64 bit stuff will be a nice future feature to have.

      Do you remember the Osborne computer? [digitalcentury.com] It was a very popular CP/M computer. Osborne Computer grew like crazy. Osborne announced an "Osborne II" computer, and IIRC, sales dried up as everyone waited with bated breath for the new model. Because revenue shrank, Osborne couldn't afford to finish development of the new model. Then the IBM PC came out, and his target market disappeared.

      If too many people hold off purchasing an AMD now, because they want to wait for the newest, whiz-bang thing, then the possibility exists that AMD will not be able to finance the development of the K8 on time, or even that AMD will go bust.

    • Might as well wait for the Hammer. The built in memory controller should do wonders for latency. Of course the 64 bit stuff will be a nice future feature to have.

      Let me share a budgeting lesson from the game of Monopoly, that I feel is on-topic here.

      In Monopoly, practically everyone wants to acquire Park Place and Boardwalk. Sure, when your rivals hit those properties, once they have hotels, they have to dig deep. But those properties are expensive to buy, and expensive to develop. Whereas Baltic Avenue, and its sibling, are very cheap. Developing houses and hotels on those properties is also very cheap. And yet, when you do the math, the return on investment on those two properties is the best on the board -- better than Boardwalk.

      The new machine, the cutting edge machine? You know you have to pay a premium for it. You know its value will depreciate very quickly. Its value will depreciate much more quickly than a computer built around a more mature technology.

      Sure, I figure buying the latest, whiz-bang thing at premium prices, so you can have bragging rights, is a fine strategy, if you are rolling in dough. If I won the lottery, I would go right out, and buy a premium machine, with lots of memory, and 512MB of DDR.

      But if you are on a budget, I don't think it is a good strategy. A lot of my baby-boomer pals hold off on buying a new computer, until they can pay for a premium, latest whiz-bang thing. When I ask them why, they say, "well, I want it to last me for five years or more. So I have to get a really powerful machine, so I won't be left too far behind."

      I figure that, if you are on a budget, you should buy the technology that is a year or three behind technology's cutting edge. It is a lot more affordable. So, you can afford to replace it, or upgrade it, more frequently. I figure, on average, my computer is more up to date if I upgrade it every two years, but only to the level of last year.

      My last CPU was a K6-2 500MHz. I paid about $75 CAD (about $50 USD) for it. I used it for about two years. Last week I bought a Duron 1100 for $75 CAD, and an ECS K7A motherboard, for another $75 CAD. Next year maybe I will replace my old PC133 RAM with DDR. Maybe I will get an Athlon 1400, when its price drops to $75 CAD.

      No, it doesn't give me bragging rights. But, on average, I figure I am farther ahead than if I blew all my dough on a premium machine I expected would last me five years.

      My buddies who buy that latest whiz-bang thing are happy with their bragging rights for the first six months, and then, if they follow their budget, they have to sit through 54 months of feeling their computer was an expensive lemon.

      • > In Monopoly, practically everyone wants to acquire Park Place and
        > Boardwalk. Sure, when your rivals hit those properties, once they
        > have hotels, they have to dig deep. But Those properties are
        > expensive to buy, and expensive to develop. Whereas Baltic Avenue,
        > and its sibling, are very cheap. Developing houses and hotels on
        > those properties is also very cheap. And yet, when you do the math,
        > the return on investment on those two properties is the best on the
        > board -- better than Boardwalk.

        Okay, I'm going to follow your line of reasoning way off
        topic here, but rest assured I will bring it back full circle
        and return to the topic at hand eventually...

        Both are poor investments, if they are the only thing you develop.
        Boardwalk takes too long to develop and doesn't get hit with any
        frequency, and Baltic and Mediterranean with hotels can get hit
        three times and not pay you enough to land once on any serious
        developed property. Sure, they pay for _themselves_, but you
        can't build a game strategy around that, unless you plan to forego
        dice and land on your own property every time.

        The light blue, orange, and yellow properties are the ones you want.
        The orange ones (New York and so on) are best. Build them to three
        houses as quickly as you can for optimal return on investment. When
        you can afford it, push them on to hotels for the extra income. The
        yellow (Marvin Gardens and whatnot) are a bit harder to get
        developed, and the light blue (Connecticut et cetera) max out too
        low, but they still give a good return on your investment. If you
        can get both these sets, build the light blue ones up first, and
        pray the orange ones don't get built up by someone else before you
        can get serious with the yellow ones, because a couple of lands
        on St. James will wipe out your chances of building up any
        investment capital. In a pinch, you can substitute the magenta
        or red ones, but it's an uphill battle, because the magenta (St. Charles &c) cost more than the light blue to develop and don't get
        hit enough to pay off like the orange, and the red ones compare unfavourably with the orange and yellow on the same grounds. I
        should mention that the light blue set by itself is inadequate
        to allow you to compete in the game. However, it can be good
        enough to let you get another set developed that you otherwise
        could not (say, the red ones).

        In the event _two_ powers emerge with sustaining levels of hotel
        income, then the properties on the fourth side of the board (green
        and dark blue) become important.

        If you play with an open market (trades and sales among players
        permitted), it is _always_ a good investment to purchase any
        bank-owned property you land on except the utilities, because
        developable property is worth more than the bank price. (Usually
        substantially more.) If you play with a closed market, you have
        to be more selective in the early game, so you can afford to get
        one complete set. Also: resist the urge to believe that the
        rents on undeveloped properties (excepting railroads when there
        are no serious (>Baltic&Med) developed properties yet) can have
        an impact on the outcome of the game; it ain't so.

        > The new machine, the cutting edge machine? You know you
        > have to pay a premium for it.
        This is true.

        > You know its value will depreciate very quickly.
        While also true, this statement is meaningless. _All_ hardware
        depreciates rapidly, whether it was top-of-the-line or bargain
        basement or used. Today's $200 system will be worth approximately
        nothing in sixteen months.

        > Its value will depreciate much more quickly than a
        > computer built around a more mature technology.
        Only because it has further down to go. What is more interesting
        is not the resale value but the replacement value and the cost
        of maintaining it at a usable level.

        > well, I want it to last me for five years or more. So I have to
        > get a really powerful machine, so I won't be left too far behind

        There is merit in this approach. Now, "really powerful" may be
        overkill, but you do want to get a system that will be able to
        be maintained with affordable upgrades for several years, for two
        reasons. First, it means you can get comfortable with the system
        and finally get to the point after about two years where you
        _don't_ discover every _week_ something you hadn't got around to
        installing yet that you need (PAIN), and second because upgrading
        is a good deal cheaper than replacing, so the costs balance out
        if you strike a decent happy medium.

        Now, it's possible to go too far. A Boardwalk system is not
        for the average user. It's price-prohibitive. But it's possible
        to get a system that can be developed (upgraded) to a decent and
        reasonable level, like New York and St. James, for a pretty
        reasonable price, and it will last you a lot longer than a
        Baltic system. My computer right now is over four and a half
        years old (well, most of it is; some components are newer).
        It will be at _least_ another year, maybe two, before I have
        to replace the system. (Some components I'll be able to keep,
        of course, but I'm talking motherboard and CPU at least, and
        probably some other major parts too at that point.) If you
        bear with me, the economics of this will bear me out.

        Discounting the monitor, which is really a subject for another
        thread, I paid $1550 for this system new, in 1998. It's a
        PentiumII/233 system, but the motherboard was a nicer one with
        lots of expandability. I could have got a system for around
        $1200 at the time, but it would have been much lower end, not
        nearly as upgradeable. For example, when RAM prices dropped,
        I eventually beefed up my system to 512MB of RAM. If I'd bought
        a $1200 system, it would have maxed out lower than that, and I'd
        have replaced it by now; instead of spending $80 on RAM a few
        months back, I'd have probably spent $400 on a new system. PLUS
        I'd have had the hassle of losing my nice, comfortable system with
        everything I use already installed and going back to an out-of-the-
        box system with virtually nothing installed, at least two years
        sooner than necessary. Compare:
        $1200 + $400 + PAIN = $1600 + PAIN
        $1550 + $80 + comfort = $1630 + comfort
        In addition, I had a somewhat better system ad interim. My
        conclusion: Yeah, Boardwalk systems are for people rolling in
        dough, but Baltic systems are for people who enjoy pain. Buy St.
        James systems (or at least Connecticut systems) and stay sane.

        What this means is, you don't have to wait until you can afford
        a Hammer system. All you have to do is wait until the news of
        Hammer systems hitting the market drives the prices on moderate
        Athlon XP systems through the floor, and buy one of those (St.
        James) or a good quality non-bargain-basement Duron system
        (Connecticut). If you feel guilty about saving money at the
        expense of a struggling computer industry, make a donation to
        your favourite OSS vendor or something.

        Disclaimer:
        People who use a lot of CPU power may find that things
        break down differently. Most of what I do leaves the CPU
        sitting idle most of the time, so I find that things like
        RAM and drive space (I'm a multibooter (six OS installations
        on the same hardware and counting...), which uses up drive
        space several times as fast) are more important. If you do
        a lot of raytracing or calculate the factorials of large
        primes, you'll have to upgrade the CPU, and that costs more.
  • by Anonymous Coward
    Sorry, my ears are now tuned to the Hammer frequency.


    Subject, of course, to pricing ;-)

    • by scotch (102596) on Thursday August 29, 2002 @10:42AM (#4163753) Homepage
      Did I hear you right? Is it almost Hammer Time?

      • In my crystal ball I can see an email inviting Intel marketing people to an upcoming strategy meeting.

        The subject line is becoming clear... it says... Stop Hammer Time!
      • Please never utter those words again. I actually had to live through that decade -- once was enough. :P
      • It doesn't matter because you can't touch that!
  • I've been curious, I have an old standard Athlon motherboard (Socket A), and I wasn't sure if it would work with a new Athlon XP processor, or if I would have to upgrade the motherboard too. I thought I remembered reading an earlier slashdot article a while back about motherboard incompatibility, but I wasn't sure. I would just like to know so I can budget a new motherboard, if necessary, in my computer upgrade in a few months.

    Any help would be much appreciated, thanks!
  • A 333MHz FSB is all well and good, but until AMD actually delivers the XP 2400+ and XP 2600+ that they supposedly released a week ago, I'm going to take this sort of announcement with a grain of salt.
  • by KelsoLundeen (454249) on Thursday August 29, 2002 @10:45AM (#4163771)
    Okay, let me preface this by saying I'm genuinely curious about the answer. So I'm not trying to sow the troll seeds here.

    That said, I'm curious about what people are using these super-fast processors for. Apart from upgrading so that you can play the imminent Unreal Tournament 2003 demo ("Only two weeks away!") and hoping to get the jump on a Doom 3 system -- what exactly are people doing with their super-high powered rigs?

    I just upgraded to an Athlon XP 2000+ (from a PIII), and while I sorta dug the impressive 3DMark2001SE scores (over 10,000 with a Ti4600), I'm still not exactly sure what I need all this speed for.

    For gaming, yes.

    But for what else? MS Word still opens in a split-second.

    OpenOffice 1.01 still opens pretty quickly.

    IE, Netscape, and Opera still open in a split-second.

    And, yes, now I run Quake3 with all the settings cranked.

    But this sorta "gee whiz, that's cool" wore off in a couple of days.

    Now I'm left with a pretty powerful system, but I'm at a loss as to what it has actually improved. Maybe if I were doing a lot of coding, then the compilation speeds would jump significantly, but I guess since my main coding right now is writing a fairly small (only around 6,500 lines) text-adventure in INFORM, I haven't really seen the jump in compilation speeds I'd see if I were compiling hundreds of thousands of lines of code ...

    So, I'm curious. I haven't tried NWN yet, so maybe that's the sort of high-powered cybercrank I need to get myself hooked on the slickmercury speeds of AXP 2000+ and Ti4600.

    There's always the new Neocron (sp?) beta 4 out ...

    Anyone?

    • I'm still not exactly sure what I need all this speed for.

      You probably don't ;) If you have an Athlon 2000+ you should be all set for some time... Even games do not require a processor like that, as the graphics card plays a much more important role in game performance. I have an Athlon 1200 and a GeForce2 GTS, and I have yet to play a game that does not run beautifully.

      Processor speeds are most important if you do a lot of heavy number crunching, such as video encoding, etc... or if you just want to kick a$$ on SETI@home
    • by VAXman (96870) on Thursday August 29, 2002 @10:55AM (#4163849)
      Mainly to develop faster microprocessors.
    • by edremy (36408) on Thursday August 29, 2002 @10:56AM (#4163859) Journal

      I'm not normal, but here's a few from my background

      Video editing. Nothing out there is remotely fast enough for what I want to do, and what I want to do is pretty limited.

      Computational chemistry. Nothing out there (or scheduled for the next ~100 years) is fast enough to do the simulations people are really interested in.

      License key cracking for those companies who decide to use encryption. :^)

      • I second video editing. With high speed disk drives (10-15K RPM, 3.5ms access time, etc.) my bottleneck is always Premiere eating up my CPU. I do not do video editing professionally, just for fun. I want to render in real time, but with a lot of filters and/or effects a render actually takes over 10 times realtime.
    • I'm still not exactly sure what I need all this speed for.
      For finding things [mersenne.org]
    • Why running windows of course!

      heh, but in all seriousness, I use most of my speedy athlon machines for running algorithm confirmation tests, and silly things like mersenne checking.
    • That said, I'm curious about what people are using these super-fast processors for.

      I agree; I don't see the point. I just upgraded to an Athlon 2100+ from a Celeron 500 and the difference is minimal. Kernels compile in a flash, but other than that, no great improvements. Some lags in a few applications are gone, which is nice.

      What I really want is a faster hard drive--the only real wait on my system is for large applications starting up as they come off the hard drive. Opening Openoffice takes about 10 seconds; closing it and opening again (from cache) takes maybe two.

      I'm thinking about setting up my /usr partition as a two-disk RAID-0 [tldp.org]. The throughput should double (small test partition confirms). Sure the probability of failure doubles too, but my /usr is all backed up by my local Debian archive anyway. :-)
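      As a rough sketch of that tradeoff: if each disk fails independently with probability p over some period, a two-disk RAID-0 array is lost as soon as either disk fails, so the failure probability roughly doubles. The 3% per-disk figure below is made up purely for illustration:

```python
# Failure probability of an n-disk RAID-0 array, assuming each disk
# fails independently with probability p over the same period.
def raid0_failure_probability(p: float, disks: int = 2) -> float:
    # The array survives only if every single disk survives.
    return 1 - (1 - p) ** disks

p = 0.03  # hypothetical per-disk failure rate, for illustration only
print(raid0_failure_probability(p))      # a shade under 2 * p
print(raid0_failure_probability(p) / p)  # ratio is ~2, minus the overlap term
```

      Strictly speaking it's 2p - p^2 rather than 2p, but for realistic per-disk rates the difference is negligible.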

    • Most others already said it - video editing, video encoding, high end math. If your compile environment needs that kind of horsepower, then you're probably using bad tools or doing something horribly wrong in your code (duh, there are exceptions, but for 95% of the coding out there?)

      In the business world there are often needs for more CPU - although more often than not the issue is I/O throughput rather than CPU.

      As for NWN - shrug... I'm running it on an Athlon 750 w/ a 32 MB GF2. Am I missing out on some of the eye candy? Probably, but it still runs just fine.

      I do plan to buy a new system in the near future though - but I'm playing the waiting game right now since I don't need a new one yet and some of the parts I want (or would like) aren't available yet. Mainly I'm waiting on the NV30 to be released. A 333 MHz bus Athlon would be nice (although there's a slim chance of returning to a P4 -- I'm still vaguely looking for someone doing benches comparing an RDRAM and DDR setup) and Serial ATA would be nice. USB2.0 and Firewire are everywhere now, and all the new MBs for Athlons will do thermal die checks.

      Oh, why do I want a new system? UT2k3 and Doom3 of course. Duh.
    • Porno. (Score:3, Funny)

      by autopr0n (534291)
      The custom viewer app I wrote to moderate Autopr0n.com is still pretty slow on my Duron 1.2GHz. It basically renders all the .jpgs on a page into a back buffer so you can flip through them quickly. It can take up to 10 seconds to decompress all the pics.

      So, it would be nice to get as fast a computer as I can get my hands on :)
    • I know you are going to get slammed by the 'Bill Gates said 640k was enough' crowd so I thought you would like to know that Doom 3 isn't going to require a Freon cooled GeForce 3000 XP+ SupraGamer to be playable ...

      Doom III playable on current hardware says John Carmack in Interview with GameSpy [gamespy.com]

      -DR

    • Mozilla

      yes I'm serious. At times it's still the only application that can actually make Winamp skip. (That's with an athlon 1700)
      • funny, I run it on a P2-300 and don't have any such problems, maybe something is wrong with your soundcard and/or motherboard (many Via chipsets have pci bugs that don't agree with many soundcards).
        • Ever view pages with heavy Chinese or Japanese? Win2k reports Mozilla will spike up to 99% CPU time on some Japanese pages, and essentially stay there until you get off the page. I think Mozilla 1.1 fixed all that, because I haven't had a problem with that for a while now. I still had to put up with that behavior for months.
    • For most users, the answer is no, you don't need any faster. Today's 'obsolete' chips (like my AMD XP 1500+) are much faster than what I need. For a few power users (scientific stuff, large app compiling, rendering, SETI...) these will offer some benefit. For a few 'bleeding edge' enthusiasts, they are desirable. For the rich, they may as well have them. Everyone else - 800MHz is probably enough; it's about the bare minimum for decent DVD playback (you can scrape by on a bit less but that's a reasonable minimum.)


      Remember also that today's 'normal user' chips are yesterday's fastest - if chip makers thought the way you did, then we might have stopped with only a few hundred MHz (and of course 640K RAM).


      The other benefit is in coding - today's languages are designed with bloat in mind. That's why C++ is better than C (war! war!) - C is more efficient, faster, and therefore better... but C++ on a faster processor is just as fast (or faster), who cares about efficiency, and easier to program and maintain (the whole purpose of OO programming) - the point is, let the RAM and processor take care of the dirty work, and give us the apps to play with. The bloat simply doesn't matter any more. Obviously this is oversimplifying somewhat (peace! peace!) but the principles hold true... just think, if processors were fast enough, and computers were powerful enough, then programming languages could be so powerful (read - idiot friendly) that all you would need to do is fire up your 'Microsoft NaturalLanguage++(h4X0r 3d¦7¦0n)' and type in "/\/\4k3 4 l33t g4/\/\3 4 8¦7 l¦k3 'quake 8'" (make a l33t game a bit like Quake 8 - I'm not very fluent in h4x0r I'm afraid) and it would do all the dirty work for you.


      In the short term it's nice when things do start a little bit faster, but for most intents and purposes it doesn't matter. In the long term, when your house central server is automagically ordering your groceries, booking the car in to get a service, paying your bills, playing you two different DVDs in different rooms and some elevator music in the hall, running SETI and PersonalGenomeDecoder (OK, I made that one up) in the background, and the kids are deathmatching with a video-link up to their friends (who live in big plastic bubbles on the moon), and sending all this info and more back to microsoft; only then will you appreciate what these slight incremental power-ups might do in the long term.

    • That said, I'm curious about what people are using these super-fast processors for.

      Well, I was *going* to use it to try materials/chemistry simulations based on brute force approximate solutions of Schrodinger's Equation...

      But, in practice, it's been for gaming. Tribes 2 ran like a slide show until I upgraded both my processor and my video card. It's also nice being able to play Amiga games under WinUAE without the sound skipping.
    • Someone already mentioned video editing and DVD mastering. This is job #1 for my fastest single-CPU setup, which is always OC'ed and which cannot possibly get fast enough. I'm a small operation with only a few machines. It is no fun to have to stop working and wait hours for a chunk of video to encode.

      ALSO, there's a lot of frame-by-frame work going on here with things like Photoshop/GIMP (yup, use both) and that's where my current dual Xeon setup comes in. Here again -- for some of the more complex filters or transformations (i.e. perspective transformations and so on) on many successive images or frames... you run out of CPU very quickly. I/O is actually somewhat easy right now... a couple of mid-range drives in a RAID can easily deliver data as fast as I need it or write it as fast as I need to store it. I'm always waiting on CPU.
    • You're thinking too much along the lines of a home-user running Windows. There's a very real presence of Intel (and unfortunately, not so much AMD) CPUs in low cost workstations for business use. Linux is certainly part of the picture as well, as almost all EDA vendors have or are releasing Linux versions of their tools.

      What do I use my 80x86 CPU for? Well, I work in a hardware engineering group which does ASIC and FPGA design. We have a CPU farm of about 30 machines with Intel P4s running RedHat 7.2. (May see AMD Hammer chips in the future - we are excited about this possibility). We run everything from RTL and gate level VHDL and Verilog simulations, to chip synthesis, to test insertion and fault grading simulations. One of the last chips I worked on required such a large set of ATPG vectors (and the design was just so huge), that it required breaking the test vectors into ten groups, and even then, just one file (10% of the total) required an 8GB Sun box to convert the vectors to the fab's tester format, and the gate-level simulation took 10 days. PER FILE. Yes, that was a total of 100 CPU days of simulation time for one chip just for ATPG vectors. And these were running on 1.8GHz Pentium 4s with 3GB of RAM. Not surprisingly, leading edge tools in this field are starting to look at distributed simulation over high-speed backplanes (read: not ethernet).

      Tomorrow's technology is designed and verified on today's hardware. Every generational step in every sector of industry leapfrogs like this. You can't design next year's high performance video card using 80286s. Definitely for ASIC/FPGA design, there isn't a system fast enough to finish a simulation as quickly as we (the engineers) would like. Being able to run more simulations overall means a better design out the door. More stuff caught up front. The faster a simulation runs, the quicker it will finish, meaning we can get by with fewer high-priced licenses for our EDA tools. (Licenses are usually in the tens or hundreds of thousands of dollars EACH).

      You can never run enough tests before a product is done. How many tests/simulations are run, depends on how long they take to run. Give me the fastest CPU you got, decked out with the most physical RAM it can handle. (Sadly, the 32bit limit on current 80x86 platforms is hurting us badly - go x86-64! go AMD! Capture the workstation market!)
    • Again, video.

      My computer is admittedly aging. I have a Matrox Marvel G200 video card and I record TV shows direct to disk, edit out the commercials and then re-encode them to mpeg2 or divx to burn on CD. At this moment I've got almost all of Farscape. Once upon a time I recorded all of Babylon 5 to videotape. Call me an eccentric collector of sci-fi. ;)

      These newer Athlons/P4's should be able to record straight to MPEG4/DivX without an intermediate step which would seriously reduce the disk space and time required. My current computer takes about a day to encode a 1-hour show to mpeg2.

      However I'm waiting for the dual Opteron systems to come out. That way I can also use my computer while it's recording/encoding.

      Yeah, I could get a PVR, but then I would lose the ability to edit out commercials, and do you know any PVR's which use MPEG4 and can transfer shows off so I can burn them on CD?

      I can't stand to watch regular TV anymore. About 3 seconds into the commercial I get bored and want to go do something else. Besides I don't like them telling me when I have to watch their show. I have better things to do. I'll watch it on my own time thank you. ;)

      -- Bob

    • Speech recognition?

      Not personally involved, but close to a year ago I seem to remember reading that good speech recognition with the fastest Pentia (Don't remember if it was PIII or P4.) and K7 was still "slightly slower than realtime."

      Seems to me that that situation may have changed, that we may now have faster-than-realtime speech recognition. Maybe now the system can figure out what I said, and then have time to do it, too.
    • Try Seti@Home (Score:2, Insightful)

      by DrDebug (10230)
      Ever get addicted to looking for little green men? Try Seti@Home. Once you are hooked, you will want to process a workunit as fast as you can!

      Or, if you want to aid humanity another way, try Folding@Home, where they 'fold' proteins. There are a couple of zillion ways to fold a protein, and figuring them out sooner rather than later will definitely aid people.

      Faster CPUs can only help the cause.

    • video editing, big number crunching, keeping the house warm.
    • Easy. Minimal system requirements for Netscape 7
    • OpenOffice 1.01 still opens pretty quickly.
      OpenOffice.org 1.0.1 compiles in 6ish hours for me. Hence, I need more speed ;)
  • Who cares about the "2100+" or "2900+"? I've been holding off upgrading for months waiting for AMD to release Hammer.

    :( *sniff* so close, yet so far away... Hammer..

  • Am I the only one who really, really dislikes the naming scheme of these processors? Although I know that clock speed does not always reflect performance, I would still rather see CPU names that include it.
    • Why? They tell you the speed of the processor. Why does the name have to tell you the MHz? Do you go around telling people what kind of car you drive, and tacking the horsepower on to the end of that? "Yeah, that's a '98 Chevy Cavalier 2.2L 110 HP".
      • Good point. Especially when even for cars, one would have to ask: At what RPM is the horsepower obtained? And what's the torque curve look like? And what's the curb weight of the car? etc.

        Raw statistics are meaningless without an understanding of the entire context. Further, they actually enter the category of outright misleading.

        And for the record, a moderately good car with a great driver is far more impressive than a great car with a useless driver... :)

  • by frankie (91710) on Thursday August 29, 2002 @11:04AM (#4163916) Journal
    333FSB will finally allow nForce2 (and other DDR400 mobos) to show some serious advantage, since their speed was limited by Athlon's FSB [google.com].
  • Intel, Athlon, Intel, Athlon, hmm. Intel, Athlon........ I got it;

    Sun UltraSPARC
  • Barton? (Score:3, Insightful)

    by AsnFkr (545033) on Thursday August 29, 2002 @11:26AM (#4164139) Homepage Journal
    Where is the Barton core? If AMD would drop in the 512K cache along with the jump in front side bus, they would really be giving Intel a run for the $/performance ratio. Not that it really matters to me; I, like the rest of the /. crowd, am sitting and waiting for the Opterons to hit the market. Opteron + Serial ATA = fast DivX encoding.
    • Re:Barton? (Score:3, Insightful)

      by MtViewGuy (197597)
      I believe that Intel is scared s***less about the Barton core Athlons due late this fall.

      Remember, the Athlon XP 2600+ already can keep up with the Pentium 4 2.53 GHz part on the majority of applications--and that's with the Athlon hamstrung by DDR266 DDR-SDRAM and only 256 KB of L2 cache on the CPU die! Imagine what the Barton core Athlon using DDR333 DDR-SDRAM and 512 KB of L2 cache on the CPU die can do.

  • Based on recent events, I predict a follow-up Slashdot submission detailing the first review of said AMD processor within the next 6 hours.

    Said posting will be full of rants regarding the fact that..........
  • by AppyPappy (64817) on Thursday August 29, 2002 @12:07PM (#4164465)
    Play Doom While Heating Your Room
  • Ok, I've been a long time intel user (my last AMD CPU was IIRC a 486/40) not really because I think Intel CPUs are superior (they are not) but because I think Intel -chipsets- are much better supported and stable.

    It is nearly time to upgrade my aging dual p3-450 (doom3 is coming out :) and I wouldn't mind getting an AMD and use the extra saved money for the (expensive) DX9 video card (ATI 9700 or NV30 if it comes out soon).

    Soooo, the million $ question is: is there any motherboard/chipset out for AMD XP CPUs that is *STABLE* with a geforce4/amd9700 class video card and a SB live/audigy sound card running under 2K and Linux?

    By 'stable' I mean 'it never, ever, ever, ever crashes or has compatibility issues', basically like my ASUS P2B-D (BX chipset) that in the several years of service has always been stable like a rock with my SBLive and Matrox G400Max both under linux and 2K (save having to run in single CPU mode under 2K due to the crappy SB drivers that don't like SMP systems).

    I am not interested in answers like 'yeah, it's stable, but every time I quit a game I have to reboot before I start another', or 'yeah, it's stable, I use it as my fileserver 24/7', or 'yeah, after I put the voltage to x, the FSB multiplier to y, bought power supply z... it's sort of stable'.

    I'm buying this for games, not for server-related things where the hardware compatibility is not stressed at all. And I want something that 'just works', not having to always have to be on the 'download hardware drivers' treadmill every time something comes out...

    • Actually though, if you compare the price of an AMD 2600+ to the Intel P4 2.53GHz (which have virtually identical CPU power) they are in fact exactly the same price!

      So if you value your Intel stability, you might as well just stick with intel...
    • I just gave up on AMDs.
      I suggest you consider these points.
      1) They run hot. The AMD chip can take it, but the other chips, especially the bridge, can not.
      2) You will need to either water cool, or deal with the sound of a blow dryer every time you turn it on.
      3) As soon as most tech supports find out you have an AMD, that's the first thing they blame. That's not AMD's fault, but it's what happens in the real world, which is the one you have to deal with.

      My Athlon 1.4 chip just killed a mobo; I believe it's because the fan blows the heat from the CPU across the bridge.

      When Intel drops their prices next month, I'm going to buy an Intel chip.
    • The key to system stability is a good motherboard and high quality RAM. A little more spent on these components now means a lot less headaches down the road. Get the ASUS A7V333 with the VIA KT333 chipset. It's a great board and I've always used ASUS boards without any problems. Avoid PC Chips and ECS (Elitegroup owns PC Chips, they're the SAME company!) brand boards, a quick search on Google groups will reveal that PC Chips is the industry leader - in the amount of RMA'd boards!

      Consider the Turtle Beach Santa Cruz [turtlebeach.com] instead of Creative Labs cards. While Creative has fixed most of their issues with VIA chipset motherboards a long time ago, there are still some people that have issues. The Santa Cruz is less expensive (I just got one off eBay for $55 + $7 shipping) and a great card if you don't need the optical in-outs of the Audigy. Plus, you don't need the Audigy's FireWire, the ASUS A7V333 already comes with FireWire. The audio quality of the Santa Cruz is outstanding and it can even record itself. (Just don't tell Microsoft, they'll probably not allow it to be used with WiMP 9.)

      As for the video card, the Radeon 9700 has all the markings of a true speed demon, but I'd wait for Nvidia's answer to it. As a general rule, (and speaking as an 8500 owner myself) ATI cards have certain annoying-yet-tolerable "issues" with various games and ATI usually gets sidetracked writing drivers for a new product instead of fixing bugs. ATI does have the best DVD player support and the TV-out quality is unmatched, so if you prefer movies over games, ATI is the way to go.

    • If you want something that "just works", your best bet is to buy old. Seriously, EVERY chipset out there, regardless if it's from Intel, VIA or whoever else requires driver downloads to work properly.

      However, that being said, the best chipset that I've encountered in this regard, bar none, is the nVidia nForce chipset for the Athlon. Intel's chipsets have had more than their share of problems (the PIIX4 south bridge had TERRIBLE drivers when it first came out, and the i8xx series was piss-poor for its first 4 months of availability). Intel does eventually get their drivers right, but it takes them a while. VIA takes a while and still never gets them quite right, and ALi is worse still. SiS uses very few drivers, mostly just stock Microsoft stuff, so it mostly works, but they still have the odd problem.

      As for nVidia though, worked great with only a single driver-pack install. Easiest install of any current hardware I've ever done... at least for Win2K.

      For Linux, things could be a bit better. The motherboard/IDE controller work perfectly with just the generic IDE drivers and offer quite respectable performance as long as you use hdparm to turn on DMA. The built-in NIC was rather problematic on the first revision of the drivers, and it took until about 3 weeks ago until they finally released the second revision; however, now it seems to be working fine. Built-in audio worked fine from the beginning, but only with a VERY limited portion of the hardware's feature set. The only thing that worked great right from the get-go was the integrated video, which used just the regular nVidia XF86 driver + GLX driver.

      Now, of course, simply having a good motherboard is NOT going to give you a system that won't crash. You need memory that's working properly and a good quality power supply as well. Be sure to always run MemTest86 (or a similar memory tester) on ANY new memory for at least 8 hours before deciding that it works. The power supply is a lot harder to test, and usually your best bet is just to stick with a good name brand, though even that isn't foolproof. Antec seems like a pretty good bet for good quality and widely available power supplies. Note that these two bits of advice apply equally for both AMD and Intel based systems.
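      On the hdparm step the grandparent mentions: DMA is often off by default with the generic IDE driver, and turning it on is a couple of commands. A minimal sketch (run as root; /dev/hda is just an example device name, substitute your own drive):

```shell
# Show current drive settings (look for using_dma and 32-bit I/O).
hdparm /dev/hda

# Enable DMA (-d1) and 32-bit I/O support (-c1).
hdparm -d1 -c1 /dev/hda

# Time cached (-T) and buffered (-t) reads to confirm the gain.
hdparm -tT /dev/hda
```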
  • and will work at 2.17GHz

    If it runs at 2.17 GHz, then why the hell are they marketing it as 2.7 GHz? Being an EE, I am well aware of the fact that different architectures -- like the P4's NetBurst vs the Athlon -- have different per-clock-cycle performance aspects. Yes, I also know that the customer just sees numbers and thinks 'gee, P4s are running at 2.7 GHz now while Athlons are at 2.17 GHz. P4s must be better then.' But I don't see it as ethical to get around this assumed ignorance by telling what amounts to an outright lie. AMD should instead win customers from Intel by convincing people that their processors are better even at lower clock speeds (which they are, really). If people started to think that AMDs were better at lower clock speeds, AMD's popularity would explode.

    I am not being an AMD basher here. I have always been an AMD user, and continue to be one to this day. And contrary to popular belief, there are plenty of computer stores out there that label 2700+ systems as 2700 MHz. Even then, AMD knows damn well that most users think 2700+ means 2700 MHz, and that they don't realize that the s/MHz/+/; is just AMD's way of obscuring the misleading marketing. Fact is, the stores and AMD *are* marketing the systems as 2700 MHz, which they are not.
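    To put rough numbers on that gap: only the 2.17 GHz figure for the 2700+ comes from the article itself; the clocks listed for the 2400+ and 2600+ are the commonly reported values and are included here purely as approximations:

```python
# Model rating vs. actual clock for recent Athlon XP parts.
# Only the 2700+ figure comes from the article; the other clocks are
# commonly reported values, included here for illustration only.
parts_ghz = {
    2400: 2.00,
    2600: 2.13,
    2700: 2.17,
}

for rating, ghz in parts_ghz.items():
    gap_mhz = rating - ghz * 1000
    print(f"Athlon XP {rating}+ runs at {ghz:.2f} GHz, "
          f"{gap_mhz:.0f} MHz below the number in its name")
```

    Nothing deep here, but it makes the point concrete: under these figures, the '+' number overstates the clock by several hundred MHz.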
    • You said it, you're an EE. Most of AMD's customers aren't and they need to sell chips. Like it or not, Intel deliberately created a pipeline with too many stages so they could clock their chip high. People care about performance, AMD is telling people their chip performs like an Intel or better at some clock rate. Otherwise their customers wouldn't get it because they aren't EEs. You clearly know the real clock so there's no problem. Get over it, if AMD marketed like you suggest they might be out of business already.
    • AMD tried that, and their popularity dropped, profits dropped and they were barely able to sell their top-end chip for $100. Apple is still trying it, and still failing miserably (though Apple has the unfortunate downside of actually having very slow chips, regardless of whether you look at their clock speed).

      Face it, the few that know enough to bother looking beyond a single number know enough to realize that clock speed is a completely useless number.

      Personally I don't really care too much one way or the other. AMD replaced one totally meaningless number with another totally meaningless number. Ideally this sort of thing would just encourage people to actually look beyond the pretty number and try to figure out what it actually means, but both myself and AMD's marketing dept. are well aware that that sort of thing is WELL beyond what 99% of the buying public are going to do. So, model numbers it is, and they worked. AMD's profits and average selling price have increased a lot since the introduction of their model numbers, even now in a bit of a PC market downturn.
