AMD

AMD K7 550 Hands-on Preview

Kenn Hwang wrote in with a review of the new K7. Click below for a snippet; suffice it to say that these things /move/.
Winmark: In this particular synthetic test, the K7 continues to shine down on the Intel competition. Here, we see it leap ahead of both the Pentium III and the P3 Xeon by almost 25%. Note that as above, the hard drives differed from system to system, though the fact that the P3 systems were using dedicated 7,200RPM Fast/Wide SCSI drives doesn't seem to help their case much. Very impressive showing on the part of the K7.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    If I'm not mistaken, he had no choice about what hardware and software to use. I imagine it was quickly thrown together by AMD for testing purposes. You'll notice they only mentioned installing the benchmark software, nothing else.
  • by Anonymous Coward
    I have a hard time buying the FP numbers. Even on Thresh's page, they mention the benchmark program maybe having a problem with the K7. How in the hell could they add all the various improvements (pipelining, etc.), but still manage to make it WORSE than the generation before it?

    Chris, who forgot his password again.
  • by Anonymous Coward
    Ever seen the FPU marks for a 21164a with 2megs L3 vs 4megs L3? It's crazy..
    I think that the same kind of thing applies to the K7..

    The only way to tell would be to see a matrix multiply or FFT of various sizes (as the working set approaches L2, the performance would drop like a rock if I'm correct)..
    In which case, you simply don't use 512k L2.
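The working-set sweep this poster describes could be roughed out like this (a toy sketch in Python with made-up sizes; pure-Python overhead swamps real cache effects, so an actual test would use compiled code, but the methodology -- time a matrix multiply as the working set grows past L2 -- is the same):

```python
import time

def naive_matmul(a, b, n):
    """Multiply two n x n matrices stored as flat row-major lists."""
    c = [0.0] * (n * n)
    for i in range(n):
        for k in range(n):
            aik = a[i * n + k]
            for j in range(n):
                c[i * n + j] += aik * b[k * n + j]
    return c

def mflops_at(n):
    """Time one multiply; the working set is three n x n float matrices."""
    a = [1.0] * (n * n)
    b = [1.0] * (n * n)
    start = time.perf_counter()
    naive_matmul(a, b, n)
    elapsed = time.perf_counter() - start
    return (2.0 * n ** 3) / (elapsed * 1e6)  # ~2*n^3 flops per multiply

# Sweep sizes: on real hardware, the rate drops once 3*n*n*8 bytes
# no longer fits in the L2 cache.
for n in (16, 32, 64):
    rate = mflops_at(n)
```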
  • by Anonymous Coward
    For those of you clamoring for some REAL benchmarks, you're out of luck. This site is run by one of the best Quake players on the planet. It's not surprising that all of the benchmarks are geared towards gaming performance. This is exactly what most of his readers want/expect.

    Please take into account the intended audience before ripping on his content. Also be advised that this was pretty limited by what AMD would let him test with.

    Dan
  • by Anonymous Coward on Monday May 24, 1999 @06:17AM (#1881404)
    They have said publicly that they have no plans to put serial numbers on their chips. At least that's what their PR people have been saying all along. Who knows what they're actually thinking?
  • by Anonymous Coward on Monday May 24, 1999 @09:00AM (#1881405)
    Come on!

    Since this is a CPU we are talking about, why are they running tests which are more dependent on the system than the CPU? Those tests are mostly I/O bound. This CPU is supposed to have a kick-ass FPU, but nothing they do is really that FPU dependent (even Quake does a lot of int stuff).

    How about some real tests:

    SpecInt
    SpecFp
    FFTW compiled with standard egcs compiler.
    Hand tuned FFT benchmark from AMD.
    Linpack w/ stock egcs
    Any of the above with a tuned compiler (as long as they make it available)

    Also, how about some FPU-smashing real-world apps:
    MP3 encoding, JPEG encoding, various GIMP filters on various image sizes.
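For what it's worth, the heart of Linpack is basically a DAXPY loop; a minimal stand-in for the kind of FPU-bound kernel being asked for might look like this (a hypothetical toy, not any of the suites named above):

```python
import time

def daxpy(alpha, x, y):
    """y <- alpha*x + y, the kernel at the heart of Linpack."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def bench(n=100_000, reps=10):
    """Run the kernel repeatedly and report a rough MFLOPS figure
    (2 flops per element: one multiply, one add)."""
    x = [1.0] * n
    y = [2.0] * n
    start = time.perf_counter()
    for _ in range(reps):
        y = daxpy(0.5, x, y)
    elapsed = time.perf_counter() - start
    return (2.0 * n * reps) / (elapsed * 1e6)

mflops = bench()
```

A real measurement would of course use a compiled language so the FPU, not the interpreter, is the bottleneck.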
  • by Anonymous Coward on Monday May 24, 1999 @06:43AM (#1881406)
    No, they unfortunately don't.

    I have worked on UNIX boxes for years and regarded the furor over the chip IDs as really, really dumb.

    Here's why:

    SPARCs, ppcs, and other chips (let alone those big, ugly bipolar modules) always had IDs. They were expensive, it was very important to know if one failed which one it was, from which run, and so on. It was also important for service, asset control, and so on.

    Of course, these chips also allowed stuff like switching to service mode via a switch that connected to a trace on the chip, board NVRAM that would pull off the last part of any error condition that had killed the box (or was supposed to, anyway), and so on -- they weren't cheap and they were designed with that in mind. They were designed for very long life and the ID made tracking easy.

    When the Pentium ID thing came out, my first thought was "Good. Intel has finally decided to join the big boys and this will make asset tracking far easier." I was unimpressed, though, by the fact that this wasn't accompanied by specs that required network booting, enough NVRAM to bootstrap to that point, and other hallmarks of "real" systems. I was just flat surprised by the idea that they had to track this over the web via software rather than in the microcode. No, Intel still had no clue. And then the privacy nuts blew a gasket.

    Well, it wasn't the ID that was stupid, boys and girls -- it was the idea of tracking it with userland software. That's the bottom line. Take a look at this article (http://www.zdnet.com/zdnn/stories/comment/0,5859,2194863,00.html) for a good rundown of why it was just really stupid and a good example of how far Intel has yet to go.

    Want to know why I like the idea of Mips or ppc or microSPARC or ARM NCs on my networks? I know who they are and where they are, because I can track them at a low level and stuff like the firmware is a damned sight harder to fake without a huge amount of effort (out of proportion to the cost of a $600 NC). Theft is a big issue, so is tracking cost of support and allocating it to departments. IDs help me. IDs would also be good for theft issues with chips, which have become serious (as serious as RAM theft used to be).

    With appropriate systems engineering, I would prefer chip IDs. Don't yelp until you look a little beyond the spectre of Intel knowing what pr0n you favor. It is a historical accident that two very unpleasant companies (Intel and Microsoft) have dominated so much of the computer market and this too will pass. Look at the larger issue and the longer record of use of chip IDs and don't freak out without a good reason.
  • by Anonymous Coward on Monday May 24, 1999 @10:05AM (#1881407)
    I think that several trends that are ongoing and look likely will push chip IDs (IDs everywhere, actually) quite hard. My area of concern is asset control and keeping up with code patches and so on. Theft is less of a concern to me -- I don't have armed men running into the machine room and saying "drop that lousy coffee and hand over your Pentiums." But truck hijacking is getting common in California and all of us do pay in the end (well, I have Alphas, but still, you see my point I am sure).

    Anyway, aside from the asset control issue, this is a short summary of why I like IDs:

    1. This allows automated updates of firmware. Keeping track of firmware is a pain. The x86 boxes are worse than the 9000s and the RS6000s, as then you have to work with the SCSI code, the motherboard code, the NIC code, and so on -- at least with the UNIX boxes it tends to be pretty stable, but it seems like we are adding something new onto the Compaqs every two weeks. Of course, they are running NT, so uptime is sort of a non-issue, but still.

    We are looking relatively quickly at everything having a small microcontroller. Everything, including lamps and toasters. They will run something tiny (I would bet on QNX, actually) that can do running updates. This will likely often be transmitted over the power lines. Having a chip ID would allow a small but far more powerful system in a home or office to keep track of what was out there and what revision level the microcode was at. I suppose that I might be mistaken and the code will be so good that it would never need to be updated, but I wouldn't bet on it. The future will be full of rooms that turn out their own lights and even tell you when the bulbs are burned out, toasters that keep up with the aging of the elements and adjust power to suit, and Dr. Pepper machines that take your fingerprint for a Dr. Pepper and charge your account. All of these will need occasional updates, and that is the way it will work, because as cheap as the chips get, the software will always be cheaper to change.

    Chip IDs and methods to keep track of them are coming. I would LOVE to have this in a database here so that I could categorize fixes according to severity the way I do patches, apply them as absolutely needed, and know exactly what my risk and exposure will be if I hold off. I think that in large companies this would pay off in lower insurance or better ratings.

    2. The same thing would work at home, but a little differently. At home, I also think that it would pay off in lower insurance and better ratings -- LoJack works, right? If someone would have a big problem ever using a stolen PC again, it would reduce the value, because not too many people would buy one. I would think that this would look like a good idea, not just for PCs but for VCRs, TVs, stereos -- the whole list of stuff you can buy out of a trunk in crack neighborhoods at 03:00. This could cut renter's insurance and homeowner's insurance, and I expect that a) it will and b) it will become standard.

    3. Finally, just in terms of standard stuff, chip IDs really ought to make management easier. This would really help the less sophisticated users that you were describing. Time is rarely free of charge, and I think that small businesses and home users would appreciate automatic upgrades of firmware, a real trampoline system that would allow you to fall back to a previous BIOS level if the update failed, better internal monitoring, and something like an RSA and ssh license sold with every PC to connect to a trusted for-profit service on a regular basis to do the updates for them -- even more than me, because I can do this stuff. They may not be able to.

    Again, I think that people should look down the road a little. Intel and Microsoft are outstandingly bad companies. They do bad things. Microsoft is skirting "evil" and easily is "dishonest." This is an accident of history. I would expect that a company like Sun or NCD might be on top in a few years (perhaps Moto will come back -- after all, Mips did) and would simply behave in an ethical fashion.

    Chip IDs aren't evil. People who don't want to have them may have a point that disabling them is a pain, they may have a point that Intel and Microsoft are essentially guaranteed to misuse them and should be stopped, but they do not have a point that the solution to the abusive behavior of Intel and Microsoft is to ban chip IDs. That is like someone saying, after flooring the car in reverse in the driveway and backing over the mailbox and Uncle Ed, "They shouldn't make cars that let you do things like that." No, you have missed the point. Or someone saying after the latest shootings that guns should be banned. No, loonies shouldn't have guns (I live in Texas -- lots of loonies, lots of guns, and I feel rather strongly that guns aren't the issue -- I have seen too many case studies up close and personal) (along the lines of that Jeff Foxworthy joke: You know you're a redneck if the last thing that you ever said before losing consciousness is "Hey, y'all -- watch this!").

    The solution to abuse of privacy isn't to make a move towards living in a cave (no, I am not suggesting that you are doing this exactly, but you are suggesting capitulation, like those huge speed bumps that are all over residential neighborhoods these days instead of police giving tickets, stepping bravely into a third-world relationship with law and order and giving up any attempt to control the situation properly). It is to stop the abuse.

    Anyway, this is way too long. I appreciate your points, but I respectfully disagree.
  • by Anonymous Coward on Monday May 24, 1999 @05:07AM (#1881408)
    Perhaps the 512k cache at 1/3 CPU speed is too meager to keep the FPU fed with data? It can handle up to 8 megs(!) of L2 cache at up to full CPU speed, so perhaps future versions of the K7 will have better floating point performance.

    I imagine it's being released in its current config because it will be affordable for us and profitable for them. Expect future versions to be waaay more pricey.
  • Posted by oNZeNeMo (guns'n ammo):

    As a lowly service technician at a local computer store, I must speak out about the VIA chipsets, more specifically those used in 100MHz FSB Slot 1 boards.

    After a BIOS revision and a lot of software patches, I discovered that the VIA board would not handle a Diamond Viper 770AGP and a SoundBlaster Live! at the same time, as rampant page faults and unexplained freezes plagued the system.

    On the same system, (this is a clean Windows 98 system with nothing installed but drivers and software relating to the soundcard and CD-burner), it was also discovered that it would, without fail, encounter buffer underruns when burning a CD directly from another CD-ROM. This was done with different brands and speeds of CD-ROMs and 2X Re-writable speed using an HP 4X IDE burner.

    Now while I am no fan of IDE CD-ROM burners, and have every reason to be spiteful towards Intel, all of the problems were solved with the installation of a motherboard with an Intel ZX chipset (meant to be a cheap alternative to the BX chipset). If anybody can offer any specifics as to why such problems would occur, a post would be most appreciated.
  • Posted by swtprince:

    I have a direct line to AMD, since I am in computer sales. The hype about the ID is very high and most of it is negative.

    AMD flatly stated that they will NOT have an ID on their chips, nor do they foresee one in the near future.
  • by manitee ( 2974 ) on Monday May 24, 1999 @05:10AM (#1881411)
    Well I guess that webserver is not running on a K7.

    Being an AMD K6 owner, I have found the AMD chips to be a cost-effective and performance-comparable alternative to Intel. Plus, buying a non-Intel CPU is good karma ;)

  • I just wish that someone would install a real system on one of those boxes, and actually _compile_ some software (SPECfp for example), and then see how it worked. It would be a lot of fun just to see what Intel CPU egcs/pgcc should optimize for to get the best out of the K7.

    Those ``Business benchmarks'' keep giving me the impression that those benchmarking guys are clueless. Why benchmark a CPU by measuring the disk throughput (or, we don't even know whether it's throughput, we just know it's business), instead of actually running something that is CPU bound for sure?!

    Also the tests that involve 3D graphics accelerators seem pointless to me. Sure, Quake may still use the CPU for something, but if this was supposed to be a CPU test and not a 3D graphics accelerator test, then why not let the CPU do the _entire_ job, instead of just what's left over from the 3D accel.?

    Just the fact that they posted the 3D and disk numbers, along with a notice saying that the disks and 3D accelerators weren't the same, makes me think that these people really have no idea of what they are doing.

    Although I'm running on Intel boxes exclusively (oh, except for the PA7000 in the basement), I'd really consider my next box to be AMD K-something based. If they can ship fast CPUs at $250, and I can buy a motherboard that supports four or eight of them, then I don't care if the FPU is 10% slower than a Xeon. The price/performance is going to make this choice _really_ easy.

    My previous box got around 40 BogoMIPS, my current one around 400, and I'll definitely be going for 4000 next time :)~ Eat me.
  • If you reread the article you'll notice they really haven't taken a step backwards between the K6-3 and K7, they both have large 128KB full speed caches on the die (compared to 32KB on the Pentium II/III), the 1/2 vs 1/3 speed backside cache will probably not slow things down much.

    -P
  • Small problem with using software rendering is that you will be benchmarking the memory subsystem more than the CPU. A software renderer has to load textures from main memory to the CPU in order to map them onto polygons; this is a very memory-intensive process, and will easily overwhelm any calculating speed improvements. A hardware renderer on the fastest possible 3D card with reams of texture memory would be more indicative of calculating performance than a software renderer, at least for Q2.

    Dastardly
  • I'm no expert on compilers and object code, but when a new architecture comes out, compilers and drivers etc. have to be rewritten to take advantage of the new hardware. In the case of multiple FPUs which all do slightly different jobs, as in the case of the K7, the instructions have to be carefully scheduled in the right order to get the benefit of the available performance. If it can do additions, multiplications and vectors in parallel, putting three adds in at the same time followed by three multiplies then three vector ops will be slower than putting in an addition, multiplication and vector at the same time, three times over.
    That's my limited understanding of CPUs. I hope it's roughly correct.
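The scheduling point above can be illustrated with a toy in-order-issue model (purely hypothetical latencies and unit names, nothing like the real K7): if each of the add/multiply/vector units takes 3 cycles per op and is not pipelined, grouping like operations stalls the machine, while interleaving keeps all three units busy.

```python
LATENCY = 3  # toy assumption: each unit is tied up for 3 cycles per op

def finish_cycle(schedule):
    """Issue instructions in order, at most one per cycle; an instruction
    stalls until its functional unit is free. Returns the cycle at which
    the last operation completes."""
    busy = {"add": 0, "mul": 0, "vec": 0}  # cycle at which each unit frees up
    cycle = 0
    for unit in schedule:
        issue = max(cycle, busy[unit])  # wait for the issue slot and the unit
        busy[unit] = issue + LATENCY
        cycle = issue + 1
    return max(busy.values())

grouped = ["add"] * 3 + ["mul"] * 3 + ["vec"] * 3
interleaved = ["add", "mul", "vec"] * 3
# The grouped order leaves two of the three units idle at any moment;
# interleaving overlaps their latencies and finishes much sooner.
```

On this toy model the grouped schedule finishes at cycle 23 and the interleaved one at cycle 11, which is the poster's point in miniature.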
  • Anyway, I was under the impression that AMD was supposed to, for the first time ever, beat the equivalent Intel offering in FPU performance -- with some super-duper optimized triple-pipelined FPU unit, with special optimization for single-precision FP arithmetic
    I distinclty (yes, I know that isn't spelled right! :) remember hearing the same thing about the K6, and the K6-2. Now, I'll admit that each time AMD has been getting closer to the mark. But my point is that my judgement is reserved until a real K7 gets here for me to test in real-world situations.
  • I can see what you are saying about the usefulness of an ID, but that usefulness is specific to the type of environment that you are working in. For 98% of home users and probably 50+% of small businesses there is no use for a chip ID.

    Also, with the inherent security, stability, and myriad other flaws of the predominant home/small-business OS, it's undoubtedly better that a potential privacy flaw like the PIII ID is not put in place to be exploited.

    Intel, and AMD produce their chips for the "low" end of the computer food chain. Maybe ID's are appropriate and necessary for chips at the higher end of the food chain, but I really don't believe they are at the end I operate at. Having a chip ID on a $5000 chip makes sense, on a $100 chip why bother?

    I think if Intel and AMD want to push their chips up the food chain, then they should optimize them for that purpose, possibly by including an ID so that they can fit in with the higher-end chips that they are competing against.
  • Ok, I see where you're going with this. You may be right. Personally if I were designing a system that allowed me to track my computer hardware I wouldn't put the ID in the processor. I would put it in a dedicated chip which is in a non-removable mount on the motherboard. This would allow the processor to be upgraded without changing the system ID. Also, I think you're looking too far down the road. The level of pervasiveness that you are talking about is at least a couple years away, plus you have to have the wiring infrastructure in place to take advantage of it.

    I know that toaster and light fixture companies aren't going to be producing software-upgradable products anytime soon (I watch the home tech news pretty closely). Probably by the time smart technology has gotten to the point you are looking at, a whole new set of issues will crop up. And I'm betting that the privacy nuts are still going to be screaming.

    Once again, I see individual machine tracking via an ID chip as being desirable for businesses (especially large businesses), but not especially desirable or useful for homes.

    But as you say, it's not technology (or anything else) that's bad, it's how it is used that has the potential for being bad. (The gun reference is very telling)

    It will be interesting to see what kind of compromise the privacy nuts and the techies come to.
  • by John Karcz ( 6939 ) on Monday May 24, 1999 @06:46AM (#1881420)
    First, I must agree with your comments on the cache. I am much more curious about how the K7s with half and full speed caches will perform compared with PIIIs and cooled K6-IIIs.

    On the issue of video cards, I'm not even sure why high-end 3D cards are used when benchmarking these machines. I suspect a more useful Q2 benchmark would software render to a tried-and-true old card, like a Millennium II. Benchmarking a processor-and-coprocessor combination appears to horribly complicate the system, and hide what the processor itself is doing. (If you're benchmarking the busses on the various machines, though, it makes more sense to use very intelligent cards which utilize all of the capabilities of the various busses.)

    On the issue of MMX/3dNow!, my belief is that the benchmarking code should be utterly un-optimized for either of these. Maybe this is unrealistic... perhaps the usual C compilers optimize for MMX naturally. In that case, if it's really that standard, go ahead and let the compiler do it, but don't go out of your way to optimize for a specific instruction set. I'd personally prefer to see how the FPU cores compare on the same playing field, with software I have now. If some new chip gets games to run faster because of new instructions, that'll be a special treat, but I'd rather not see it in mainstream benchmarks until that instruction set is common in compiled code. I tend to do scientific simulations, so I'd like to know how well hardcore floating point code, compiled with standard compiler options, works on various processors. I don't want to know how it could perform if I went out of my way to optimize the assembly. :) Granted, my needs are different from those of people who primarily use precompiled games, since the coders there can spend the effort tweaking the details. What I need is probably close to what a lot of other free OS users need, though, since most of our binaries are optimized solely by the compiler.

    One last thing... Most of the benchmarks I've seen have compared different clock speed chips on the same graph! What's the deal with this? I'd at least prefer it if the benchmarkers divided the score by the clock speed, for instance, to give a more meaningful measure of the efficiency of the chip itself! (Sure, I do this in my head already, but it's a pain, and just seems like sloppy plot making.) I'd love to see what a K7 does with the full speed cache installed... hopefully we won't have to wait too long before they start to appear!

    John
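The per-clock normalization suggested here is trivial to compute; a quick sketch with made-up numbers (none of these are real benchmark scores):

```python
def per_mhz(results):
    """Divide each chip's benchmark score by its clock speed to get a rough
    per-cycle efficiency figure, so chips at different clocks can share a graph."""
    return {chip: round(score / mhz, 4) for chip, (mhz, score) in results.items()}

# Hypothetical (clock in MHz, score) pairs, purely illustrative.
results = {"K7 550": (550, 23.0), "PIII 550": (550, 18.5), "K6-III 450": (450, 16.2)}
normalized = per_mhz(results)
```

Of course, as the reply below notes, per-clock numbers can mislead too, since cache speed, memory, and the rest of the system don't scale with the clock.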
  • I've been saving my pennies for the K-7... Now that we're nearly down to the wire, I have a couple of newbie questions...

    Will AMD's chipset for this thing be supported by Linux? I haven't had any trouble with any boards yet under Linux (knock on wood), but I've always been using hardware that's at least 2 or 3 notches off of bleeding edge.

    Secondly, what do people think about swapping the cpu/board out from under the disks/cards/peripherals? I've got a decent setup for my workstation, and I'd like to just put the new cpu/board in. What are peoples' experiences with this? Should I compile a monolithic kernel before the upgrade to cover my bases?

    Thirdly, has anyone heard who's going to be retailing the K-7 hardware?

    Pointers to existing Howtos, FAQs, etc encouraged and appreciated. Thanks!!!
  • Perhaps the particular chipset/setup was somewhat conservatively configured. If that was the case, then it'll be interesting to see if the 'good' parts of the benchmarks as well as the poorer parts increase when AMD sort out the configuration. If they do, then this processor will fly.
  • Good point, if slightly tastelessly made ...

    Personally, I'm not going to get upset about the K-7's performance until there's actual production boards and chips out there for ppl to play with. VIA seem to make pretty good chipsets (he says on his K6-2/300 - I got one a while ago when 300 was fastest :> - on a VIA MVP3 chipset) .. be good to see what they can do.
  • by d lux ( 11620 ) on Monday May 24, 1999 @05:44AM (#1881424)
    I know that this review is a review of non-production hardware. I still have to say that I'm very disappointed. Not so much in the numbers, but in the configuration of the system. Two points in particular:

    (1) The L2 cache is only running at 1/3 speed. I don't care about the size of the cache, but to me this is a step backwards. The PIII 550 has 1/2 speed L2 cache. The Kryotech Kool K6-III 550 (a thermally accelerated K6-III 450) romps both the PIII and the K7 in some benchmarks. Why? L2 cache - the Kool K6-III 550 has full speed, on-die L2.

    (2) An Ultra TNT2 video card was used, with older nVidia drivers. Near the end of the review, it's mentioned that there are newer drivers available with better 3dNow! optimizations. I think that either a 3dfx V2 SLI or V3 should have been used, since their 3dNow! drivers are better, or at the very least the newer nVidia drivers.

    I still think that the K7 has potential. It had very high Business Disk and High-End Disk Winmark scores. This leads me to believe that the hardware _is_ capable. Don't write the K7 off until a shipping processor has been reviewed (and hopefully one with at least 1/2 speed L2 cache).
  • /.'ed pretty quick.

    The Register [theregister.co.uk], which I'd call the European equiv to /., also ran the story [theregister.co.uk], so I'm guessing that poor server's been crunching bandwidth for a while. I too am an intel [intel.com] hater. Originally a Mac [apple.com] lover and just plain anti-PC, it wasn't until later that I really found out what a horrible company [faceintel.com] intel was. Plus, when I went shoppin' this last summer to build my own machine, I was impressed with how small, fast, and inexpensive the K6-2s [amd.com] are/were. I've always figured that having a big ass chip that heats up a house is poor design: I could put a jet engine on a pinto [geocities.com] and speed by a porsche [porsche.com], but I'd still have a piece of shit car [sockets.net].
  • I can't imagine why exactly I need a kickass FPU. Last time I checked none of the software I used really cared especially if I had an FPU. Compilers, web servers, version control systems, databases. None of those need an FPU. A lot of people use their computers for things other than playing Quake.

    Imagine that.
  • While yes, as I expected, the results showed off AMD's impressive CPU power when handling general applications, it's still lagging in the FPU scores :(
    Still, this isn't yet an optimised system; let's hope AMD can optimize the firmware drivers before release and get those FPU scores up a bit. But still, this is the fastest AMD CPU to date (with regards to the Kryotech K3 getting higher scores in some areas). I'm glad to see that this "beta" version of the CPU is really pushing for what it's hyped up to be.

    can't wait for some real benchmarks on this thing...
  • While yes, as I expected, the results showed off AMD's impressive CPU power when handling general applications, it's still lagging in the FPU scores :(

    Well, the server is slashdotted... Anyway, I was under the impression that AMD was supposed to, for the first time ever, beat the equivalent Intel offering in FPU performance -- with some super-duper optimized triple-pipelined FPU unit, with special optimization for single-precision FP arithmetic. Guess not...

    --

  • Well the K7 is here now. Cry Cry about the FPU. Know what? These things might kill the Intel MP server market, where FPU power takes a back seat. Especially if all the goodness I'm hearing about AMD's MP architecture is true...
  • much closer than the Coppermine anyway...

    and what about Merced?
  • -cost effective, but certainly not fpu effective.
  • The more K7's that sell (to the game market, server market, business market, whatever), the cheaper they become. The more K7 motherboards they sell, the cheaper they become.

    If the K7's FPU makes it a dog for FPS games then it loses a very sizeable market share to intel. This means less hardware support and therefore less software support.



    Games are important for the future of any platform.
  • I had some problems with my AMD K6II/350 using the VIA chipset (obviously we're talking Socket 7, not Slot 1, here, but it does have the 100MHz FSB). I couldn't get a SB128 working with an ATI 3D Rage video card.

    I tried every possible driver combination, but nothing worked. It turned out it was the USB driver for the motherboard chipset that was provided by Win98 that was the culprit. I wasn't even using the USB, so this kind of confused me. But, after getting the proper USB driver and installing it, everything has worked without a flaw.

    Just thought I'd share my experience :)

  • I'll admit. I was a bit put off by the odd results. But after a little research I learned something that's telling me to wait.

    The "Fester A3" board is apparently the same name as the board that's been used to demo K7's for quite a while now. The shipping board is supposed to be a "Gomez" board, with a "C3" number. So is A3 two debug levels back? If so, does that make this K7 two debug levels back?

    Early K7's were supposed to be quite buggy.

    I'm just gonna wait for the real-deal before I jump to any conclusions.
  • by nigiri ( 22248 )
    I assume these things DON'T have the chip ID, yes?

  • by account_deleted ( 4530225 ) on Monday May 24, 1999 @05:34AM (#1881437)
    Comment removed based on user account deletion
  • Um, the K7 is not here now and will not be until late June or early July...

  • Your argument for chip IDs really has little merit in the real world. Most rational people fail to see why an individual chip ID is necessary to ensure quality control; in mass-produced low-end chips, surely a batch ID is sufficient.

    And why does someone who voices a genuine concern over privacy have to be labelled as a "nut"? Just look at the recent example of the M$ GUID - despite protestations to the contrary, it did in fact play a part in the apprehension of the suspect in the Melissa virus fiasco.

    As regards security, it's really up to the system administrator to police their own networks with regard to theft. If you foresee a risk, take some simple physical measures to prevent it: bolt your mission-critical systems down, lock the cases, use common sense! Don't expect chip IDs to solve these issues; cars have VINs and they still get stolen, after all....
  • Although it's been said before, it is truly the case with the K7 that compilers will have to be rewritten in order to take full advantage of the chip.

    I'm not the most knowledgeable on the way that the P2's x87 works, but I don't believe it's pipelined anywhere near to the level of the K7. Realize that pipelining does two things: it provides a massive speed boost to code that is properly optimized for the pipeline, and it provides a massive speed hit if the code completely violates the way the pipeline wants to work. Current compilers are not K7 pipeline-ready, and I doubt AMD has had a hell of a lot of time to spend writing compilers... they do have to make a chip, after all ;).

    Also, with these being unoptimized chipsets/processors, and the L2 cache not being anywhere near what the final L2 cache will be (it's probably far too expensive to experiment with 2MB full-speed on-die caches, especially for beta chips), I wouldn't be too worried about these chips. I may be jumping from Intel once AMD gets these beasts out and stable.
  • by JCholewa ( 34629 ) on Monday May 24, 1999 @07:37AM (#1881441) Homepage
    AMD has been completely absent from hyping the K7. Haven't you noticed? It's the *absence* from hype that's been causing all this speculation.

    However, losing your head over this silly "preview" isn't the way to go, you know that. Remember that Tom's Hardware Guide posted up benchmarks of the PII shortly before it came out, and it was *annihilated* in benchmark comparisons with the Pentium MMX. Of course, the final version turned out far, far better than that. What you have to understand is that in the bugfix (no I don't know what it's really called, either) phase of a processor's design life, entire swaths of the product are turned off or tuned out to avoid major bugs in the current revision. Although the current C3 revision of the K7 is generally free of bugs (all processors out there have bugs, but production versions just don't have damning ones), earlier versions were wracked with self-destructive errors and problems of all sorts. So, basically, you have to take measures to get around those bugs for those revisions -- actions like this could severely cripple the processor and make benchmark results totally, completely unreliable compared to final revision.

    On a more personal note, I've looked at the microarchitecture and there is no way that a PIII's x87 can outperform the x87 on a fully formed K7. Not a chance in all the universe.

    And, yes, I do admit I'm biased. Do you?

    -JC
    PC News'n'Links
    http://www.jc-news.com/pc
  • One last thing... Most of the benchmarks I've seen have compared chips at different clock speeds on the same graph! What's the deal with this? I'd at least prefer it if the benchmarkers divided the score by the clock speed, for instance, to give a more meaningful measure of the efficiency of the chip itself!

    Or perhaps they should instead divide it by the price of the processor? :) But more seriously, while it might be convenient to be able to think of the K7 as being equal to a fixed fraction times a PII of the same clock speed, there are so many other effects coming into play in the benchmarks that I'm not sure this would really be useful.
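    For what the normalization would look like: a tiny sketch of dividing a score by clock speed to get a points-per-MHz figure. The scores and prices below are made up for illustration, not real K7 or PIII numbers.

    ```c
    #include <stdio.h>

    /* Hypothetical benchmark entries -- the numbers are invented. */
    struct chip {
        const char *name;
        double score;   /* raw benchmark score */
        double mhz;     /* clock speed */
    };

    int main(void) {
        struct chip chips[] = {
            { "K7 (sample)",   125.0, 550.0 },
            { "PIII (sample)", 100.0, 500.0 },
        };
        /* Normalize: score per MHz is a crude per-clock efficiency number. */
        for (int i = 0; i < 2; i++)
            printf("%s: %.4f points/MHz\n",
                   chips[i].name, chips[i].score / chips[i].mhz);
        return 0;
    }
    ```

    Of course, as the comment above notes, memory and cache effects don't scale linearly with clock, so a per-MHz number is only a rough comparison.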

  • How many of you all are in the industry and took the tour of AMD's facility this past weekend?

    Guess what! The FIRING SQUAD review is a farce. The photos were press-release photos from AMD, and the benchmarks were fabricated. The K7 is NOT going to be released for testing for another two weeks (at which time, yours truly will be getting a Fester w/ a 550). The ONLY K7 information that has been released was understood to have come from an insider at AMD who was buddies with Tom of Tom's Hardware Page. Tom agreed not to display any of the information, but none of it was concrete anyway. Some info leaked (of course), but nothing specific.

    AMD wants to avoid hype and false benchmarks until the final release of the CPU so they have given out NO benchmarks and NO evaluation machines. The only K7 machines running are IN THE AMD FACILITY.

    Besides, why would Firing Squad get a K7 unit BEFORE AMD's top resellers and Maximum PC magazine? Even the CEO of AMD appears in Maximum PC, but not the K7?!?! I don't think so.
  • The slot configuration was to allow the use of standard Foxconn slot interfaces in manufacturing motherboards, and the PII casing for the CPU (you think Intel makes those casings?). Also, we resellers won't have to run out and buy a special K7 fan to cool the new CPU. We can use PII fans. Hey, they're just trying to make it easier on everyone. Of course, Intel is making it MORE difficult on everyone by changing Slot processors to the SECC2 style, so all of the resellers have to stock different fans and customers have to know ahead of time what type of sink/fan combo they'll need for their OEM-style processor. But that's par for the course for Intel, isn't it?
  • I'll admit right off the bat that I'm not (yet) too experienced with taking out a motherboard & swapping it w/ another one, so if anyone has better info than me, post it and spare me the flames... :P

    Taking a motherboard out of the case should be simple. Just disconnect all of the power cords running to it (and it might be a good idea to mark on them what they were for; the printing on your current motherboard should indicate their use. I'm not sure whether this is absolutely necessary, but it seems like a good idea). Take all of your add-on cards off the board, and you should be left with your motherboard stripped down inside the case. There will be little plastic knobs that need to be squeezed to let the motherboard pop out...

    Your new motherboard *should* fit into the case, assuming that you are buying one with the same form factor (not sure if that's the correct term...) as the original, i.e., replacing an AT motherboard with an AT motherboard.
    In many cases, the size of your new motherboard won't matter, because most of the cases I've seen seem to be able to take several different sizes of motherboards, so you should be OK there... but you might want to make sure of this before you go out and spend your wad of money on that new MB... but then again, it's only $50 for a new case...

    Back to the point: once you've got your new MB in the case, it's as simple as reconnecting all of those wacky things you just took off 10 minutes ago...

    Hope I'm close to right on all of this, since I just bought a new MB and it should be FedExed over within the next few days... :P
