Intel Demos Williamette at 1.5GHz

|0|4 writes, "There's a CNET article about Intel's demo of a Williamette processor running at 1.5GHz." It mentions the 1GHz P3s and other odds and ends. As always with Intel, 'demand exceeds expectations' with their new chips, so it'll be a while before they cost less than a compact car.
This discussion has been archived. No new comments can be posted.

  • Williamette. Why not just name it Floralprintdress or PussyProc? Come on, it sounds like a type of doily. It does not strike me as 'speedy' or 'powerful'. Maybe that's why an AMD Sledgehammer is on my list of things to buy...

    Dyslexic.
  • I was about to post and say:

    "What the hell takes 5 hours to compile!? My PII-450 (64MB RAM) doesn't take anywhere near that much time to compile the whole kernel."

    THEN you mentioned NT and C++. All's explained now. sorry.

    --
    Simon
  • What I don't understand is WHY Intel keeps their x87 FP instructions; they are notorious for complicating compiler work (when trying to reach high performance) and slowing things down. Why not add new instructions (as AMD will do with their future Sledgehammer CPU)? It should have been done a loooong time ago.

    They did. It's called SSE2. You don't have to use it for SIMD, there are instructions that just treat it like a flat floating point register file with 8 registers. Very much compiler-targetable as far as I can see.

    Check out the PDF [intel.com] file from Intel about Willamette.

  • NO. The flaw was with Intel engineering on the chipset, not with the RAM.
  • Must crack keys...must join Slashdot team...

    Rob is my hero...

    (And now I return you back to reality)
  • LOL..."We"? I didn't have anything to do with it. Did you?
  • I just read through the optimization document Intel now has available on the Willamette. I'm very impressed with the direction they've taken the instruction set. They have filled out the SIMD instructions, extending MMX to their XMM 128-bit registers. They also support true 64-bit integer operations (no more multi-instruction ADD or MUL sequences). They've added prefixes to the branch instructions to provide hints to the branch predictor -- very useful for profiled code.

    The hard thing now is going to be figuring out a good scheme for allocating variables between the standard, the MMX, and the XMM register sets when generating code. I'm thinking that the integer set will be best used for address calculations, while true scalar and FP values would do better living in MMX and XMM registers when possible.

    I'd also like to thank Intel for releasing this information earlier rather than later. Last year, I was peeved when the Pentium III instruction details weren't made public until the chips were released. Intel may have given themselves enough lead time to actually have Willamette-optimized software (especially compilers :) available when the chip debuts.
  • Help, I upgraded my processor and it erased the UVPROM my BIOS was on.
  • Compile time is I/O bound. Buy a fast SCSI disk, lots of RAM, and a fast bus if you really want to speed it up. =)

    A 1GHz CPU running on a 100MHz bus with an IDE drive? As for Intel: HAHAHAHA... sorry, it's just so outlandish to expect CPU power alone to improve performance. These Intel/AMD press releases are funny. Things like cache speed being 3/5 or even 1/2 to 1/3 of CPU speed make this even more laughable. I think a 4x-8x PPro (cache and CPUs @ 200MHz) are the best x86 machines out there for what most of the Linux crowd does.

    I want a 4x SMP K7 to sit up here with my PPro (Socket 8) dual workstation.

    OpenProjects #debian - in irc you can't run away from the taunting
  • Excuse me? Who would run a mission-critical server on an x86? I see AMD and Intel both in the low-end server/consumer market.

    I still wonder about PSX2 and other appliances vs PCs - will programmers like us be the only ones with PCs soon?
  • I agree. I have three machines at home. Celeron 333, P-133, and a 486-33, all running Linux (windows on the fast one for games only). The smallest machine is a print server and it works quickly and reliably. My P-133 is very snappy and responsive, even when loaded down with 70 processes! For programming I couldn't ask for any more speed. I'm not writing the Linux kernel, and even then it would be perfectly fine. Of course, under Windows the Celeron 333 sometimes disappoints, but that's mostly because I'm running games in Windows. For Linux, the Celeron is also snappy. In fact, I can perceive very little difference in responsiveness between those two machines. One is damn fast, the other one is damn faster!

    Everyone should recite this mantra: "A Pentium 133 does 200 MIPS. A Pentium 133 does 200 MIPS. A Pentiu..." 200 MIPS is a meaningless benchmark number, but it does give a rough insight into just how fast even that old machine is, especially when running a lean operating system.
  • It's definitely 1.5GHz. But this is still only a 50% increase over what will presumably be shipping from Intel's competition before this summer. It's no different from jumping 50% from 500MHz to 750MHz - which is pretty much what happened from last summer to Christmas. A 50% jump in clock frequency in six months is becoming the status quo in this one-upsmanship game between Intel and AMD.
  • It's not going to be released any time soon, so who cares what the stability is? The engineers have more than half a year to iron out the bugs, why does it matter if it can run Word at 1.5GHz right now?
  • Just a side note: you could of course "double"* your hard drive access speed by going to Ultra66. This assumes two things: 1) you haven't done so already, 2) you have Ultra66-capable hard drives.

    * This does not literally double your drive access speed, but it does increase performance. It's pretty common in new hard drives, but I can't say it's standard yet on motherboards; I've only recently been seeing daughtercards for connecting these. And all the stuff I've read is of course for Windows, so I don't know Linux's capabilities.

    Just a side note, resume with discussion about AMD's awesome processor and Intel's inability to keep up.
  • 20-stage pipeline, ouch. Intel is really playing the MHz game, but as long as a regular user thinks that more MHz == more powerful, they will keep doing this.

    > 2. The FPU is running at half the speed of the ALU.
    > So FP performance will not be up to par for scientific applications (or games)
    > as compared to an AMD processor running both at the same speed.
    > Athlon's FPU is already known to blow PIII's out
    > of the water.

    Do not forget that FPUs are very different from integer units, and that clock speed alone means nothing; pipeline length and parallelism do count. I think you are jumping to conclusions here.
    What I don't understand is WHY Intel keeps their x87 FP instructions; they are notorious for complicating compiler work (when trying to reach high performance) and slowing things down.
    Why not add new instructions (as AMD will do with their future Sledgehammer CPU)? It should have been done a loooong time ago.
  • Well, Intel sells CPUs and not memory chips, so what they are really saying is:

    the new OS requires 250 more megahertz of chip power to get the equivalent user experience (if you only have 32MB of RAM)

    On a "reasonable" NT machine with 128MB, the "user experience" is about the same between NT4+ActiveDesktop and NT5.

    (And, yes, I know that Linux users will flame the 128MB number. Yawn. A memory upgrade is still cheaper than a CPU upgrade. And since corporations don't usually upgrade CPUs, that means a whole new computer if you were to follow Intel's instructions.)


    --
  • Just want to point out that "no... you're wrong on that count".

    1200 dpi is the minimum for line art (one color, solids). For halftones, you just double the linescreen of your imagesetter. Textbooks are around 133 to 150 lines per inch, meaning you want 266-300 dots per inch. High-quality books, like coffee-table or artsy books, are around 180-200 lpi, so the maximum resolution you'd want out of a halftoned piece of art is 400 dpi... and that's an extreme measure.
  • I'm just curious as to what in the world you're doing with 900dpi images? I thought that in most cases 300 dpi is overkill... Or are they 8 1/2 by 11's destined to be posters?
  • They HAD to show off their fastest chips simply as proof that they can create them. If they'd fallen silent after AMD's demo, the world would stop looking to them as the leaders in the x86 world.

    Of course they're not going to go from 800MHz to 1500MHz overnight. There's much more money to be made selling the same people 800MHz machines, then 833MHz, then 866MHz, etc...
  • AMD would cannibalize so many of their own sales by releasing the absolute fastest processor they could, rather than incrementally upgrading speeds as they and Intel are doing.

    It's also in their best interests to slowly grow the market, rather than have everyone crawling all over one another to acquire one of their chips. Why? Because they can't make them fast enough, that's why. Kind of like what's happening now with Intel and their 800MHz systems. They announce them. Everyone lines up to order. People stall their purchases. Very few people get them...
  • It's pretty funny watching the completely different tone of /. reactions between when AMD demos 1 GHz and when Intel demos 1.5 GHz...
  • <I>Linux users, you brag so often about the cleanliness and speed of your OS against the bulky Windows... then why do you keep using one of the worst languages in the world?</I>

    Nothing wrong with C - something's wrong with those who believe they have to recompile everything now and then. Any sane makefile setup will compile only what needs compiling, making C compilations nice and quick.

    <I>A language that is so unreadable that there's a contest of code obfuscation?</I>
    It exists because C is popular. Clear C code is definitely possible, and of course I can write Pascal so bad that nobody can read it - but there are so few Pascal programmers to impress out there...

    <I>When I read a comment "if it crashes try to compile without the speed optimization"</I>
    Broken for sure. But compilers get fixed. And Pascal has the same problems from time to time; 16-bit Delphi had a lot of "don't do that" type problems.

  • don't forget...
    the Athlon bus is a *switched* processor bus that is engineered to run from 200MHz-800MHz. I don't know the details of the Willamette bus, but knowing Intel... a 400MHz shared bus is quite likely.
    Consider what the difference will mean in 2 years when 2- and 4-CPU systems become quite common. Folks are already buying dual-CPU boxen (boards at least) by the tens of thousands. If MS ever makes NT/2000 the consumer Windows OS, system builders will start selling dual 600MHz systems instead of pushing expensive 1GHz monsters. Margins will suck either way, and they can get better volume on the cheaper boxes.
    -earl
  • Nobody is stopping you from buying an Alpha, SPARC, PPC or some other non-x86 chip...
  • Am I surprised by this news? No. Do I believe it? No. Why not?

    IMHO Intel had to release some big news, because lately the press has been portraying Intel as in severe trouble: their key 64-bit chip (code-named 'Itanium') isn't measuring up to expectations, and AMD's chips are out there in quantity and making customers extremely happy. Transmeta releases specs for a viable x86 threat to the low-end processor line (Crusoe vs. Celeron), so Intel has to push the edge up for the Pentium lines to remain marketable. Tom's Hardware does a review of the Athlon, and lo and behold, the chip measures up to expectations.

    Trouble is, the Intel press release/party, etc. and the C/Net article are long on hype and extremely low on independently verifiable specs. Perhaps they can clock a Willamette at 1.5 GHz now, but the key question is when will the yield rates (number of processors per silicon batch) be high enough to compete economically with the Athlon?

  • Heck, at these speeds what do you need the GeForce for? You could actually get playable speed out of an S3 ViRGE!

    ...Once the GLX driver is perfected, of course.

    :-)
  • Business ethics are something important to me. Since Intel can allow itself to bully companies around, it's 100% AMD for me. Besides, Athlon is a superior product.

    I currently have a 550 MHz Athlon, as well as a K6-2 and a K6.
  • I do lots of graphics editing, and every MHz counts. Try applying image filters on a 128MB IMAGE. My current machine is an Athlon 600 w/ 256MB RAM. Just doing a Print Preview on a 900dpi image is a feat. And No, your 466 MHz Celeron won't cut it.

    Most other graphics designers I know use NT workstation and dual CPU's... Or SGI's IRIX (if you have the money for it).

    Let's also not forget database servers, which usually execute processor-intensive code. Also think about sites like Slashdot, which get mega-hits per hour where every hit is a dynamic page. In these cases, the high CPU frequencies really do a lot.
  • I guess people at Slashdot only use their PCs for gaming. No, you won't see a lick of difference between 400MHz and 600MHz in a game. But on a database server, or for a graphics editor, a 1GHz chip is most appreciated.

    When I apply a "smooth" effect on a 96MB 900dpi image, it takes my 600MHz Athlon roughly 25 seconds to complete. Could I use a 1GHz? Obviously. But not for Quake.
  • Look at the back cover of your Quake III Arena for Linux CD. Beautiful graphics, crisp and clear. 1200 dpi resolution, 32-bit color, approximately 80MB of information.

    Obviously your Canon bubblejet won't print this, but pre-press requires this kind of quality.
  • Your graphics card *IS* a rendering sub-system.

    I certainly can't agree with this. While the card can render triangles with textures, it hardly falls into the range of the cards out there that do T&L in hardware, accept a common API call that all operating systems can share, and do so with little interaction from the main processor.

    Your Sound card *IS* a band in a box

    While my sound card does wavetable synthesis, my computer must tell it which notes to play when, when to swap instruments and so on. While you have to tell a midi sequencer what to do, you can simply upload the relevant data and use it at will. My card, the SBAWE64, actually has 32 channels, and emulates 32 in software on top of that. I'd wager the 128 only has 64 hardware channels. It is still a Software solution.

    but that's because of refinement, and has little to do with the capabilities of the chips themselves.

    The Amiga demos used hardware tricks such as copper bars and sprite moves. It took some digging to get that out of a PC. I've seen it in text mode, and I was impressed. I've also seen some PC demos that had inexplicably good graphics, doing 3D entirely with standard VGA graphics. That was tight code.

    My point is this: If you can do it in hardware, don't do it in software. Doing things in hardware raises costs in the short term, but adding one of these cards decreases system load while increasing the productivity of the core's time.

  • Correct me if I'm wrong, but don't the Athlon motherboards do this now?

    I thought their architecture allowed two transfers per clock tick, so the 100MHz bus effectively operated at 200MHz (for comparison purposes only).
  • I think it's sort of the chicken-and-egg syndrome. Sometimes you need the increased resources before the applications that can take advantage of them come along. Personally I see voice recognition as the 'next big thing' to hit PCs en masse, and that will definitely eat up those lovely clock cycles :)
  • I'm not going to comment on the performance comparison, but I'll say this:
    The Athlon is a 'true' 7th-generation x86 processor. The PIII is still 6th generation (it's still the antiquated PPro core). So that means that WILLAMETTE is Intel's -7TH- generation chip.
    AKA: Intel's answer to AMD.

    I'll note that other posters have mentioned that Willamette's FPU blows chunks, so I wouldn't be worried about AMD getting pushed out of the gaming market *shrug*

    The K8, IIRC, is Sledgehammer, the hybrid 32/64, and is going to run 32-bit MUCH faster than Itanium. I'm going to make a (not so) bold prediction and say the Itanium, and POSSIBLY McKinley, aren't going to go very far once they are actually released. We -STILL- don't have this chip on the market and they've been talking about the fucking thing for 3 years.
  • Hmmm.... my 300/450 difference is pretty apparent running Netscape, Eudora, booting (much quicker into NT, 98 or Linux), and way faster in games, compiles, and a lot of other things (the Gimp).

    Menus feel a lot faster too - everything does...
  • However, more and faster can't take the place of smart. What I'd like to see is more media processor chips. You know like Sid and Nancy, and Paula, and so on. Even the 68xxx chip series started out as a process controller.

    That would be the smart way to go (it's the way I build my computers, which is how I get away with a K6-200 in a machine that plays DVDs (should try an even older 5x86-120 sometime and see if it'll still work)). It's not the cheap way, though, which is why pretty much the only way to get a computer built that way is to build it yourself.

    Winmodems suck. Software DVD sucks. Give me hardware or give me death! :-)

  • (code obfuscation) It exists because C is popular. Clear C code is definitely possible, and of course I can write Pascal so bad that nobody can read it - but there are so few Pascal programmers to impress out there...

    I'm afraid this is not a valid argument. A begin...end sequence is still more readable than { }, especially when you get lots of them together (and don't get me started on those fucking C pointers; they really suck a lot and make code even more of a mess!).

    Broken for sure. But compilers get fixed. And pascal has the same problems from time to time, 16-bit delphi had a lot of "don't do that" type problems.

    I've been using "code optimisation" in Delphi for years without a single problem. On the other hand, most large C programs behave differently depending on the compiler optimisation... which is, by any name, a bug and nothing else. The difficulty of writing a C code parser, compared to any other *really* structured language (Ada, Pascal, whatever), shows clearly that there's something wrong with it.

    I know there are many ways to make C easier to use with macros and other defines, but why bother trying to make a bad language look good when there ARE good languages around? All the studies done show that code in C is more bug-ridden and takes longer to develop and maintain than code in Pascal or Ada (no, I don't have a link to provide here, unfortunately). This is a reason why aerospace software is NOT written in C...
  • Being a programmer I know that five hours compilation is the norm on a 500MHz PIII

    What? 5 hours? I compile all my programs, including my multithreaded web server code, in less than 1 second on my PII 333... Oh wait, that is because I use Delphi, not some junk like C++ ;-). I think that before improving (expensive) CPUs we should try to use elegant, cleanly designed software tools. Linux users, you brag so often about the cleanliness and speed of your OS against the bulky Windows... then why do you keep using one of the worst languages in the world? A language that is so broken that it takes hours for the compiler to figure out how to compile the code? A language that is so unreadable that there's a contest of code obfuscation? A language that sometimes looks like assembler with macros? A language that is so wrong that compiler optimizations can output buggy code because they are not failproof!!! When I read a comment "if it crashes try to compile without the speed optimization" I think this is just unacceptable, just as the Windows crashes are unacceptable... I mean, if the optimization can generate buggy code then call it a "beta feature" and don't release it until it always produces 100% correct code.
  • Well, my policy is to go for the best balance of performance and price... right now I would buy an Athlon over a P3. But Willamette sounds way more powerful than an Athlon! Just look at bus speed: 400MHz against the "meager" 200MHz of the Athlon... from every point of view Willamette looks much better than the Athlon. To fight Willamette, AMD will have to design a new core, so unless the K8 is out before Willamette, Intel will take the speed crown again... and from the news and rumours around, it won't happen.

    As the song says, "sometimes you're the windshield, sometimes you're the bug". Intel is the bug now; next time it will be AMD, and so on...
  • Instant computing. When you click something it is done. You don't think about it, you don't wait, it IS. As in it works as fast as when you drop something. You open your hand and its gone, no waiting. The death of progress bars. The % symbol goes homeless.

    As noble a goal as this sounds, the only way it could possibly be achieved is to become 100% complacent with software's present capabilities (however you choose to define "present").

    To state it another way (and to terribly misquote thousands of other developers who have said the same):

    Hardware developers' sole purpose is to increase the capabilities of hardware, i.e. the number of CPU cycles, storage, et al. available to perform tasks (and, optionally, to reduce the number of cycles necessary to perform some hardware-specific functions).

    Software developers' sole purpose is to utilize those CPU cycles to perform constructive (or at least entertaining) tasks. As hardware speed increases, consumers demand that software progress to provide additional capabilities, i.e. justify the need for additional hardware.

    As software grows to need additional hardware speed, hardware developers are forced to provide additional speed & other resources.

    Hardware and software developers are in a perpetual race of hardware speed vs. software capabilities. The race is called progress. The victor is the consumer.
  • ...it sounds like the name of a river...

    ...oh wait, it is the name of a river, since that's how Intel names its projects in development...

    ...now if only you could spell it correctly (Willamette), maybe I'd feel some pity for someone who buys their processors on the basis of their names.

  • Keep in mind that C|NET News.Com is partially owned by Intel.
  • Welcome to Moore's Law. Ain't exponential growth great?

    Ken

  • I think Rambus DRAM will run at this speed. Not sure if this is correct though.
  • Why is it that whenever Intel or AMD or whoever, tries to make progress by making faster chips, Microsoft does the reverse by releasing a shitty OS that requires more CPU speed and more RAM?
  • I remember the biggest peeve was when I referred to the state as "Or-a-gone." All the scowling faces...*shudder*
    Then I was told it's pronounced more like "organ" with a slight "eh" in there somewhere.

    :)

    -Vel
  • Room temperature my left, ehrm... That baby was so hot you could fry chicken between the slats in the oversize, high-speed dual-fan CPU cooler!! And they mention no test of stability! I can overclock a 450 K6-2 by a third and have it boot... It won't make it into the OS, but it's running at 650MHz!
  • You know, quite frankly, I doubt if I'll be buying an Intel chip for quite some time. AMD seems to be taking the better, faster, cheaper road, and I'm happy to support their business. Besides, Intel's fingers are beginning to get as dirty as M$'s, if you consider recent events.

    No thanks, I'll take the Athlon. (Until Transmeta comes up with some comparable (read: really fast) Crusoes for the desktop market.)
  • I mean, I can't tell the difference between my Celeron running at 450MHz or 300MHz! I can, I guess, if I turn on the FPS counter in games I'm playing (revolt improves noticeably). But honestly, I can't detect frame rate changes once it gets above 20 fps. Others can, but I can't.

    So why bother? I turn my overclocking down unless I'm gonna 3d render or something.......
    ---
  • Heh I just left the Portland area to go back to school. They're FREAKISH about their pronunciation. I think will-AM-it makes more sense. maybe not *shrug*
  • You're right, except: what do they use to measure frequencies? 0.6% really sucks for GHz-level frequency measurements; 1ppm isn't very hard.

    However, the fluctuations in the clock frequency could be on the order of 0.6%, but that still seems high to me.
  • There's a fairly good article over at Tom's Hardware Guide describing some of the recent missteps by Intel in their processor line (one that sticks out is the RAMBUS decision) and why they have a hard time changing direction (they plan releases far in advance, so if they make a mistake and change something, it messes up the whole schedule).
  • Although I agree with you, I'd like to know what you compile that takes 5 hours?

    You're not using GCC are you? Use precompiled headers and incremental compilation.

    In my experience GCC is slower than VC++ or BCB on large projects.
  • No, I didn't mean a P2 or even an AMD chip at 450 against the Celeron. I meant if you have 2 processors side by side running at 450 and one is naturally running at 450 with a base clock of 125 with a multiplier of 2 (these numbers are made up, multipliers are rarely this small) and the other is running at a core clock of 100 with a multiplier of 4.5 that someone overclocked to that, the real 450 would be faster than the overclocked version.

    Esperandi
  • You're correct, this is the way reality works and "instant computing" will never exist, but it is a good example to throw out when people say "I don't need 1GHz to play Solitaire"... there's another law by some other famous guy (I am horrid with names) that says software is a gas: it expands to fill the space it is given. I completely believe that. I know personally, my computer use is a gas. I moved from a 166 to a 350 with 4x the RAM and such, and it took me a month before I was needing more speed, because I simply increased the things I did and ran beefier stuff that I never did before... hell, I used to trace fractals for literally days at a time on my 386 with Fractint... now I just trace the more complicated ones that would have taken a month or more ;) I'm looking forward to the Athlon I ordered boosting my speed with video processing a significant amount, but the minute that happens I'll start doing bigger videos, use better codecs that take longer to process, etc, etc....

    Esperandi
  • When the Itanium and AMDs 64 bit chips are out (and BTW, the AMD one already looks like it will make Intel's chip its bitch with 50% better performance running old 32 bit apps and extremely easier programming) you're going to complain that you can't use .

    Esperandi
  • by Anonymous Coward
    First, I have to laugh at how AMD releases a 1.1GHz processor and everyone here has geekgasms, then when Intel demos a 1.5GHz chip, people say things like "well, who needs that much power anyway?" Sheesh!

    Wake up and realize this fact: Intel is not at all worried about AMD.

    Then who (or what) is Intel worried about?

    The Internet.

    Have any of you noticed how Intel has been deliberately and steadily shifting its core business away from CPUs? Intel is investing its billions in networking and servers, markets that AMD cannot even touch. You may whine and complain about the high price of Intel CPUs, but if you are a sysadmin buying a $50K database server, the price of the CPU becomes irrelevant compared to reputation and availability.

    My prediction: in two or three years, after the server boom has started, AMD will inherit a commoditized PC market and will be utterly shut out of the server market because not only will Itanium have left them behind, but Intel will have a much better marketing position since they will be able to offer fully integrated Internet products.

  • Who needs more than 640KB of RAM for applications? There's no way that personal computers would ever require more than 1MB of memory!

    ;)
  • Nothing is really x86 anymore as is...

    If it runs x86 code, and doesn't run IA-64 code, it's an x86. If you can't get at what's inside, then, from a programmer's standpoint, the only way in which the inside is relevant is its effect on performance (e.g. "do this, don't do that, if you want your code to run fast"). You can't write raw rops to feed to the guts of a P6, so a P6 is an x86; unless Intel lets you write raw rops to feed to the guts of a Willamette - and I really really really really really doubt they'll let you do that - it's an x86.

  • It should be noticed that Intel also increased the instruction pipeline's length from 15 to 20.

    That must be the "Hyper Pipelined Technology" to which the Willamette Processor Software Developer's Guide [intel.com] refers.

    I guess "Hyper Pipelined Technology" is what you use when superpipelining just isn't enough; I'm waiting for UltraSuperHyperMegaDeathPipelining, myself....

    This also makes higher clocking frequencies possible, however trading performance for it.

    Well, ceteris paribus, higher clock frequencies do boost performance, but I guess the deeper the pipeline, the more pain you suffer if, say, you mispredict a branch and have to throw out a bunch of stuff you've sucked up into said pipeline. (The Willamette document in question speaks of better branch prediction by "effectively combining all current branch prediction schemes".)

  • Yeah, as discussed elsewhere, 20C is a bit of a stretch. Absent active refrigeration (Peltier or Freon), it must be a few degrees (10C or more) warmer than ambient/room temp.

  • The junction temp can also be the temp between the heatsink and the chip (as opposed to the junction between differently doped Si). If you talked to a thermodynamics guy, the temp between heatsink and chip would be the one he was talking about.

  • Just a couple items:

    1. It is reported that the execution units have a 20-stage pipeline. So stalls will hurt big-time unless Intel has something new up its sleeves (which I really doubt). They'll probably let loose plenty of PR using benchmarks with some very carefully hand-tuned code that shows this chip just blows AMD out of the water, but will mean little for most things.

    2. The FPU is running at half the speed of the ALU. So FP performance will not be up to par for scientific applications (or games) as compared to an AMD processor running both at the same speed. Athlon's FPU is already known to blow PIII's out of the water.

    3. This processor won't be released until the end of the year, which the way things have been working means they won't be available until next year. By then, Athlon will have large on-die cache, increased bus speed and possibly SMP systems (fingers crossed).
  • Consider that I can't get more than one or two 750 or 800MHz P3s to sell, and they are not cheap. I *can* get 800 & 850MHz Athlons. Intel may have produced a very fast one-off chip to demonstrate, but that is not the same thing as making hundreds of thousands of chips for retail sale.

    I am more interested in the SMP capable Athlons that are supposed to be here the second half of 2000 [theregister.co.uk].
  • C|net has no data in that URL. A better URL is ...
    http://www.intel.com/pressroom/archive/releases/cn021500a.htm [intel.com]
    -ak
  • It took quite a while for us to hit the 1000 megahertz mark. Now, all of a sudden, we've made leaps and bounds, and have jumped up a whole 500 megahertz? Am I reading this correctly? Or should it really be 1.05 GHz (1050 MHz)?

    I'm sorry. What I meant to say was 'please excuse me.'
    what came out of my mouth was 'Move or I'll kill you!'
  • So you all know, at Intel's processor forum going on now, they were running the PowerPoint presentation on a Willamette. I think it was running at 1.1GHz rather than 1.5, though; while there's a 400MHz speed difference, it at least demonstrated that the chip actually works.
  • Of course, the Timna sounds like a dead end technology (who would want graphics that you have to replace the chip to upgrade?),
    Lots of people. The business desktop market doesn't really care about 3D graphics, and neither does the email & web-only segment of the market. What these people want is a dirt cheap system with decent 2D performance. SOC technology can help deliver that. I wouldn't buy one for myself, but for low-end systems...
    --Shoeboy
  • Many people in science and technology use Celsius rather than Fahrenheit. 20 degrees C = 68 degrees F... a bit on the cool side for my liking, but a perfectly reasonable value for room temperature.

    Eric
  • Your graphics card *IS* a rendering sub-system. Your Sound card *IS* a band in a box.
    Your drive controller *DOES* do everything it can. Sure.. SCSI does this much better than IDE, as IDE is basically a raw i/o port, nothing more than a 16 bit buffer/latch...but the controller is on each drive.

    The real magic in these co-processor chips, on the Amiga, was the standard platform. Because they were all the same, it was possible for each successive generation of software to be more and more refined, they could bang away on the hardware directly......

    Yes. Old Amiga demos still look and sound BETTER and more pleasing to the eye than many super-high-res things these days... but that's because of refinement, and has little to do with the capabilities of the chips themselves.
  • Does anybody ever use this much processor power? I mean, I can't wait to play quake 4 or perhaps Ultima Ascension on a 1ghz processor. That would be keen.

    However, more and faster can't take the place of smart. What I'd like to see is more media processor chips. You know like Sid and Nancy, and Paula, and so on. Even the 68xxx chip series started out as a process controller.

    I'd like to see the next GeForce256 based card as a rendering sub-system. I want my drive controller to do everything it can, and I want a sound card that acts like a band in a box.

    Most of all, I'd like to see modern software not require the newest chip. If you come down to it, every new chip, every new hard drive and every new graphics technology gets abused eventually. That's unfortunate, especially when you go to computer shows and see 1024x768 3d card demos that look like the 640x480 vga based 3d demos from the earlier 90s.

    I always wait until my processor is out-classed by 100-120% in speed increase before I consider an upgrade. I then buy the next one back. I currently have a PII450, and I will upgrade when we get to 1GHz shipped. At that point, I may buy a 900mhz. But running Linux, I can't see where that will take me except shorter compile times, and the ability to serve to 100+ thin clients in my house or something like that. Of course I could boot into windows and play games :)

    It's a another case of "More and faster." God bless Moore's law.

    ..."More and faster, here we come, white and trashy and incredibly dumb." -KMFDM

  • Pat Gelsinger, an Intel vice president, said the new OS requires 250 more megahertz of chip power to get the equivalent user experience. Analysts at the Intel event said that was a fairly large speed bump and were surprised that a close Microsoft ally would say that.
    from this article about Dell switching website to Win2K [cnet.com]

    If Win2K really needs that sort of a Mhz boost then Microsoft HAS to push intel to release as fast as they can, so people feel obliged to upgrade, and they can turn out the 'old' machines that would run NT4 just fine, but would run NT5^H^H^HWin2K like a dog.

  • by paitre ( 32242 )
    1. Willamette isn't out. Willamette won't BE out for a while yet (try October, based on Intel's current roadmap).
    2. This is still part of the pissing match they have going on with AMD. Woopedy do. AMD is at least putting out products in volume when they announce. (don't start going off about being able to get 750Mhz and 800Mhz machines from Dell, Dell is just about the -only- vendor getting those parts right now).
    3. Intel is scared shitless right now because AMD isn't screwing up for once.
    4. This was at the Intel Developer's Conference. This means they're talking about products and projects at least 6 months down the road.

    It boils down to this: So fucking what. They DON'T have 1 Ghz ready for the market, they WON'T have it ready for a while yet, and if/when they DO put it out, it's going to be atrociously expensive. Basically, just like AMD's 1.1 Ghz demo a few days back, this is meaningless.
    Of course, AMD is more likely to RELEASE that one sometime soon *shrug*
  • The likely reasons that AMD has not escalated the MHz war to the extreme are: 1) They are struggling to meet increased demand. They currently have only a fraction of the market, and if demand increased further they would have a hard time meeting it. That would cause several problems: if supply decreases and demand increases, prices must rise. By previously stating they will remain 15% under Intel's prices, they would be exposing their necks to Intel (can we say price war?). 2) Initial silicon die molds generally have low yields. If you end up throwing out half your output because yield is too low, the cost of manufacturing rises and the rate of defects also increases. The reason Intel is showing the 1.5GHz part is bragging rights; their yield rate is likely very low. At best, what we are seeing is AMD's short-term advantage in being able to ship better-performing chips (at comparable clock rates) with a relatively high yield. The real war begins if Intel abandons x86: Intel will have a hard sell, and AMD will be selling to the legacy market. This is the same folly that gave AMD room to grow in the past. Personally I don't see Intel making the same mistake again, but indications are that they may be too far down the wrong path to turn back. If so, Intel may be forced to backpedal and rethink its core strategy.
  • I also suspect it was a lack of precision in their monitoring equipment. After all, how much inertia is there in a chip's speed? ;)

    Anyway, I think this is quite a positive development. It does at least prove that Willamette is capable of doing 1.5 GHz. Of course it is a rare chip off the line that can do it now. Too bad the cooling wasn't mentioned, but the fact that it is clockable to 1.5 GHz under *any* conditions is quite a claim.

    Yields will improve and they will improve the quality of the chips steadily -- I have confidence in that much. Unfortunately, I am afraid that their failure to mention either power consumption or heat dissipation methods is not a coincidence.

    Of course, it may be that that was also just an oversight of a clueless reporter who thought 1.5GHz was the only important datum. Maybe we'll see 1.2 and 1.4GHz chips in a few months, after all.
  • No, you misunderstood what he said. He said the temperature at the junction (whatever that is) was 20 degrees (C?). For the temperature at the junction to be 20 degrees C, the room the CPU was in had to be near Antarctic levels.
  • That's great that they have developed such a fast processor, but isn't it still just an x86? When are we going to see some real next-gen chips, Itanium or whatever you call it? Computer companies, for such a "futuristic" industry, seem to love to live in the past.
  • So based on this POVRAY benchmark we get:

    G3 400mhz 1.3x
    PIII 450mhz 1x
    Athlon 550mhz 2x

    Athlon 400mhz would be 1.45x (to compare with G3)
    Athlon 750mhz would be 2.72x (Athlon 500mhz overclocks to 750mhz easily)

    Does the port of POVRAY you are using make use of any MMX, 3DNow, SSE, or G4 instructions?

    Pity there are no SMP Athlon machines yet.... stick a couple overclocked 500's in there, and get 1.5GHz of performance!
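Scaling those POV-Ray ratios linearly with clock (a big assumption on my part; rendering scales well with clock, but cache and memory effects still matter) gives the extrapolated figures quoted above:

```python
# Clock-linear extrapolation of the POV-Ray ratios quoted in this post.
# The baseline ratios come from the post; the linear-scaling assumption is mine.
def scale(ratio, measured_mhz, target_mhz):
    return ratio * target_mhz / measured_mhz

athlon_400 = scale(2.0, 550, 400)   # Athlon 550 measured at 2x a PIII 450
athlon_750 = scale(2.0, 550, 750)
print(round(athlon_400, 2), round(athlon_750, 2))   # roughly 1.45 and 2.73
```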

  • We had an article [slashdot.org] here last week, pointing to a piece from Tom's Hardware Guide [tomshardware.com] that stated:

    Over the next few months, other rumors (all undoubtedly from "reliable sources") will be published suggesting that Intel's next generation "Athlon killing" processors are only a few days away, yet until the Willamette is released no sooner than October, Intel will have nothing new to offer.


    This is looking somewhat sooner than October. We'll just have to see how long it takes them to start producing in significant quantities. Let's hope for Intel's sake that it isn't October. I wonder if that article prompted the demo.
  • I think slashdot needs a separate "Moore's Law" section for near-future intel chips.

    (Like "Science", "Ask Slashdot", etc, some of its articles would also show up on the main page.)

    --

  • I found it interesting that I was unable to find any cooling information on the sites talking about this new Intel chip. One of the main points of the recent 1.1GHz AMD demonstration was the fact that it needed no special cooling techniques.

    One thing curiously missing from the AMD report was what it was doing. The Intel chip was only running a frequency ID utility which is great if that's what you plan on running all day. Who knows, maybe both of these processors melt the second you try to run real code on them. This report, to me at least, just seems like fluff. I would really like it if companies just talked about what they had ready for production rather than just trying to create a media stir. Because megahertz ISN'T a measure of performance when comparing two different types of chips, who really cares other than the media? I like seeing the tech specs but I wish these companies would stop tooting the MHz horn. Give me true loaded performance, not this frequency stuff.

    A side note of genuine curiosity: I've heard RDRAM is slow when transferring many small files but blazing when transferring large files. That in mind, is anyone out there ready to shell out the big bucks for RDRAM?

    /matt

    Microsoft seems to like old rock songs. For the release of win95 they purchased the rights to the Stones song "Start Me Up." Perhaps a more fitting song for the upcoming release of win2K would be "I Fought the Law and the Law Won."

  • I agree. I used to be a graphic artist. I know there is no such thing as too much speed or RAM for those users. I'm just talking about what I do now.
    ---
  • What do you mean a "real" 450? A PII450 benches about 3 to 5 percent faster than a Celeron bumped to 450. It has a larger cache (512K) than the Celeron's 128K, but the Celeron's runs at the chip speed. Multipliers are the same. Actually, a 300a Celeron bumped to 450 runs about the same benchmarks as a 466 Celeron. Why? The 466 is running at a 66MHz motherboard bus speed; the 300 is bumped to 100MHz, so it's almost a wash. Visit www.overclocking.com or Tom's Hardware to see the above benchmarks.
    ---
  • Games/3D
    VMWare
    99.999% accurate Voice Recognition
    Super Servers
    Realtime compression of video/audio
    Netscape 4.x

    etc.
  • There's some law in software and hardware design; it's named after someone, but I forget who. Anyhow, the law says that humans can't "see" benefits in performance unless they're at least a 20% speed advancement.

    I think that's pretty much right, and based on that you wouldn't notice the difference (not to mention that the difference between a 300 and an overclocked 450 is not even close to the difference between a 300 and a real 450, unless the multipliers are the same, and chances are they're not).

    If you think we don't need faster processors except for high end stuff, consider this:
    Instant computing. When you click something it is done. You don't think about it, you don't wait, it IS. As in it works as fast as when you drop something. You open your hand and its gone, no waiting. The death of progress bars. The % symbol goes homeless.

    Esperandi
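Plugging real clock speeds into that ~20% rule of thumb (the rule itself is the parent post's claim; the arithmetic is just illustration):

```python
# Percent speedup between two clock speeds, compared against a
# claimed ~20% "just noticeable difference" threshold.
def speedup_pct(old_mhz, new_mhz):
    return (new_mhz - old_mhz) / old_mhz * 100

print(speedup_pct(300, 450))   # 50.0 -- well past the threshold
print(speedup_pct(450, 500))   # about 11 -- likely imperceptible by that rule
```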
  • Actually, I chalked that up to the cluelessness of the reporter. I mean, what does "barely" mean? They don't slowly increase the "throttle" until it hits top speed; it either runs or it doesn't. That the device they were using to measure the clock speed had some minor fluctuations is not a huge deal. 1.492 is within 0.6% of 1.5G, well within typical measurement error.


    --

  • I was reading a little more about Willamette at www.anandtech.com and the following stuff was particularly interesting:

    1. It will require a totally new chipset, and these chipsets will be RDRAM-only! (at least the ones made by Intel)

    2. It will have a 400MHz bus. This could mean either a 100MHz DDR bus that fetches twice as much data as normal buses, or a 200MHz DDR bus. Either way, the data transfer rate will be 3.2GB/s. They announced a quad-pumped bus recently, so 100MHz clocking would make sense.

    3. The integer unit will work at twice the clock speed of the processor, so for a 1.5GHz chip expect a 3GHz integer unit. Can you say fast kernel compiling!

    It will still use aluminium interconnects. There will be additions to the SIMD instruction set (a total of 144 new instructions).

    They did speculate at AnandTech that the only program that could be run stably enough was the frequency ID utility... =)
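The 3.2GB/s figure checks out if you assume a 64-bit-wide data path at a 100MHz base clock, quad pumped (the 64-bit width is my assumption; only the aggregate number was announced):

```python
# Peak bus bandwidth = base clock * transfers per clock * bytes per transfer.
base_clock_hz = 100e6     # 100MHz base clock (quad-pumped interpretation)
pumping = 4               # "quad pumped": 4 transfers per clock
width_bytes = 8           # 64-bit data path (assumed, not announced)

bandwidth = base_clock_hz * pumping * width_bytes
print(bandwidth / 1e9)    # 3.2 (GB/s), matching the quoted figure
```

The same arithmetic with a 200MHz DDR bus (200e6 * 2 * 8) lands on the identical 3.2GB/s, which is why both readings of "400MHz" are plausible.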

  • Increasing motherboard speed would be nice, too. A 133 MHz system bus is nice, but wouldn't it be great if your 800 MHz processor could access RDRAM at 200MHz or higher?

    Cache... we don't need no stinking cache!

    --"The it'd-be-cool-if department".
  • Hey... the correct spelling is "Willamette". Named after the main river running through Portland.

    And in case you're wondering, the correct pronunciation is "Will-A-mette", not "Willa-METTE".

    Sorry. I live in the Portland area. When I first got here, I pronounced it the second way, and was set straight real quick.


    If you can't figure out how to mail me, don't.
  • On a related note that makes me suspect the conditions under which this result was obtained: at a recent Intel Developer Forum demonstration, Intel unveiled a P3 (i.e. Coppermine) operating at 1GHz. An audience member asked what temperature the result was obtained at. Intel replied room temperature. Another audience member asked what room temperature was. Intel replied 20 degrees. Yet another audience member asked which part of the chip was measured at 20 degrees. Intel replied T_j (the temperature at the junction)!!!

    For those not conversant in chip design, this means that the "room temperature" must have been near Antarctic for T_j to be 20 degrees. Gotta love Intel.
  • Correct me if I'm wrong, but wasn't the aforementioned flaw with Intel's i820 chipset, rather than with RDRAM? I don't recall the i840 chipset having the same limitation (i.e. one of the memory slots disabled). True? Keep in mind, also, that RDRAM won't *necessarily* cost your first-born son next year like it does now. SDRAM prices fluctuated by like 300% in the month of October last year; things change quickly. Perhaps if Rambus can get their shiznit together, Intel's investment might actually pay off.

    --Terrence
  • What would these be useful for? I agree that their speed is pretty impressive since they are air cooled, but I can't seem to find a good reason for buying one--for games, the video card is pretty much what keeps the framerate down, and most high-bandwidth servers don't use x86.

    Anyone have any idea what the price of one of these chips will be? Me, I expect that they will initially sell around $2500--but that's just my estimate. Anyone from *HINT* Intel *HINT* to give us more information on their availability/price?

  • 1200dpi is the MIN standard for photo-quality marcom style artwork. fyi,

    --
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Tuesday February 15, 2000 @06:26PM (#1271087)
    Well, if I'm correct, the Williamette is in the Iltanium/Merced family

    Well, you're not correct; see the Willamette Processor Software Developer's Guide [intel.com], which says "Willamette is the code name for the next generation of 32-bit Intel® Intel Architecture (IA-32) processors".

    Merced is the code name for the first IA-64 (Itanium) chip, and McKinley is apparently the code name for its successor (Itanium II, or some other lame name?).

  • by Signail11 ( 123143 ) on Tuesday February 15, 2000 @12:10PM (#1271088)
    Err, the figures that I referred to were in Celsius. I think that you misunderstand what T_j is. T_j is not the temperature of the surface of the chip, but rather the junction between adjacent non-similarly doped areas of silicon. T_j in a normal desktop or laptop computer is significantly higher than room temperature; I don't have any exact figures handy, but I would be willing to bet that T_j is over a hundred degrees C in any commercial x86 chip operating under normal conditions.
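The usual back-of-the-envelope relation is T_j = T_ambient + P * theta_ja, where theta_ja is the junction-to-ambient thermal resistance. With made-up but plausible numbers for a chip of that era (the 25W and 2.0 C/W figures are illustrative guesses, not Intel data):

```python
# T_j = T_ambient + power * theta_ja (junction-to-ambient thermal resistance).
# 25W dissipation and 2.0 C/W are assumed values for illustration only.
def junction_temp(t_ambient_c, power_w, theta_ja_c_per_w):
    return t_ambient_c + power_w * theta_ja_c_per_w

print(junction_temp(20.0, 25.0, 2.0))   # 70.0 C in a 20 C room
# Working backwards: for T_j itself to be 20 C under those assumptions,
# the ambient would have to be 20 - 25*2 = -30 C.
print(junction_temp(-30.0, 25.0, 2.0))  # 20.0
```

Which is the whole point of the "near Antarctic" jab elsewhere in this thread.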
  • by inquis ( 143542 ) on Tuesday February 15, 2000 @11:52AM (#1271089)

    I would just like to reiterate that frequency is relatively unimportant compared to how fast this thing crunches numbers. Since no numbers to that effect were released, it can be assumed that this is just another bit of Intel PR posturing.

    Also, it is not mentioned how much this thing was cooled to be able to hit 1500MHZ. I would lay money down that AMD's newest Athlon, when properly cooled, would be able to hit at least this number easily.

    On a side note, this paragraph held interest for me:

    "The second half will also see the introduction of Timna, a Celeron with an integrated graphics chip and memory controller. Although originally rumored to be compatible with next-generation Rambus memory, the chip will at first work with ordinary, less-expensive memory. The Rambus move will occur in 2001, said Pat Gelsinger, an Intel vice president."

    RAMBUS tech, while viable and more than just a little cool, will be dead as a doornail without support from motherboard manufacturers, and it looks like postponing its official Intel adoption by several years will effectively kill it for good. Of course, the Timna sounds like a dead-end technology (who would want graphics that you have to replace the chip to upgrade?), so I don't think it would be something I would waste expensive RAMBUS on anyway.

    Methinks Intel needs to be beaten with a cluestick.

    the inquisitor

  • by tjwhaynes ( 114792 ) on Tuesday February 15, 2000 @11:44AM (#1271091)

    Intriguingly, this article totally fails to mention just how much cooling the Willamette required for operation, or how stable it was in operation. The mention that it 'barely made 1.5GHz' doesn't suggest to me that stability was an important part of this demonstration. It's also interesting to note that the timeline for Willamette is still scheduled for late this year, so I suspect this sample is one of the best off the line so far. The recent fan-cooled 1.1GHz Athlon demonstration may prove to be a more realistic view of the Q4 performance we are likely to be able to get our hands on, although Kryotech may prove me wrong.

    Also intriguing is Intel's reluctance to push up the speeds of the Celerons closer to their limits. This is rapidly turning into an overclocking dream - I've seen 500MHz Celerons go easily to 640MHz, whereas the Pentium IIIs seem to be selected to be much more difficult to successfully overclock. So the announcement of 600MHz Celerons seems long overdue - my only thought is that Intel does not want the Celeron line encroaching on their Pentium sales, since there appear to be no technical reasons for the delay.

    Cheers,

    Toby Haynes

  • by duplex ( 142954 ) on Tuesday February 15, 2000 @12:32PM (#1271092)
    Lots of people. CAD users, 3D graphics designers, programmers (compilation speed!), etc.
    If you are thinking about posting another "who cares" comment, think twice: just because it doesn't affect you doesn't mean it won't affect others. Being a programmer, I know that a five-hour compilation is the norm on a 500MHz PIII. A 1.5GHz Willamette should do the job in well under two hours. That's a lot of time saved on compilation.

    As for the "Intel can't supply Coppermines at decent clock speed so Willamette is vapourware" comments: that's simply rubbish. Coppermine is an old design and Intel could only push it so far (I heard they had to reroute the chip to get it to 1GHz). However, Willamette is a new design altogether, so if it's done properly they shouldn't have so many yield problems. Having said that, I don't think their design can match that of the Athlon, which was designed by one of the main Alpha guys (and it shows).

    What truly sucks about this announcement, however, is that Intel is trying to make us buy Rambus crap. And I don't want it. And nobody else apart from Intel wants Rambus. It's expensive, has latency problems, and carries an implicit Rambus tax. I hate Intel pushing these political decisions down our throats. That's why I stick with AMD.
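Under naive clock-linear scaling (ignoring disk, memory, and IPC differences, all of which matter for big builds), the compile-time arithmetic looks like:

```python
# Naive clock-linear compile-time estimate; real builds are often
# disk- and memory-bound, so treat this as an upper bound on the win.
def scaled_time_hours(baseline_hours, baseline_mhz, target_mhz):
    return baseline_hours * baseline_mhz / target_mhz

est = scaled_time_hours(5.0, 500, 1500)
print(round(est, 2))   # 1.67 hours, i.e. about 1h40m
```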

  • by Stickerboy ( 61554 ) on Tuesday February 15, 2000 @12:44PM (#1271093) Homepage
    ...can be found at AnandTech [anandtech.com]. It covers much more ground than simply the rivalry between AMD and Intel, including some interesting specs about the Willamette architecture:
    • 2x ALU unit (i.e. the integer processor runs at 3.0 GHz)
    • FSB runs at "400 MHz" (similar to the "200 MHz" EV6 bus)
    • the introduction of SSE2
    It also talks more about Intel recognizing the need for DDR SDRAM systems as well.
