1100 MHz 'Athlon Killer' Due From Intel in December

jeffstar writes "According to this article at The Register, Intel has an 1100 MHz 'Athlon Killer' IA32 chip coming out. Yum, that's the kind of sauce I like." Sounds great. If it comes out - and performs - as promised.
This discussion has been archived. No new comments can be posted.

  • Nonsense. They are, as every chip manufacturer is, pushing as hard as they possibly can to advance the state of the art. If a G4 stomps all over a PIII in certain applications, it does not mean that Intel has secret PIIIIs or PIIIIIIIs sitting in dark closets waiting to produce, it means that Intel is no longer producing the highest performance chips. What with the Athlon work being done, it is possible that Intel is now the weakest of major chip manufacturers now that Cyrixes and WinChips aren't exactly a factor.
    Not only could they not 'release this processor at any time', they still haven't released it, and there is every reason to believe they will not release it in the manner they suggest either. It will take longer or be slower. They don't have stuff waiting offstage- this is _the_ premier 'paranoid corporation', the last one in the world that would be sitting around going "Ho hum, we got this chip here, seems to go real fast. Maybe we should make some of them and sell them, or gosh, why don't we just ignore the competition and tack it up on the wall for a while instead? It's real purty-like."
    Uh-uh. Sorry. There is no Intel Fairy. That mystical creature seems to be hanging around Motorola, nVidia and AMD these days...
  • Do you realize how much 8 megs of SRAM would cost? SRAM is the whole reason that many processors (UltraSPARC, Pentium Pro/Xeon, etc.) cost so much damned money. If AMD had an 8 meg Athlon it would cost more than 2 or 3 Xeons.
  • If you choose to believe Intel - go right ahead. Where's coppermine? Wasn't that supposed to ship Oct 17?

    (disclaimer: "coppermine" in no way refers to copper wiring technology. Intel still uses aluminum. Apparently, they know better than the rest of the chip industry)

    "The number of suckers born each minute doubles every 18 months."
  • by Anonymous Coward
    Point 1: I work at Intel. If people even messed around circulating something with "Athlon Killer" written on it they would probably be fired if it ever got near a manager. Point 2: Rambus was not Intel technology, it was Rambus technology....
  • One fellow ("ntsucks") said, "Either Intel has a stunning ability to improve its engineering process and timelines or they were withholding better chips until we had all purchased their current chip du jour"...
    To which I would have to add, "...or they are talking absolute crap".

    :)

    Come on, people, get real. You're being manipulated. It's crazy to take some company's random promises as accomplished facts. You sure wouldn't do it for Apple, why pretend that Intel has a crystal ball? The intelligence you're insulting is strictly your own, 'cause other people are reading your comments and going "uh-HUH. Riiiiight. Aren't people credulous? Damn."
    Seriously. Take a few deep breaths.
  • While that may be its on-chip frequency, that isn't always its emission frequency. If it were its emission frequency, there would be no way in hell the FCC would let it be sold in the US.
  • The Register didn't have any sources. True... but the Register has been right on a lot of things in the past. It's like Matt Drudge: take it with a grain of salt, but it is quite likely correct. --- "Progress is the God of the Machine"
  • Essentially, every time Intel starts with a new chip, they back their production off to the largest die size they can fit and start making chips slower than the end result will be. Then they release the chips at increasing speeds. They never introduce a new chip and make it in many speeds, as that would not make them as much money. I stopped buying the in-between chips. I wait till the next iteration comes out, then I buy the fastest of the older chip. I won't be bullied by some corporation! I will not be briefed, filed, stamped, indexed! ;)


    Bad Mojo
  • The anagrams never lie! I particularly like that one :)
  • From what I understand, Rev 2 will be copper, .18, support SMP, hit 1 GHz, and will have more added to the chip - so it would be like a K7-2 =). So if they are comparing to Rev 1, then this might be correct, but if they are going up against Rev 2, then this is most likely FUD.
  • It could/would be a big issue for the DOJ for two reasons.
    First, there is a big difference between a market leader making a statement and a market underdog making one. For a Sherman antitrust violation, a company must be a monopoly and engage in anti-competitive behavior. Adobe does not have a monopoly in the relevant market, but Intel likely does. While there is no bright-line distinction, companies have been found to be monopolies with 60% market share. Calling the chip an "Athlon Killer" may not be per se anticompetitive; however, it would make convincing evidence that Intel's intent was to destroy competition. The Aluminum Company of America (Alcoa) was found to violate Sherman antitrust law merely for increasing its production facilities.
    Also, if Intel is under a specific consent decree not to use such language, violation of that decree could have legal repercussions. The Register refers to an agreement that Intel has with the DOJ; however, I do not know the specifics.
  • This is exactly what Intel want you to think.

    Microsoft use similar tactics - spread rumours about some wonderful product you're releasing in the near future, that beats your competitor's. Foolish consumers wait for your product, while you slowly let the release date slip, and slip, as all the while your competitor is losing business.
  • This is simply absurd. A lot of people don't realize it, but even once a processor design is finished, it doesn't mean it will be in full production. The shortest time there has ever been between a processor taping out and going into production is 11 months. It normally takes longer than that. The Willamette has not taped out yet. The probability of it being out this year at all, let alone at 1100 MHz, is simply 0.
  • There is nothing wrong with competing by producing a better product. It is against the law AND contrary to a free market to establish a monopoly and engage in anti-competitive conduct. If Intel's goal is to drive AMD out of business, it is both a civil and criminal violation of the law.
  • If you read a print ad, you'll see those benchmarks were done using twice as much cache that was also running faster, which does increase the benchmark numbers. If Motorola used these in production it would be great, but they're having enough trouble getting above 500 MHz.
  • The last I heard Intel doesn't plan copper until they switch to a .13 micron process.

    If you extend the int and fpu scores on an Athlon to 1100 MHz, it will actually be faster than this "Athlon killer". Besides, Willamette was supposed to come out at the end of 1998; they're way behind.
  • Am I the only one who's having flashbacks to that anti-drug commercial from the 80s?

    Just for the record, it wasn't an anti-drug ad. It was an anti-leaving-your-dog-in-the-car ad. The ad went:

    Announcer: "Hot enough to fry an egg?"
    (view of egg frying on a car hood)
    Announcer: "Hot enough to fry a dog's brain..."
    (view of sad-faced dog with tongue hanging out)
    Announcer: "[stern admonition about leaving the dog in the car]"

    Pretty disturbing, really. Poor little dog.
  • by Soong ( 7225 ) on Tuesday October 19, 1999 @10:03AM (#1602642) Homepage Journal
    1. Megahertz is a dead end

    Already processors are too fast for the rest of the system. This has been alleviated for the last decade by an increasingly complicated system of caches and chipsets. At worst you'll go through 3 levels of processor cache, main memory, disk cache and finally disk, for a total of 6 levels of memory. This could go on indefinitely but will have decreasing returns, unless the architecture of the computer can catch up to be generally faster. SGI/Cray has done this well.

    2. Megahertz == Marketing

    Ever since the P2, it's been terribly obvious that Intel just develops to satisfy what the clueless majority of consumers wants: a higher megahertz number. The P2 made it blatant by being inferior to the older Pentiums when run at equal megahertz. The only benefit was that it would run at higher megahertz.

    Efficiency

    No x86 has been really efficient, in many ways: more gates, more watts, more space, more heat. The unfortunate predominance of x86 is leading to space robots being designed with Pentiums, because Intel can push through to get the chips certified. When multiprocessing becomes a necessity as clock speeds dead-end, who will be able to afford the power and the large case for cooling that 8-64 P[3-5]s will need? It's absurd.
  • Here are some of the things I'd do with a 1100MHz CPU:

    SETI@Home
    calculating digits of PI
    searching for Mersenne primes
    ...any of a number of other neat distributed computing projects.

    But most of all, can you imagine how fast a version of POV-Ray (http://www.povray.org) optimized for this chip would run?

    It's a dream come true for us glass sphere and checkerboard folks!

    Rick







  • JC [jc-news.com] (at http://www.jc-news.com/pc/) made a good point on his page the other day about this, I'll quote it here...

    "Register put up a very interesting bit here. It's about a surprise Willamette introduction in February of 2000 ("paper launch" in December, chip actually appearing two months later, according to the article). I passed this by despite the fact that a good ten percent (slight exaggeration, but you get the idea) of y'all emailed the URL to me. It just doesn't seem likely, considering the design, to our collective knowledge, hasn't taped out (and if it did, it was likely recently). Takes about a year from tapeout to production. You do the math. However, as I said, I wasn't going to put up a link to it, but I just realized something (thanks to Jocelyn Fournier, I think, for nudging me in this direction). The specint95 score of the P7-1100 shown at that register article is utter crap. If it is really the case that it is that slow, then Willamette will be pretty pathetic for servers, especially if you consider the 1MB on-die L2. The quoted score is 43 at 1100MHz. By my guesstimations (with the help of idiot from Ace's), an Athlon at 1100MHz would score between 50 and 55 (perhaps subtract a point or two for dropoff from linearity), depending on whether or not you optimize for prefetching. This means that Athlon pastes these alleged Willamette scores in specint. Actually, from the look of it, given Intel's Coppermine presentation at PF, it seems that Coppermine is also faster than Willamette in specint. I didn't check at all with the Winstone score, but as you can see, if Register's data is true, then it isn't really great news for Intel. I don't know about you, but I'll prefer to believe the more reasonable assumption that Willamette will come out in 2000 Q4 (or 2001 Q1) but will be totally rippin' in performance."
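JC's guesstimate above is just linear clock scaling with a point or two knocked off for dropoff from linearity; a throwaway sketch of that arithmetic (the 600MHz baseline score of 29 is an invented figure for illustration, not a published result):

```python
def scale_specint(base_score, base_mhz, target_mhz, dropoff=0.95):
    """Naive back-of-envelope scaling: assume the score grows
    linearly with clock, then shave a few percent because memory
    doesn't speed up with the core."""
    return base_score * (target_mhz / base_mhz) * dropoff

# Hypothetical: a 600MHz Athlon scoring ~29 on specint95
est = scale_specint(29, 600, 1100)  # lands in JC's 50-55 range
```

The point of the sketch is only that any plausible baseline extrapolates well past the quoted Willamette score of 43.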
  • And you still only get 40fps in MK (with sound)
  • I may get an Athlon sooner!

    Let the price wars heat up!

    Don't you just love competition!
  • Wrong again! HE said 640K! hehehe :)

    *Sigh* Not only is satire gone, but a search party was sent out and found only its distant cousins, ignorance and misconception.
  • it's about time there's some real competition in the CPU market. AMD is really forcing Intel to get newer chips out much faster.
  • 0.18 micron does not refer to the wafer size. It is related to transistor size. The smaller the circuitry, the faster it can go for less power. If you want to bump up the speed, you have to raise the voltage, which lets you go for higher clocks at the cost of more heat dissipation. I think the equation is somewhere in Hennessy and Patterson's book.

    Hasdi
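The equation the parent is reaching for is presumably the classic CMOS dynamic-power relation, roughly P ~ C * V^2 * f; a toy sketch with made-up component values:

```python
def dynamic_power(cap_farads, volts, freq_hz):
    """Classic CMOS dynamic power: P ~ C * V^2 * f.
    Shrinking transistors lowers C, letting you raise f at the
    same power; raising V to push f costs quadratically."""
    return cap_farads * volts**2 * freq_hz

# made-up numbers: same capacitance and clock, half the voltage
p_high = dynamic_power(1e-9, 2.0, 500e6)
p_low = dynamic_power(1e-9, 1.0, 500e6)  # a quarter of p_high
```

Since voltage enters squared, process shrinks (lower C, and usually lower V) are the cheap road to higher clocks, and overvolting is the expensive one.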
  • by T4b ( 74819 )
    And to think I just got my 450..
    D'you think it'll be better than the dual overclocked Celerons running at the same speed?
  • If you're gonna troll, at least make up your mind on which conspiracy to blame it on! :) Or do you mean it's Wintel combined? hehe
  • That's not a bug, it's a feature. :)
  • Anyone know if it'll still have the Unique ID bug in it?

    For some strange reason, I refuse to buy Intel at the moment...
  • This processor, if it actually existed, would be much faster than Merced (if it actually existed), Intel's first cut of IA64. Or did you think because Merced was 64-bit it would be faster than any 32-bit processor?
  • by Anonymous Coward

    This kind of RAM uses a narrower connection to the memory controller (typically 16-bit) than traditional SDRAM (typically 64-bit), but transmits data at a higher frequency (350MHz here), on both edges of the clock (hence PC700).

    Though you get higher memory bandwidth (1.4GB/s here, versus 0.8GB/s for PC100 SDRAM) you must be aware that the memory latency is worse.

    See Rambus, Inc's web site [rambus.com]
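The bandwidth figures in the parent fall straight out of width x clock x transfers-per-clock; a quick sketch of the peak numbers (the latency caveat still stands):

```python
def bandwidth_bytes(bus_bits, clock_hz, transfers_per_clock):
    """Peak memory bandwidth = bus width * clock * transfers per clock."""
    return (bus_bits // 8) * clock_hz * transfers_per_clock

rdram = bandwidth_bytes(16, 350e6, 2)  # 16-bit Direct Rambus, DDR at 350MHz
sdram = bandwidth_bytes(64, 100e6, 1)  # 64-bit PC100 SDRAM
# rdram -> 1.4e9 (1.4GB/s), sdram -> 0.8e9 (0.8GB/s)
```

These are peak figures only; as the parent says, the Rambus part trades worse latency for that extra bandwidth.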

  • Why not just go for dual 1100's?

    *grin*

  • 1100 MHz? That's either 100MHz FSB at 11x multiplier, or some really strange FSB speed. You can't get to 1100MHz with 133MHz FSB (it's an 8.25x multiplier). I think someone at the Register is just having fun...
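The multiplier arithmetic above can be brute-forced; a throwaway sketch assuming half-step multipliers from 2x to 12x (the range is an assumption for illustration):

```python
def reachable(target_mhz, fsb_mhz, max_mult=12.0):
    """Return the multiplier that hits target_mhz from the given
    front-side bus, trying half-step multipliers; None if no
    multiplier lands within 1MHz of the target."""
    mult = 2.0
    while mult <= max_mult:
        if abs(fsb_mhz * mult - target_mhz) < 1:
            return mult
        mult += 0.5
    return None

# 100MHz FSB reaches 1100 at 11x; a 133MHz FSB can't
# (it would need the oddball 8.25x the parent mentions)
```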
  • I've seen a fair few comments about the Register being biased against Intel and MS, but I've got to say that as far as I'm concerned their coverage of future chips and stuff is pretty accurate. If they say it's doubtful, I'd tend to believe them.
  • But can Intel pull that off? Their 600s are really OC'd 550s, running 0.05 (or is it 0.5?) V higher than normal :) and unstable as Taiwan from what I've heard :)
  • Will these fast CPUs run where the ambient air temp is 100F (38C)? Why is the min/max ambient operating temp never specified anywhere on CPUs?

    This is because you can put the same processor into a well-designed system that provides adequate airflow, keeping the processor well within operating specifications. NOTE: These are typically not created with "off the shelf" parts. They involve design work by a thermal design engineer (or more likely a team of them).

    OR

    You can install that same processor into an off-the-shelf case that provides piss-poor airflow, not because the case has no ventilation, but because the moron that put the motherboard (I still call them planar boards when I think to myself) in the case routed his ribbon cables wherever they lay. The person that uses this system will experience heat-related problems. And being equally unqualified to diagnose the problem, blames the chip! "Dammit! This chip is a space heater!" "No, the chip runs fine; the system was designed by a moron who thinks that the ability to use a screwdriver makes him a design engineer!"

    (sorry for the rant)

    If you purchase a commercial name-brand system, they will tell you the maximum ambient room temperature that they warrant the system to operate in.

  • As happens way too often, /. managed to lose my first response to this, while giving every appearance of having accepted it. *sigh*

    >What you describe is essentially the initial MIPS project started by Hennessy at Stanford.

    I'm glad someone noticed. ;-) Basically I haven't seen much in subsequent processor designs to counter the excellent arguments H&P put forth in their book regarding How It Should Be Done.

    >Most delay slots are never filled

    Obviously, this can vary a lot, but according to the studies I've seen a single delay slot can be filled with something besides a NOP >80% of the time for most kinds of code. The important thing is that a NOP is no worse than a stall, except that the stall usually has a lot more wasted circuitry associated with it.

    Yes, dynamic scheduling can do "better" than static, but at what cost? Does the improvement make up for the additional complexity and limitation of clock rate? More importantly, are there other things that can be done with that real estate which provide even better bang for the buck?
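The delay-slot argument above boils down to a static scheduling pass; a toy sketch (the instruction representation here is invented purely for illustration):

```python
def schedule(body, branch_reads):
    """Toy static delay-slot filler: if the instruction just before
    the branch doesn't feed the branch condition, move it into the
    delay slot; otherwise emit a NOP, which is no worse than the
    stall that interlock hardware would insert anyway."""
    body = list(body)  # don't mutate the caller's list
    if body and body[-1]["writes"].isdisjoint(branch_reads):
        slot = body.pop()
    else:
        slot = {"op": "nop", "writes": set()}
    return body, slot

# hypothetical instruction stream: the load doesn't feed the branch,
# so it can ride in the delay slot instead of a NOP
body = [{"op": "add r1,r2,r3", "writes": {"r1"}},
        {"op": "ld r4,0(sp)", "writes": {"r4"}}]
rest, slot = schedule(body, branch_reads={"r1"})  # branch tests r1
```

Real compilers do much more (moving instructions from the branch target, across basic blocks, and so on), which is where the >80% fill rates come from.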

  • Actually, AMD are working on a 64-bit x86 processor.
    I think I saw it on their web page, but I'm not sure where I saw it :))
  • Dream on. NT will someday be 64 bit like Win95 is 32 bit.

    "The number of suckers born each minute doubles every 18 months."
  • (Case in point, I've got PII-266's systems I'd love to plop 333's into, but they don't make those anymore and the current production chips are multiplier locked.)

    No problem. A 333-MHz P2 chip runs on a 66-MHz bus with a multiplier of 5X. The 500-MHz P3 runs on a 100-MHz bus with a multiplier of 5X. If you place the 500-MHz chip into your LX (please don't be a FX) chipset board, it will run at 66-MHz with a multiplier of 5X, giving you... 333-MHz!!! (Of course, you need to make sure that your BIOS can handle the new chip. If the BIOS doesn't know how to load all the latest workarounds for the chip, you are looking at serious instability.)

    Now, you will be paying the 500-MHz price for a 333-MHz performance, but you DO get your wish.

    -- I could use some karma, please moderate me up 8-)

  • The basic formula for CPU performance, from Hennessy and Patterson, is:
    • WPI * IPC * CPS
    • WPI = Work Per Instruction

      IPC = Instructions Per Cycle

      CPS = Cycles Per Second

    Classical CISC architectures tried to maximize WPI, and this limited the other two factors. RISC was mostly intended to maximize CPS, intentionally sacrificing WPI to do so. Pipelining, superscalarity, and branch prediction are all targeted toward increasing IPC in different ways. VLIW and EPIC improve either WPI or IPC depending on how you look at it.
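The formula is just a product whose units cancel; a sketch with invented numbers for the two design philosophies (not measurements of any real chip):

```python
def perf(work_per_insn, insns_per_cycle, cycles_per_sec):
    """H&P performance product: work/insn * insn/cycle * cycle/s
    cancels out to work per second."""
    return work_per_insn * insns_per_cycle * cycles_per_sec

# Invented numbers illustrating the trade-off:
cisc = perf(2.0, 0.5, 200e6)  # fat instructions, low IPC, modest clock
risc = perf(1.0, 1.0, 400e6)  # simple instructions, cranked clock
```

With these toy figures the RISC design delivers twice the work per second despite each instruction doing half as much, which is the whole argument for sacrificing WPI.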

    All of these approaches to improving performance tend to have characteristic challenges associated with them. In the current case, you have to deal with the fact that massively superscalar architectures require an instruction stream that keeps all the functional units fed. That means that compilers have to try to resolve data dependencies and competition for functional units, either of which would cause a stall, and also deal with branches which cause bubbles in almost any architecture. It's a very tough problem, which is why chip designers turn to second-order tricks such as speculative/predicated execution and VLIW/EPIC.

    Personally, I think that's all a trap because it causes chip complexity to skyrocket and undermines the very idea of RISC. If I were designing a chip, my goal would be to crank the frequency sky-high and make the compiler (or a translating front-end processor such as Transmeta is rumored to be working on) do most of the worrying about how instructions get scheduled. In particular, I'd go for:

    • A moderate number of moderately pipelined functional units, with fully exposed pipelines including whatever delay slots are necessary.
    • Instructions that specify the functional unit, without on-chip dependency checking and such. If the compiler screws up and issues an instruction before its operands are ready, tough for them.
    • Very limited branch hinting. No branch prediction, no speculative execution.
    • Lots of on-chip cache, because there's no way memory will keep up. If the tag-check logic can't be made fast enough, maybe an explicit stack or scratchpad on-chip.
  • Don't forget that it's much easier to test your home built nuclear weapons on a fast computer than doing underground detonations in your basement.
  • by Anonymous Coward
    I don't think you need a 1GHz chip for word processing. I do think you need it to solve large eigenvalue problems and run atmospheric model simulations. A PC/Linux combo is an excellent alternative to expensive workstations in scientific research. It's ironic that these super-fast chips are really overkill for 99.9% of the population, but for the 0.1% of us who do serious number crunching it's a great deal! :)
  • by Anonymous Coward on Tuesday October 19, 1999 @06:53AM (#1602677)
    The Register article you link to is very significant, but perhaps not in the way you intended. On 15/04/99 the Register reported the following:

    Intel is twisting the knife by showing OEMs performance predictions stretching out until late 2000 featuring a Willamette IA32 processor rated at 1100MHz competing with an AMD K7 at a paltry 666MHz.

    No specific figures are quoted, but graphs pitting the rival chips against each other show the Willamette 1100MHz scoring around the 50 mark in Winstone98 against the K7 666MHz at 35. On SpecInt95, Willamette reaches 43 against the AMD part's 20.

    The same graph shows a 666MHz Coppermine appearing in late 1999, a clear 12 months before AMD is expected to reach the magical figure.

    And perhaps more worryingly for AMD, a Coppermine-based Celeron appears in early 2000 (probably at 500MHz and 100MHz FSB with Streaming SIMD) which is predicted to perform almost on a par with the K7 666 reckoned to be due 6-9 months later.


    Rather than demonstrating inaccurate reporting by the Register, this report simply presents Intel's OWN predictions.

    It appears from this that Intel was expecting AMD to be unable to supply 666 MHz Athlons until Q4/2000! As you can see, Intel's current production is right on target, but their predictions for AMD were way off! AMD is over a YEAR ahead of *Intel's* schedule. There's no way for them to adjust for this misprediction quickly, so expect Intel to lose a *lot* of market share to AMD over the next year.

  • "If the heat sink fails" - I always thought that heatsinks operated on pretty basic laws of physics,
    so I don't see how they can fail :P
  • I think that Intel is reaching the upper limit in terms of how powerful they can push the whole x86 thing.

    This has been said now for at least 5 years.

    As of yet, none of the soothsayers who said this have been correct. Who knows, maybe you are the first!

    No, this does not qualify you to claim "first post" if you are right! 8-)

  • Well, that probably depends. I really liked having my P75 run at 1.5x bus speed (on an 83MHz bus :), but Athlons have much more parallelism, multiple execution paths, etc. I'd be willing to bet they would be more efficient.
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • ... but Intel would not let HP release it until it had IA32 backward compatibility. So now we have to wait an extra two years!!

    Why? You can't just recompile closed source. More reason for OSS I say. This is a very good example of the Wintel monopoly holding back technology progress.

    Also, had Intel not designed the x86 architecture in 3 months (it normally takes over a year.. but they had to get it out quick), it might not be taking so long for Merced to appear.

    sigh.
  • No, the CPU actually has to support it along with the chipset.. You can't make a dual K6-3 system, because the CPU doesn't support SMP.

    Sure you can. You just can't find any boards that will do MP for the K6. In actuality, AMD's chips have supported MP since the original K6, albeit not using Intel's SMP specs since Intel wouldn't release their specs. AMD just developed their own MP spec.
  • Hey! I was running a 4x86/133 AMD system, and overclocked it to 160 fine!!! (Gotta love motherboards with 40MHz as an option)

    If I ever get a peltier I might try for 200 (50MHz)... It would boot but lock up pretty fast, too much heat (had to crank the voltage a bit more) :)
  • If only the MoBo wasn't $200.. doh!
  • by Anonymous Coward
    Point 2: Rambus was not Intel technology, it was Rambus technology....

    The technology that ends up in your PC, that is Direct Rambus, was in part developped by Intel.

    Two quotes from the Rambus web site :

    Developed in conjunction with Intel Corporation, Direct Rambus technology has the performance/cost ratio demanded by the high clock-rate microprocessors used in mainstream PCs starting to ship in 1999.

    December 1996: Rambus and Intel disclose agreement to evolve Rambus DRAMs to meet requirements of PC main memory

  • Some work has been done with this. A threaded architecture might be useful here.

    Predication works on a similar assumption. When hard-to-predict branches are predicated, the hardware wastes some time executing useless instructions, but avoids the mispredict overhead of refetch and/or reexecution of everything after the branch.
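The trade-off described above can be framed as an expected-cost comparison; a back-of-envelope sketch with invented cycle counts (arm lengths and flush penalty are assumptions for illustration):

```python
def branch_cost(predict_rate, mispredict_penalty, taken_len, nottaken_len):
    """Expected cycles with a branch predictor: the average arm
    length, plus the flush/refetch penalty charged on the fraction
    of branches predicted wrong."""
    avg_arm = (taken_len + nottaken_len) / 2
    return avg_arm + (1 - predict_rate) * mispredict_penalty

def predicated_cost(taken_len, nottaken_len):
    """Predication executes both arms unconditionally; one arm's
    results are squashed, but there is never a mispredict flush."""
    return taken_len + nottaken_len
```

With these toy numbers, a coin-flip branch (50% predicted) and a 10-cycle flush make predication the cheaper option for a short 3+3-instruction hammock, while a 95%-predictable branch flips the result; that is exactly why compilers predicate only the hard-to-predict branches.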

    --

  • Trailing? This is a highly questionable statement. It has been trailing in ads and M$-style kick-the-baby marketing strategy, but hardly in CPUs. Let us see (sorry for the ugly format, but Taco is censoring table tags):

    Note that these are Top of the Line CPUs, not what was available at the same time. The idea is where does AMD get when it wants to develop a concept, not where it stands at the moment.

    • 286 ranking: Harris - 25MHz with additional prefetch and optimized core; AMD - 20MHz with additional prefetch and optimized core; Intel with 16MHz, barely...
    • 287 ranking: ITT - 20MHz, matrix instructions, three register sets, etc.; AMD - 20MHz, optimized core; Intel - 12MHz, pathetic
    • 386 (SX and DX) ranking: 40MHz AMD with optimized mul; Intel trailing at 33MHz with a worse core. Cyrix was mostly doing coprocessors at the time and there was basically no match there.
    • 486 clones: 166MHz AMD with the X5 - 64 (128) instruction prefetch, write-back L1 cache, etc. Trailed far behind by Intel's 100MHz DX4, which did not even have a proper write-back cache. That was the peak point, with UMC, Cyrix, ITT and TI wrestling for the branch.
    • Pentium and clones: AMD K5 166MHz, followed by Intel P5 133, followed by Cyrix.
    • Pentium MMX and clones: Again K6 266MHz, trailed by Pentium MMX and mobile Pentium.
    • Optimized P6-like cores: Again K6-3 450MHz, trailed by PIII.
    • Athlon: Here Intel does not have an answer yet.

    A note - so far Intel has used better marketing and came out with products before AMD. So the fact that AMD sooner or later blew it out of the water in every CPU category was never taken into account. Now AMD came out with the Athlon before Intel. The game is starting to get interesting...

    And an additional last comment: Intel can raise their frequency to terahertz if they want, but with their current bus it will still be slower than an Athlon...

  • Is anyone else getting tired of non-compatible CPU sockets? OK, so I can see AMD using the Slot A tech to utilize the Alpha tech... I can see Intel's switch to Slot 1 (because they hold the patent). But how many different freakin' sockets does Intel have to bring out? Please... this is getting ridiculous. Socket 7 was great (had some drawbacks, yes). Socket 370 or whatever that non-slot Celeron is, Slot 1, Slot 2, and now a NEW socket design from Intel? Gah!
  • Man, I got the rare chance where I work to show off an AMD K7 for a tech expo, and I'm telling you, the thing was fast. But what's up with the damn thing? I thought it was supposed to be stable, man. If it was, then I hope AMD gets this new chip right, because otherwise I'm sticking with my reliable P3 500. At least I don't need to restart it every 15 minutes when it's playing Unreal (about every hour). I have to agree: when it's not overheating, it's a very fast chip.
  • Excellent point - nothing to add...
  • ..it is nowhere near the Pentium III "killer" on the server until they release those damn MP motherboards.

    I was at the Palo Alto Fry's the other day and they had dual Athlon MB's in stock for something around $400. Would that I had ~$600 to upgrade my system (And add the second processor later).
  • We'll see. My theory is that it will ship all 64-bit, but it will be about as popular as NT-on-Alpha.
  • But Intel aren't being anti-competitive, and they don't have the same size monopoly as they used to.

    To be anti-competitive would be to reduce the price to below that of the Athlon and run at a loss and just hope that AMD goes bust first.

    Of course Intel want people to buy their own chips and not AMD's. Of course Intel want to drive AMD out of this particular area in chip making - that's what's called competing.

    Nowadays, "the better product" is not just the best tech, but the best tech, price, sales, marketing etc.
    --
  • Well, the Pentium classic made it all the way up to 200MHz, I think.

    The thing is, AMD always comes out with one generation about a year before Intel reaches the same version number. The K5 beat the 486, but it paled against the Pentium. The K6 beat the shit out of a Pentium, but paled in most cases against a P6 core. This K7 looks to kick a P6's ass, but I'd be willing to bet that the next-gen P7s will probably beat it...
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • Then there would be a way to do a dual 333 AMD system, eh?
  • Intel is evil, I knew it. And we all know about the sum of the ASCII values of "Bill Gates 3", "WINDOWS95" and "MS-DOS 6.22" (I may not have the caps right on some of these and haven't checked them). Wintel is the tool of Satan! Get the Holy Linux!

    Did you mean 'hacker' or 'cracker'?
    Do you know the difference? I don't think you do.

  • Actually, the PII/Celeron are more like P6 MMX, and the PIII is P6 MMX/KNI :)
  • What you describe is essentially the initial MIPS project started by Hennessy at Stanford. MIPS is an acronym for "Microprocessor without Interlock(ed) Pipe Stages."

    The problem is, it's very tough for the compiler to do a good job scheduling statically. Most delay slots are never filled. Much more information is available at run-time (in a limited window for the hardware), so it can make some better decisions than a static compiler can.

    However, the compiler can look much further ahead than the processor, so it seems that some sort of hybrid solution is called for. Whether that involves profiling and feedback optimization a la FX!32 and others, new ISA or something else is still an open question, I think. IA64 has made steps in this direction.

    --

  • Every once in a while, they'll run an actual truthful news story, but they seem to do the bulk of their reporting sheerly off of rumors and conjecture. 99% of the stuff they print turns out not to be true. Why do people still use them as a "news source"?

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • Yes, stupid question, I know; of course we want it. But what I mean is: should we be so eagerly anticipating something that will simply have more resources to be eaten up by sloppy code? (Let's face it, M$ still holds the biggest share in OSes, despite best efforts.) Shouldn't we also be pushing for tighter and better coding, which would fix probably half the resource problems we have now?
  • Nice speed, but do I have to install it in my freezer?
  • Interpreted languages are superior in a number of ways from the standpoint of the programmer. They are simpler to modify and read,

    "Simpler to modify and read" in what sense? If you mean it's easier to read programs written in those languages, and to modify an existing program written in those languages, how much of that is due to the language and how much of it is due to its implementation being interpretive?

    (Is "interpreted" a property of a language or an implementation? I think the first LISP implementations were interpreted, but LISP compilers exist; most C implementations are compiled, but I think C interpreters exist. I could imagine Perl or VB implementations that generate compiled code - I have the impression that VB code can be compiled into machine code - and if you were to translate one of this sort of language into, say, Java byte codes, and to run them in an environment with a JIT compiler, is it interpreted or compiled?)

    Some of the benefits may be due to the implementation being interpretive, e.g. an interpreter might be able to do a better job of telling you where something blew up (although symbolic debuggers can, at least sometimes, do a decent job of that, at least if the code is unoptimized), but I'm curious whether a sufficiently clever non-interpretive environment could do as good a job.

    I.e., speeding up "higher-level" languages might be doable by means other than throwing faster processors at them; one can debate whether they're better doable by those means, but that's a separate question.

    But, yes, it's not ipso facto the case that faster processors serve only to encourage sloppy code; some might debate whether software and what it can do has progressed in any truly useful fashion since the days of the Manchester Mark 1, but....
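    As a concrete illustration of the language-vs-implementation point above, CPython (the reference Python implementation, used here purely as an example) actually compiles source to a bytecode object before any "interpretation" happens:

```python
# CPython compiles source text to a code object up front; only the
# resulting bytecode is "interpreted". This suggests "interpreted"
# describes an implementation strategy, not the language itself.
code = compile("x = 2 + 3", "<example>", "exec")

namespace = {}
exec(code, namespace)  # the bytecode is what actually gets executed
print(namespace["x"])  # -> 5
```

    The same source text could equally be fed to an ahead-of-time or JIT compiler, which is why the question above seems fair.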

  • by Special J ( 641 ) on Tuesday October 19, 1999 @03:38AM (#1602747) Homepage
    It wasn't even six months ago that people talked about AMD's "Pentium Killer". Now it's the other way around. Changes fast, doesn't it? It used to be that everyone after Intel was trying to make the Pentium Killer. This is the first time I can recall in x86 land that Intel is the one making the "Killer".

    Perhaps this is a true sign that AMD is a legitimate competitor to Intel, not just in the low end but in the high end too. If you didn't think that already.
  • No, we should be using our superior OS to make them all look bad on the same hardware. I know that's what we've all been doing now, but I suspect the faster the clock, the more noticeable the differences will become again.

    ----

  • by .pentai. ( 37595 ) on Tuesday October 19, 1999 @03:38AM (#1602754) Homepage
    Seriously curious...what would be the point of this for the majority of us? I'm running a single celeron 400 (soon to be dual) and it does everything I want, without problems. Kernel compilations are in the low single digits, I can play any games full speed, why does the average person here need something this fast? We don't, other than possibly being able to say "haw, my computer is faster than yours."

    Just some thoughts...though I wouldn't complain getting one of these things for my birthday or anything :P


  • This sounds a lot like more vaporware.
    Sure, it'll come out eventually.
    "Released on paper sometime in January," with the chips actually available around two months after that. Now doesn't that strike you as equivalent to "The check is in the mail"?
    I want one. We all want one. But announcing plans to release something drastically cooler than everything else on the market should require a definitive time frame, especially when using that many "killer" buzzwords.

    Athlon killer? Who even has one yet? Where do I get a motherboard for an Athlon?
    I can't believe this was anything but the PR department's intentional release of memos to get noticed and to try to take sales from AMD.
  • The Register didn't have any sources. Sorry, but if you haven't read the article, do so.

    "We know, from a highly reliable source..."

    "It's also worth referring back to this piece, which also came from a highly reliable source..."

    "Another reliable source tells The Register..."

    "One US source says..."

    Hehehe. Boy I get a lot of laughs out of this kind of journalism.

    But seriously, it seems to me that at this kind of speed (if it were to be true), the processor isn't going to be the bottleneck (but that will vary depending on what you are doing, of course). The slow point for most of the things I do is, believe it or not, my internet connection. (And I'm on a cable modem.) Give me a low-end pentium class machine and a blazing link, and I'll be a happy man.

    However, that is all just my opinion...but I got it from a reliable source. ;)


  • the 386 was revolutionary, the pentium as well,

    "Revolutionary" in what sense?

    The 386 was the first 32-bit x86 processor, and the first one with support for demand paging - it had a new instruction-set architecture. Not particularly revolutionary in general, but revolutionary for x86.

    The Pentium implemented the same instruction set architecture (with some minor additions); it was primarily revolutionary in its implementation, in that it was the first superscalar x86 chip to ship (again, not particularly revolutionary in general, but revolutionary for x86).

    The latter means that, with Pentium, they pretty much, well, "went for the speed race".

    Intel should be coming up with new technology

    ...or getting it from HP. (I have the impression a lot, perhaps most, of the ideas in IA-64 came from HP.)

    The Lame Unit In I.T. does have a new instruction set, because it'll be the first IA-64 implementation; is that the kind of "better technology" you're looking for?

  • by freakho ( 28342 ) on Tuesday October 19, 1999 @03:43AM (#1602772) Homepage
    It won't be an "Athlon Killer" unless it is competitively priced. Assuming this report is reliable, Intel takes it from paper to silicon, and a lot of other stuff, it still won't appeal to the typical computer buyer (which ain't us anymore) unless there's not too huge a price gap. Which would mean Intel selling under cost yet again, and how long can they keep that up? Fiscally, a long time, admittedly, but I'm talking logically.

    fh
  • It would seem like a possible marketing move by Intel. *Maybe* they'll be able to pull this off, but afaik they haven't announced anything official. It would seem that by leaking rumors that they will be shipping these chips by Q1 2000, they might be able to hold onto some customers who might otherwise consider putting Athlons in their systems (OEMs) or buying Athlon-based systems (both corporate buyers and individuals).

    joe
  • by mosch ( 204 ) on Tuesday October 19, 1999 @03:47AM (#1602777) Homepage
    But I'm doubtful. Intel doesn't have a particularly stunning record with delivering chips early and I'd rather not buy one of their step 0 chips anyway.

    Let's see: AMD gets market share and major recognition with a quality product, and now suddenly Intel is claiming that it can make much faster chips RSN. Whatever.

    I'm personally sick of talk of vaporware. I love new technology and reading about the future, but I don't buy my computers based on speculation from unnamed sources regarding the possible date that a chip will get put to paper. It's utterly irrelevant.

    Call me when it's in silicon.
  • it seems to me that adding pipelines (moderate space cost) and execution units (higher space cost) would bring more performance than higher clock rates. that is, if the same effort to increase clock speed was put into superscalar expansion, the payoff would be greater, provided there's enough space.

    also, it seems to me, multiple short pipelines would yield higher performance than fewer, higher-clocked, deeper pipelines.

    i believe the reason intel goes the faster, deeper route (compared to slower, wider) is simply cuz:

    1) it's cheaper to deepen pipelines and it isn't too hard to get a good enough signal to noise ratio for higher clock speeds (is that even an issue?)

    2) marketing. this is the obvious one. they can say "our chips are fast! more MHz than our competitors" and the general public doesn't know any better.

    ps, please correct me on anything, i'm just guessing at some of this stuff
  • by Anonymous Coward
    You'll need an 1100MHz chip minimum just to run the OS.
  • Two words: Power Trip!

    I just get this "MUUUAHAHAHA" feeling when I think of a 1GHz power machine.

  • by Hasdi Hashim ( 17383 ) on Tuesday October 19, 1999 @03:58AM (#1602805) Homepage
    1. The Register is a rumour mill

    2. At 0.18 micron this stuff needs a supa-dupa cooling system. Maybe with a sharper fab process you can get this speed

    3. Needs a very large cache, a very wide memory bus and heavy interleaving, because the last time I checked, memory was still running at 100MHz max.

    If I were you, I'd either get a dual celeron bundle [tntcomputer.com] at $799 or a 400MHz PPC750 with monitor [apple.com] for $999.
  • Actually, the alien overlords have allowed Intel to bring forth this new chip for the good of mankind. However, you might want to wear tin-foil and colanders on your head, lest the RF emissions from this Gigahertz+ chip turn you into a mindless overclocking zombie (an unfortunate side effect discovered when the celerons were first overclocked).
  • HOW DARE YOU!!!?!?!?! Are you crazy???? Of course you, as Joe Consumer, need an 1100 MHz CPU for your Word and Quicken programs!!! You should be ashamed for being perfectly happy with what you have!!! You MUST feel the need to upgrade every time Intel releases a CPU!!!! You will be assimilated!!
  • > Now its the other way around.

    That's exactly what you are supposed to conclude. But it isn't "now", it's several months from now. Maybe.

    So long as the chips aren't shipping, it's vapourware.

    Or perhaps paperware, in this case.

    --
    It's October 6th. Where's W2K? Over the horizon again, eh?
  • by hedgehog_uk ( 66749 ) on Tuesday October 19, 1999 @04:06AM (#1602826) Homepage
    According to the article, the Willamette is coming out 9 months early. So were Intel originally planning on sitting on this chip for 9 months, but were forced to release it early by the impending 1GHz Athlons due early next year? Is this chip going to be fully tested, or will it contain major problems like those found in the i820 chipset? Perhaps this is just vapourware, designed to put off people who were thinking of buying Athlon systems - this wouldn't be the first time that companies have done this.

    I'm left wondering if this article is going to be any more accurate than one the Register ran earlier this year [theregister.co.uk] when they said that the 666MHz Coppermine would appear in late 1999, "clear 12 months before AMD is expected to reach the magical figure". Yeah, right.

    HH

  • by aheitner ( 3273 ) on Tuesday October 19, 1999 @04:07AM (#1602828)
    largely that it's very hard to parallelize code so that you can run it through separate execution units without stalling the processor. With the Pentium's two shallow integer execution units it was possible to hand-optimize your assembly to keep the two pipes filled. But breaking up code that is linear in design (i.e. most programs have a single "flow" and assume linearity of execution as their core model) into parallel chunks is a hard problem.

    Continuing down the "more, simpler pipes" path is akin to explicitly parallel chips. It's a hot area of research, and there are some applications for which it might pay off (the ones where multiprocessor machines already pay off, perhaps: servers that are doing several unrelated things at once), but for doing just one thing and doing it fast, faster and deeper is probably a far easier problem. Remember, Intel has had problems with the old P6 core (ppro/pII/pIII) because it's already very hard to write a compiler that doesn't stall it left and right.

    With all that said, I don't see any mention in this article about the actual design of the new chip, except for some very vague (and likely wrong, imho) stuff in the article about Willamette that's referenced in this one.
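    The "breaking up linear code" problem described above can be seen even in a toy transformation: a single running total forms one long dependency chain, while two independent partial sums could, on suitable hardware, advance through separate pipes and be merged at the end. A minimal sketch (Python won't execute this any faster; it only shows the shape of the rewrite a compiler or hand-optimizer performs):

```python
data = list(range(100))

# Serial version: every addition depends on the previous result,
# so the additions cannot overlap in separate execution units.
total = 0
for x in data:
    total += x

# Split version: two independent dependency chains that a superscalar
# machine could, in principle, advance in the same cycle.
even_sum = sum(data[0::2])
odd_sum = sum(data[1::2])

assert even_sum + odd_sum == total  # same answer, different data flow
```

    The hard part, as noted, is that most real code has branches and data dependencies that make this split far less mechanical than a simple reduction.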
  • by David Greene ( 463 ) on Tuesday October 19, 1999 @04:15AM (#1602835)
    While marketing probably plays a (small) part, more importantly the parallelism just isn't there. At least not to any degree that the hardware can see it.

    Branch prediction is the major problem. Sure, predicting one branch may work 90% of the time, but when you start talking about wide machines, all of a sudden you're predicting 2, 3 or 4+ branches at once. Your prediction rate goes way down. Fast.

    A student here did a study that showed >50% of the processor cycles were spent recovering from branches. And I don't think the study was on a particularly aggressive machine (though I can check that).

    The encouraging thing is, if we can get around branch problems (and that's a huge if), the parallelism is there. But not where the machine can see it. There was a study exploring the limits of ILP in Spec95 (yes, not realistic benchmarks, but it's what was available). If you assume perfect prediction (yes, completely unrealistic, but this was a limit study) and remove the stack pointer (which is often on the critical path of instruction dependencies), you can get parallelism in the hundreds (for integer programs) or thousands (for floating point stuff) of instructions.

    But there's a catch. If your instruction window is 10k instructions wide or less (a completely unrealistic size, by the way), the parallelism drops by an order of magnitude or more. The hardware doesn't have enough context to see it. But the compiler does. Think about forking threads on function calls when you can, and you'll see where I'm going.

    Some kind of model like Simultaneous MultiThreading may be needed in the future. Compaq is working hard on this for the Alpha.

    What's important to remember is that we've received the biggest speed boosts from the process guys. Cranking the clock and packing in gates (i.e. cache) does much more than adding another pipeline. Remember Moore's Law.

    --
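    The compounding effect described above is easy to put numbers on. Under the rough (and admittedly simplistic) assumption that branch predictions succeed independently, a 90% per-branch hit rate decays geometrically as you predict more branches at once:

```python
# Rough model: probability that ALL n simultaneously predicted branches
# are correct, assuming independent predictions at 90% accuracy each.
per_branch = 0.9
for n in (1, 2, 3, 4):
    print(n, round(per_branch ** n, 3))
```

    By four branches per group, the whole group is right only about two-thirds of the time, which helps explain why wide machines spend so many cycles in branch recovery.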

  • If you look through the back-stories of The Register, you'll find that basically everything they've ever said has turned out to be a lie, or unprovable. I'd take this story, and the one about the 1GHz copper Athlons in January, with more than a little grain of salt. Also, those SpecINT numbers are much, much worse than even a P3 overclocked to the same speed would post. I think The Register can't even be bothered to lie convincingly.

    --Conquering the Earth Since 1978.
  • One step closer to the frequency of a typical microwave oven (around 2.4 GHz)... Imagine, your pizza will never get cold if you just put it on top of your tower.
  • I remember two years ago, playing around with projections, and my friends and I came to the conclusion that we'd have 1 GHz processors and 1 GB RAM before 2000.

    I'm happy to see that the GHz barrier is likely to be broken before 2000, if just barely (though you have to wonder how much vaporware this is). As for the GB RAM barrier, I guess it's always possible, but it's starting to sound like overkill (well, maybe not for W2K, but certainly for most of the uses I make of apps under Linux!)

    Now all I need is a 1 TB hard drive to go with that 1 GB RAM and 1 GHz processor. In a Palm Pilot. There's nothing like misusing power to put any 1990 supercomputer to shame at playing X-mines!

    "There is no surer way to ruin a good discussion than to contaminate it with the facts."

  • Intel didn't release ANY information about the Willamette (P7) at the Microprocessor Forum. Now, as much as people want speed, Intel also has shareholders to appease. If they knew about an impending P7 release, they'd have to make that knowledge public; otherwise they'd be misleading shareholders into believing that the Coppermine is the Q4 1999 and Q1 2000 contender.

    While we're on that subject: if Intel does paper-release the P7 in December, they've pretty much signed the death certificate for the Coppermine and PIII line. Now, Intel's a marketing genius (love them or hate them for their technology, but any company that can convince people they need a PIII for the Internet has strong marketing), so there's no way they'll throw away all those ad dollars on the PIII line quite yet.

    The Register had been getting better, but this is reverting to their old self...

  • AMD has been trailing Intel since the 286. Intel has had plenty of time to destroy AMD.

    Thanks to AMD and others, Intel is not a monopoly. And, you can buy lots of nice machines for what a fully loaded TRS-80 cost in 1978.
  • Funny - my Celeron 366 is currently running at 550, and my P3/450 is running at 600...

  • 1.1GHz? Intel? So is this a replacement for my stove or central heating? Do I need one of those big restaurant freezers, or can I just move to Nome and keep it outside?

  • I believe Intel has been holding back higher-speed parts from public release because of cost. How many want to bet that Intel has machines in their building running at even higher than 1100? Think about it: they release incremental ?33's, ?50's and ?66's just to put more money in their pocket. People will buy whatever is the cheapest and most cost-effective.
  • In the past, Intel has always been able to dismiss their competition as inferior. Oh, sure, there were times companies like AMD or Cyrix had a slightly faster chip, or better price-performance on the low end, but they were always brief and/or insignificant. Undercutting prices is a common enough thing on the low end; AMD/etc simply has to reduce costs below the giant Intel.

    Athlon was different. Athlon challenged Intel on Intel's home turf, and won. It was the fastest high-end x86 CPU around, and is going to stay that way for at least several months, if not longer. Intel had a serious threat for the first time. AMD may still be small compared to the behemoth Intel, but David was smaller than Goliath as well.

    The fact was one thing, but as we know, the spin can be another. Intel could have found some sort of flaw in Athlon, or fired up the FUD guns. In most cases, you can argue some point or other as an advantage over your competitors. Even Windows, to use the favorite /. example, does a few things better than Linux.

    But Intel did not do that. Intel could not find a way to counter Athlon in the trenches. Intel looked for ammo, and found none. For the first time, Intel looked at the competition, and found itself unable to immediately compete!

    Now Intel is scrambling to catch up, to try and build a counter-weapon to use against Athlon. The fact that they feel the need to "kill" Athlon is very telling. It is one thing to know you have a threat. It is quite another to classify it as the threat.

    By accepting AMD's challenge, by admitting that the Athlon is strong enough that they need to target an entire product series at it as an "Athlon killer", Intel admits that they have lost a battle. That AMD has stolen ground away from them. That Intel is wounded enough for it to hurt.

    It may be only in pride, or in market perception, that Intel feels pain. Their sales are still large enough that AMD is no immediate financial threat.

    But suddenly, the small fry that they paid little mind to before has woken up and bitten them hard. Hard enough for Intel to step back, shake itself, and wonder what to do about this new threat.

    I imagine the British felt a similar feeling when their American colonies fought to break loose -- and started to win.

    It will be very interesting to watch this war as it unfolds.
  • by Anonymous Coward on Tuesday October 19, 1999 @06:10AM (#1602918)
    Hi, Mike Magee from The Register here (mike.magee@theregister.co.uk). When we use the phrase source, reliable or highly reliable, it's because we need to protect the identity of people. How soon, for example, would someone be fired if we published their names and email addresses? We call this protecting our sources. Don't think we make this stuff up -- we don't...

    Plus, our view is that applying the "sacred principle" of journalism to the IT business is counterproductive. If we had to put every single one of our sources "on the record", they'd say -- err, I signed an NDA...

    Mike

Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing had happened. -- Winston Churchill
