Merced Design Completed 140

NoWhere Man writes "Merced's design is complete and the chip is due to go into production in mid-2000, but it is expected that McKinley, Merced's successor (due in late 2000), will likely be the most popular of Intel's 64-bit chips."
This discussion has been archived. No new comments can be posted.

  • Win2k 64-bit will be shipped whether it's ready or not, at a cost to those who really need it ready. Then Janus will come out, promising to fix all the problems in the universe.
  • by Anonymous Coward
    You know, this is the one truly amazing technical thing that Apple did that it never gets credit for. When I was in college and got my first PowerMac, 95% of the older software just worked (except for a few really cool hacks, but I knew they wouldn't).

    They moved an entire OS, an entire platform and all of the applications that ran on it from one chip to another almost completely seamlessly.

    It went so well that no one talks about it. I think that that is Apple's best technical achievement ever.

    - AC
  • by Anonymous Coward

    As anyone in the hardware business knows, there is a huge gulf between what Intel's just announced (preparing the design to be fabricated into physical chips for the first time, traditionally called "tape-out" because the design data used to be shipped to the people making the masks on spools of tape) and actually being ready to ship product to customers (i.e. OEMs).

    Keep in mind that this means they don't have a single physical Merced chip yet, which means the design has only been simulated, never really tested. Although I don't know anything about Intel's internal simulation tools and methodologies, I'd be surprised if they've even been able to boot an operating system on their simulated design. I'd be amazed if they've simulated significant real-world applications. In other words, at this point Merced has had very little real-world testing. That's what the next step ("first silicon") is all about: testing out whether the design actually works in practice (and in systems) rather than just in theory (under pre-silicon simulation).

    That's not to say that Intel hasn't expended a huge effort already in working to eliminate as many bugs as possible before taping out (in fact, I'm sure they have), but there are always more bugs when the silicon arrives. In fact, many of the problems encountered in first silicon are effects which simulation can't (or at least doesn't) effectively capture (race conditions that earlier analysis missed, unanticipated electrical and electromagnetic effects, yield problems, etc.). These problems are worse when you're talking about a new from-scratch design. There are also bound to be other problems when you're talking about a totally new processor architecture (such as IA-64).

    Usually the process between tape-out and shipping to clients goes like this:

    1. Tape out
    2. Wait for the chips to come back from the fab
    3. Attempt to run some real software (OSs, applications, etc.) on the silicon you have
    4. Discover bugs in the design
    5. Update the design to fix the bugs
    6. Go back to step 1 and repeat until it actually works

    To me, the estimate of one quarter between tape-out and shipping samples to OEMs seems extremely optimistic. Of course, I suppose Intel could be planning on using the OEMs for debugging their first or second pass silicon (before they've made it really robust), but somehow I doubt that. I would expect that it might be more like two quarters before anybody outside Intel actually sees a physical Merced chip.

    But then again, hey, what do I know.

    Kenneth C. Schalk < kenneth.schalk@compaq.com [mailto]>
    Software Engineer
    CAD & Test Group
    Alpha Development Group
    Compaq [compaq.com]
  • by Anonymous Coward
    I just had to reply to all these random comments about Merced to answer a few questions. I don't know everything, because I didn't design it, but I've heard bits and pieces.

    The first concern was when Merced is going to be out. I noticed there was one reply that explained that pretty well. They just finished the DESIGN of the chip. You are going to have to wait until the processor is taped out and silicon is ready for system validation. After the initial chip is produced, which takes a few weeks, it will be hammered for bugs.

    I also wanted to comment about it being buggy. Intel learned from the first Pentium that had problems. You don't make a mistake like that twice, but there is no such thing as a bugless chip. Some bugs will never be found. Sometimes you have to run a single test 100,000 times to find one bug. But you'd better bet this chip isn't going to be the Windows 95 of microprocessors.

    I also wouldn't get your hopes up about actually getting one of these in your hands. They are going to start out like the Xeon: really expensive, for businesses. There is no way everything is going to jump from 32-bit to 64-bit architecture overnight. So keep your pants on, it will be a while. Anyway, I can't wait until they are affordable, but it will be a long time. Faster games are always good, but they don't rely on CPUs as much anymore. Processor speed isn't the bottleneck anymore. Think about it: I have a Pentium 400 at home, but the bus speed is only 100. Anyone see the problem?

    Oh yeah, before I forget: I don't know what you are all talking about the Alpha chip for. If you mean the really fast one that Digital designed... it doesn't exist anymore. Digital is dead. It was bought by Intel and Compaq. That is what I do. Intel got the StrongARM chip from Digital. It will be used in set-top boxes. Really useful when digital television becomes a standard. Expect to be able to surf the web on your TV pretty soon.
    I think some of the technology that went into the StrongARM came from the Alpha chip. It's a RISC processor; the second chip due out should give 600 MHz at half a watt. Check it out on ARM's homepage. :)
  • by Erich ( 151 )
    That the initial version of the IA64 chip won't be all that fast... stick with IA32 unless you need to develop for IA64...
  • Linux will not be a 32-bit OS on Merced. If it can be made to run natively at all (and Linus has been quoted as saying that this is a "done deal"), it'll be full 64-bit, just like it is on Alpha and UltraSparc.
  • And it's not '3dfx' stuff- yes, Altivec is vector registers, i.e. four 32-bit values can be changed with one instruction, but these are general purpose registers, and Apple is already trying to figure out ways to use these 'fscking huge' registers in anything from screen update to memory management to Quicktime to yada yada yada- sky's the limit! If you can get a 128-bit chunk of it into the register, you can do it faster. One convolution kernel ran something like 200x faster on Altivec because it happened to coincide with register operations.
    Because these registers are general purpose, there's every reason to expect that gcc/egcs can be told about them, and hackers can find ways to make use of them- ideally by telling compilers how to optimize so the vector processors/huge registers get used. At that point, any PPC Linux users can simply recompile favorite applications to get them running markedly faster- and recompile the kernel to get that running markedly faster- something that MacOS users will only see indirectly, with system updates and with Quicktime.
    Personally, I'm looking forward to hearing about this...
  • Cyrix dead, Winchip dead, AMD has had the COO bail out and is hemorrhaging money like mad. All this because of Intel and their basic viciousness and inability to coexist with competition. And you think everyone should make nice with them _now_ that they are finally stepping into almost complete monopoly for CPUs?
    I do have one of their CPUs in the house. :P Brought home the pet 486 from work that I've been putting linux on. However, the main computer is non-Intel. It's a Mac. I like Macs, but isn't it kind of pathetic that the only choice you have other than Intel is to go use Macs? (yes, I know, Athlon. Yes, please go and buy lots of them. AMD deserves better than the stomping they are getting. I don't think being better will save them, so buy _now_ before it's too late.)
    I've never been so pleased to be running a 604e as now, when the whole x86 market is being systematically exterminated. I'll just keep on supporting PPCs, seeing as I already have the Mac and the software to run on them. Things are already nasty on x86, good luck being able to afford Alpha machines (and people say Macs are expensive!) and if you're still supporting Intel, well, just think about what your money is doing, won't you? One hand supports Linux, for now- the other's killing all your hardware choices as fast as it can.
  • The first few instances of each new product line from Intel have always sucked:

    • Pentium 60 & 75
    • Pentium II 233 & 266
    • Cacheless Celerons
    • Pentium III (not much faster than a Celeron at the same clock speed)
    Later revs will probably kick butt.
  • I'm surprised no one seems to talk about Transmeta anymore....

    I haven't heard a good "Transmeta gonna whoop Intel" rumor in months.
  • This is because the biggest bottleneck isn't the WARP engine setup details, or anything chip-related, it's the fact that GLX (the OpenGL framework the G200/G400 and nvidia drivers use) relies on the X protocol, and moving that much info through sockets is gonna slow it down. I would not expect to see anything better than 10-20 fps on X at all until XFree86 4.0 and DRI are out later this year.

  • For a good example of how to dump legacy support in a chip, look at what Apple did when they moved from the 680x0 to the PowerPC... the PowerPC has no support for 680x0 instructions, so Apple built an emulator... works flawlessly and is a very elegant solution.

  • I'm running two E250s with 400MHz Sparc chips and 1GB of RAM each, and man, they're nice... :)

    The newer, better, faster Sparcs on the horizon should be even sweeter.

  • >Look at Digitals compilers under Digital Unix.
    >Produces much faster code than \1. But does
    >that matter one bit for the \1 community? No.
    >Didn't think so.

    Speak for yourself. I work daily from my FreeBSD box to my boss's Linux box running a commercial Fortran compiler (g77, etc., don't even play in the same league). We have Absoft Fortran, but it would have been Digital's if it were available.

    We were even willing to pay the extra cost for the alpha box, but the costs of DU itself, both for purchase and the risk of getting sucked into the university system and fee'd to death there, mean the x86/linux/absoft solution.

    A year and a half later, I've paid the price. I never thought I'd see the day I *needed* a 64-bit operating system, but now I do: I need an array with more than 2^32 bits, and more than 2^32 bytes would be nice, too. Absoft uses Cray code that bit-addresses, leaving the size limit on an array of derived type at about .25G.
  • >Think about it: if a kernel compiled under, say, Sun compiler will run
    >twice as fast as one compiled under gcc, what will happen to gcc?

    Absolutely nothing. People will still keep on using GCC as they have always done. Why? Because GCC is multi-platform. Compilers like Sun's aren't. End of story.
  • Actually, you have it backwards; Motorola will be supplying the Altivec-enabled chips, while IBM is concentrating on increasing the clock speed rather than the number of instructions.
    Phil Fraering "Humans. Go Fig." - Rita

  • Never, if the gaming companies have anything to do with it. As processors and memory become faster, games become more bloated and less well-written.

    Added to that, processor design has become more bloated, moving deeper into a large, complex instruction set. Simpler processors, such as the ARM, outpaced the Intel chips even at a fraction of the clockspeed, because they were better designed.

    Finally, throw in that most modern OS' are bloated and top-heavy, Linux being one notable exception, and you've a recipe for a horrible quagmire from which REAL games and gamers may never escape.

  • by jd ( 1658 )
    I doubt 64-bit support will be robust. I think there will be a lot of heat-related reliability issues, especially when run flat-out. I think there's likely to be at least one bug in the arithmetic unit.
  • There's also something bigger, better and eventually cheaper on the horizon. If a system meets your needs now then get it.
  • So what kind of nifty bugs in these chips are we going to be anticipating?
  • True! But design should be complete before production starts, and this is what they claim. It is just another link in the chain. For those who know which link they depend on, it gives you something to think about ;-).
  • I hate to assume things, but you were talking about Pentium III instead of Merced. Right?

    I just like to see new announcements ;-)

  • Both Merced and W2K face an uncertain future (will NT4 users upgrade to W2K soon? will enterprise Pentium users upgrade to Merced?).

    And if you merge those two (while they are still trying to form a duo/coalition), their future is even more uncertain.

    But anyway, whether they both succeed or not (or both fail), it can be good: it'll show people that the past years of "innovation" as performed by the Wintel coalition have been mostly the result of marketing (because real innovation is done in laboratories, not on papers containing press releases).

    It can also cost us (or them? or some other users? or some other developers? ...) a lot.

  • Merced as competition to Alpha will lead Alpha to be better and better, thus I'm very happy Intel is working on it (even though they are (for now) far worse than Alpha).

    Just encourage competition so we can get better products at better prices and with a lot of choices.

  • >>>wait and see boys and girls IBM bought sequent for a reason

    To get their NUMA work?

  • Although I don't know anything about Intel's internal simulation tools and methodologies, I'd be surprised if they've even been able to boot an operating system on their simulated design.

    I infer from "simulated design" that you mean a simulation of the Merced implementation of the IA-64 architecture, not just a simulation of the IA-64 architecture; other followups have said "they booted {NT, HP-UX} on a simulator", but that might have been a simulation of the IA-64 instruction architecture, rather than simulating Merced at, say, the gate level, and such a simulation wouldn't have tested the Merced design, it'd just have tested the software changes needed to make the OS run on an IA-64 processor.

    (I.e., I'm actually replying, in bulk, to the folks who said "but they booted XXX on a simulator" in response to you; booting some OS on an IA-64 simulator doesn't necessarily test the design of a particular implementation of IA-64, and thus doesn't necessarily ferret out bugs in that implementation.)

  • Alpha: Born 1992. Merced: Born 2000? Who needs yet another 64-bit architecture, anyway, especially since Alpha is rumored to be at 1400 MHz in 2000?
    there are 3 kinds of people:
    * those who can count
  • Computer programmers have lost one of the original coding techniques that they all used to follow: "Use efficient coding methods."

    Most good programmers went to "get it right, and then get it fast."
  • I was talking with my sys admin (an HP fan). I mentioned getting an alpha for some number crunching which I need to do. Take a look at this

    http://www.hp.com/visualize/products/cclass/c3000/tech_specs/index.html

    I'm not sure where the Alpha is now, but this thing is turning out some impressive numbers.

  • Because, if there isn't a stable, scalable 64-bit version of Windows Server ready by the end of next year, the path will be clear for the *nixes to severely dent Microsoft's marketshare.

    The Dodger

  • My "To Do" list includes building an E3500 with four 400MHz UltraSPARC IIs and 4 GB of RAM, with an A5200 FC-AL storage array.

    And I get paid to do this!!! God, sometimes I really love my job... Pity I have to ship it out after build and installation. :+/

    But there is some consolation in the fact that I'm going on the Sun Performance Tuning course soon. I'll be able to squeeze even more performance out of these little beauties. Before the end of the year, I'll be doing clusters... Hey, maybe Santa will bring me a Starfire! :-)

    The Dodger

  • Yeah, aren't Intel providing support for Linux on IA-64?

    The other unix vendors are also planning to bring their OS's to IA-64.

    I think that Microsoft's bid to position Windows as an "enterprise" OS could well fail miserably.


  • Yes... but for Alpha, if I recall correctly what I read about Compaq a while ago. :-)
  • It is a pity that we'll have to forget that old machine code, which some of us got so attached to.

    Does it also mean that it is a bad idea to buy an "old" PentiumIII processor ?

    Software written for the Merced may not run on a PIII, but software written for older x86s will still run on the Merced. Just as current x86 chips can emulate "real mode" for older apps, so the Merced will be able to use and/or emulate the processing modes used in current x86s.

    A dedicated x86 clone - like the K7 - will be able to run these applications faster. However, _if_ they did a good job on the Merced's core, applications written natively for the Merced will run faster than applications written natively for the K7, as the K7 will still be hampered by the x86 instruction set and register structure.

    _If_ Intel did a good job on the Merced core, it will be fast but still 1.5x as expensive as other RISC solutions due to the extra silicon needed to support x86 legacy features.

    However, I gather that they may not have done such a good job on the core. We'll see when prototypes are benchmarked.

  • What is the point in making faster computers? Just gives software companies the ability to slack off a bit...

    Please click on "user info" above and see my previous response in this thread. It is a reply to another poster who presented almost identical arguments.

  • I am fully aware that games are getting more complex than those of earlier years, but there is still no doubt in my mind that code could still be optimized. And my comment goes beyond just games, to applications and operating systems as well...

    Again Linux is the exception....

    Correction: Windows is the exception. See the post that I referred to.

    Re. games, I realize that most game software isn't perfectly tuned, but all of the "boost FPS by 50%" optimizations will already have been made, because it is in the game company's financial interest to do so - as a result of this optimization, they can either lower the system requirements or keep the frame rate and jack up the graphics detail. Both correlate directly to better sales.

    From where do you get the impression that games are horribly written?

  • by Christopher Thomas ( 11717 ) on Wednesday July 14, 1999 @07:52AM (#1802902)
    Never, if the gaming companies have anything to do with it. As processors and memory become faster, games become more bloated and less well-written.

    Um, no. New games require more hardware because they have fancier special effects and more detailed models. This is not really related to code complexity. It makes the _data_files_ larger, naturally, but that's about it.

    Granted, there are some game writers who consider special effects a reasonable substitute for gameplay and plotting. These writers' games will sink, however, because consumers do want games that are actually fun to play.

    Re. hardware vs. games, game hardware requirements will plateau when cheap hardware exists that can handle just about all of the special effects in the OpenGL feature set for photorealistic models at high resolution in real-time. Beyond that, there isn't anything left to add hardware load on the graphics side of things.

    Things like AI and physics may continue to develop after that, but physics at least won't add much more load if you have hardware that powerful.

    Added to that, processor design has become more bloated, moving deeper into a large, complex instruction set. Simpler processors, such as the ARM, outpaced the Intel chips even at a fraction of the clockspeed, because they were better designed.

    Um, no. Look at just about any non-Intel processor. Intel chips are bloated because Intel continues to support and extend an instruction set that wasn't designed to be extensible. They're about the only major microprocessor manufacturer that made this mistake.

    Also, didn't the ARM lack a floating-point unit? With more silicon to devote, of course they'll be faster at integer operations.

    Finally, throw in that most modern OS' are bloated and top-heavy, Linux being one notable exception

    And *BSD and BeOS and...

    Microsoft is the primary culprit for slow OSs. This is because Microsoft is purely market-driven, and the market that they cater to would rather buy a new version of the OS with more features than a new version of the OS that works more efficiently.

    OSs and chips can be designed cleanly - and _are_, with only a few exceptions. Take a look around at what's available, and you may be pleasantly surprised.

  • I saw a report earlier this year sometime that they'd managed to boot NT on their simulation.

    Which must give them some hope.
  • > Because, if there isn't a stable, scalable
    > 64-bit version of Windows Server
    > ready by the end of next year, the path will be
    > clear for the *nixes to severely
    > dent Microsoft's marketshare.

    Nope. W2K is not 64-bit capable. In fact, Windows will probably run in a 32-bit emulation mode, much like what NT4 does on the Alpha. From what I understand, NT4 is _not_ easily ported to 64-bit, for some reason or another (otherwise, why isn't the Alpha version of NT running in 64-bit?). This means it will require significant time on the part of M$ to either port their OS to 64-bit, or write a totally new version of Windows...

    As soon as Linux gets a compiler, we will take full advantage of the architecture. You know how quickly we work... Another factor is that Intel has been making friendly gestures at the Linux community. They were even so audacious as to donate some compiler optimizations. Personally, I think Mickeysoft and Windows are heading into some dark days, which I don't think they will survive.

  • It must be nice to have a major website quote you without any challenge whatsoever. Where's the skepticism? If Intel has missed all of its other targets, why does anyone think Merced by mid-2000 will be any different? I'm not knocking what they are trying to do, but they are still a long way from a product anyone can use, and there's no guarantee that this will be a success.

    This article from Byte [byte.com] goes into some of the problems Intel has from this stage forward. A little low-tech for /. but you're young, you can afford to slum a little.
  • I have a friend who works at Digital/Compaq. The 1.5 GHz chips are a reality, although they may not be quite into production. Digital is putting out prototype systems that, when the design is completed, will support 256 1.4 GHz Alphas with multiple gigs of ram per chip. That will be one sweet assed machine!
  • IA64 is not designed to be totally 64-bit, but to be easily expandable.

    What this says to me is: it's to get press on server stuff so the K7 doesn't !?

    They are still significantly tweaking the compiler !!

    The actual silicon is not the hard part; it's the software, i.e. the compiler.

    IA64 needs a *GOOD* compiler to even compete with Alphas, but a really good compiler (years off) would kill an Alpha at the same clock rate. HP are doing Trimaran, and many supercomputer centers have done research on how VLIW (sorry, EPIC) works and could be better.

    >>>wait and see boys and girls IBM bought sequent for a reason

    john jones

    a poor student @ bournemouth uni in the UK (a dyslexic, so please don't moan about the spelling but the content)
  • Who wants to bet that M$ will be pressuring Intel to delay the release of the Merced until they can finish throwing something together?
    The Merced is supposed to run IA32 code, but will it run a 32-bit OS like Linux or NT out of the box (or straight off the net, in the case of Linux)?
  • no.

    PIII != Merced, and I said Merced.

  • First off, I have some pics of the Athlon, Merced and G4 here [zebulun.org]. TheRegister [theregister.co.uk] has a lot of info on what's going on in the CPU market.

    Basically: Merced has been on the brink of failure for quite a while now. The performance of the ones made so far is considerably less than that of the PII at lower clock speeds.

    The development of the Athlon (aka K7, by AMD) has been quite secret. It is actually a super powerful chip and is using something like 256k cache to bring down the price, and it still will whoop the PIII at equal MHz (and 512k cache, in FPU benchmarks too!). Rumor has it that they will be releasing 512k and 1MB cache Intel-killer Athlons shortly after debuting it at 256k. The Athlon will be using Slot A, which makes sense as they have been in bed with Alpha Processor Inc., Samsung's processor company. (Those of you who still think that Digital owns API, you're mistaken, as they made a deal to sell off their majority in the company to Samsung.) So as we see 1GHz Alphas debut without cooling, you know that they are sharing that technology with AMD, so the future is really bright for AMD. One nifty thing: on the Register, I saw an 8-processor motherboard being made for the Athlon. Hello, low-cost supercomputing.

    Motorola, makers of the PowerPC processor line, will be introducing the G4. Rumor from the Mac side is that due to a dispute between Motorola and IBM, who share in the production and design of the PowerPC processor line, there will be two different versions of the G4 coming out. One, which will be made by IBM, will include a special instruction set that Mac OS X can/will be optimized for, which will increase 3D rendering (sorta like 3dfx, from what I understand of it), whereas Motorola will make non-optimized G4s that cost much less than the IBM-manufactured ones. This probably means that lower-cost Macs, such as the iMac, will use the Motorola G4 and upper-end Macs like the PowerMac will use the IBM one.


  • Intel themselves have said in press releases that IA-64 is intended to be a server architecture. It may happen somewhere down the line that they slowly migrate into the consumer realm, but not even Intel is pushing that concept. If it's any indication of their plans, the "Willamette" architecture has been handed the "P7" designation, not Merced. Maybe they'll call it Pentium !V or something like that.
  • Too bad its crippled. Even Carmack's mad skills can't save it. Without the info needed to program the triangle setup engine (which matrox has "no plans to release" in the near future), you have to treat that reasonably fast hardware like an old scan-line only card. If you want fast opensourced linux 3D, it would appear that nvidia is your only option.
  • Don't you have to be a bit of a sicko to have gotten attached to x86 machine code? :-) Maybe I've just been programming ARMs for too long...

  • And we care because...? I love the Athlon and all, but the Alpha already can do 1GHz, at room temperature with no extra cooling. That just blows everything else out of the water, including the most recent vaporware crap from Intel. Isn't this obvious? They are trying to kill the buzz about the K7 by saying they might finally almost possibly be close to thinking about making Merced. Gimme a break, I'll stick with my Alpha, thank you very much.
  • And McKinley is going to get a nice ass-whooping by the 21364 or even the 21464 Alphas. I have more trust in Compaq/Samsung/API than Intel. Especially now that we are going to see IBM helping to make copper Alphas, this could be a good time. The Alpha has always had the speed advantage over most other platforms, the only exception being HP's PA-RISC, which it occasionally shares the top slot with, but since HP is dropping that to make the McKinley crap there really isn't any cause to worry.
    Because a lot of code re-writing/re-compiling is going to be going on, and because everything will have to be moved to 64-bit, hopefully we might even see more apps supporting the Alpha: not only open source, but closed-source commercial stuff, which, as much as we all hate it, is necessary for some things.
    Long Live the Alpha.

    as I think I saw in someone's sig,
    Intel is the question, Alpha is the answer.
  • On this point I hope everything is true; if choosing a processor becomes a no-brainer, then where is the innovation? For too long Intel has had that lock on the desktop market, and look at what it's done: as you said, milked the original PPro architecture for as long as possible. Whereas when you look at the workstation market, the performance is amazing and grows in leaps and bounds. I haven't been watching the higher-class workstation/server market long enough to know if this has been true for a long time, or just recently, but in either case HP's PA-RISC is pushing the Alpha to new heights, the reverse is also true, and the SPARC is included in all of this. Basically it should only lead to good things for fast processors.
  • Well, nothing interesting besides giving them 100 million dollars. Granted it's not all Linux, but it is on the Alpha, so it's not a bad thing.

    http://www.techweb.com/news/story/TWB19990706S0006
  • Come now! I see many people espousing the myth that faster computers make lazy programmers. Do any of you people claiming this write code? Have you looked at other people's code? I don't know about you, but where I work stuff is tight. Not a bubble sort to be seen. If the code gets sloppy it's not laziness, it's MARKET DRIVEN. Sheesh... get a clue. And put those VBRUN DLLs away before you hurt someone.
  • As long as companies like ID are around I don't think this will completely happen. :) In fact John Carmack has been actively working on the Matrox G200 GLX module... it's cool to have accelerated 3D in multiple windows using a totally free, sourced solution.

  • by Kaa ( 21510 ) on Wednesday July 14, 1999 @06:51AM (#1802920) Homepage
    The Merced family is heavily dependent for performance on parallelizing compilers. I suspect that making the silicon will be the easy part (I'm a software guy; hardware guys may disagree with that), but making good compilers to take advantage of the chip will be a bitch.

    It seems that we are entering an era when the performance of your application is going to depend on the quality of your compiler/interpreter as much as on the actual hardware inside the machine. This is both good and scary. Good if the free compilers (like egcs) will be able to compete with and outperform commercial compilers -- that will be a great boost to free software. But there is also the scary part: if the free compilers fail to keep pace with commercial offerings, they will die. Think about it: if a kernel compiled under, say, Sun compiler will run twice as fast as one compiled under gcc, what will happen to gcc?

  • Actually, Merced isn't intended to be sold at all.

    Understand that this was never the original plan. Intel basically realized that Merced will not only cost a fortune and have no application support, but it's performance will also suck so bad they're not even going to try to sell it. Merced might have been good if it was released 2 years ago like it was originally supposed to be, but it simply can't compare to other high-end processors. Plus, its die is so huge that it would most likely have been more expensive. And it currently doesn't have a single OS that will run on it! Not exactly a good buy.

    So Intel decided that Merced will be nothing more than a proof of concept, as well as something they can give to developers and tell them to write programs for the next version (after all, the instruction set is the same). You won't be able to buy an IA-64 processor until McKinley (which, ironically, is being developed almost exclusively by HP; so much for Intel leading a 64-bit revolution), and it will not be cheap. Even McKinley might not be that good, because it will have some tough competition from proven platforms by the time it comes out.

    Basically, don't think x86 is finished. Intel's IA-64 line is not very impressive at the moment. Remember that they are no longer competing in the x86 market with these; they are competing with well-respected and proven designs in a market that has a lot more competition than Intel is used to. I do not think that the entire IA-64 line flopping is out of the question.
  • Your point being? (Bar some pedantic machine code/assembler distinction...)
  • they should just tack MMX and SSE onto the 386 core, and then embed that in the Merced... then forget about it... it's running at like 10gigahz, or whatever. so it'll at least be as fast as regular chips...
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • I've heard rumors that Intel has rooms full of Quad Xeon PC's running Linux Beowulf to do chip simulations for IA-32 and IA64 designs...
  • Yeah, I took an HP class and we were using those; pretty quick! Except they need to be GNUified. Oh, and they'll only take your first-born child as payment! :-0

  • IA-64 is intended for high-end servers, not personal computers or workstations. The first reason is cost: I suspect they will be shipping at around US$5000. The second is the fact that the IA-32 emulator will be slower than a genuine P3, at least initially.
  • 'Undocumented features', surely?
  • I understood that the K7 would be released with 512K half-speed cache, with the chip able to support up to *8*MB of L2 cache, theoretically able to run at clock speed.

    'Course, that would pull enough watts to bake pizza inna PC, but the idea of an 8-way array of 8MB K7s...
  • isn't this kind of like saying that the move from 16-bit processors to 32-bit processors was not necessary?
  • Sun went from 68000 to SPARC and now UltraSPARC
    DEC went from some old MIPS (I think) to ALPHA
    IBM went from who knows what to PowerPC
    Apple went from 68000 to PowerPC

    All of these companies got to a point where they dropped everything and moved on. UltraSPARC is still very similar to the old SPARC V8, but all of them eventually decided that those old '70s and '80s CPU design methods just don't apply anymore. Will Intel ever do that and get rid of the legacy patchwork? :)
  • Agreed. Working on memory chips, I know the feeling. Not as complex per part, but we still get our share of pain. If it's an old process, no problem. If you are dealing with something new, like Merced is, ouch! Its complexity is probably far greater than anything they have worked on, so that adds to the problem. With the size of this thing they could still have several yield issues, and who knows what else. And then the final problem: speed. With all of the delays, the chip's performance can no longer be what was originally planned when they started the design. In the end, the chip might require some design changes just to be marketable at reasonable speeds and yields.
  • backwards compatible.

    It just isn't going to do a great job of it.

    so just recompile everything for it ;)
  • The Alpha did not go to Intel. Samsung has been keeping it alive and well; ever hear of the 21264?? You should check the SPECint/FP numbers on this puppy if you want fast. Merced's estimates don't even come close to the current real-world 21264. Hell, even the 533 MHz 21164 that I am using now is faster than Intel's fastest PIII Xeon at 550! (And the 21164 is about two years old; where was Intel when the 500+ bandwagon drove by back then? 550 is slow; Alpha is already shipping 700 MHz plus, using a .25 core.)

    Get with it!
  • >Wow, that will make a nice gaming machine. I wonder how soon it'll be before we get photorealistic FPS.

    Do we really want photorealistic Quake? I mean, if I saw a photorealistic head exploding, I think I'd be sick. Ick.
  • What reason will there be to buy an Intel IA-64 chip when the major reason businesses buy Intel is backwards compatibility? I will be able to get a 64-bit Alpha chip that not only outperforms a 64-bit Intel but is cheaper. The reason people kept buying Intel is backwards compatibility, and while this may still be true, it is irrelevant. The people buying 64-bit processors are usually corporate customers with large databases in mind. So unless you just want a kick-ass game box, why buy it when there are far better solutions from other vendors?
  • In fact there will be two versions of the G4. One will have a 32-bit data path and the other a 64-bit one. Look up "powerpc G4" at www.mcg.mot.com.
  • I can't wait to hear what a 600 MHz UltraSPARC can do!
  • here's the link for 1 GHz Athlons: Kryotech Super G [kryotech.com]
  • It is a pity that we'll have to forget that old machine code, which some of us got so attached to.

    Does it also mean that it is a bad idea to buy an "old" Pentium III processor?
  • So you think it is all bloated and bad, huh? The problem is not the coders, it's the language. The compilers really aren't that efficient. If you want really fast, really small programs, I guess we could all learn assembly, or better yet, we could write binary code directly. Yeah, that's it!
  • There will be NO BUGS in this chip or any other future Intel masterpiece.

    Only errata.
  • ...for any speed record breakthroughs. The IA-64 chip will rely VERY heavily on compilers and code quality to achieve great performance. Right there you can go ahead and scratch out Windows. Linux will run nicely on it PROVIDED a good compiler is released with it.

    You want 1 GHz? Look out for AMD, people: the K7/Athlon will be there by Y2K (OK, that's just my estimate). They are going to go hand-in-hand with Alpha. Intel's a great company, but they just got outclassed by AMD for the first time, and they won't lead the race again for at least another two years. Alpha? Here's their chance to make a break for it, too.
  • Isn't this the weakest link of all for Intel? I remember having read a while back that the compiler development was going very poorly.
  • That's a very good question. Will we see a repeat of the PPro? I.e., it runs 32 bit code, but not much faster than a true 32 bit processor?
  • Wouldn't it be nice if the masses started to see Alpha for the great chip it is? Wouldn't it be nice if the masses saw Apple machines for what they are (a lot easier to use)? Alpha and Apple seem to be very similar: a total failure to market the products properly. As a CPQ shareholder I would be more than happy to see them finally take some initiative and really push for Alpha sales. Maybe this is finally the opportunity? While you may be right that the current users of 64-bit have the applications they need now, wouldn't it be nice to also have the ability to run all the current 32-bit software (which will likely be redone for native 64-bit at some point)?

  • Yes of course a typo, I meant 16 bit code :)
  • The faster computers get, the more bloated code seems to be. Computer programmers have lost one of the original coding techniques they all used to follow: "Use efficient coding methods." Now it seems to be "Hey, this works...". When you compare the speed of today's computers running today's software with that of 10 years ago, computers are running no faster than they did back then. Sure, you could load the old software onto the computer and it would run fast as ****. But it is all obsolete now. What is the point of making faster computers? It just gives software companies the ability to slack off a bit...

    Like in the case of M$... give them a faster chip and they can add a few more million lines of code to their already bloated OSes...

    (Of course Linux is the one exception to this statement..)
  • I am fully aware that games are getting more complex than those of earlier years, but there is still no doubt in my mind that the code could still be optimized. And my comment goes beyond just games, to applications and operating systems as well...

    Again Linux is the exception....

  • Software written for the Merced may not run on a PIII, but software written for older x86s will still run on the Merced. Just as current x86 chips can emulate "real mode" for older apps, so the Merced will be able to use and/or emulate the processing modes used in current x86s.

    No. No! NOOOOOOO!!!!!!

    I realize there's a huge existing application (and talent) base in supporting legacy CPUs and OSs. But this has really got to stop, folks!

    Can't Intel start with a clean slate for the CPU, sans all x86 baggage, and then provide a software solution for legacy apps? I never had the pleasure of using an Alpha box, but didn't they have something called FX!32 (or similar) for NT on Alpha which ran Win32 binaries? Did it work well?

    Same damned thing with Microsoft. Each new OS carries tons of crap from the previous one.

    Even MS's "32-bit" apps carry old "16-bit" junk around.

    Once I made the following leap of logic, running NT Workstation 4 at home: since I'll only be running Win32 apps, NT shouldn't need to create or support short filenames. So I turned off the registry key for 8.3 filenames and installed Office 97. And then... it just didn't work right. Various errors, complaints about not finding files, etc. convinced me that it just wasn't rid of Win16 baggage.
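    (For anyone who wants to repeat the experiment: the switch I flipped is, as far as I remember, the DWORD value below; double-check the path before trusting my memory, and be ready to set it back to 0 when things break.)

    ```
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
        NtfsDisable8dot3NameCreation (REG_DWORD) = 1
    ```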

    What a joke.
  • I am going to build Sun Cluster 2.2 with two 400 MHz E250s. I'm already having fun with RSC.
    It's a very nice feature if you do support from home.
