Intel

Intel Delays Dual-Core Processor, Plans New Server Chip

Kajakske writes "Intel said Thursday that it is pushing back the release of its first dual-core processor by a year to 2005 and adding a new microprocessor for servers to its Itanium II lineup. On the other hand, Intel is moving forward in the area of new technologies."
  • by CountBrass ( 590228 ) on Friday January 17, 2003 @06:17AM (#5100847)

    Interesting, especially given the lackluster products produced by Motorola and the relative lack of success of AMD (I use an XP1800+ and think it's great; the company just doesn't seem to do too well). I wonder if this lack of competition is a major factor - Intel doesn't need to keep spending money researching new chips if its current generation is so far ahead of its competitors.

    I also wonder if the economy is a factor compounding that - OK, you can research your way into new demand, but why bother when you're that far ahead (see above)?

    All I can say is, hurry up IBM and get those new PPC chips out the door (and into my Mac ;-).

    • by kahei ( 466208 ) on Friday January 17, 2003 @06:28AM (#5100869) Homepage
      Intel doesn't need to keep spending money researching new chips if its current generation is so far ahead of its competitors.


      The thing is, this isn't a chip technology race. It's a chip fabrication/distribution/pricing race.

      Intel's chips are not technologically superior to AMD's (I know Intel has some major technology assets, but they mostly don't affect the chips in production now). On the other hand, Intel's capital, fabrication capacity, distribution, and market clout are far superior to AMD's. Intel is concentrating on the areas where it has the advantage, which are also the decisive areas.

      If only this *was* a technology race. But that's market forces for you.
    • I don't think so. Remember, they're talking about an Itanium II-type chip, not a Pentium. And the Itanium could really use a speedup in the fight against SPARC and MIPS.

      Martin
      • Good point, but I would expect any (successful) technology that appears in the server line to make it into the consumer line at some point. So dropping something from the Itanium today means it's unlikely to appear in the Pentium tomorrow.

    • by nehril ( 115874 ) on Friday January 17, 2003 @06:53AM (#5100927)
      I think they will continue to spend just as much money on research. They just won't *release* new tech unless competition forces them to.

      They may have the technology right now that doubles or triples current performance, but why play that card now? Keep the tech in reserve, and let it roll out at a "natural" Moore's law rate in order to keep the investors happy.

      If Motorola should happen to shock the world and release a 4 GHz multicore G5 running with 800 MHz DDR RAM (we can dream, can't we??), then Intel can roll out whatever it has in reserve a bit earlier.

      Remember, Intel is run by businessmen, for businessmen. Technology to them is only a means to generate cash.
      • by Kourino ( 206616 ) on Friday January 17, 2003 @07:57AM (#5101028) Homepage
        Remember, Intel is run by businessmen, for businessmen. Technology to them is only a means to generate cash.

        Sigh. I suspect that's exactly it. And that's what pisses me off.

        Because, as a paying customer, technology to me is a whole hell of a lot more than a way to generate cash. It's a way to do interesting things, and also an end of its own, in a way - exploring the technology is really fun. Anyone remember sitting down with 16/32-bit assemblers and triple-faulting your processor until you got "protected mode" down?

        I haven't had that much fun directly with a CPU in years. When I get time to play with my EV56 machine, I'll have some of it again; it'll be my first architecture after IA-32 (I haven't done much interesting low-level work on IA-64 besides performance counters).

        And ... waxing philosophical here, so feel free to ignore the rest of this comment. But someone in a different thread recently (don't remember which ... ) commented on the mishandling of the Alpha IP by Compaq, then HP, then its more or less non-use by Intel. And basically said "these people are keeping the market down with their competition, and limiting our future technological growth as a society." I'm not sure how accurate or fair that is (I suspect I'm just getting bitchy now) ... but it's really fscking creepy to think about.

        Although really, this is partially because DEC couldn't market the Alpha to save its life. In fact, it didn't.

        • Remember, Intel is run by businessmen, for businessmen. Technology to them is only a means to generate cash.

          Sigh. I suspect that's exactly it. And that's what pisses me off.


          The problem is that new fab lines cost billions of dollars and laying out a new iteration of a microprocessor is not cheap either, so the chip manufacturers need to be pretty brutal about producing money-makers on their fab lines.
      • by dpilot ( 134227 ) on Friday January 17, 2003 @08:56AM (#5101109) Homepage Journal
        I wouldn't count on this too much. Unless it gets tested in the marketplace, new tech tends to get rather... inbred. Too many generations of "new tech held internally" and you'll find it simply can't be put to market, because it turns out to be irrelevant, or not well adapted to the current situation, or...

        Been there, done that.
      • If they could double or triple the current performance at less than 5 times the cost, they would wipe most other companies out of existence. Get no end of ASCI programs instead of IBM getting them, etc....

        So either they can, and it's too expensive, or they can't (except by sticking 20 MB of cache and 5 cores on the die).
      • by cp5i6 ( 544080 ) on Friday January 17, 2003 @11:17AM (#5101698)
        Intel is actually not run by businessmen, so you are very wrong.

        Andy Grove and Gordon Moore (two founders of Intel) are by far two of the most prominent semiconductor scientists of the 20th century.

        Dr. Grove himself has written over 40 technical papers and holds several patents on semiconductor devices and technology. For six years he taught a graduate course in semiconductor device physics at the University of California, Berkeley.

        How many people here can say they have taught a graduate course at Berkeley for six years?

        Craig Barrett, the current CEO of Intel, is nothing to scoff at either. He's a Fulbright Fellow who received his PhD in materials science at Stanford. He has 40 technical papers dealing with the influence of microstructure on the properties of materials.

        So before you knock Intel for being run by businessmen, do your homework.

        These guys are FAR from businessmen. They are first and foremost incredibly talented scientists who happen to be good at business.

        Intel has one of the world's LARGEST research and development budgets, along with GE, MS, and AMD.

        I'm sorry, but you are sadly mistaken if you feel that Intel is run by businessmen.
        • And frankly, this is one of the great things about our industry: people who know something about the products are running the companies. Microsoft is run by programmers, Intel by chip engineers, National Semiconductor by chip engineers, Oracle by a database designer.... In many other businesses you tend to see MBAs who don't know anything about their underlying products.

      • The problem with your theory is that it doesn't really make sense from a business point of view. The goal of business is to make money; you're right on that. But Intel's not making any money (they actually are, but it's very, very little in comparison to pre-AMD years). A secondary concern is to get the stock price high to keep shareholders happy (since shareholders own the company). But with increased competition, decreased profit margins, and the slump in the tech market, Intel's stock has tanked to practically 20% of where it used to be.

        If I were Intel's CEO, you'd better believe I'd go to bed hoping my researchers found some magic lamp that night. And if I were an Intel shareholder - which I am - I'd damn well want my company to take advantage of any competitive edge it found, especially improved technology.

      • Remember, Intel is run by businessmen, for businessmen.

        I suggest you read Andy Grove's book, Only the Paranoid Survive. Intel is run by engineers who don't differentiate between performance from an engineering or a business perspective. Whether it's optimizing a CPU to run faster or a business unit to produce more cash, it's the same to them.

        Technology to them is only a means to generate cash.

        You say that like it's a bad thing, but consider this: if you're in the 3D industry, games or movies, technology is only a way to generate pretty graphics. If you're in the telco business, technology is only a way to route other people's data from point to point. If you're a naval architect, technology is only a means to make your boat faster.

        See where I'm going with this? No-one apart from hobbyists sees technology as an end in itself - it's got to make their real task easier, or it has no point. If you're an investor, then of course technology is a means to make money.
    • Well, your point is very true... Intel has no competition at the low end (read: x86) of the market; their chips have much higher clock rates than AMD's offerings, and are edging ahead in speed now, but the high clock is most effective in selling them to the masses.
      Where they need to develop and compete is at the high-end market, where they have a rather lackluster product of their own, the Itanium... which is being completely blown away by the Alpha in the raw performance stakes; I think SPARC and POWER4 might be nudging ahead of it too... But when you consider the poor compiler and application support for the Itanium right now, they REALLY fall behind the others...
      And as has been stated before, the Itanium should never have existed... HP should be concentrating on the Alpha, which already has the software support, performance and reputation that the Itanium is still striving for.
    • The article does not spell it out very clearly, but they are talking about a dual-core Itanium, not x86.

      AMD is not the competition here. IBM PPC and Sun SPARC are.

    • I worked for Intel for 3 years, and that isn't their attitude towards things.
      I was glad to see that they didn't rest on their laurels, so to speak; they are forever looking ahead... at least until the current top dogs get replaced by younger people, then you may see something like what you're talking about.
    • > and the relative lack of success of AMD (I use an XP1800+ and think its great,
      > the company just doesn't seem to do too well.)

      Well, part of that is crappy management, but a large portion of their troubles is simply due to the fact that Intel is given the benefit of the doubt by the OEMs and the consumers. Even during the year or two when AMD consistently had faster chips with fewer bugs than Intel had, Intel made tons of money and AMD merely made enough to recoup past debts. People buy Intel because they're Intel. This will happen whether Intel is doing a good job or a bad job. Thankfully, they're doing a good, honest job and earning those buyers now, but from 1998 to 2001, they were not doing their customers honour.

      > Intel doesn't need to keep spending money researching new chips
      > if it's current generation are so far ahead of its competitors.

      They aren't. Intel's Pentium 4 is pretty much on par with AMD's Athlon. But Intel has five or so x86 plants that they can leverage to test different ways to most optimally ramp their chip frequencies. You don't just throw a design and a fab process into a bucket, shake it, and come up with the resultant chip speed. You have to devote a substantial part of your manufacturing resources to the research needed to optimally match your current chip design to your current manufacturing technology.

      In addition to this, Intel happens to be something like a year ahead in base process technology. They moved to 130nm six months before AMD did their equivalent move. This means they're very much ahead in that respect. So even if their chips were a generation *behind*, Intel would be competitive in chip performance (this is what almost happened with the Pentium III and the early implementation of the Pentium 4). As it is, the current P4 is a competitive design coupled with a slightly more advanced manufacturing process, so Intel is a couple of speed grades ahead.

      Intel has to keep researching constantly. AMD does a surprisingly good job of ramping technology at approximately the same rate as Intel, despite having about a twentieth of their capital resources. If Intel stopped researching for just a few weeks, they'd lose the leverage they have to stay superior in the current climate. And that's not counting the outside possibility that K8/Hammer might exceed performance expectations and outperform the top Pentium 4 upon release.

      -JC
    • For most people the question isn't whether to buy a new PC or a new Mac, or even if their new PC will have an Intel or an AMD CPU, it's whether to replace the PC they have with an Intel CPU in it with a new PC with an Intel CPU in it.

      If Intel crams the market with everything it has all at once, that upgrade cycle is going to be longer.

      So, unless there are other market pressures, Intel does well to time its technology introductions to maximize its profits. This may not be the best thing for consumers, but, hey, monopolies usually aren't.
  • G5 race? (Score:2, Insightful)

    by Sh0t ( 607838 )
    Well this should make some maclots happy.

    This may give Apple the time it needs to roll out that mysterious market shattering "g5" processor we keep hearing rumors about.

    Maybe its strategy of riding the tide and investing in long-term goals, rather than trying to grab market share now, will pay off.

    Maybe not
    • Or maybe they just decided to go play golf instead. Or poured some coffee on the prototype.

      Maybe not.

      If you look for meaning, you'll always find it :-)

      Daniel
    • Apple don't make processors. You might have confused the names "Motorola" or "IBM" with "Apple", easily done.

      And AFAIR the G5 is headed for use in PDAs, mobile phones etc., not "real" computers.

      • I know Apple doesn't make the processors, but they market the machines. I was just speaking of the systems as a whole. Which I'm sure you understood.
        • "Which I'm sure you understood"

          Actually, no, I didn't, because I don't believe Apple have ever raved about a G5 machine, or even talked in more general terms about their next-generation machines at all.

      • Re:G5 race? (Score:3, Informative)

        by iNub ( 551859 )
        The Motorola 85xx chip might be going for use in embedded devices, but I can almost guarantee you that it will *not* be used in a cell phone. A PDA? I doubt it, unless it's compatible with the chips that are already used in PDAs. This chip is more for things like network hardware, cable boxes, cars, and the like. It draws too much power to put it in a cell phone, and it's not quite powerful enough to put in a desktop.

        If you're looking for the next generation of PowerPC chips, look to IBM's PPC970.
  • Intel is in trouble (Score:4, Interesting)

    by g4dget ( 579145 ) on Friday January 17, 2003 @06:27AM (#5100866)
    Hyperthreading and other tinkering isn't going to help Intel. The Itanium is a dud: systems based on it are hugely expensive, have iffy performance, and are not usefully x86 compatible.

    If AMD manages to stick to their schedule on the 64-bit chips, they are going to have a big winner on their hands: systems that can address more than 4 GB in a single process and yet are backwards compatible.

    • The Itanium is unusable. Have you ever seen one production system with it? I have not.

      Itanium II is now out and is said to be OK. For the price of an Itanium II system you could buy a car/house/small country.
      • > The Itanium is unusable. Have you ever seen
        > one production system with it? I have not.

        I know somebody whose workplace (a research place of some sort) got a cluster of five hundred of them.

        > Itanium II is now out and is said to be OK. For the price of an Itanium II
        > system you could buy a car/house/small country.

        Um. High end servers are supposed to be that expensive. Ever try shopping for a high end UltraSPARC or Power4 machine? Didn't think so.

        IPF isn't supposed to be a replacement for x86 (well, originally it eventually was supposed to be, but that's when various Intel execs were drunk on monopolistic stupidity). I do not agree with the means by which Intel has penetrated the market (e.g., they coaxed several prominent chipmakers, such as HP, DEC, SGI, and so on, to dump or devalue their existing lines and support the Merced long before it reached the A0 stage), but on the engineering side, the McKinley (or "Merced II", if you like fugly names) seems an excellent implementation of what was perhaps a not too well thought out ISA. Just my opinion, of course, and I'm merely an armchair designer (no, no, I don't design armchairs ... you know what I mean!).

        -JC
    • by brejc8 ( 223089 )
      It doesn't matter that the Itanic is a sinker. All the companies like HP, SGI etc. are gonna keep it afloat, even if it means killing their own children, i.e. the Alpha :(
    • In fact, the SOFTWARE emulation of x86 that's used by the Alpha could probably beat the x86 emulation abilities of the Itanium, and I'd like to see how the VirtualPC programs for Macs (and I think Solaris/SPARC?) stack up. A 64-bit arch that offers compatibility with x86 through emulation, but which beats the Itanium's performance, could prove successful.
    • Wow - way to make up stuff.

      The Itanium II is certainly not a dud - it's in some of the highest performing systems money can buy. Of course it's expensive, it's not for Joe Buck or Tim Small Business.

      x86 compatibility is worthless in a high-end 64-bit machine, something AMD doesn't seem to grasp. They're marketing a high-end technology (consumers and normal business users don't need 64-bit technology and won't for a while) to the mainstream market. Morons.

      And you seem to be ignoring the numbers (remember that 'reality' the rest of us live in matters to us, if not to you). AMD is going broke. Intel isn't.

      All in all you seem to be engaging in wishful thinking mixed with a little delusion.
      • by JCholewa ( 34629 )
        > Wow - way to make up stuff.
        > The Itanium II is certainly not a dud

        Agreed. From an engineering standpoint, it's quite a nice chip. I don't agree with some philosophical stuff in the ISA (I'm not that much of a VLIW-for-general-purpose fan, but hey), but the microarchitecture and implementation seem very nice. I do wish that it were easier to implement OOE on IPF, though. :(

        > x86 compatibility is worthless in a high-end 64-bit machine, something AMD doesn't seem to grasp.
        > They're marketing a high-end technology (consumers and normal business users don't need
        > 64-bit technology and won't for a while) to the mainstream market. Morons.

        Feh. A big "screw you" on that. AMD isn't catering to the high end server group. They obviously can't just teleport into that market. Their catering to the smaller business that uses Xeon servers. Backwards compatibility with x86 is of the utmost importance in this market. Basically, they're marketing x86 workstations and x86 servers that happen to allow you to enhance performance of some types of programs with simple recompilations. There is a good chance that I might get the lower end version of this product when it comes out, as I use Linux, which may strongly benefit from those extra registers in x86-64, on my home machine. We'll have to see, of course, before I pull out the green.

        > And you seem to be ignoring the numbers (remember that 'reality' the rest of us
        > live in matters to us, if not to you). AMD is going broke. Intel isn't.

        That's a bad measure to use. You don't have any controls in this analysis. There are a lot of reasons why AMD is losing money (poor management a la Hector Ruiz, the inability of a relatively small company to handle a very harsh recession, etc.), and there are a lot of reasons why Intel is still doing phenomenally (people buy Intel no matter what, currently excellent execution, they can afford to strongly diversify). Many of these reasons have nothing to do with the technical/engineering side of the equation. IMHO, both AMD and Intel have incredible engineers, and frankly AMD especially warrants respect for being able to ramp technology at *approximately* the same rate as Intel despite having a very, very minuscule fraction of their resources. That is why I was a big AMD fan a couple of years ago, around the time when the company was dominated by the excellent triumvirate of Sanders, Raza, and Meyer, as well as a couple of critical folks like Norbert Juffa and Paul Hsieh. At that point in time, AMD was a quantum of a company that somehow managed to produce a piece of engineering that allowed them, for a brief time, to outdo the capabilities of a company fifty times their size. I am somewhat dismayed that AMD has turned into a more traditional company over the last two years or so.

        -JC
      • The Itanium II is certainly not a dud - it's in some of the highest performing systems money can buy.

        First of all, 64 bits isn't about speed, it's about address space.

        Furthermore, even if speed is the issue, many of the people (like myself) who care most about it build compute clusters. The calculation is simple: (1) does it do 64-bit, and (2) how many FLOPS does $70,000 buy me? The Itanium doesn't do very well on that metric.

        Of course it's expensive, it's not for Joe Buck or Tim Small Business.

        It is: the prices of memory have come down, and it makes sense for people to be able to use more than 2 GB in each process. It's great for databases, web servers, video editing, and games, to name just a few mainstream applications.

        x86 compatibility is worthless in a high-end 64-bit machine, something AMD doesn't seem to grasp.

        Quite to the contrary: x86 compatibility greatly reduces the risk of migration. I know that all my applications will continue to work as they do now, with no recompilation or bugs, but in addition, I can migrate individual apps to the 64 bit architecture. That's much better than what Itanium gives me.

        They're marketing a high-end technology (consumers and normal business users don't need 64-bit technology and won't for a while) to the mainstream market. Morons.

        Yeah, right. You sound like a marketing guy. Oh, wait, you probably are a marketing guy. The only reason to buy 64-bit chips from Intel is that they are from Intel, and that Intel will probably manage to kill off the competition again, no matter how awful their chip is. But with AMD's backwards compatibility, that doesn't matter: AMD doesn't need to take over the 64-bit market to win; all they need to do is deliver good performance and value in their 32-bit mode and 64-bit functionality for a handful of custom applications.

      • "x86 compatibility is worthless ina high end 64-bit machine" - Indeed. That's why AMD is targetting the low end. The medium to high end has already been 64-bit for the best part of a decade with well established, mature technologies, especially UltraSPARC. itanium just can not hope to take any significant portion of this market simply because it is 10 years too late. No one wants to throw out their entire working, reliable infrastructure for something new and unproven. They'd have to change OS, port all their software, re-test everything, retrain their technical staff, buy new middle ware and applications, upgrade their air-conditioning etc. etc. and all for what? Intel's ambitions? Where is the benfit to the customer?
  • by gripdamage ( 529664 ) on Friday January 17, 2003 @06:28AM (#5100871)
    The Santa Clara, California-based company is the leading maker of processors, which serve as the brains of computers.

    And then there are the customers, who consume these processors like living dead zombies animated by radiation from outer space.
  • by pla ( 258480 ) on Friday January 17, 2003 @06:28AM (#5100872) Journal
    At first, it bummed me out to read this headline, since I would *love* such a toy.

    Then, following the link, I realized they only plan this dual core toy for the *Itanium* line, anyway. Bummer. I do like how the article says Intel hasn't sold as many of them as they planned, though... Can we say "DOA"? I thought they had all but abandoned the mega-flop (in the movie sense, not the CPU sense) Itanium.

    Anyway, back to my point...

    I don't want a CPU with 6 MB of cache (the reason they give for pushing back their SMP-on-a-chip). I don't want an Itanium. I don't even want a P4.

    I would *run* to the store, however, to buy a quad (since at their current fabs they could fit four in the same space as a single P4, so why only go dual?) P-III somewhere around 1.5 GHz (like the chip they plan to release with 6 or 9 MB of cache). Not an inconsiderable amount of CPU power (my current machine has "only" a dual PIII/933, and I have yet to find my "killer app" reason to upgrade).

    So, listen up, Intel - the server market may pay more per chip, but we "mere" home users buy a HELL of a lot more of them. So throw us a bone, 'kay?

    Because if you don't, AMD will (eventually). ;-)
    • So, listen up, Intel - the server market may pay more per chip, but we "mere" home users buy a HELL of a lot more of them

      Server chips are sold at an unbelievable markup, though, so they make more profit from them. It's not uncommon to spend a couple grand on a CPU module for a server.
    • The 6 MB cache is not aimed at you. It mainly goes to servers: the idea has its benefits in web servers etc., where a lot of web content can be served from RAM/cache. Also note that these machines can have a lot of RAM, which can be used in big servers that work almost entirely from RAM, touching the hard disks only for explicit reads/writes (any application server).
      Anyway, a quad or dual P3 is not as good an idea as a quad or dual P4, because P4s are designed to support these features much more than a P3 is. Though I won't mind having a quad P3 NOW.
      • WTH are you talking about cache for? Servers need RAM I/O. Do you even know how many times data is copied in memory before it actually goes out the ether port?

        In W2K, Task Mangler will fit in the 6 MB cache of the Itanium II, and that is about it!
        • Do you even know how many times data is copied in memory before it actually goes out the ether port?


          Well, in Win2K/XP, there are supposed to be zero memcopies when writing data out the network stack. Though if you have a crappy application, all bets are off ;)
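
          (Aside: the canonical zero-copy path on Win2K/XP is TransmitFile(), which hands a file straight to the kernel's socket path instead of bouncing it through read()+send() buffers. A minimal sketch - error handling and Winsock startup omitted, purely illustrative:)

            #include <winsock2.h>
            #include <mswsock.h>
            #include <windows.h>

            /* Send a whole file over a connected socket without copying it
               through user space. Link against ws2_32.lib and mswsock.lib. */
            BOOL send_whole_file(SOCKET s, const char *path)
            {
                BOOL ok;
                HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                       OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
                if (h == INVALID_HANDLE_VALUE)
                    return FALSE;
                /* 0, 0 = send the whole file with the default block size */
                ok = TransmitFile(s, h, 0, 0, NULL, NULL, 0);
                CloseHandle(h);
                return ok;
            }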
    • by Sycraft-fu ( 314770 ) on Friday January 17, 2003 @09:24AM (#5101189)
      What is this, AMD cheerleader day or something? OK, understand that new technology, especially something as big a change as the Itanium, takes TIME to develop. The Itanium is NOT for desktops right now, and if you think that's who Intel is targeting, you have a poor understanding of business.

      The Itanium 1 was mainly a research chip, a first generation to let people start to develop and test on real hardware. MS took advantage of this and rolled out an IA-64 version of Windows. Intel was hoping for some server sales, but the real goal was getting the new IA-64 design into production silicon.

      The Itanium 2 is a much more practical chip. It is something that people will probably seriously look at for high-end servers, as it is competitive with 64-bit chips from Sun. You may see it in a few workstations, but probably not many; it's mostly a server chip. Remember, we are talking competition with big iron here, not desktop systems.

      Now, as time goes on, the technology will become much more mature and cheap and will eventually filter into desktops. Hopefully, that will happen before we start to hit the 32-bit crunch.

      The idea here is not to wait until the last second for people to need a 64-bit chip, but to get it to market sooner so you can start working on it.

      This, by the way, is not the first time Intel has done something like this. The Pentium Pro was blasted when it came out because its 16-bit performance sucked. Sure, it did great for 32-bit, but who the hell used that? Well, then along came Windows 95, and the PPro architecture was refined into the PII, and it was a great chip, since 32-bit use was rising rapidly and it smoked at that. The P3 is the third incarnation of the PPro architecture. It's optimised and enhanced (a la SSE) but the same fundamental architecture. The P4 is the first brand-new architecture since the PPro.

      The Itanium is a much larger change than the P4, since it is not only going to 64-bit but to a new ISA (EPIC instead of CISC). It needs time and testing before it will be real.

      However, Intel is certainly NOT ignoring the home market. The P4 is going to continue to be refined (we are on the 3rd revision of P4s and a 4th is soon coming) and should scale up to around 10 GHz. There is plenty of life left in it (and probably in subsequent chips based on its architecture). Then, by the time it is getting ready to be replaced, the then-current Itanium chip should be ready for prime time.

      So quit your bitching. If you don't want a P4, fine, stick with a P3. Why the hell do you care WHAT Intel is doing if you don't want a new chip? When you do decide you want one, get a P4; you have no lack of options with them, and they scale to rather high speeds already and are not stopping.
      • The Itanium 2 is a great chip. I haven't done the calculation in a few months, but at least back then the Itanium 2 itself was actually cheaper than a Pentium 3/4 configured with similar amounts of cache. That is, there is nothing stopping Intel from releasing a desktop version today. On all sorts of benchmarks the Itanium 2 has done wonderfully, and even with a reduced cache configuration it would probably outperform the 3 GHz Pentium 4 (though not by that much). I can definitely see the chip being useful for Linux, which gets it closer to the mainstream.

        Originally I had argued that Apple should switch to a desktop Itanium 2 if they weren't going to go with a stripped down Power 4; once the rumors of the 970 got confirmed....

        As for the original poster's comments regarding quads and speeds and so forth it didn't make any sense to me.
        • As for the original poster's comments regarding quads and speeds and so forth it didn't make any sense to me.

          For most applications, you will only see as much "speed" as one CPU can offer you, no matter how many CPUs you have. This of course does not hold true in efficiently multithreaded apps (as opposed to the majority of multithreaded apps out there which would actually perform *better* as a single thread), but for the things we actually upgrade for (killer new game), the speed of any one of your CPUs matters more than how many of them you have.

          Now, let's say CPUs powerful enough to reasonably do anything you want exist (short of intense number-crunching research, for which enough CPU power will *never* exist to satisfy the demand). I personally believe we passed this point somewhere around the PIII/800, though *certainly* the newer PIII/1400s have reached this point.

          Once you have that level of performance, a faster CPU doesn't mean *you* can do things faster. Think back to the mid-1990s... Just how fast of a CPU did we need to run Word or Excel such that it would *never* exhibit an observable delay? A PII/300? Even that high?

          So, what do we do now? Well, we can run any one thing as fast as we want. How about putting things in the background? Let's say I want to encode a movie to Xvid. At full CPU on a 1 GHz machine, this takes around 6 hours, and your machine grinds to a halt in terms of responsiveness. You could set the priority low, but the machine will still "feel" laggy, and if you do something else CPU intensive, the video encoding will drag out for *FAR* more than 6 hours.

          So now on to the point.

          Have you ever used a dual-CPU desktop machine?

          The first thing you will notice, the UI *almost never* feels laggy. You can have a load of 50 CPU's worth of processes running, and the desktop will still respond when you click something. Your foreground task will act reasonably responsive. Yeah, with 50 CPU-sucking processes, you won't do anything in the background very quickly, but if you try, you can still use the machine.

          So, to go back to the Xvid idea, try this on a single-CPU machine: Queue up a movie to encode, set it to low priority, then start playing your favorite CPU intensive 3d shooter. Wow, lag sucks, huh? And look, a framerate of 10. How nice.

          On a dual, even with the same "total Mhz" (fairly meaningless, but just to stay fair), you wouldn't even need to set the video encode to low priority. Just fire it up, then your game, and enjoy your game at a "normal" frame rate.

          But the benefit doesn't stop at "one extra CPU-intensive task per CPU". With thoughtful management of process affinities, you *really can* run those 50 CPU-sucking tasks, confine them to CPU1, and play your game (still with no noticeable slowdown) on CPU0.

          So, how does this relate to my original point?

          We don't *need* a 10 GHz chip with half a gig of cache. We don't even need 64-bit chips yet, though I agree that, for the sake of an increased per-process address space, it wouldn't suck, and we'll need it within a few years. Why don't we need this? Because we can't use it. No single interactive application needs even as much CPU as the current average single-CPU desktop machine can throw at it.

          So what about multitasking, you say?

          Well, I've already answered that one. If it costs less, and takes less complicated technology, to make four 1 GHz processors than it does to make one 4 GHz processor, why would we go with the single 4 GHz processor? And, for the reasons I've addressed above, I would even pay somewhat *more* for the quad setup than for the total-GHz-equivalent single-CPU setup.
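
          (If you want to try the affinity trick above on Linux: the 2.5-series kernels expose sched_setaffinity(). A minimal sketch - the call's exact signature has shifted between kernel and glibc versions, so treat this as illustrative rather than gospel:)

            #define _GNU_SOURCE
            #include <sched.h>
            #include <stdio.h>

            int main(void)
            {
                cpu_set_t mask;
                CPU_ZERO(&mask);
                CPU_SET(1, &mask);   /* confine this process (pid 0 = self) to CPU1... */
                if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
                    perror("sched_setaffinity");
                    return 1;
                }
                /* ...then kick off the Xvid encode here; CPU0 stays free for the game */
                return 0;
            }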
          • OK, now I understand you. And pretty much I agree, especially regarding the dual-processor comment. I actually share your opinion that while 2 x processor P may only benchmark out at 1.5 x P's speed, it feels more like 4 x P's speed, because background apps don't chew up your performance. I see no reason not to go dual in a desktop; the extra cost is low and the extra comfort is massive.

            However, I'll disagree regarding the idea that we've hit a long-term beachhead.

            - Large Java apps definitely tax the CPU. Normally it's the slowish hard drive that is the bottleneck, but with these guys the GUI becomes noticeably sluggish, with actual "check your watch" wait times. And the hard drive isn't really going, so this is the Java. I use a PIII 1 GHz, and the Java apps are Oracle interface stuff. Being primarily a Perl programmer, I can understand the appeal of languages that execute on VMs rather than compile, in terms of programmer productivity. I have to say that if these become the norm, we are going to need much faster CPUs.

            - Video: I'm still feeling CPU-related constraints there. I've never worked on a system with a GeForce Ti4... so maybe that solves the problems, but at least on my current system... I've heard from others that nothing on a PC handles HDTV video properly yet (though it's mainly bus issues) and you still need to go SGI for this.

            I guess the other thing I don't see is how you end up taxing large numbers of CPUs in a home system. Generally I've got 1-2 foreground tasks and at most 1 background task that are intensive. Assuming something like 1 CPU for OS+hardware, that means under the very worst conditions I'd rarely (ever?) see a difference between 4 and 50 for my home / workstation setup.

            Don't get me wrong, I've definitely seen the difference between 6 and 12 on servers, but that was with dozens of highly active and hundreds of total threads (and God help us if we are still using MSFT operating systems with that number of threads).

    • Home Q User enters shop:

      - So you say that super-duper 4-in-1 chip is only 1.5 GHz, right? [Thinks: he does not run 4 MS Words or 4 Quakes at once]. Wrap me that 1._7_ GHz Celeron then! [Vroomm!!!]
  • by iNub ( 551859 ) on Friday January 17, 2003 @06:29AM (#5100875) Homepage
    With a P4 killer on the way from IBM, who already has a 90nm/300mm plant in operation, I've been expecting Intel to announce that they have smaller, more efficient processes already in operation. But, what's this? Intel is *behind* IBM in the chip fabbing technology? This might bode well for my next Apple purchase. (Assuming my jobless, broke ass finds a job by the time Apple moves to this new CPU.)

    Obviously, I don't keep up with this part of the computer world. Is IBM normally ahead of the game when it comes to new chip processes? It seems to me like Intel, whose main priority is processor manufacture and distribution, would be ahead of IBM, who have diversified to the point that I don't even know what their primary product is.
    • "IBM, who have diversified to the point that I don't even know what their primary product is."

      Intel principally produce products (chips); IBM don't, they're a services company - that's the big, big change Gerstner made in turning the company around. Hope this helps with your confusion :-)

      • It does. Thanks. But, I still don't get why IBM is beating Intel in the chip fabbing business. Is IBM that much bigger than Intel? I've thought for a while now that Intel is bigger than IBM. Maybe I should actually research this stuff, it's better than "Days of Our Lives." (Score:-1, retarded)
        • by CountBrass ( 590228 ) on Friday January 17, 2003 @06:47AM (#5100913)

          IBM ($81.19 billion FY 2002) is roughly three times the size of Intel ($26.76 billion FY 2002) in revenue terms.

          • I guess Gerstner did his job then. ;-)
          • I guess the real question is how much research money goes into chip development? IBM sell many different products (including Intel servers, Unix servers, software products, consultancy, mainframes etc., etc.) while Intel sell (mainly) Pentium CPUs with some sidelines in graphics etc. So while IBM is roughly three times the size of Intel, I'd imagine Intel probably spends more on CPU development.
              I guess the real question is how much research money goes into chip development? IBM sell many different products (including Intel servers, Unix servers, software products, consultancy, mainframes etc., etc.) while Intel sell (mainly) Pentium CPUs with some sidelines in graphics etc. So while IBM is roughly three times the size of Intel, I'd imagine Intel probably spends more on CPU development.

              Except every machine IBM sells (excluding their x86 systems, which just buy in Intel chips) is based around a single CPU architecture - POWER, the heavy-duty PowerPC variant. So, everything IBM does in 'CPU development' is going into the POWER/PowerPC core, although they seem to share a lot of generic fabrication advances (copper interconnect, silicon-on-insulator etc) with AMD for the Athlon/Hammer line.

              Granted, IBM do a lot more than just CPU design, whereas Intel are almost exclusively CPU vendors (although Intel divide their efforts between IA-64, x86, i960 and StrongARM/Xscale) with some sidelines (NICs, switches, chipsets). Overall, I'd say IBM put a lot more muscle behind POWER/PowerPC than Intel can behind IA-64 and x86.

              • Can you expand on this? I'm not sure what kind of processors they use in the i-series and z-series and their terminology on the website is less than clear. For the p-Series they openly talk about the Power architecture so I would have assumed they do the same on the i and the z.

                • PowerPC is derived from the POWER architecture, but they are not the same. Very similar, though. http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?PowerPC
                • The Z-Series systems are the mainframe-class boxen. They've never run an Intel chip, and likely never will, for a variety of reasons. One of the primary ones being that Intel, and Intel-compatible parts, can't give the level of HA needed in these kinds of machines (i.e. shutting down a single CPU card/slot for online replacement, ditto with memory cards, etc).

                  IOW: Z-Series = S390

                  The -I-Series- are what we used to call the AS400 line of machines. They use the IBM PowerAS CPU.
                  They, also, will never run a chip from Intel, for many of the same reasons (true HA)

                  These machines, incidentally, are the kinds of things that Intel is actually trying to move up either into, or against, with the Itanium. And like I said, I don't think they can...at least not as "only" a chip-house.
                  • I know what the z-series and i-series are; and agree that Intel is going for the "i-series" market with the Itanium (though given its benchmarks it really should be replacing the x86 line...).

                    But my question was about the original poster's comment that the POWER was used in all IBM's products. I don't know much about the CPUs IBM is actually using on these two lines, but it didn't seem to be the POWER line.

                    BTW HA=?.

                    • HA = High availability. The kind of things that banks and such absolutely require for their business processes.

                      The Z-Series are using IBM POWER processors, as are the AS400 and RS6000 class machines, AFAICT.
    • Err, when exactly is it that IBM plans on making a "P4 killer"? The PowerPC 970, which is scheduled for release late this year, is expected to manage a SPECint score of 937 and a SPECfp score of 1051. By comparison, the Pentium 4 at 3.06 GHz, which is available NOW, scores 1085 and 1092 on SPECint and SPECfp respectively.

      In other words, that "P4 killer" that's on the way from IBM is going to be nearly a year behind the P4 in terms of performance.

      As for IBM's 90nm fab process, how many 90nm chips do you see from IBM now? None. You know why? Because the 90nm process isn't finished yet! They've got the equipment, but they haven't actually started volume production of chips on that process yet, and probably won't for at least a few more months. I wouldn't count on being able to buy a chip produced on a 90nm IBM fab process before the end of the year, probably several months after you can buy a chip produced on an Intel 90nm fab process.
  • Who needs the Intel chip when the one manufactured by Fukencomputen in Germany is about 10 times faster and uses less than 1/3 of the power? It seriously doesn't make sense to continue supporting the giant squid Intel while they crush their competitors who make superior products.
  • by watzinaneihm ( 627119 ) on Friday January 17, 2003 @06:38AM (#5100892) Journal
    I think HP and Intel are putting all their bets on their child [liveworld.com], the Itanium.
    First HP holds back [slashdot.org] on their Alpha line, then Intel does this....
    The important question is: is this good for the consumer, by letting others into the market (a more level playing field, a flatter market, etc.), or does it harm him by slowing down technology?
    • I think that largely depends on what the application is going to be. If you need bleeding-edge number-crunching performance or have a need for an ultra-spec 64-bit CPU cluster, this is obviously a letdown. If you are more of the gourmand mindset and are looking to upgrade/replace existing servers with more price-competitive options, this may be a blessing in disguise (and not even so hidden).
    • by dpilot ( 134227 ) on Friday January 17, 2003 @08:53AM (#5101106) Homepage Journal
      AFAIK, it is *the most proprietary* processor on the market.

      When they began the IA-64, Intel and HP set up a company to hold the IP related to the new architecture. That company owns the IP; Intel and HP get rights to use it. None of Intel's or HP's cross-licensing agreements apply to any of the IA-64 IP.

      AFAIK, every other major CPU ends up having some amount of cross-licensing, except the IA-64. They own it lock, stock, and barrel. The only chink in the armor seems to be Intergraph.
  • by dmeranda ( 120061 ) on Friday January 17, 2003 @06:46AM (#5100909) Homepage
    How does Intel's Hyperthreading Technology [intel.com] differ from the dual core? I realize the obvious, such as one being in the Pentium line and the other in the Itanium, and the physical differences of packaging.

    But how different will the architecture of a dual-die chip be from hyperthreading - for example, which CPU components will be shared (like cache, instruction decode/scheduler, etc.)?

    Would the Linux kernel's logical processor abstraction used to enable hyperthreading support (see the IBM developerWorks article [ibm.com]) also continue to work effectively with a dual-die chip?
    • Using hyperthreading, you have one real CPU and a virtual one, which means tasks cannot run in parallel as much as in a real dual system. This is the reason hyperthreading is just hype, IMHO. A CPU with hyperthreading enabled will never complete a task faster than two of the same CPUs running in parallel with hyperthreading disabled.
      • by iNub ( 551859 ) on Friday January 17, 2003 @07:29AM (#5100977) Homepage
        A CPU with hyperthreading enabled will never complete a task faster than two of the same CPUs running in parallel with hyperthreading disabled.

        Well, of course 2 processors will outperform a single one. Processors have a finite pool of resources. The point of HT is not to perform like dual processors, but rather to act like them, increasing the performance of a single CPU at a negligible cost.

        Buying 2 processors would cost you twice as much as a single processor, even more when you consider the cost of a motherboard and enough memory to make dual processors a worthwhile investment. You would get roughly double (OK, 1.6x) the performance at double the cost.

        Buying a single HT-enabled processor, however, would only cost you 10 or 15% more, and you would be seeing a 20-30% performance increase across the board. I think that's a better deal.
        • by nuintari ( 47926 ) on Friday January 17, 2003 @09:42AM (#5101242) Homepage
          So, can I have both? Buy two hyperthreaded chips and stick em int he same board, and get an even weirder inneficient speed increase. Or would writing a scheduler to handle it be too hard, virtual chips on top of two real chips, I imagine it could appear to look like 4 way smp when in reality its 2 way weird smp. I unno, I want one!
          • So, can I have both? Buy two hyperthreaded chips and stick em int he same board, and get an even weirder inneficient speed increase. Or would writing a scheduler to handle it be too hard, virtual chips on top of two real chips, I imagine it could appear to look like 4 way smp when in reality its 2 way weird smp. I unno, I want one!

            I'm guessing that this guy's spelling and grammar checkers are using 100% of his two CPUs right now. That's why he wants four :)
      • As the recent AnandTech (Slashdot link) article pointed out, a properly aware kernel can recognize that the latency between the real and logical CPUs is much less than that between two physical CPUs.

        The performance increase isn't shaped quite the same way as that of a true dual-CPU setup because an instruction on one logical CPU in an HT setup may have to wait for a resource in use by an instruction on the other logical CPU to be released.

        As I understand it, it's kind of like resource locking...it's unsafe for two things to use the same resource simultaneously, so one waits for the other to release the resource.

        I don't know if a two-CPU or a one-HT-CPU setup would work better for every situation, or if they'd each have their uses. A two-HT-CPU setup would have the advantages of both, though. :)

        I hope AMD can fit in something like this. Even if it's patented, one could argue "prior art" since resource locking has been used in computers since the dawn of fileservers.
    • The difference would be that you *double* the amount of raw processing power.
      Then you'd probably have HT on them too, so that each physical CPU is seen as two by any SMP OS; hyperthread both cores and you get *four* virtual CPUs.
      You'll probably see a much bigger performance boost by putting two separate cores in one package than by hyperthreading one core.
      And if you put two or more of these in one box, well, do the math. =)

      I haven't actually read the entire article, so I don't know what resources the two cores would share, but it must be less than what is shared when doing hyperthreading.
      The major drawback is probably that the two cores would need to share the same data and address bus to the main memory of your machine.
    • Don't think of hyperthreading as an alternative to dual-core or 2-way SMP. It isn't. Think of hyperthreading as a way of squeaking out more usage from the existing execution units in your CPU.

      Let's say you have a hypothetical CPU with n execution units. (For simplicity, we won't distinguish between types of execution unit, such as integer, floating point, branch, etc.)

      You fetch and decode a bunch of instructions, and then issue them n-at-a-time to these execution units, for maximum performance.

      But the instruction stream has some inherent limitations on which instructions can be issued concurrently, due to dependencies between instructions, the instruction type mix mismatching the available execution unit type mix, instructions waiting on loads, etc. Even with control and data speculation, there may be fewer than n instructions READY to issue on the next clock cycle.

      So, you have three choices:

      1. Just issue the ready instructions, and let the other execution units go to waste.

      2. Switch to another thread, maybe it has n instructions READY to run. (This is usually called on-chip multithreading).

      3. Issue a mix of READY instructions, some from one thread, and some from another thread, which combined together use all n execution units. Both threads get to make some forward progress, and no execution units are "wasted". (This is usually called on-chip hyperthreading).

      So, back to the big picture: Hyperthreading isn't a replacement for a second CPU or core, because it does not provide any more computation resources. It's a way of using the available resources in a CPU more efficiently, so that fewer computation units are likely to go to waste on any given clock cycle.

      A dual core chip typically duplicates almost ALL the circuitry on the chip, often even including the caches. Big chips have low yields and cost a lot. Dual core is a way of throwing a lot of money at getting more parallelism. Kind of like having multiple CPUs in separate sockets, but with both advantages and disadvantages coming from the closer coupling. Hyperthreading is a way of throwing far less money at the problem of squeaking out some of the wasted performance in an existing CPU design.

      It isn't free, by the way. Hyperthreaded CPUs do have to duplicate some hardware on a per-thread basis. Obviously, thread context registers like program counter and stack pointer have to be duplicated, as do application registers. But they share caches, execution units, decoders, memory management units (mostly), bus interface logic, etc.

      Hope this paints a clear picture.
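
      (A toy illustration of choice 3 above, with made-up numbers - nothing like a real pipeline model, just the utilisation arithmetic: each cycle, two threads each have a random number of READY instructions, and the SMT core tops up whatever issue slots thread A leaves empty with instructions from thread B:)

        #include <stdio.h>
        #include <stdlib.h>

        #define SLOTS  6          /* hypothetical issue width, n */
        #define CYCLES 1000000

        int main(void)
        {
            long used_single = 0, used_smt = 0;
            int c;
            srand(42);
            for (c = 0; c < CYCLES; c++) {
                int a = rand() % (SLOTS + 1);  /* READY instructions, thread A */
                int b = rand() % (SLOTS + 1);  /* READY instructions, thread B */
                int both = a + b;
                used_single += a;                          /* choice 1: A only   */
                used_smt += both > SLOTS ? SLOTS : both;   /* choice 3: mix A+B  */
            }
            printf("issue slots used: single-thread %.0f%%, SMT %.0f%%\n",
                   100.0 * used_single / ((double)CYCLES * SLOTS),
                   100.0 * used_smt / ((double)CYCLES * SLOTS));
            return 0;
        }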
  • by t0qer ( 230538 ) on Friday January 17, 2003 @06:47AM (#5100915) Homepage Journal
    "Intel said Thursday that it is pushing back the release of its first dual-core processor

    So now instead of virtual processors (read hyperthreading) intel is going to release a chip that does hyperthreading for real?

    Damn i'm confused.

    (BTW tried hyperthreading, marginal increase for some apps, slowdowns for others)
    • by iNub ( 551859 ) on Friday January 17, 2003 @06:59AM (#5100937) Homepage
      From what I've read, HT doesn't even have a possibility to slow things down. Do you know how multithreading works in an SMP environment?

      What HT does is allow this single CPU to pretend to be 2 independent CPUs, effectively splitting it in half (but not necessarily down the middle). The upshot of this is that it can more effectively deal with cache bubbles and all those horrible performance-draining problems Intel chips, with their insanely deep pipelines, are vulnerable to.

      Basically, if you only throw a single thread at the processor, the first virtual processor does the work and the other is idle, allowing the entire processing power of the chip to deal with that one problem instead of half of it sitting idle. This is an advantage because HT only requires 5% more transistors, and the net benefit is something like a 20% performance increase. Of course, if you're not doing any work where you actually *use* multithreaded apps, you'll never understand why HT is a big deal.

      This post has gone way beyond what I originally intended to say, and instead of rescuing it, I'm just going to kill it now.
      • by larien ( 5608 ) on Friday January 17, 2003 @08:29AM (#5101076) Homepage Journal
        Bzzt! From what I've read, HT can and does slow down some applications.

        For a good analysis, read this article [arstechnica.com] over at Ars. In particular, it does point out that the likely cause of slowdowns in some apps is down to cache contention. Near the end, it also says:

        With the wrong mix of code, hyper-threading decreases performance, just like it can increase performance with the right mix of code.

        In short, sometimes it helps, sometimes it hinders.

        Finally, you don't need multithreaded apps to take advantage of SMP/HT; if you're running a CPU-intensive application on one CPU, the other is free for interactive stuff. You do, however, get much more benefit in a multithreaded application.

      • HT doesn't even have a possibility to slow things down

        Yes it does. The 2 'threads' of the CPU share the same bus and cache; in some scenarios, pressure on the cache is such that it runs slower. I think AnandTech had a good article on this a while ago.
      • There's a certain amount of overhead with HyperThreading, and some additional concurrency issues. Consequently, you can see some marginal (1-3%) slowdowns in some applications.

        In poorly behaved systems (such as the Linux ext2fs implementation), stupid locking can result in significant performance hits. This is primarily a result of one thread waiting, via a spin-lock, on a result from another thread which is scheduled on the same physical CPU. A spin-lock is really simple: it just spins in a loop checking whether a condition is met or not. With normal SMP, this isn't too much of a problem, because sitting in a spin-lock doesn't slow down the other chip, but with virtual processors a spin-lock takes resources away from the other threads that are running. Tada, massive slowdown.

        But, this kind of situation is a result of bad design, and even then is unlikely to occur outside of critical operating system code. Most applications these days will experience a large performance improvement from hyperthreading.

        Despite how some anti-Intel people are trying to spin it, HyperThreading is for the most part a good thing. If nothing else it improves system response - even if one process is spinning, you don't have to constantly wait until the kernel preempts it before processing new user events, etc...

        And your statement that SMP can't slow anything down would be correct in a world without concurrency issues. Process synchronisation incurs overhead, and in a poorly designed application this overhead can be significant on an SMP machine. If the application does most of its work in a single thread, the overhead can actually result in a net slowdown. Fortunately, you don't see software that badly written very often.
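
        To make the spin-lock point concrete, here is a minimal test-and-set spin-lock sketch. It uses C11 atomics and a GCC x86 builtin - my choices, not anything from the article - and the PAUSE hint is exactly the HT mitigation: it tells the core that this thread is only spinning, so the sibling logical CPU gets the shared execution resources.

        /* Minimal test-and-set spin-lock (C11 atomics + GCC x86 builtin).
         * Illustrative sketch only -- real kernels use fancier locks. */
        #include <stdatomic.h>

        typedef struct {
            atomic_flag locked;
        } spinlock;

        #define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

        static void spin_lock(spinlock *l)
        {
            while (atomic_flag_test_and_set_explicit(&l->locked,
                                                     memory_order_acquire)) {
                /* PAUSE tells an HT core the thread is just spinning,
                 * freeing shared execution resources for the sibling
                 * thread -- without it, the waiter starves the holder. */
                __builtin_ia32_pause();
            }
        }

        static void spin_unlock(spinlock *l)
        {
            atomic_flag_clear_explicit(&l->locked, memory_order_release);
        }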

      • What happens if you build an SMP system out of two HyperThreaded chips and the scheduler isn't HT-aware? The system sees four processors. The two most active processes may get assigned to the two logical CPUs of the same physical chip (CPU 1), while both logical CPUs of CPU 2 run almost idle... Your machine is then hardly faster than a single-CPU system.
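
        Until schedulers are HT-aware, one blunt workaround is to pin work to chosen CPUs by hand. A Linux-specific sketch follows; note it assumes logical CPU 0 and CPU 1 sit on different physical packages, which depends entirely on how the BIOS enumerates them:

        /* Pin the current process to one logical CPU (Linux-specific
         * sketch). Whether CPUs 0 and 1 are HT siblings on one package
         * or live on two packages varies by BIOS -- check before use. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(0, &mask);            /* run only on logical CPU 0 */

            if (sched_setaffinity(0, sizeof mask, &mask) != 0) {
                perror("sched_setaffinity");
                return 1;
            }
            printf("pinned to CPU 0\n");
            return 0;
        }
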
  • by altgrr ( 593057 ) on Friday January 17, 2003 @06:57AM (#5100933)
    For most home (and, indeed, server) applications, I would have thought that having a dual-core processor won't make much of a difference, just as raw processor speed doesn't - rather, what matters is the speed at which you can get data into and out of the processor.

    Overall CPU speed doesn't seem to make much of a difference when the bus speed is the same, certainly not in the systems I've tested. However, up the CPU bus speed, and you'll find your performance greatly improved, because you're getting data to the processor quicker.

    Some years ago, I tested this theory with a couple of old 686 chips - one 200, one 233. I benchmarked the 200 and 233 both at 75MHz bus - virtually identical results. Then I ran them at the same CPU speed, but 83MHz bus, and the benchmark results improved by exactly 83/75. What does this tell you? :-)
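
    For anyone who wants to reproduce that kind of test, a streaming-read microbenchmark along these lines is one way to see the effect. The buffer size and the POSIX timing calls are my own choices, not part of the original test; with a working set far larger than the caches, the MB/s figure tracks bus and memory speed rather than CPU clock:

    /* Streaming-read microbenchmark sketch (C, POSIX timing). With a
     * working set far larger than cache, throughput tracks the
     * front-side-bus/memory speed, not the CPU clock. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (16 * 1024 * 1024)      /* 64 MB of ints: defeats any cache */

    int main(void)
    {
        int *buf = malloc(N * sizeof *buf);
        if (!buf) return 1;
        for (size_t i = 0; i < N; i++)
            buf[i] = (int)i;          /* touch every page: force real DRAM */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        long long sum = 0;
        for (size_t i = 0; i < N; i++)
            sum += buf[i];            /* sequential reads: bus-bound */

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec)
                      + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f MB/s (checksum %lld)\n",
               (double)N * sizeof *buf / secs / 1e6, sum);
        free(buf);
        return 0;
    }
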
    • Some years ago, I tested this theory with a couple of old 686 chips - one 200, one 233. I benchmarked the 200 and 233 both at 75MHz bus - virtually identical results. Then I ran them at the same CPU speed, but 83MHz bus, and the benchmark results improved by exactly 83/75. What does this tell you? :-)

      That you were running a single-threaded, computationally intensive task as a benchmark.

      Dual CPUs are there to help parallelism. They won't show great increases on pure single-threaded number-crunching. For example, my previous machine was a dual-533 Celeron, and it would stay nice and responsive while running multiple apps, even if one of them (say, my MP3 encoder) decided to max out one of the CPUs.

      Cheers,
      Ian

    • "Great - more processor speed. Do we need it"
      Yes of COURSE WE DO!

      It's this whole DRM thing. I thought they had just lost their marbles and were pushing something that could never sell - but no.

      They WANT YOU TO CRACK DRM, because cracking the keys will take a lot of processing power, and that means more high-spec machines.

      Think about it: what other reason would you need the juice for? Only code cracking really eats major cycles, so it's all a cunning plan to sell high-spec equipment. Damn, they're clever :)

    • > For most home (and, indeed, server) applications, I would have thought that having a
      > dual core processor won't make much of a difference, just as processor speed doesn't -
      > rather, what is important is the speed you can get data in and out of the processor.

      > Overall CPU speed doesn't seem to make much of a difference when the bus speed is the same,
      > certainly not in the systems I've tested. However, up the CPU bus speed, and you'll find
      > your performance greatly improved, because you're getting data to the processor quicker.

      > Some years ago, I tested this theory with a couple of old 686 chips - one 200, one 233. I
      > benchmarked the 200 and 233 both at 75MHz bus - virtually identical results. Then I ran them at
      > the same CPU speed, but 83MHz bus, and the benchmark results improved by exactly 83/75.
      > What does this tell you? :-)

      It tells me that the benchmarks you use are not the same as the benchmarks I use. Here's my rundown:

      Civilization III: Turn-processing time grows steeply with the number of cities in the game. I'm running a 16-civ game at the moment, and it literally takes more than twenty minutes to cycle a turn - and it's only the nineteenth century! Granted, my 800MHz Duron isn't state of the art, but it's not state of the fart, either, and even top-of-the-line processors would buckle under this stress.

      PAR: I download very large binaries off Usenet that are split into multiple files. PAR is a tightly coded tool that generates extra parity files, which you can use to rebuild missing files from a download set. It takes a *long* time to verify the parity on 800MB of files.

      RAR: This would probably be helped somewhat by faster memory access, but I suspect that extracting an 800MB multipart RAR set is also strongly bound by processor speed. I mean, it'd be nice not to have to wait five to ten minutes to extract this stuff before viewing it. :)

      Qt: Compiling scales almost directly with processor speed, at least on some types of code. I do a lot of compiling on Linux, and a fair amount on win32 as well, and compiling a large program can take many hours. When I type './configure && make' or 'qmake -project && qmake && make', I want to get up, prepare some coffee or munchies, come back with the yummies, and immediately test the newly built binary.

      There are other programs I use that depend on the processor, but those are some of the biggies. There are also programs that would benefit from faster data access, of course. But memory isn't really getting faster - it's just getting higher bandwidth. When you request data from PC2100 or PC800 memory, that data doesn't start coming back any sooner than it did with PC133 memory (the pointer-chasing sketch below makes this measurable). It's just really expensive to increase the frequency of the entire northbridge and its attached devices. That's why the microprocessor has been delivering the majority of the speed boosts: it offers the most increase for the least expenditure.

      -JC
      http://www.jc-news.com/
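
      A pointer-chasing loop like the following sketch makes that latency point measurable. Each load depends on the previous one, so extra bandwidth cannot hide the DRAM round trip; the buffer size and step count here are arbitrary choices of mine:

      /* Pointer-chasing latency sketch (C). Every load depends on the
       * last, so the loop measures round-trip memory latency (which
       * PC2100/PC800 barely improve over PC133), not bandwidth. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (16 * 1024 * 1024)    /* indices: far bigger than cache */
      #define STEPS 10000000L

      int main(void)
      {
          size_t *next = malloc(N * sizeof *next);
          if (!next) return 1;

          /* Sattolo's algorithm: a random single-cycle permutation, so
           * the chase visits all N slots in an unpredictable order. */
          for (size_t i = 0; i < N; i++) next[i] = i;
          for (size_t i = N - 1; i > 0; i--) {
              size_t j = (size_t)rand() % i;
              size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
          }

          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          size_t p = 0;
          for (long s = 0; s < STEPS; s++)
              p = next[p];            /* serialized, latency-bound loads */
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double ns = ((t1.tv_sec - t0.tv_sec) * 1e9
                       + (t1.tv_nsec - t0.tv_nsec)) / STEPS;
          printf("~%.0f ns per dependent load (p=%zu)\n", ns, p);
          free(next);
          return 0;
      }
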
  • in the other news... (Score:5, Informative)

    by dunkelfalke ( 91624 ) on Friday January 17, 2003 @07:15AM (#5100958)
    amd opteron smokes the competition in the 4-way sap test:

    c't magazine [heise.de]

    translation of a short excerpt: even early prototypes of the amd opteron beat all the competition in four-way systems - whether 32- or 64-bit - on the sap sd benchmark, and that at only 1.6 ghz (it is planned to launch at 2 ghz).

    i think the chart says it all. go amd!

    • Nice cheerleading. I think any company that needs a high-end machine should go out and buy one now!

      Oh, wait. I forgot. They don't exist yet. And by the time they do, the competition will have advanced a considerable degree (cheerleaders always assume their hero's not-yet-existent CPU will ship while the competition stays exactly where it is today).

      Also, one benchmark doesn't mean squat.
  • Grr. (Score:3, Interesting)

    by Kourino ( 206616 ) on Friday January 17, 2003 @07:26AM (#5100976) Homepage
    And the vaunted EV8 tech we've been guessing would be infused into later IA-64 products gets pushed farther away into the distance ...

    It's good to see at least they're on the road to 65-nm fabrication. But it'd be nice if they breathed some more life into their current architectures. IA-64 docs are interesting reads, but the hardware just isn't terribly impressive in practice yet. (At least, kernel compiles felt like they took forever on my professor's dual IA-64 research boxes compared to ... my P3 866 at home.) And. New Pentiums? Watch, as I leap for joy. Or don't, in fact, leap.

    I'd like to see Intel do something New[tm] and Exciting[tm] in the home market. IA-64 is that; for the home side, I'm guessing they'll just keep tweaking existing setups or something. I love the feeling of having a fresh processor architecture before me to dig into. (That's why I picked up an old EV56 machine for ... hehe ... testing.) But are we non-server folk ever going to see something drastically different from the CPU in the computer we got a decade ago?
  • by msgmonkey ( 599753 ) on Friday January 17, 2003 @08:20AM (#5101061)
    I was under the impression that there were competing "schools of thought" about how extra performance is to be gained as we start to hit the manufacturing "wall".

    On one hand you have the VLIW-type guys (or EPIC, in Intel-speak), who increase parallelism at the instruction level; on the other, the multicore guys, who increase the number of instructions executed by having multiple cores running different tasks.

    Whilst in principle I've got no problem with merging the two, I get the impression that by going the dual-core route Intel is admitting that it won't be able to get the kind of performance out of EPIC that it was promising.

    Just a thought to consider.
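
    The two schools are easy to see in miniature in the sketch below (hand-rolled for illustration, not from the article). In the first loop, the four accumulators are independent, so an EPIC/VLIW compiler - or any wide core - can schedule the adds in parallel within one instruction stream. The second loop is a single dependency chain that no amount of instruction-level parallelism can speed up; splitting such work across threads and cores is the other school's answer:

    /* Instruction-level vs. thread-level parallelism in miniature (C). */
    #include <stdio.h>

    #define N (1 << 20)

    int main(void)
    {
        static float a[N];
        for (int i = 0; i < N; i++) a[i] = 1.0f;

        /* ILP-friendly: four independent accumulators. A VLIW/EPIC
         * compiler can bundle these adds to issue in the same cycle. */
        float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (int i = 0; i < N; i += 4) {
            s0 += a[i];     s1 += a[i + 1];
            s2 += a[i + 2]; s3 += a[i + 3];
        }

        /* ILP-hostile: each add depends on the previous sum, so the
         * chain runs at one add per add-latency no matter how wide the
         * machine is. Multiple cores/threads are the other way out. */
        float chain = 0;
        for (int i = 0; i < N; i++)
            chain += a[i];

        printf("%f %f\n", s0 + s1 + s2 + s3, chain);
        return 0;
    }
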
  • One of the most exciting prospects for open source computing is the fact that OSes can be ported to new architectures relatively quickly. As Microsoft's dominance starts to slide (and there is only anecdotal evidence that this is happening), it will become easier for non-x86-compatible uPs to make it in the marketplace. The lack of silicon competitive with x86 has been one of the factors slowing the pace of innovation over the last three or four years.

  • from what i have read so far, it seems that some readers are looking at the itanium 2 as a chip for consumers. it might never be that. some comments just seem to be flamebait.

    the itanium series is designed with special applications in mind, including scientific work and datamining. keep in mind that 9mb of cache may be too big for the typical application, but for high-end work - say, analyzing an entire database and extracting statistics to spot trends - you might want to think again. a cpu with a higher clock rate but a small cache won't keep up with the competition there.

    i would be pleased to see an amd opteron chip with at least 3mb of cache on the market (maybe then i could think about getting one.)

    as for competition, i believe there are just three players in the high-end arena right now: itanium, ibm's power, and sun's ultrasparc.

    and of course, the processor is just one variable in the equation. in the enterprise arena, you need a good platform: it should be very scalable (hundreds of processors in a system, with upgradability) and reliable (99.999% uptime and hot-swap components, including cpu, memory, i/o cards, etc.). intel has good tools and partners for these, and amd will take some time to catch up (but i believe it will.)

    intel has some good plans for itanium, including the dual-core cpu with the same pin layout (although that doesn't mean it can be fitted into old systems.) the thing is, intel is already gearing up for a battle in the enterprise arena. with its resources, it will be able to deliver considerably better products in the future.

    i believe intel has lots of technologies lying around that we don't even know about. of course, you don't put down all your cards at once - you wait for a threat and play them one by one.

    going by the latest results, intel is doing well financially, compared to a deepening loss at amd. amd's new hammer line could be its saving factor (a question still to be answered this year - and i'm excited about it.) and i'm sure intel already has a pentium 4 running at 5ghz lurking around its labs, just waiting for that new processor before a new ghz revolution starts. :)

  • by javatips ( 66293 ) on Friday January 17, 2003 @10:22AM (#5101393) Homepage
    Hmm, it seems that the editor got it wrong...

    From CNet News [com.com], they are actually going to release it SOONER than the previous schedule.

    The dual-core Itanium deadline is now 2005 instead of 2007, and a new chip is being added for 2004.

    Maybe the confusion arises from the fact that "Originally, Montecito, due in 2004, wasn't a dual-core chip, but it was morphed after engineering and manufacturing teams concurred that a dual-processor chip could be mass-manufactured at Intel by 2005."

    It would be a good idea to change the headline!

    • Since I don't have any moderator points, I'm going to reply instead (woah) and thank you for the extra info. I saw your comment at the top of the list before even reading the article, and it's helpful.

      I'm going to go check if Tom's Hardware has their processor roadmap updated yet...
  • by grub ( 11606 ) <slashdot@grub.net> on Friday January 17, 2003 @10:31AM (#5101445) Homepage Journal

    The Santa Clara, California-based company is the leading maker of processors, which serve as the brains of computers.

    Ah, so that's what those things do..
  • That Murky News [siliconvalley.com] article had its facts a little mixed up. The real, though not as sensational (and thus not as slashdot-worthy), story is that Intel delayed the "Montecito" processor for a year so that it could make it dual-core. Read that sentence again (this means you). The original plan for Montecito was for it to be a single-core CPU. What they've just done is decided to make it dual-core and pushed back the schedule a year. Try reading a more accurate account in the EE Times [eet.com].

    <slashdork>Gee whiz, from my vast knowledge of the industry, I can see that Intel is going down the toilet. It takes them a whole year to design a dual-core processor! Egads!</slashdork>
