Intel Delays Dual-Core Processor, Plans New Server Chip
Kajakske writes "Intel said Thursday that it is pushing back the release of its first dual-core processor by a year to 2005 and adding a new microprocessor for servers to its Itanium II lineup. On the other hand, Intel is moving forward in the area of new technologies."
Not much competition ? (Score:5, Interesting)
Interesting, especially given the lackluster products produced by Motorola and the relative lack of success of AMD (I use an XP1800+ and think it's great; the company just doesn't seem to do too well). I wonder if this lack of competition is a major factor - Intel doesn't need to keep spending money researching new chips if its current generation is so far ahead of its competitors.
I also wonder if the economy is a factor compounding that - OK, you can research your way into new demand, but why bother when you're that far ahead (see above)?
All I can say is, hurry up IBM and get those new PPC chips out the door (and into my Mac ;-).
Re:Not much competition ? (Score:5, Insightful)
The thing is, this isn't a chip technology race. It's a chip fabrication/distribution/pricing race.
Intel's chips are not technologically superior to AMD's (I know Intel has some major technology assets, but they mostly don't affect the chips in production now). On the other hand, Intel's capital, fabrication capacity, distribution, and market clout are far superior to AMD's. Intel is concentrating on the areas where it has the advantage, which are also the decisive areas.
If only this *was* a technology race. But that's market forces for you.
Re:Not much competition ? (Score:2, Insightful)
Martin
Re:Not much competition ? (Score:2, Insightful)
Good point, but I would expect any (successful) technology that appears in the server line to make it into the consumer line at some point. So dropping something from Itanium today means it's unlikely to appear in the Pentium tomorrow.
Re:Not much competition ? (Score:5, Interesting)
they may have the technology right now that doubles or triples current performance, but why play that card now? keep the tech in reserve, and let it roll out at a "natural" moore's law rate in order to keep the investors happy.
if motorola should happen to shock the world and release a 4 ghz multicore G5 running with 800mhz DDR RAM (we can dream, can't we??), then intel can roll out whatever they have in reserve a bit earlier.
Remember, Intel is run by businessmen, for businessmen. Technology to them is only a means to generate cash.
Re:Not much competition ? (Score:5, Interesting)
Sigh. I suspect that's exactly it. And that's what pisses me off.
Because, as a paying customer, technology to me is a whole hell of a lot more than a way to generate cash. It's a way to do interesting things, and also an end of its own, in a way - exploring the technology is really fun. Anyone remember sitting down with 16/32-bit assemblers and triple-faulting your processor until you got "protected mode" down?
I haven't had that much fun directly with a CPU in years. When I get time to play with my EV56 machine, I'll have some of it again; it'll be my first architecture after IA-32 (I haven't done that much interesting low-level on IA-64 besides performance counters).
And
Although really, this is partially because DEC couldn't market the Alpha to save its life. In fact, it didn't.
Re:Not much competition ? (Score:3, Insightful)
The problem is that new fab lines cost billions of dollars and laying out a new iteration of a microprocessor is not cheap either, so the chip manufacturers need to be pretty brutal about producing money-makers on their fab lines.
...they just wont *release* new tech... (Score:5, Insightful)
Been there, done that.
Why play that card? (Score:2)
So either they can, and it's too expensive, or they can't (except by sticking 20MB of cache and 5 cores on the die).
Re:Not much competition ? (Score:4, Insightful)
Andy Grove and Gordon Moore (two founders of Intel) are by far two of the most prominent semiconductor scientists of the 20th century.
Dr. Grove himself has written over 40 technical papers and holds several patents on semiconductor devices and technology. For six years he taught a graduate course in semiconductor device physics at the University of California, Berkeley.
How many people here can say they have taught a graduate course at Berkeley for six years?
Craig Barrett, the current CEO of Intel, is nothing to scoff at either. He's a Fulbright fellow who received his PhD in materials science at Stanford. He has written 40 technical papers dealing with the influence of microstructure on the properties of materials.
So before you knock Intel for being run by businessmen, do your homework.
These guys are far from mere businessmen. They are first and foremost incredibly talented scientists who happen to be good at business.
Intel has one of the world's LARGEST research and development budgets, along with GE, MS, and AMD.
I'm sorry but you are sadly mistaken if you feel that Intel is run by businessmen.
The industry (Score:2)
Re:Not much competition ? (Score:2)
If I were Intel's CEO, you'd better believe I'd go to bed hoping my researchers found some magic lamp that night. And if I were an Intel shareholder - which I am - I'd damn well want my company to take advantage of any competitive edge it found, especially improved technology.
Re:Not much competition ? (Score:2)
I suggest you read Andy Grove's book, Only the Paranoid Survive. Intel is run by engineers who don't differentiate between performance from an engineering or a business perspective. Whether it's optimizing a CPU to run faster or a business unit to produce more cash, it's the same to them.
Technology to them is only a means to generate cash.
You say that like it's a bad thing, but consider this: if you're in the 3D industry, games or movies, technology is only a way to generate pretty graphics. If you're in the telco business, technology is only a way to route other people's data from point to point. If you're a naval architect, technology is only a means to make your boat faster.
See where I'm going with this? No-one apart from hobbyists sees technology as an end in itself - it's got to make their real task easier, or it has no point. If you're an investor, then of course technology is a means to make money.
Re:Not much competition ? (Score:3, Insightful)
Where they need to develop and compete, is at the high end market, where they have a rather lackluster product of their own, the itanium... which is being completely blown away by alpha in the raw performance stakes, i think sparc and power4 might be nudging ahead of it too.. But when you consider the poor compiler and application support for itanium right now, they REALLY fall behind the others...
And as has been stated before, itanium should never have existed... hp should be concentrating on the alpha, which already has the software support, performance and reputation that itanium is still striving for.
Wrong competition. (Score:2)
AMD is not the competition here. IBM PPC and Sun Sparc are.
Re:Not much competition ? (Score:1)
I was glad to see that they didn't rest on their laurels, so to speak; they are forever looking ahead... at least until the current top dogs get replaced by younger people - then you may see something like what you're talking about.
Re:Not much competition ? (Score:2, Insightful)
> the company just doesn't seem to do too well.)
Well, part of that is crappy management, but a large portion of their troubles is simply due to the fact that Intel is given the benefit of the doubt by the OEMs and the consumers. Even during the year or two when AMD consistently had faster chips with fewer bugs than Intel had, Intel made tons of money and AMD merely made enough to recoup past debts. People buy Intel because they're Intel. This will happen whether Intel is doing a good job or a bad job. Thankfully, they're doing a good, honest job and earning those buyers now, but from 1998 to 2001, they were not doing their customers honour.
> Intel doesn't need to keep spending money researching new chips
> if it's current generation are so far ahead of its competitors.
They aren't. Intel's Pentium 4 is pretty much on par with AMD's Athlon. But Intel has five or so x86 plants that they can leverage to test different ways to most optimally ramp their chip frequencies. You don't just throw a design and a fab process into a bucket, shake it, and come up with the resultant chip speed. You have to devote a substantial part of your manufacturing resources to the research needed to optimally match your current chip design to your current manufacturing technology.
In addition to this, Intel happens to be something like a year ahead in base process technology. They moved to 130nm six months before AMD did their equivalent move. This means they're very much ahead in that respect. So even if their chips were a generation *behind*, Intel would be competitive in chip performance (this is what almost happened with the Pentium III and the early implementation of the Pentium 4). As it is, the current P4 is a competitive design coupled with a slightly more advanced manufacturing process, so Intel is a couple speed grades ahead.
Intel has to keep researching constantly. AMD does a surprisingly good job at ramping technology at approximately the same rate as Intel, despite having about a twentieth of their capital resources. If Intel stopped researching for just a few weeks, they'd lose the leverage they have to stay superior in the current climate. And that's not counting on the outside possibility that K8/Hammer might exceed performance expectations and outperform the top Pentium 4 upon release.
-JC
Intel's Competition is Intel (Score:2)
If Intel crams the market with everything it has all at once, that upgrade cycle is going to be longer.
So, unless there are other market pressures, Intel does well to time its technology introductions to maximize its profits. This may not be the best thing for consumers, but, hey, monopolies usually aren't.
G5 race? (Score:2, Insightful)
This may give Apple the time it needs to roll out that mysterious market shattering "g5" processor we keep hearing rumors about.
Maybe its strategy to ride the tide and invest in long-term goals, rather than trying to get marketshare now, will pay off.
Maybe not
Re:G5 race? (Score:1)
Maybe not.
If you look for meaning, you'll always find it
Daniel
Re:G5 race? (Score:1)
Apple don't make processors. You might have confused the names "Motorola" or "IBM" with "Apple", easily done.
And AFAIR the G5 is headed for use in PDAs, mobile phones, etc., not "real" computers.
Re:G5 race? (Score:1)
Re:G5 race? (Score:1)
"Which I'm sure you understood"
Actually, no I didn't, because I don't believe Apple have ever raved about a G5 machine or even in more general terms their next generation machines at all.
Re:G5 race? (Score:3, Informative)
If you're looking for the next generation of PowerPC chips, look to IBM's PPC970.
Re:G5 race? (Score:1)
I stand humbly corrected :-). I'd always assumed that embedded devices was a super-set that included PDAs and mobile phones.
Intel is in trouble (Score:4, Interesting)
If AMD manages to stick to their schedule on the 64bit chips, they are going to have a big winner on their hands: systems that can address more than 4G in a single process and yet are backwards compatible.
Re:Intel is in trouble (Score:1)
Itanium II is now out and is said to be OK. For the price of an Itanium II system you could buy a car/house/small country.
Re:Intel is in trouble (Score:1)
> one production system with it? I have not.
I know somebody whose workplace (a research place of some sort) got a cluster of five hundred of them.
> Itanium II is now out and is said to be OK. For the price of an Itanium II
> system you could buy a car/house/small country.
Um. High end servers are supposed to be that expensive. Ever try shopping for a high end UltraSPARC or Power4 machine? Didn't think so.
IPF isn't supposed to be a replacement for x86 (well, it was originally supposed to be, eventually, but that's when various Intel execs were drunk on monopolistic stupidity). I do not agree with the means by which Intel has penetrated the market (e.g., they coaxed several prominent chipmakers, such as HP, DEC, SGI, and so on, to dump or devalue their existing lines and support the Merced far before it reached A0 stage), but on the engineering side, the McKinley (or "Merced II", if you like fugly names) seems an excellent implementation of what was perhaps a not too well thought out ISA. Just my opinion, of course, and I'm merely an armchair designer (no, no, I don't design armchairs).
-JC
Re:Intel is in trouble (Score:3, Interesting)
Re:Intel is in trouble (Score:2)
Re:Intel is in trouble (Score:1)
The Itanium II is certainly not a dud - it's in some of the highest performing systems money can buy. Of course it's expensive, it's not for Joe Buck or Tim Small Business.
x86 compatibility is worthless in a high end 64-bit machine, something AMD doesn't seem to grasp. They're marketing a high end technology (consumers and normal business users don't need 64-bit technology and won't for a while) to the mainstream market. Morons.
And you seem to be ignoring the numbers (remember that 'reality' the rest of us live in matters to us, if not to you). AMD is going broke. Intel isn't.
All in all you seem to be engaging in wishful thinking mixed with a little delusion.
Re:Intel is in trouble (Score:3, Informative)
> The Itanium II is certainly not a dud
Agreed. From an engineering standpoint, it's quite a nice chip. I don't agree with some philosophical stuff in the ISA (I'm not that much of a VLIW-for-general-purpose fan, but hey), but the microarchitecture and implementation seem very nice. I do wish that it was easier to implement OOE on IPF, though.
> x86 compatibility is worthless in a high end 64-bit machine, something AMD doesn't seem to grasp.
> They're marketing a high end technology (consumers and normal business users don't need
> 64-bit technology and won't for a while) to the mainstream market. Morons.
Feh. A big "screw you" on that. AMD isn't catering to the high end server group. They obviously can't just teleport into that market. They're catering to the smaller business that uses Xeon servers. Backwards compatibility with x86 is of the utmost importance in this market. Basically, they're marketing x86 workstations and x86 servers that happen to allow you to enhance performance of some types of programs with simple recompilations. There is a good chance that I might get the lower end version of this product when it comes out, as I use Linux, which may strongly benefit from those extra registers in x86-64, on my home machine. We'll have to see, of course, before I pull out the green.
> And you seem to be ignoring the numbers (remember that 'reality' the rest of us
> live in matters to us, if not to you). AMD is going broke. Intel isn't.
That's a bad measure to use. You don't have any controls in this analysis. There are a lot of reasons why AMD is losing money (poor management a la Hector Ruiz, inability of a relatively small company to handle a very harsh recession, etc.), and there are a lot of reasons why Intel is still doing phenomenally (people buy Intel no matter what, currently excellent execution, they can afford to strongly diversify). Many of these reasons have nothing to do with the technical/engineering side of the equation. IMHO, both AMD and Intel have incredible engineers, and frankly AMD especially warrants respect for being able to ramp technology at *approximately* the same rate as Intel despite having a very, very minuscule fraction of their resources. That is why I was a big AMD fan a couple years ago, at around the time when the company was dominated by the excellent triumvirate of Sanders, Raza, and Meyer as well as a couple critical folks like Norbert Juffa and Paul Hsieh. At this point in time, AMD was a quantum of a company that somehow managed to produce a piece of engineering that allowed them to, for a brief time, outdo the capabilities of a company fifty times their size. I am somewhat dismayed that AMD turned into a more traditional company over the last two years or so.
-JC
Re:Intel is in trouble (Score:2)
First of all, 64 bits isn't about speed, it's about address space.
Furthermore, even if speed is the issue, many of the people (like myself) who care most about it build compute clusters. The calculation is simple: (1) does it do 64 bit, and (2) how many FLOPs do $70000 buy me. The Itanium doesn't do very well on that metric.
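That purchasing arithmetic is easy to make concrete. A quick sketch - the node prices and GFLOPS figures below are made up purely for illustration, not real benchmarks:

```python
# Hypothetical cluster-budget comparison: for a fixed budget, which
# node type buys the most aggregate floating-point throughput?

BUDGET = 70_000  # dollars, the figure from the comment above

def cluster_gflops(budget, node_price, node_gflops):
    """Aggregate GFLOPS from as many whole nodes as the budget allows."""
    nodes = budget // node_price
    return nodes * node_gflops

# (price per node, sustained GFLOPS per node) -- invented numbers
options = {
    "itanium_node": (8_000, 4.0),
    "commodity_x86_node": (2_000, 1.5),
}

for name, (price, gflops) in options.items():
    print(f"{name}: {cluster_gflops(BUDGET, price, gflops)} GFLOPS")
```

On these invented numbers the cheaper nodes win on aggregate throughput, which is the shape of the argument above; real procurement would also weigh interconnect, power, and memory per node.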
Of course it's expensive, it's not for Joe Buck or Tim Small Business.
It is: the prices of memory have come down, and it makes sense for people to be able to use more than 2G in each process. It's great for databases, web servers, video editing, and games, to name just a few mainstream applications.
x86 compatibility is worthless in a high end 64-bit machine, something AMD doesn't seem to grasp.
Quite to the contrary: x86 compatibility greatly reduces the risk of migration. I know that all my applications will continue to work as they do now, with no recompilation or bugs, but in addition, I can migrate individual apps to the 64 bit architecture. That's much better than what Itanium gives me.
They're marketing a high end technology (consumers and normal business users don't need 64-bit technology and won't for a while) to the mainstream market. Morons.
Yeah, right. You sound like a marketing guy. Oh, wait, you probably are a marketing guy. The only reason to buy 64 bit chips from Intel is that they are from Intel, and that Intel will probably manage to kill off the competition again, no matter how awful their chip is. But with AMD's backwards compatibility, that doesn't matter: AMD doesn't need to take over the 64 bit market to win, all they need to do is deliver good performance and value in their 32 bit mode and 64 bit functionality for a handful of custom applications.
Re:Intel is in trouble (Score:2)
Do you have trouble putting two simple sentences together? You said the Itanium is not a dud because it's fast. I'm saying, fast isn't the primary reason why 64 bit is needed.
2G? Where are you from? "people" need more than 2G? No they don't. Even most businesses don't need more.
The ability to memory-map entire files alone is sufficient reason to go to 64 bits, even if the machine doesn't take as much memory. Of course, many web applications, databases, and video editing applications achieve an entirely new level of performance and simplicity if everything can just get loaded into RAM.
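Mapping a whole file into the address space is a one-liner on most platforms. A minimal sketch in Python, with a tiny scratch file standing in for the multi-gigabyte case that actually needs 64-bit addressing:

```python
import mmap
import os
import tempfile

def shout_first_word(path):
    """Upper-case the first five bytes of a file in place via mmap.

    The edit goes straight to the mapped pages; no explicit read()
    or write() of the file contents is needed.
    """
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as m:  # length 0 = map the whole file
            m[0:5] = m[0:5].upper()

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello, mapped world")

shout_first_word(path)
with open(path, "rb") as f:
    print(f.read())  # b'HELLO, mapped world'
os.unlink(path)
```

On a 32-bit machine the same call fails once the file outgrows the per-process address space; on a 64-bit machine it keeps working unchanged, which is the point being made above.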
And you can access more than 2G from the P4, depending on your OS configuration.
Don't make me laugh. If you think that's a real option for addressing more than 2G from a single process, maybe you just program in VisualBasic (if you aren't just an Intel marketing droid).
AMD is doing exactly what people used to bitch about Intel about - they're extending an ancient ISA with all its cruft and legacy crap. But now that it's AMD it's A-OK with all the nerds. Bah!
The complaint about x86 chips was that their segmentation made writing compilers for it really hard and that they performed poorly. Once Intel started delivering decent performance at a decent price and had filled in the gaps in the instruction set and architecture, most rational people stopped complaining. Nobody cares what the registers are called or how the instructions are numbered as long as compilers can deal with it. And people became even happier when AMD delivered comparable performance at a lower price.
Now, Intel is doing the same thing all over again: they are delivering a crappy VLIW architecture that makes generating good code really hard, and their price/performance ratio is probably the poorest in the industry.
AMD, on the other hand, looks like they are going to deliver a fast chip at a great price, a chip that it is easy to generate code for, that is backwards compatible, and that even has 64 bit addressing.
Re:Intel is in trouble (Score:2)
Re:Intel is in trouble (Score:2)
I didn't know the performance was that good, I heard that the chip was good but the compilers still needed a lot of work.
thanks for the explanation (Score:4, Funny)
And then there are the customers, who consume these processors like living dead zombies animated by radiation from outer space.
Re:thanks for the explanation (Score:1)
When will they target *ME*? (Score:5, Interesting)
Then, following the link, I realized they only plan this dual core toy for the *Itanium* line, anyway. Bummer. I do like how the article says Intel hasn't sold as many of them as they planned, though... Can we say "DOA"? I thought they had all but abandoned the mega-flop (in the movie sense, not the CPU sense) Itanium.
Anyway, back to my point...
I don't want a CPU with 6MB of cache (the reason they give for pushing back their SMP-on-a-chip). I don't want an Itanium. I don't even want a P4.
I would *run* to the store, however, to buy a quad (since at their current fabs, they could fit four in the same space as a single P4, so why only go dual) P-III somewhere around 1.5Ghz (like the chip they plan to release with 6 or 9MB of cache). Not an inconsiderable amount of CPU power (My current machine has "only" a dual PIII/933, and I have yet to find my "killer app" reason to upgrade).
So, listen up, Intel - the server market may pay more per chip, but we "mere" home users buy a HELL of a lot more of them. So throw us a bone, 'kay?
Because if you don't, AMD will (eventually).
Re:When will they target *ME*? (Score:2, Informative)
Re:When will they target *ME*? (Score:1)
Anyway, a quad or dual P3 is not as good an idea as a quad or dual P4, because P4s are designed to support these features much more than a P3 is. Though I won't mind having a quad P3 NOW.
Re:When will they target *ME*? (Score:1)
In w2k Task Mangler will fit in the 6 meg cache of the Itanium II and that is about it!
actually (Score:1)
Well, in Win2K/XP, there are supposed to be zero memory copies when writing data out the network stack. Though if you have a crappy application, all bets are off.
Re:When will they target *ME*? (Score:5, Informative)
The Itanium 1 was mainly a research chip, a first generation to let people start to develop and test on real hardware. MS took advantage of this and rolled out an IA-64 version of Windows. Intel was hoping for some server sales, but the real goal was getting the new IA-64 system into production silicon.
The Itanium 2 is a much more practical chip. It is something that people will probably seriously look at for high end servers, as it is competitive with 64-bit chips from Sun. You may see it in a few workstations, but probably not many; it's mostly a server chip. Remember, we are talking competition with big iron here, not desktop systems.
Now, as time goes on, the technology will become much more mature and cheap and will eventually filter into desktops. Hopefully, that will happen before we start to hit the 32-bit crunch.
The idea here is not to wait until the last second, when people need a 64-bit chip, but to get it to market sooner so you can start working on it.
This, by the way, is not the first time Intel has done something like this. The Pentium Pro was blasted when it came out because its 16-bit performance sucked. Sure, it did great for 32-bit, but who the hell used that? Well, then along came Windows 95, and the PPro architecture was refined into the PII, and it was a great chip, since 32-bit use was rising rapidly and it smoked at that. The P3 is the third incarnation of the PPro architecture. It's optimised and enhanced (a la SSE) but the same fundamental architecture. The P4 is the first brand new architecture since the P3.
The Itanium is a much larger change than the P4, since it is not only going to 64-bit but to a new ISA (EPIC instead of CISC). It needs time and testing before it will be real.
However, Intel is certainly NOT ignoring the home market. The P4 is going to continue to be refined (we are on the 3rd revision of P4s and a 4th is soon coming) and should scale up to around 10GHz. There is plenty of life left in it (and probably subsequent chips based on its architecture). Then, by the time it is getting ready to be replaced, the then-current Itanium chip should be ready for prime time.
So quit your bitching. If you don't want a P4, fine, stick with a P3. Why the hell do you care WHAT Intel is doing if you don't want a new chip? When you do decide you want one, get a P4; you have no lack of options with them, and they scale to rather high speeds already and are not stopping.
Re:When will they target *ME*? (Score:2)
Originally I had argued that Apple should switch to a desktop Itanium 2 if they weren't going to go with a stripped down Power 4; once the rumors of the 970 got confirmed....
As for the original poster's comments regarding quads and speeds and so forth it didn't make any sense to me.
Re:When will they target *ME*? (Score:2)
For most applications, you will only see as much "speed" as one CPU can offer you, no matter how many CPUs you have. This of course does not hold true in efficiently multithreaded apps (as opposed to the majority of multithreaded apps out there which would actually perform *better* as a single thread), but for the things we actually upgrade for (killer new game), the speed of any one of your CPUs matters more than how many of them you have.
Now, Let's say CPUs powerful enough to reasonably do anything you want exist (short of intense number-crunching research, for which enough CPU power will *never* exist to satisfy the demand). I personally believe we passed this point somewhere around the PIII/800, though *certainly* the newer PIII/1400's have reached this point.
Once you have that level of performance, a faster CPU doesn't mean *you* can do things faster. Think back to the mid 1990's... Just how fast of a CPU did we need to run Word or Excel such that it would *never* exhibit an observable delay? A PII/300? Even that high?
So, what do we do now? Well, we can run any one thing as fast as we want. How about putting things in the background? Let's say I want to encode a movie to Xvid. At full CPU on a 1Ghz machine, this takes around 6 hours, and your machine grinds to a halt in terms of responsiveness. You could set the priority low, but the machine will still "feel" laggy, and if you do something else CPU intensive, the video encoding will drag out for *FAR* more than 6 hours.
So now on to the point.
Have you ever used a dual-CPU desktop machine?
The first thing you will notice, the UI *almost never* feels laggy. You can have a load of 50 CPU's worth of processes running, and the desktop will still respond when you click something. Your foreground task will act reasonably responsive. Yeah, with 50 CPU-sucking processes, you won't do anything in the background very quickly, but if you try, you can still use the machine.
So, to go back to the Xvid idea, try this on a single-CPU machine: Queue up a movie to encode, set it to low priority, then start playing your favorite CPU intensive 3d shooter. Wow, lag sucks, huh? And look, a framerate of 10. How nice.
On a dual, even with the same "total Mhz" (fairly meaningless, but just to stay fair), you wouldn't even need to set the video encode to low priority. Just fire it up, then your game, and enjoy your game at a "normal" frame rate.
But the benefit doesn't stop at "one extra CPU intensive task per CPU". With thoughtful management of process affinities, you *really can* run those 50 CPU sucking tasks, confine them to CPU1, and play your game (still with no noticeable slowdown) on CPU0.
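The affinity trick above can also be scripted rather than done by hand. A small sketch using Python's `os.sched_setaffinity`, which assumes a Linux-style scheduler (on the Windows systems of this era you would use SetProcessAffinityMask instead):

```python
import os

def confine_to_cpu(pid, cpu):
    """Restrict a process to one logical CPU and return its new mask.

    pid 0 means "the calling process"; a background encoder would be
    confined the same way, using its real pid.
    """
    os.sched_setaffinity(pid, {cpu})
    return os.sched_getaffinity(pid)

# Pin this process to CPU 0, leaving the other CPUs free for the
# interactive foreground task (the game, in the scenario above).
print(confine_to_cpu(0, 0))  # {0}
```

With the hogs pinned to one CPU, the scheduler never has to preempt the interactive task to service them, which is why the desktop stays responsive.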
So, how does this relate to my original point?
We don't *need* a 10Ghz chip with half a gig of cache. We don't even need 64b chips yet, though I agree that, for the sake of an increased per-process address space, it wouldn't suck and we'll need it within a few years. Why don't we need this? Because we can't use it. No single interactive application needs even as much CPU as the current average single-CPU desktop machine can throw at it.
So what about multitasking, you say?
Well, I've already answered that one. If it costs less, and takes less complicated technology, to make four 1Ghz processors than it does to make one 4Ghz processor, why would we go with the single 4Ghz processor? And, for the reasons I've addressed above, I would even pay somewhat *more* for the quad setup than for the total-Ghz-equivalent single CPU setup.
Very clear (Score:2)
However, I'll disagree regarding the idea that we've hit a long-term beachhead.
- Large Java apps definitely tax the CPU. Normally it's the slowish hard drive that is the bottleneck, but with these guys the GUI becomes noticeably sluggish, with actual "check your watch" wait times. And the hard drive isn't really going, so this is the Java. I use a PIII 1GHz, and the Java apps are Oracle interface stuff. Being primarily a Perl programmer, I can understand the appeal of languages that execute on VMs rather than compile, in terms of programmer productivity. I have to say that if these become the norm, we are going to need much faster CPUs.
- Video: I'm still feeling CPU related constraints there. I've never worked on a system with a GeForce Ti4... so maybe that solves the problems, but at least on my current system... I've heard from others that nothing on a PC handles HDTV video properly yet (though it's mainly bus issues), and you still need to go SGI for this.
I guess the other thing I don't see is how you end up taxing large numbers of CPUs in a home system. Generally I've got 1-2 foreground tasks and at most 1 background task that are intensive. Assuming something like 1 CPU for OS+hardware, that means under the very worst conditions I'd rarely (ever?) see a difference between 4 and 50 for my home / workstation setup.
Don't get me wrong I've definitely seen the difference between 6 and 12 on servers but that was with dozens of highly active and hundreds total threads (and god help us if we are still using MSFT operating systems with that number of threads).
Re:When will they target *ME*? (Score:1)
- So you say that super-duper 4 in 1 chip is only 1.5 GHz, right? [Thinks he does not run 4 MSWord's or 4 Quake's at once]. Wrap me that 1._7_ GHz Celeron then! [Vroomm!!!]
Re:When will they target *ME*? (Score:2)
Don't take this as a disagreement, I do indeed think heat dissipation would (and already does) make one of the biggest problems. However, according to the P-III 1.13-1.40 spec sheet [intel.com], the
For comparison, the Athlon XP 2600+ gives off 68.3W [siliconacoustics.com], the P4 3.06 sucks 82W [tomshardware.com], and Intel's next Itanium, the Madison, will nicely heat your computer room at a whopping 130W [geek.com].
My opinion: Itanium does the job, and if people would spend the time it would take to learn a new architecture, it would be a nice, fast chip to start from.
I agree with you that, technologically, the Itanium looks rather impressive. However, even taking into consideration that it (well, at least the upcoming Madison core) does 6 ops/clock compared to the P4's 2 ops/clock, that still leaves it short for raw power at only 1.5GHz (since, by the time Intel starts shipping in quantity, the P4 will certainly have passed 4.5GHz). I understand that clock speed doesn't mean everything, but clock speed times throughput per clock *does* give a pretty good indication of its upper limit.
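That back-of-the-envelope bound is easy to compute. Using only the figures quoted above (6 ops/clock at 1.5 GHz for Madison, 2 ops/clock for the P4), the break-even P4 clock falls right at the 4.5 GHz mentioned:

```python
def peak_gops(clock_ghz, ops_per_clock):
    """Naive theoretical ceiling: clock rate times issue width, in Gops/s."""
    return clock_ghz * ops_per_clock

madison = peak_gops(1.5, 6)        # Madison's crude ceiling
p4_breakeven_ghz = madison / 2     # P4 clock needed to match at 2 ops/clock

print(madison)            # 9.0
print(p4_breakeven_ghz)   # 4.5
```

So any P4 clocked above 4.5 GHz exceeds Madison's ceiling on this crude metric, which is why the quantity-shipping clock matters in the argument above; sustained throughput in practice depends heavily on how often each pipeline actually issues its full width.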
somebody give me theirs.. I'm broke again
Okay... A quick tip for getting properly-mounted Athlon heat sinks, without risking damage to the chip... Buy a motherboard combo. I suppose this doesn't apply if you just upgrade an existing CPU, but if you need a new motherboard, get them at the same time, from the same place. That way, not only will you not risk a crushed chip, but if it dies of heat within a few days, just send it back and get a free replacement.
My uninformed opinion... (Score:5, Interesting)
Obviously, I don't keep up with this part of the computer world. Is IBM normally ahead of the game when it comes to new chip processes? It seems to me like Intel, whose main priority is processor manufacture and distribution, would be ahead of IBM, who have diversified to the point that I don't even know what their primary product is.
Re:My uninformed opinion... (Score:2, Insightful)
Intel principally produces products (chips); IBM doesn't, because it's now primarily a services company. That's the big, big change Gerstner made in turning the company around. Hope this helps with your confusion :-)
Re:My uninformed opinion... (Score:1)
Re:My uninformed opinion... (Score:5, Informative)
IBM ($81.19 billion FY 2002) is roughly three times the size of Intel ($26.76 billion FY 2002) in revenue terms.
Re:My uninformed opinion... (Score:1)
Re:My uninformed opinion... (Score:3, Insightful)
Re:My uninformed opinion... (Score:3, Insightful)
Except every machine IBM sells (excluding their x86 systems, which just buy in Intel chips) is based around a single CPU architecture - POWER, the heavy-duty PowerPC variant. So, everything IBM does in 'CPU development' is going into the POWER/PowerPC core, although they seem to share a lot of generic fabrication advances (copper interconnect, silicon-on-insulator etc) with AMD for the Athlon/Hammer line.
Granted, IBM do a lot more than just CPU design, whereas Intel are almost exclusively CPU vendors (although Intel divide their efforts between IA-64, x86, i960 and StrongARM/Xscale) with some sidelines (NICs, switches, chipsets). Overall, I'd say IBM put a lot more muscle behind POWER/PowerPC than Intel can behind IA-64 and x86.
Re:My uninformed opinion... (Score:2)
Re:My uninformed opinion... (Score:1)
Re:My uninformed opinion... (Score:2)
IOW: Z-Series = S390
The -I-Series- are what we used to call the AS400 line of machines. They use the IBM PowerAS CPU.
They, also, will never run a chip from Intel, for many of the same reasons (true HA)
These machines, incidentally, are the kinds of things that Intel is actually trying to move up either into, or against, with the Itanium. And like I said, I don't think they can...at least not as "only" a chip-house.
Re:My uninformed opinion... (Score:2)
But my question was about the original poster's comment that the POWER4 was used in all IBM's products. I don't know much about the CPUs IBM is actually using on these two lines, but they didn't seem to be from the POWER line.
BTW HA=?.
Re:My uninformed opinion... (Score:2)
The Z-Series are using IBM POWER processors, as are the AS400 and RS6000 class machines, AFAICT.
Re:My uninformed opinion... (Score:2)
In other words, that "P4 killer" that's on the way from IBM is going to be nearly a year behind the P4 in terms of performance.
As for IBM's 90nm fab process, how many 90nm chips do you see from IBM now? None. You know why? Because the 90nm process isn't finished yet! They've got the equipment, but they haven't actually started volume production of chips on that process yet, and probably won't for at least a few more months. I wouldn't count on being able to buy a chip produced on a 90nm IBM fab process before the end of the year, probably several months after you can buy a chip produced on an Intel 90nm fab process.
Why Intel when there's better? (Score:1, Funny)
Re:Why Intel when there's better? (Score:1)
Intel just needs to realize that they are not, in fact, a high-end manufacturer. They should leave that business to the big boys like IBM and (I never thought I'd say this) Compaq.
Re:Why Intel when there's better? (Score:2)
Re:Why Intel when there's better? (Score:1)
Fukencomputer huh ?
Better mod the parent up as funny quick before people get suckered ;-)
Is it good for the customer ? (Score:4, Interesting)
First HP holds back [slashdot.org] on their alpha line, then Intel does this....
The important question is: does this benefit the consumer by letting others into the market (more players, a flatter market, etc.), or does it harm him by slowing down technology?
Re:Is it good for the customer ? (Score:2, Insightful)
Look at the other fun fact about the Itanium... (Score:4, Interesting)
When they began the IA64, Intel and HP set up a company to hold the IP related to the new architecture. That company owns the IP, Intel and HP get rights to use it. None of Intel's or HP's cross-licensing agreements apply to any of the IA64 IP.
AFAIK, every other major CPU ends up having some amount of cross-licensing, except the IA-64. They own it lock, stock, and barrel. The only chink in the armor seems to be Intergraph.
How does hyperthreading differ? (Score:5, Interesting)
But how will the architecture of a dual-die chip differ from hyperthreading, such as in which CPU components are shared (cache, instruction decode/scheduler, etc.)?
Also would the Linux kernel's logical processor abstraction used to enable hyperthreading support (see IBM developerWorks Article [ibm.com]) also continue to work effectively with a dual-die chip?
Re:How does hyperthreading differ? (Score:1)
Re:How does hyperthreading differ? (Score:5, Insightful)
Well, of course 2 processors will outperform a single one. Processors have a finite pool of resources. The point of HT is not to perform like dual processors, but rather to act like them, increasing the performance of a single CPU at a negligible cost.
Buying 2 processors would cost you twice as much as a single processor, even more when you consider the cost of a motherboard and enough memory to make dual processors a worthwhile investment. You would get roughly double (OK, 1.6x) the performance at double the cost.
Buying a single HT-enabled processor, however, would only cost you 10 or 15% more, and you would be seeing a 20-30% performance increase across the board. I think that's a better deal.
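Using the rough multipliers above (the parent's estimates, not measured benchmarks), the performance-per-dollar comparison works out like this:

```python
# Marginal value of each option relative to a single plain CPU at cost 1.0.
# Multipliers are the parent post's rough estimates, not benchmark results:
#   dual CPU: ~2x cost for ~1.6x performance
#   HT CPU:   ~1.125x cost for ~1.25x performance
def perf_per_dollar(perf_multiplier, cost_multiplier):
    return perf_multiplier / cost_multiplier

dual = perf_per_dollar(1.6, 2.0)     # 0.8  -- worse perf/dollar than a single CPU
ht = perf_per_dollar(1.25, 1.125)    # ~1.11 -- better perf/dollar than a single CPU
print(dual, ht)
```

On these assumptions, dual CPUs buy absolute performance at a perf/dollar penalty, while HT is one of the rare upgrades that improves the ratio.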
Re:How does hyperthreading differ? (Score:4, Funny)
Re:How does hyperthreading differ? (Score:2)
I'm guessing that this guy's spelling and grammar checkers are using 100% of his two CPUs right now. That's why he wants four.
Re:How does hyperthreading differ? (Score:2)
So sue me, I can't type or spell, I don't pretend that I can, cause I don't give a shit.
Re:How does hyperthreading differ? (Score:2)
The performance increase isn't shaped quite the same way as that of a true dual-CPU setup because an instruction on one logical CPU in an HT setup may have to wait for a resource in use by an instruction on the other logical CPU to be released.
As I understand it, it's kind of like resource locking...it's unsafe for two things to use the same resource simultaneously, so one waits for the other to release the resource.
I don't know whether a two-CPU or a one-HT-CPU setup would work better for every situation, or if they'd each have their uses. A two-HT-CPU setup would have the advantages of both, though.
I hope AMD can fit in something like this. Even if it's patented, one could argue "prior art" since resource locking has been used in computers since the dawn of fileservers.
Re:How does hyperthreading differ? (Score:2)
Then you'd probably have HT on them too, so you'd have one physical CPU seen as two cores by any SMP OS, and then you'd hyperthread those so that you get *four* virtual CPUs if you activate HT too.
You'll probably see a much bigger performance boost by putting two separate cores in one package than ht'ing one core.
And if you put two or more of these in one box, well do the math. =)
I haven't actually read the entire article, so I don't know what resources the two cores would share, but it must be less than what is shared when doing hyperthreading.
The major drawback is probably that the two cores would need to share the same data and address bus to the main memory of your machine.
Re:How does hyperthreading differ? (Score:2, Informative)
Let's say you have a hypothetical CPU with n execution units. (For simplicity, we won't distinguish between types of execution unit, such as integer, floating point, branch, etc.)
You fetch and decode a bunch of instructions, and then issue them n-at-a-time to these execution units, for maximum performance.
But the instruction stream has some inherent limitations on which instructions can be issued concurrently, due to dependencies between instructions, an instruction type mix that doesn't match the available execution unit mix, instructions waiting on loads, etc. Even with control and data speculation, there may be fewer than n instructions READY to issue on the next clock cycle.
So, you have three choices:
1. Just issue the ready instructions, and let the other execution units go to waste.
2. Switch to another thread, maybe it has n instructions READY to run. (This is usually called on-chip multithreading).
3. Issue a mix of READY instructions, some from one thread, and some from another thread, which combined together use all n execution units. Both threads get to make some forward progress, and no execution units are "wasted". (This is usually called on-chip hyperthreading).
So, back to the big picture: Hyperthreading isn't a replacement for a second CPU or core, because it does not provide any more computation resources. It's a way of using the available resources in a CPU more efficiently, so that fewer computation units are likely to go to waste on any given clock cycle.
A dual core chip typically duplicates almost ALL the circuitry on the chip, often even including the caches. Big chips have low yields and cost a lot. Dual core is a way of throwing a lot of money at getting more parallelism. Kind of like having multiple CPUs in separate sockets, but with both advantages and disadvantages coming from the closer coupling. Hyperthreading is a way of throwing far less money at the problem of squeaking out some of the wasted performance in an existing CPU design.
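The "big chips have low yields" point can be illustrated with the simplest first-order yield model (a Poisson model; the defect density and die area below are made up purely for illustration, not real fab numbers):

```python
import math

# First-order Poisson yield model: yield = exp(-defect_density * die_area).
# Illustrative numbers only -- real fab defect densities are closely guarded.
defects_per_cm2 = 0.5
single_core_area_cm2 = 1.0

single_yield = math.exp(-defects_per_cm2 * single_core_area_cm2)
dual_yield = math.exp(-defects_per_cm2 * 2 * single_core_area_cm2)  # double the area

print(round(single_yield, 3), round(dual_yield, 3))  # ~0.607 vs ~0.368
```

Under this model, doubling the die area squares the yield fraction, which is why dual-core dice are disproportionately expensive: every defect now kills twice as much silicon.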
It isn't free, by the way. Hyperthreaded CPUs do have to duplicate some hardware on a per-thread basis. Obviously, thread context registers like program counter and stack pointer have to be duplicated, as do application registers. But they share caches, execution units, decoders, memory management units (mostly), bus interface logic, etc.
Hope this paints a clear picture.
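The three choices above can be sketched with a toy issue-slot model (the per-cycle READY counts are invented numbers, just to show the effect):

```python
# Toy model: a core with n issue slots and, for each cycle, the number of
# READY instructions each of two threads has available (invented numbers).
n = 4
ready_a = [2, 1, 4, 3, 2]  # thread A's READY instructions, per cycle
ready_b = [3, 4, 1, 2, 3]  # thread B's READY instructions, per cycle

def utilization(issued_per_cycle, slots=n):
    # Fraction of issue slots doing useful work over the run.
    return sum(issued_per_cycle) / (slots * len(issued_per_cycle))

# Choice 1: issue only thread A's ready instructions; leftover slots are wasted.
solo = utilization([min(a, n) for a in ready_a])

# Choice 3: hyperthreading-style mixed issue; fill slots from both threads.
mixed = utilization([min(a + b, n) for a, b in zip(ready_a, ready_b)])

print(solo, mixed)  # utilization rises, but the hardware (n slots) is unchanged
```

Note that the mixed case never issues more than n instructions per cycle; hyperthreading raises utilization of the existing slots rather than adding any.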
So they're going to do it for real now? (Score:3, Interesting)
So now, instead of virtual processors (read: hyperthreading), Intel is going to release a chip that does hyperthreading for real?
Damn, I'm confused.
(BTW, I tried hyperthreading: a marginal increase for some apps, slowdowns for others.)
Re:So they're going to do it for real now? (Score:4, Informative)
What HT does is allow this single CPU to pretend to be 2 independent CPUs, effectively splitting it in half (but not necessarily down the middle). The upshot of this is that it can more effectively deal with cache bubbles and all those horrible performance-draining problems that Intel chips, with their insanely deep pipelines, are vulnerable to.
Basically, if you only throw a single thread at the processor, the first virtual processor does the work while the other sits idle, and the entire processing power of the chip still goes to that one thread rather than half of it going to waste. This is an advantage because HT only requires about 5% more transistors, and the net benefit is something like a 20% performance increase. Of course, if you're not doing any work where you actually *use* multithreaded apps, you'll never understand why HT is a big deal.
This post has gone way beyond what I originally intended to say, and instead of rescuing it, I'm just going to kill it now.
Re:So they're going to do it for real now? (Score:5, Informative)
For a good analysis, read this article [arstechnica.com] over at Ars. In particular, it does point out that the likely cause of slowdowns in some apps is down to cache contention. Near the end, it also says:
In short, sometimes it helps, sometimes it hinders. Finally, you don't need multithreaded apps to take advantage of SMP/HT; if you're running a CPU-intensive application on one CPU, the other is free for interactive stuff. You do, however, get much more benefit in a multi-threaded application.
Re:So they're going to do it for real now? (Score:2)
Yes it does. The 2 'threads' of the CPU share the same bus and cache, in some scenarios pressure on the cache is such that it runs slower. I think anandtech had a good article on this a while ago.
Re:So they're going to do it for real now? (Score:2)
There's a certain amount of overhead with HyperThreading, and some additional concurrency issues. Consequently, you can see some marginal (1-3%) slowdowns in some applications.
In poorly behaved systems (such as the Linux ext2fs implementation), stupid locking can result in significant performance hits. This is primarily a result of one thread spin-waiting on the result of another thread that is scheduled on the same physical CPU. A spin-lock is really simple: it just spins in a loop checking whether a condition has been met yet. With normal SMP this isn't much of a problem, because sitting in a spin-lock doesn't slow down the other chip; but with virtual processors, a spinning thread takes execution resources away from the other threads running on the same core. Tada, massive slowdown.
But, this kind of situation is a result of bad design, and even then is unlikely to occur outside of critical operating system code. Most applications these days will experience a large performance improvement from hyperthreading.
Despite how some anti-Intel people are trying to spin it, HyperThreading is for the most part a good thing. If nothing else it improves system response - even if one process is spinning, you don't have to constantly wait until the kernel preempts it before processing new user events, etc...
And your statement that SMP can't slow anything down would be correct in a world without concurrency issues. Process synchronisation incurs overhead, and in a poorly designed application this overhead can be significant when run on an SMP machine. If the application does most of its work in a single thread, this overhead could actually result in a slowdown. Fortunately, you don't see software this badly written very often.
Re:So they're going to do it for real now? (Score:1)
Great - more processor speed. Do we need it? (Score:3, Interesting)
Overall CPU speed doesn't seem to make much of a difference when the bus speed is the same, certainly not in the systems I've tested. However, up the CPU bus speed, and you'll find your performance greatly improved, because you're getting data to the processor quicker.
Some years ago, I tested this theory with a couple of old 686 chips - one 200, one 233. I benchmarked the 200 and 233 both at 75MHz bus - virtually identical results. Then I ran them at the same CPU speed, but 83MHz bus, and the benchmark results improved by exactly 83/75. What does this tell you?
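For the record, the two competing scaling hypotheses predict very different numbers for that experiment:

```python
# Two simple scaling models for the 686 experiment described above:
#   CPU-bound benchmark: score scales with core clock.
#   Bus-bound benchmark: score scales with bus clock.
# Observed: 200 vs 233 at the same 75MHz bus scored virtually the same
# (no core-clock scaling), while raising the bus 75 -> 83MHz scaled the
# score by exactly 83/75.

cpu_bound_prediction = 233 / 200  # ~1.165x -- NOT what was observed
bus_bound_prediction = 83 / 75    # ~1.107x -- exactly what was observed

print(round(cpu_bound_prediction, 3), round(bus_bound_prediction, 3))
```

That pattern is consistent with a benchmark whose working set misses cache and is limited by memory traffic; it doesn't follow that general-purpose workloads scale the same way.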
Re:Great - more processor speed. Do we need it? (Score:3, Informative)
That you were running a single-threaded, computationally intensive task as a benchmark.
Dual CPUs are there to help parallelism. They won't show great increases on pure number-crunching. For example, my previous machine was a dual-533 Celeron, and it would be nice and responsive whilst running multiple apps, even if one of them (say, my MP3 encoder) decided to max out one of the CPUs.
Cheers,
Ian
Yes, of course we need the speed! (Score:2)
Yes of COURSE WE DO!
It's this whole DRM thing. I thought they had just lost their marbles and were pushing something that could never sell - but no.
They WANT YOU TO CRACK DRM, because cracking the keys will take a lot of processing power, and that means more high-spec machines.
Think about it: what other reason would you need the juice for? Only code cracking really eats major cycles, so it's all a cunning plan to sell high-spec equipment. Damn, they're clever.
Re:Great - more processor speed. Do we need it? (Score:1)
> dual core processor won't make much of a difference, just as processor speed doesn't -
> rather, what is important is the speed you can get data in and out of the processor.
> Overall CPU speed doesn't seem to make much of a difference when the bus speed is the same,
> certainly not in the systems I've tested. However, up the CPU bus speed, and you'll find
> your performance greatly improved, because you're getting data to the processor quicker.
> Some years ago, I tested this theory with a couple of old 686 chips - one 200, one 233. I
> benchmarked the 200 and 233 both at 75MHz bus - virtually identical results. Then I ran them at
> the same CPU speed, but 83MHz bus, and the benchmark results improved by exactly 83/75.
> What does this tell you?
It tells me that the benchmarks you use are not the same as the benchmarks I use. Here's my rundown:
Civilization III: The complexity of this algorithm increases exponentially based on number of cities. I'm running a 16 civ game at the moment, and the game literally takes more than twenty minutes to cycle a turn, and it's only the nineteenth century! Granted, my 800MHz Duron isn't state of the art, but it's not state of the fart, either, and even the top of the line processors would buckle under this stress.
PAR: I download very large binaries off usenet that are separated into multiple files. PAR is a tightly coded system that makes extra parity files that you can use to build missing files of a download set. It takes a *long* time to verify the parity on 800MB files.
RAR: This probably would be helped somewhat by faster memory access, but I suspect that extracting an 800MB multipart RAR set would also be strongly enhanced by a faster processor. I mean, it'd be nice to not have to wait five to ten minutes to extract this stuff before viewing it.
Qt: Compiling scales almost directly with processor speed, at least on some types of code. And I do a lot of compiling on Linux, and a fair amount on win32 as well. Compiling a large program can take many hours. When I type './configure && make' or 'qmake -project && qmake && make', I want to get up, prepare some coffee or munchies, come back with the yummies, and immediately test the newly created binary.
There are other programs that I use that depend on the processor, but those are some of the biggies. There are also programs that would benefit from faster data access, of course. But memory isn't really getting faster. It's just getting higher bandwidth. When you request data from PC2100 or PC800 memory, that data doesn't start coming back any sooner than it did with PC133 memory. It's just really expensive to increase the frequency of the entire northbridge and corresponding devices. That's why the microprocessor has been doing the majority of the speed boosts. It offers the most increase for the least expenditure.
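The latency-vs-bandwidth point can be made concrete with a toy transfer-time model (the latency and bandwidth figures below are illustrative ballpark values, not datasheet numbers):

```python
# Toy memory-access model: total time = first-word latency + size / bandwidth.
# Assume the same ~50ns first-word latency for both memory types, with peak
# bandwidths of ~1.06 GB/s (PC133 SDRAM) vs ~2.1 GB/s (PC2100 DDR).
# These are rough illustrative figures, not measured values.
def transfer_ns(size_bytes, latency_ns, bandwidth_gb_s):
    # bytes / (GB/s) conveniently comes out in nanoseconds
    return latency_ns + size_bytes / bandwidth_gb_s

# A single 64-byte cache line: latency dominates, so the two are nearly equal.
line_pc133 = transfer_ns(64, 50, 1.06)
line_pc2100 = transfer_ns(64, 50, 2.1)

# A 1MB streaming copy: bandwidth dominates, and DDR pulls well ahead.
bulk_pc133 = transfer_ns(1 << 20, 50, 1.06)
bulk_pc2100 = transfer_ns(1 << 20, 50, 2.1)

print(round(line_pc133), round(line_pc2100))  # ~110 vs ~80 ns: close
print(round(bulk_pc133 / bulk_pc2100, 2))     # ~1.98x: nearly 2x for bulk copies
```

Which is exactly the point: higher-bandwidth memory barely helps the pointer-chasing, cache-missing access patterns where latency dominates, but it shines on streaming work.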
-JC
http://www.jc-news.com/
in the other news... (Score:5, Informative)
c't magazine [heise.de]
A translation of a short excerpt: even early prototypes of AMD's Opteron can beat all the competition in four-way systems - either 32- or 64-bit - on the SAP SD benchmark. And that at only 1.6 GHz (it's planned to launch at 2 GHz).
I think the chart says it all. Go AMD!
Firecracker, Firecracker, ra ra ra! (Score:1)
Oh, wait. I forgot. They don't exist. And when they do, the competition will have advanced a considerable degree (people who cheerlead always assume that their hero's non-existent CPU will come out while the competition stays exactly where it is now).
Also, one benchmark doesn't mean squat.
Grr. (Score:3, Interesting)
It's good to see at least they're on the road to 65-nm fabrication. But it'd be nice if they breathed some more life into their current architectures. IA-64 docs are interesting reads, but the hardware just isn't terribly impressive in practice yet. (At least, kernel compiles felt like they took forever on my professor's dual IA-64 research boxes compared to
I'd like to see Intel do something New[tm] and Exciting[tm] on the home market. IA-64 is that, I'm guessing they just need to tweak existing setups or something. I love the feeling of having a processor architecture before me to dig into. (That's why I picked up an old EV56 machine for
Doesn't it defeat the purpose? (Score:3, Interesting)
On one hand you have the VLIW-type guys (or EPIC in Intel-speak), who increase parallelism at the instruction level, and on the other the multicore guys, who increase the number of instructions executed by having multiple cores running different tasks.
Whilst in principle I've got no problem with merging the two, I get the impression that by going the dual-core route Intel are admitting that they won't be able to get the kind of performance out of EPIC that they were promising.
Just a thought to consider.
Side Effect of Open Systems (Score:1)
some just dont get it... (Score:2, Informative)
the itanium series is designed with special applications in mind, including scientific work and datamining. keep in mind that 9mb of cache may be too big for the typical application, but for those high-end jobs where you want to, say, analyze an entire database and get statistics to determine trends, you might want to think again. a cpu with a higher clock rate but a small cache won't keep up with the competition there.
i would be pleased to see an amd opteron chip with at least 3mb cache in the market (maybe i can think about getting one of them.)
as for competition, i believe there are just three players right now, with ibm's power and sun's ultrasparc making up the rest. this is for the high-end arena.
and of course, the processor is just one variable in the equation. in the enterprise arena, you also need a good platform: it should be very scalable (with hundreds of processors in a system, and upgradability) and reliable (with 99.999% uptime and hot-swap components including cpu, memory, i/o cards, etc.). intel has good tools and partners for these, and amd will take some time to catch up (but i believe they will.)
intel has some good plans for itanium, including the dual-core cpu and even pin compatibility (although that doesn't mean it can be fitted into the old boards.) the thing is, intel is already gearing up for a battle in the enterprise arena. with its resources, it will be able to deliver much better products in the future.
i believe intel has lots of technologies lying around that we do not even know about. of course, you don't show all your cards at once; you wait for a threat and put them down one by one.
with the latest results, intel is doing well financially, compared to a growing loss for amd. their new hammer line will be a saving factor for them (a question still to be answered this year - and i'm excited about this.) and i'm sure intel already has a pentium 4 running at 5ghz lurking around their labs. they are just waiting for the new processor before we start a new ghz revolution.
Intel is NOT pushing back anything (Score:5, Interesting)
From CNet News [com.com]: they are actually going to release it FASTER than the previous schedule.
The dual-core Itanium deadline is now 2005 instead of 2007, and a new chip is being added for 2004.
Maybe the confusion arises from the fact that "Originally, Montecito, due in 2004, wasn't a dual-core chip, but it was morphed after engineering and manufacturing teams concurred that a dual-processor chip could be mass-manufactured at Intel by 2005."
It would be a good idea to change the headline!
Re:Intel is NOT pushing back anything (Score:2)
I'm going to go check if Tom's Hardware has their processor roadmap updated yet...
From the article.. (Score:5, Funny)
The Santa Clara, California-based company is the leading maker of processors, which serve as the brains of computers.
Ah, so that's what those things do..
Poor reporting strikes again. (Score:1)
<slashdork>Gee whiz, from my vast knowledge of the industry, I can see that Intel is going down the toilet. It takes them a whole year to design a dual-core processor! Egads!</slashdork>
Re:I vote lib-dem (Score:2)
Good ole Jean Cretien took out a protestor with a tiger claw to the throat. A classic photo for future generations. Gotta love it!
Re:I vote lib-dem (Score:1)