
Into the Core - Intel's New Core CPU

Tyler Too writes "Hannibal over at Ars Technica has an in-depth look at Intel's new Core processors. From the article: 'In a time when an increasing number of processors are moving away from out-of-order execution (OOOE, or sometimes just OOO) toward in-order, more VLIW-like designs that rely heavily on multithreading and compiler/coder smarts for their performance, Core is as full-throated an affirmation of the ongoing importance of OOOE as you can get.'"
  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Thursday April 06, 2006 @09:31AM (#15075403) Journal
    Ok, so I know I'm going to get a lot of AMD people agreeing with me and a lot of Intel people outright ripping me to shreds. But I'm going to speak my thoughts come hell or high water and you can choose to be a yes-man (or woman) with nothing to add to the conversation or just beat me with a stick.

    I believe that AMD had this technology [wikipedia.org] before Intel ever started in on it. Yes, I know it wasn't really commercially available on PCs, but it was there. And I would also like to point out a nifty little agreement between IBM and AMD [pcworld.com] that certainly gives them aid in the development of chips. Let's face it, IBM's got research money coming out of their ears and I'm glad to see AMD benefit from it and vice versa. I think that these two points alone show that AMD has had more time to refine the multicore technology and deliver a superior product.

    As a disclaimer, I cannot say I've had the ability to try an Intel dual core but I'm just ever so happy with my AMD processor that I don't see why I should.

    There's a nice little chart in the article but I like AMD's explanation [amd.com] along with their pdf [amd.com] a bit better. As you can see, AMD is no longer too concerned with dual core but has moved on to targeting multi core.

    Do I want to see Intel evaporate? No way. I want to see these two companies go head to head and drive prices down. You may mistake me for an AMD fanboi but I simply was in agony in high school when Pentium 100s cost an arm and a leg. Then AMD slowly climbed the ranks to be a major competitor with Intel--and thank god for that! Now Intel actually has to price their chips competitively and I never want that to change. I will now support the underdog even if Intel drops below AMD just to ensure stiff competition. You can call me a young idealist about capitalism!

    I understand this article also tackles execution types and I must admit I'm not too up to speed on that. It's entirely possible that OOOE could beat out the execution scheme that AMD has going but I wouldn't know enough to comment on it. I remember that there used to be a lot of buzz about IA-64's OOOE [wikipedia.org] processing used on Itanium. But I'm not sure that was too popular among programmers.

    The article presents a compelling argument for OOOE. And I think that with a tri-core or higher processor, we could really start to see a big increase in sales using OOOE. Think about it: a lot of IA-64 code comes to a point where the instruction stalls as it waits for data to be computed (in most cases, a branch). If there are enough cores to compute both branches from the conditional (and a third core to evaluate the conditional) then where is the slowdown? This will only break down on a switch-style statement or when several if-thens follow each other successively.

    In any case, it's going to be a while before I switch back to Intel. AMD has won me over for the time being.
    • As an old hippy socialist Apple user I completely agree with pretty much everything you said. Although some of the ideas that caught my interest in the IBM PPC970 apparently can be found in this latest Intel offering... still I'm mostly in the middle of my upgrade cycle... and I expect quad cores before I upgrade my PowerMac and a second generation of widescreen MacBook before I update my laptop.

      Also I would be interested in a Cell / Power based content creation workstation --- but not from Sony, I've give
    • If there are enough cores to compute both branches from the conditional

      I don't see how that could really be useful. I mean, if you were computing instructions on a one-by-one basis, then perhaps that would work, but you fill the pipe, then find out the prediction is wrong, so you go to the other CPU; however, when you look at the bigger picture you realize that you are essentially crippling one CPU by dedicating it to doing something other than actually processing.

      Intel's CPU branch prediction is already k
      • If there are enough cores to compute both branches from the conditional
        I don't see how that could really be useful.
        Doing it with multiple cores would probably be a waste, but isn't that what the IA64's predicated execution is all about? To avoid pipeline bubbles it executes both paths from the branch, and once the branch condition is known the results from the not-taken path are thrown away.
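
        To make the shape concrete: in plain C, predication is just "compute both, keep one" (a rough illustration of the idea only, not how IA-64 actually encodes it):

          /* Branchless select: both "paths" get computed, then one result
             is kept, so the front end never has to predict anything.
             IA-64 does this with predicate registers on real instructions;
             this is only the C-level shape of the idea. */
          #include <stdint.h>

          int32_t select_branchless(int cond, int32_t on_true, int32_t on_false)
          {
              int32_t mask = -(int32_t)(cond != 0); /* all ones if cond, else zero */
              return (on_true & mask) | (on_false & ~mask);
          }

        The win is that the pipeline never bubbles on a mispredict; the cost is that you always pay for both sides.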
        • by default luser ( 529332 ) on Thursday April 06, 2006 @12:22PM (#15077088) Journal
          Right, there are two camps for the "high-end" branch prediction concept:

          Camp 1: devise adaptive, multi-component prediction systems that offer both fast and accurate branch prediction. Waste hardware purely for branch prediction.

          Camp 2: Use the compiler hint if available, otherwise execute both paths, and throw away the incorrect processing path. It seems cheaper on the surface, but you have to realize: all that extra fetching to process both paths in reasonable time means more fetch bandwidth and more execution units required just to keep up.

          Obviously, if your code contains lots of branches that cannot be predicted by the compiler hints, the Camp 2 solution is going to perform worse. The advantage of active branch prediction is that you never have to recompile the code to keep the branch hints "optimized" if your datasets change.
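
          For what it's worth, the compiler-hint half of Camp 2 survives on OOO hardware too: GCC exposes it as __builtin_expect. A small sketch (and, as above, it only pays off while the hint still matches the data):

            /* Static branch hint via GCC's __builtin_expect: tells the
               compiler which way the branch usually goes, so it can lay the
               hot path out as the fall-through.  Stale data, stale hint. */
            #define likely(x)   __builtin_expect(!!(x), 1)
            #define unlikely(x) __builtin_expect(!!(x), 0)

            int scale(const int *p)
            {
                if (unlikely(p == 0))   /* hint: the error path is rare */
                    return -1;
                return *p * 2;
            }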

          It doesn't really matter which camp you choose, because both camps waste space on a Branch Target Buffer (predicts the TARGET of the branch) anyway, and that's often more costly than the branch direction predictor. Even the Itanium has a BTB, that's how it can instantly start executing the "branch taken" case.
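
          A BTB really is just a tagged table keyed by the branch's own address. A toy direct-mapped version (nothing like a shipping design; just to show where the storage goes):

            /* Toy direct-mapped Branch Target Buffer: remembers the last
               target seen for the branch at a given PC, so the fetcher can
               redirect before the branch even decodes. */
            #include <stdint.h>

            #define BTB_ENTRIES 1024
            struct btb_entry { uint32_t tag; uint32_t target; };
            static struct btb_entry btb[BTB_ENTRIES];

            static int btb_lookup(uint32_t pc, uint32_t *target)
            {
                struct btb_entry *e = &btb[(pc >> 2) % BTB_ENTRIES];
                if (e->tag != pc)
                    return 0;            /* miss: fetch falls through */
                *target = e->target;     /* hit: redirect fetch next cycle */
                return 1;
            }

            static void btb_update(uint32_t pc, uint32_t target)
            {
                struct btb_entry *e = &btb[(pc >> 2) % BTB_ENTRIES];
                e->tag = pc;
                e->target = target;
            }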

          The Itanium is just taking advantage of a serious architectural flaw to perform branch prediction. Even modern compilers are inserting 20% or more "no-ops" into the instruction stream, so why not take advantage of that underutilization? On any other platform, it would be a very stupid approach to branch prediction.
    • by tpgp ( 48001 ) on Thursday April 06, 2006 @09:58AM (#15075642) Homepage
      I will now support the underdog even if Intel drops below AMD just to ensure stiff competition. You can call me a young idealist about capitalism!

      Hmmmmn, I think I'll actually call you someone who needs to read up a bit on both idealism and capitalism!

      Also, on a somewhat related note - never care about a company, because the company cannot reciprocate your feelings.

      If Intel comes out with a better, cheaper processor tomorrow, don't buy the AMD one, buy the Intel one. There is no point treating a company like a person.
      • I vote with my money: I consider Intel to behave badly, so I don't buy Intel if there's a reasonable alternative.

        This is personal responsibility: I will try to avoid moving resources to a company that behaves badly, instead trying to move those resources to where they do good.

        Eivind.

      • by evilviper ( 135110 ) on Thursday April 06, 2006 @10:50AM (#15076041) Journal
        If Intel comes out with a better, cheaper processor tomorrow, don't buy the AMD one, buy the Intel one. There is no point treating a company like a person.

        Clearly you've never heard of a boycott, picket, or any other similar form of consumer revolt.
        • Those forms of consumer revolt deny the company money. Just like denying a car gasoline this will cause the company to eventually stop functioning. I think the OP made a very good point. There is no point in treating a company like a person. Companies are also completely amoral.
          • Those forms of consumer revolt deny the company money. Just like denying a car gasoline this will cause the company to eventually stop functioning.

            The car, however, doesn't know you're going to stop giving it gasoline if it doesn't do what you want, and can't possibly respond. So TERRIBLE analogy. A company is certainly far closer to a human than a mindless machine.
            • You could also say the car doesn't have a board of directors, so: terrible analogy. That's not the point. Analogies can hardly ever be taken beyond the immediate context of what is being demonstrated, and that is that both companies and cars need something to make them go and without it they stop.
              • That's not the point.

                I obviously missed the point. In fact I still do. I guess I gave you too much credit, assuming you were trying to make a point about the inhumanity (amorality as you said later) of corporations.

                ...both companies and cars need something to make them go and without it they stop.

                If that really was your point, it's so banal and insipid that I can't understand why you even went to the trouble of posting it.

        • Clearly you've never heard of a boycott, picket, or any other similar form of consumer revolt.

          You mean those things that have an almost insignificant effect compared to market forces the vast majority of time? Until Intel starts killing dolphins, only a handful of nerds are going to care.

      • by Anonymous Coward
        If Intel comes out with a better, cheaper processor tomorrow, don't buy the AMD one, buy the Intel one. There is no point treating a company like a person.

        You missed his point entirely. You're advocating a short-term, passive outlook, while the GP is advocating a long-term, active one. If you buy from whoever is less expensive now, you get the benefit of saving money on this purchase and every purchase from them until they decide to raise prices. And that will be shortly after they snuff all the competition out
      • If people would buy the better, cheaper processors, then x86 would have been long gone.
      • If Intel comes out with a better, cheaper processor tomorrow, don't buy the AMD one, buy the Intel one. There is no point treating a company like a person.

        What the grandparent was saying is that it's good to support the underdog for the sake of the future. If Intel comes out with an amazing chip and everyone stops buying AMD, then AMD goes out of business. What happens to development at Intel? It slows. What happens to prices at Intel? They increase. Eventually this will get so bad that it becomes
      • Also, on a somewhat related note - never care about a company, because the company cannot reciprocate your feelings.

        If Intel comes out with a better, cheaper processor tomorrow, don't buy the AMD one, buy the Intel one. There is no point treating a company like a person.


        Well, the poster specifically said he did not care about either company, just that there was still competition. And I think there is an assumption of parity when you suggest buying the product from the company with less market share.

        Especially, as yo
    • by DesertWolf0132 ( 718296 ) on Thursday April 06, 2006 @10:04AM (#15075693) Homepage

      As you fear a beating from the Intel side, after what I say I fear I will receive a beating from both.

      In my personal experience the AMD chips have been the fastest systems I have ever owned. My problem with them is the boards made for them (this is personal experience only) tend to become unstable after a couple of years. Intel boards, in my experience, stay stable longer.

      For example, I have two 5 year old systems, one with a Gigabyte AMD Athlon board, and one with a true Intel P3 board. Both run Slackware. Both have insane cooling so the board temps never go over 100 degrees. The Athlon board system will occasionally reboot for no reason. The Intel board system has run for months without ever needing to be touched. The last time I brought it down was for a power outage that lasted longer than the battery on my UPS. I have tested everything on the Athlon system. The power supply is solid, the hard drive is new and the second one I have installed, none of the controllers test bad, and while it is running nothing tests bad using diagnostics. Then it suddenly reboots.

      One would think this an isolated incident but I have built 6 Athlon systems in the last 5 years for friends and only two are still stable. All of the Intel systems I have built with true Intel boards in the last 15 years are still running, including a 486 DX/2 66. I know this is personal experience only and not a good enough sample to make any real judgement but as for me, I pick Intel. That said, I believe the problems I have had with AMD come from the fact that none of the boards are made by AMD. If AMD made a board up to the same standards as its CPU I believe my opinion would change in a heartbeat.

      You may commence my flogging now...

      • Really? In my experience, the only motherboards I've had to replace were Intel, one just this last week, an 845BG that's only 2 1/2 years old. But I also have two Dell Dimension 800s that I use as servers which are up constantly, so I guess my opinion is that quality went down in the Pentium 4 era.
      • What sort of tests have you run on it? My home machine would misbehave occasionally, with random applications crashing. I tested it with Memtest86+, and it didn't find any problems. Since I run Linux, I tried repeated kernel compiles next. Doing that, I found it couldn't manage more than two or three complete compiles without the compiler failing. In my case, re-arranging my DIMMs cleared it up. But since one of the DIMM's was bought at fire sale prices at a computer surplus show, I don't think I can blame
      • My problem with them is the boards made for them (this is personal experience only) tend to become unstable after a couple of years. Intel boards, in my experience, stay stable longer.

        You're comparing one brand of motherboard (Intel) with a very large GROUP of motherboards (any Socket-A compatible). For it to be fair, you'd have to compare something like Asus Intel motherboards to Asus AMD motherboards.

        That said, I believe the problems I have had with AMD come from the fact that none of the boards are made

        • >Guess what, Intel doesn't make motherboards either. They contract with Asus or another company to sell their
          >motherboards with the Intel brand on it.

          Two points:
          1. Intel designs chipsets for their CPUs. AMD designed one, a while back, and otherwise relies on third parties.
          2. Intel may well have designed, engineered, and spec'ed the board, regardless of who makes it.

          So this is really a statement that Intel has better control of delivering their CPU capabilities to the end user than AMD, independent of the r
      • For example, I have two 5 year old systems, one with a Gigabyte AMD Athlon board, and one with a true Intel P3 board.

        I used to like Gigabyte boards ever since I got my old TX board for my P200. But I don't think I'll be buying any more Gigabyte boards for now:
        • My server had a Gigabyte GA7VAX (I think) which went up in a cloud of smoke (capacitors blew - lots of smoke).
        • My MythTV box has one of the smaller Gigabyte Athlon/VIA boards. WoL doesn't work even though there's an option in the BIOS, ACPI S3 mode do
        • FWIW, here's another vote against Gigabyte. I was putting together an AMD system about 6 months ago, and made the mistake of buying a Gigabyte board with NForce 4 SLI. I don't know why...I have no intention of sticking two graphics boards into my PC...it was one of those stupid impulse buys. Anyway, the board runs OK, but has a major heat management issue: the NForce chipset has a thin heatsink and no cooling fan, and there is no room to put one in because the graphics card projects over the heatsink for th
      • Meh. I haven't noticed any real difference between the two. Really, the only mainboard failure I remember hitting me was the bad-capacitor problem, which was due to the mobo and capacitor makers cutting corners.

        I don't overclock and make sure to keep my machines reasonably cool. I try to use good power supplies.
      • You didn't mention motherboard manufacturers, and those have a ton of influence on system stability. Those cheap-ass Via motherboards are just that - cheap.

        If you buy a decent mb (nvidia nForce 4 is my personal pick, in a mini configuration to fit in a shuttle xpc), then you're good to go with a rock solid system.

        I think the actual processor is rarely the problem, unless you have cooling issues.

      • There was no way to make the VIA KT133A systems stable. It wasn't AMD's fault. It's maddening too, because you can almost get them stable, but not quite. I had two different KT133A motherboards (one Tyan, one Asus), no difference. I finally replaced mine with a later KT (KT266A?) and it became stable.

        Look up the problems with Sound Blaster sound cards; they exhibit the problem, but it wasn't Creative's fault.

        I've had the same experience as you. I've alternated Intel and AMD, and except for the KT133A they've
      • You know, I totally agree here. I use Opterons professionally and love the 280s we are using right now. But even in high-end server boards, I've had the best luck with the quality with which Intel reference servers and workstations are implemented.

        For example: Chang Sing Song Gung FUTECH Bloody Monster board gets a few BIOS updates and is forgotten. Off to the next chipset. Intel designs are supported for eons; I still have a PPRO VS440FX motherboard with a BIOS that came out MANY years after the board
    • Then cut out all the "personal revelation" nonsense. You are trying to write a comparison between an Intel processor and an AMD processor and don't see why you should try an Intel processor first? You like AMD's explanation better? This isn't a matter of who can write the most entertaining copy. What does the "support the underdog" sentence even mean?
    • It's entirely possible that OOOE could beat out the execution scheme that AMD has going but I wouldn't know enough to comment on it. I remember that there used to be a lot of buzz about IA-64's OOOE processing used on Itanium. But I'm not sure that was too popular among programmers.

      There is nothing new with Out of Order Execution. It's been implemented in all the Pentium cores as well as AMD chips from the K6 (I think) on up. In fact, the reason why going to multi-core designs is necessary is because i

      • Just to clarify a few things:

        1) OOO was not used in the original Pentium. It debuted (for Intel) in the P6 family. This includes the Pentium Pro, PII, & PIII. The P4 is also OOO, but is not a P6 derivative.

        2) Super-scalar does not require multiple pipelines. The term refers to the ability to run simultaneous execution units, but these can be fed in various ways. In the Pentium, there were indeed two separate pipelines. However, the P6 dispatches micro-ops (uops) to multiple execution engines from
        • Using the compiler to pre-extract parallelism simplifies the hardware, but a single binary won't be optimized for all CPUs within the same family.

          It's been a while since I looked at the IA64 architecture, but ISTR it addressed this issue by tagging instructions. I.e. an instruction word contains 3 instructions which are executed in parallel and a tag. Other instruction words containing the same tag are allowed to be executed in parallel too. So your basic processor can execute 3 instructions in parallel
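
          Sketching what I remember as data structures (a hypothetical encoding; the real IA-64 format is 128-bit bundles with template and stop bits, but the principle of compiler-marked parallelism is the same):

            /* Hypothetical bundle/group encoding as described above: three
               instruction slots plus a group tag; consecutive bundles that
               share a tag may all issue in the same cycle. */
            #include <stdint.h>

            struct bundle {
                uint64_t insn[3];    /* three instructions, run in parallel */
                uint8_t  group_tag;  /* equal tags => same issue group */
            };

            /* How many bundles, starting at b[0], can issue together? */
            static int issue_width(const struct bundle *b, int n)
            {
                int i = 1;
                while (i < n && b[i].group_tag == b[0].group_tag)
                    i++;
                return i;
            }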
    • I believe that AMD had this technology before Intel ever started in on it. Yes, I know it wasn't really commercially available on PCs but it was there.

      Perhaps (I'm not sure what "this technology" refers to), but so what?

    • Ok, so I know I'm going to get a lot of AMD people agreeing with me and a lot of Intel people outright ripping me to shreds.

      It is interesting that you start your comment by trying to build a dichotomy. Almost all the responses to your comment have been from people who (unlike you) don't care about the companies, only the products and results.

      As a disclaimer, I cannot say I've had the ability to try an Intel dual core but I'm just ever so happy with my AMD processor that I don't see why I should.

      Oka

      • Intel is fucking sleazy. They weren't even going to replace FDIV-bug processors until the market screamed at them so loudly that they realized that AMD would eat them within a year if they didn't fix it, because no one would ever trust them again. Capitalism is one thing, but if you just follow simple capitalism (buy the cheapest product that does what you want) then you're rewarding bad behavior, too. AMD is simply better at doing what they say than intel, so even if AMD cost a bit more, I would want to pa
    • ...but I simply was in agony in high school when Pentium 100s cost an arm and a leg.

      I'm not going to beat you for choosing AMD over Intel. I'm not going to beat you for claiming that your loyalties will switch at a moment's notice. I'm not even going to beat you for getting a well-modded first post.

      No, no, no. I'm going to beat you for being a brat.

      Why? Because when I was in high school, I simply was in agony because 6502s cost an arm and a leg.

      whippersnapper.
    • by ciroknight ( 601098 ) on Thursday April 06, 2006 @11:31AM (#15076535)
      I believe that AMD had this technology [wikipedia.org] before Intel ever started in on it.

      No offense, but you lost me right about here. The Athlon 64 and Opteron (and the Clawhammer/Sledgehammer chips as a whole) are fundamentally a whole different direction than the Core Duo. While they're aiming towards the same goals (really damned fast x86 code execution), they get there in two entirely different ways.

      The idea behind the Athlon 64 and Opteron chips was to attack Intel where it would hurt them most, the midrange server section of their business. AMD realized that Intel sells more of these machines, and the maintenance contracts on these machines mean that they're going to keep coming back to you for more of them, even 5 years down the line when your chips are virtually "obsolete". This is broadcast very loudly in their choice to integrate a memory controller onboard their CPUs; in order to upgrade chips with an integrated memory controller, you have to replace the whole board, and managers aren't going to want to do that very often. Your chips are cheaper overall (because they don't have to have external logic to drive the memory controller anymore, and they were cheaper to begin with), but it locks you into AMD as a company, and locks you into that chip (a slam dunk victory for AMD).

      The Intel Core philosophy was something completely different; it was reactionary in the sense that the Pentium 4 and Netburst were sputtering to the end of their performance gains, way earlier than Intel could have predicted. But at the same time, Intel has always been known to make great mobile chips, and the Intel Core Architecture was built on a mobile chip platform. It was the logical choice, even in March 2003 when the Pentium M/Core Architecture first made itself available to the world as Banias. The Athlon 64 didn't even make itself available on the market until April (Opteron) or September (Athlon 64) of that year.

      Better late than never? Yeah, of course. But the point is, the Opteron was meant to be a server chip and take back the market from Intel and is completely succeeding. The Core chips were entirely meant to be Mobile chips, and due to technology trickledown, we're starting to see that Mobile chips are just as much at home in desktop computers.

      And, I know you weren't trying to make yourself out to be a complete and total AMD fanboy in your post, but you entirely came off that way, especially without knowledge of the product itself. I don't care particularly for either company, just the fastest chips I can possibly get my hands on, and right now that's the Athlon FX, but in a few months that's going to be Conroe.
      • This is broadcast very loudly in their choice to integrate a memory controller onboard their CPUs; in order to upgrade chips with an integrated memory controller, you have to replace the whole board, and managers aren't going to want to do that very often.

        I have yet to see any enterprise server that has had its "chips upgraded" beyond installing a 2nd CPU in a SMP-capable system with only one processor shipped. Never. Not once.

    • I'll post some corrections.

      It's entirely possible that OOOE could beat out the execution scheme that AMD has going but I wouldn't know enough to comment on it. I remember that there used to be a lot of buzz about IA-64's OOOE processing used on Itanium. But I'm not sure that was too popular among programmers.

      Out-of-order execution has been standard on x86 processors since the Pentium Pro. Itanium doesn't have OoO, at least so far. Its goal has been to reduce hardware complexity by letting the compiler h

    • "It's entirely possible that OOOE could beat out the execution scheme that AMD has going but I wouldn't know enough to comment on it. I remember that there used to be a lot of buzz about IA-64's OOOE [wikipedia.org] processing used on Itanium. But I'm not sure that was too popular among programmers."

      Let's clarify a few things: A processor executes instructions either in-order (including VLIW processors), or out-of-order. The former is much simpler to implement, but the latter is much more powerful. Why? b

    • I don't think you sound like a fanboi at all.
      Run what's best for you.

      I completely agree that competition is a Good Thing, not just for pricing: as long as neither company gets too far ahead of the other, they're also both working like mad to design better products for you and me to use.

      As for myself, I've historically been known to purchase Intel for my desktop machines, mostly because I prefer to have my chipset designed by the same company as my CPU.
      I recently bought this notebook, and I had the choice
  • by escay ( 923320 ) on Thursday April 06, 2006 @09:34AM (#15075435) Journal
    overheard at the intel core processor design lab:

    "Brian, there's a message in my cereal! it says OOO..."

  • by digitaldc ( 879047 ) * on Thursday April 06, 2006 @09:41AM (#15075507)
    Do you think that when we open up a new Apple(TM), we will find a Core(TM)?
  • processors are moving away from out-of-order execution toward in-order, more VLIW-like designs that rely heavily on multithreading and compiler/coder smarts for their performance

    I have stopped following processor developments recently, but is this really happening? Between Intel and AMD, the only chip I've heard of that was VLIW and relied on the compiler was Itanium, and I don't see why AMD or Intel would risk repeating that mistake* again any time soon.

    *I don't think Itanium was a mistake because it re
  • and for the dyslexics out there: AAH and OOO
  • by MECC ( 8478 ) * on Thursday April 06, 2006 @10:36AM (#15075912)

    "You can give your heart to Jesus, but your ass belongs to the Core!"

  • Article summary (Score:5, Insightful)

    by Animats ( 122034 ) on Thursday April 06, 2006 @11:57AM (#15076837) Homepage
    Here's the short version:
    • Intel has a new x86 CPU coming out. It's basically an improved version of their last few CPUs, but because fabs have improved, they can fit more execution units in.
    • The wide "vector"-like instructions now have real 128-bit execution units.
    • There's a new branch prediction scheme for loop exit, which seems clever (a toy version of the general idea is sketched below).
    • Hoisting of loads from an unknown address is now performed more speculatively than it used to be, at the cost of some complexity in the retirement unit.
    • The author of the article has no clue that the retirement unit is the hard part. That's where all the hard cases end up being unwound.
    • No benchmarks yet.

    That's what's in there.
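
    On the loop-exit point: the usual trick is a trip-count predictor that learns how many times the loop branch is taken and then predicts not-taken exactly on the exit iteration. A toy version (Intel hasn't published Core's actual mechanism, so treat this as the general idea only):

      /* Toy loop-exit predictor: remember the trip count from the loop's
         previous execution, predict "taken" until it is reached, then
         predict the exit.  Real designs sit alongside other predictors. */
      #include <stdint.h>

      struct loop_pred {
          uint32_t trip_count;  /* learned taken-count per execution */
          uint32_t seen;        /* taken branches so far this time   */
      };

      static int predict_taken(const struct loop_pred *p)
      {
          return p->seen < p->trip_count;
      }

      static void train(struct loop_pred *p, int taken)
      {
          if (taken) {
              p->seen++;
          } else {
              p->trip_count = p->seen;  /* loop exited here; remember it */
              p->seen = 0;
          }
      }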

    • Re:Article summary (Score:5, Informative)

      by Hannibal_Ars ( 227413 ) on Thursday April 06, 2006 @03:13PM (#15078796) Homepage
      "Hoisting of loads from an unknown address is now performed more speculatively than it used to be, at the cost of some complexity in the retirement unit."

      I think you mean, "hoisting of loads above a /store to/ an unknown address." If you're going to pretend to school little old clueless me about the complexities of memory reordering and retirement then at least learn the difference between a load and a store.
  • Abstracted from the Hacker's Dictionary:

    core
    n. Main storage or RAM. Dates from the days of ferrite-core memory; also still used in the UNIX community and by old-time hackers or those who would sound like them. Some derived idioms are quite current; `in core', for example, means `in memory' (as opposed to `on disk'), and both {core dump} and the `core image' or `core file' produced by one are terms in favor.

    =

    If now Intel has gone to using old ferrite core memory to perform CPU f
    • On the other hand, 'core' is also used to refer to individual CPUs on the same silicon die. Thus 'Core Solo' and 'Core Duo' show the same kind of imaginative brilliance that brought us the word processor called 'Word' and the windowed user interface called 'Windows'. I'm looking forward to what the ingenious marketroids think of next.
    • I like name recycling. Look up the last award-winning movie with the name "Crash" and see what I mean... when my parents said they went out and rented "Crash" I did a double-take.
  • Core will excel on the types of applications that will make up the vast majority of server and consumer code in the near to medium term. And because it's designed for relatively low core-count multicore, it will help the software industry gradually make the transition to multithreaded code.

    Ok, the conclusion is off: single-threaded cores are for the desktop. Multicore is when you need the highest connections with the lowest latency. Hyperthreading helps by fighting memory latency, but they haven't put hypert
  • The Intel 'core' architecture looks like it is focused first and foremost on single-core performance but with improved dual-core capabilities beyond the current 'netburst' architecture which was essentially single-core only. For example, the new core memory aliasing described on the last page of the article doesn't look like it will scale very well with more than two cores and even two cores will have performance hits, although it will be much better than the current Intel dual-core processors. The core a
  • by jinxidoru ( 743428 ) on Thursday April 06, 2006 @01:13PM (#15077625) Homepage
    This is slightly off-topic, but can someone please tell me why Intel continues to have so few registers? I have done some assembly work on x86 and it is always such a chore because I spend 75% of the time moving data in and out of registers. I would love to at least be able to do a double for loop without having to move my iterators. It's just so frustrating.
    • IIRC, 64-bit mode has twice as many registers. In general, adding architectural registers increases the code size, which may reduce performance due to I-cache pressure. Modern x86 processors have good store-to-load forwarding so that spills and fills are fast; I imagine Intel and AMD are not very concerned with ease of assembly programming these days.
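
      To picture the pressure: 32-bit x86 has only 8 GPRs, several of them effectively reserved, so a loop with more live values than that forces spills. A small example (illustrative only; actual spilling depends on the compiler):

        /* With 8 accumulators plus i, n, and two pointers live at once, a
           32-bit x86 compiler must spill some of these to the stack and
           reload them each iteration; x86-64's extra r8-r15 registers are
           aimed at exactly this. */
        long dot8(const long *a, const long *b, long n)
        {
            long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
            long s4 = 0, s5 = 0, s6 = 0, s7 = 0;
            for (long i = 0; i + 8 <= n; i += 8) {
                s0 += a[i]   * b[i];    s1 += a[i+1] * b[i+1];
                s2 += a[i+2] * b[i+2];  s3 += a[i+3] * b[i+3];
                s4 += a[i+4] * b[i+4];  s5 += a[i+5] * b[i+5];
                s6 += a[i+6] * b[i+6];  s7 += a[i+7] * b[i+7];
            }
            return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
        }

      Good store-to-load forwarding makes those spills cheap, which is part of why the small register file hurts less than you'd expect.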
      • I imagine Intel and AMD are not very concerned with ease of assembly programming these days.

        It's not the ease that is the issue. The issue is the fact that you have to move things in and out of the CPU a lot more because there are not enough registers. Take a look at the clock ticks involved and you'll see that this is actually significant when doing highly processor-bound calculations.
