AMD Says Power Efficiency Still Key

Larsonist writes to tell us that even though AMD's new architecture won't be released until mid-2007, they are still letting people in on what some of the new features will be. From the article: "While clock speeds have not been revealed, each of the four cores will integrate 64 KB L1 Cache and 512 KB L2 cache. The native quad-core architecture will also include a 2 MB shared L3 cache, which may increase in capacity over time. The processor will have a total of four Hypertransport links - up from three today - that provide a total bandwidth to outside devices of 5.2 GB/s. AMD is also thinking about integrating support for FB-DIMMs 'when appropriate.'"
  • by Pharmboy ( 216950 ) on Saturday September 02, 2006 @07:13PM (#16031037) Journal
    But will it run Vista? (j/k)

    Is Vista going to support 4 cores, or like XP Pro and 2k, limit it to 2 "cpus" so they can charge more for the server version?
    • by adisakp ( 705706 ) on Saturday September 02, 2006 @07:16PM (#16031045) Journal
      MS licensing is currently for physical CPUs, not for cores. Right now a dual Xeon (two CPUs) counts as two CPUs in MS licensing terms, but a dual-core (two CPUs within a single die or processor socket) is one CPU under MS licensing terms.

      In other words, MS counts sockets, not cores.
      • by Pharmboy ( 216950 ) on Saturday September 02, 2006 @07:29PM (#16031074) Journal
        If I remember right, there was some controversy over Oracle counting a dual core as two CPUs, and they backed off. If we see 4 or more cores, the need for multiple sockets goes down, and I just wonder if MS will reconsider this licensing to prevent "lost revenue".
        • by Anonymous Coward on Saturday September 02, 2006 @07:54PM (#16031135)
          Microsoft will not charge per core anytime soon. They might have a dominating position on the desktop, but when it comes to servers, they're fighting tooth and nail to get a position of respectability, let alone dominance.

          So, let's look at the two markets separately.

          On the desktop, users are likely not to care too much, provided the per-core cost is low enough. When we start seeing 4/8/16-core CPUs, a $10-per-core fee will add up quickly, but most home users will be using OEM copies and won't see that cost. Most businesses will have site licenses and won't care. But some home users and some businesses will care, and they'll seriously consider alternatives (maybe switching, maybe not). Microsoft would much rather "lose" money by not fleecing people than have them even CONSIDER switching, so management is going to ditch the idea for desktops and workstations.

          The server market... they need any advantage they can get. Their main competition is Solaris, BSD, and Linux. Linux and BSD are *free*, and Solaris has a bunch of good features which are pretty much Solaris-only, even still. Charging per core would be suicide in this market, too.

          So, in what market would charging per core be a good idea for Microsoft? None. Say what you will about their software-writing abilities, but nobody should doubt their marketing prowess.
          • Their marketing is poor. Remember the "if Microsoft designed the iPod box" contest?

            At this point, they are the de facto standard, so they don't need good marketing.
          • Re: (Score:3, Interesting)

            I agree that Microsoft (or likely anybody else) won't change to a per-core pricing model, but for different reasons. The point of per-CPU pricing is just to segment the market - the type of user/computer:

            1 CPU = most laptops and desktops, low end servers
            2 CPU = high-end workstations, average servers
            4 CPU = high-end servers

            As the number of cores ramps up, as you said, to 4/8/16, then charging per core would be like charging per GHz or per L2 cache size - it doesn't make sense, adding cores will just
            • '' I agree that Microsoft (or likely anybody else) won't change to a per-core pricing model, but for different reasons. The point of per-CPU pricing is just to determine the market - the type of user/computer:

              1 CPU = most laptops and desktops, low end servers
              2 CPU = high-end workstations, average servers
              4 CPU = high-end servers ''

              Nice that the MacBook I bought is a high-end workstation.
          • by eggoeater ( 704775 ) on Saturday September 02, 2006 @09:02PM (#16031301) Journal
            but when it comes to servers, they're fighting tooth and nail to get a position of respectability, let alone dominance.
            Are you kidding?
            I work for a very LARGE bank. I guarantee you we have more boxes running MS Server 2003 than all others combined.
            We have some HPUX, IBM, and Sun sprinkled in there, but several of the vendor apps I've worked with lately have dropped all support
            for SunOS and now require Windows Server.
            We used to run Novell for our file servers, but that was dropped for MS Active Directory.
            The next leading server OS we use is probably z/OS. No Linux yet, but I know the powers-that-be are looking at it.
            (BTW, I'm no MS fanboy, I'm just making a point.)

            I'm sure there are many industries in which MS does not have the majority of the server market but
            large financial groups are not among them.

            • Re: (Score:3, Funny)

              by Nutria ( 679911 )
              we have more boxes running MS Server 2003 than all others combined.

              Is that because you need more Server 2k3 boxes to get the job done?

              • by eggoeater ( 704775 ) on Saturday September 02, 2006 @10:38PM (#16031509) Journal
                Is that because you need more Server 2k3 boxes to get the job done?
                Outside of the obvious joke, this is actually true, but probably not for the reasons you think.

                Real reason: politics. One of the problems with server apps is that they tend to be "critical" to the normal operations of the business.
                This means if you tell someone, "Hey, your server is under-utilized so we're going to put this other group's app on there...",
                the shit is definitely going to start flying. (e.g. "that server came out of my budget..." etc.)

                Ask anyone where I work (or probably where anyone works) and they would say that it would be the end of the world to have to share a server.
                In truth, it's all too easy for someone to mess up and bring a server to its knees.
                In the group I work in, we have probably 80 servers (yes, they all run MS), and none of them can be shared because that would violate the SLA we have with Cisco. They won't support the app if anything else is running on the server.
                (Can't say I entirely blame them.)

                The solution to this problem is virtualization; I've been telling my co-workers that the future of servers is virtualization.
                Until we can really make it work politically, we'll be running racks and racks of servers that mostly run 90+ percent idle.

                • Re: (Score:3, Insightful)

                  by jelle ( 14827 )
                  "and none of them can be shared because it violates the SLA we have with Cisco."

                  But _that_ is not politics, it's because Cisco understands windows servers and how badly they handle more than one task.

                  • Agreed. I was just trying to make another point about server under-utilization.
                    Virtualization will allow a MIN and MAX amount of CPU power to go to each machine.
                    Cisco doesn't allow for that as an exception to the SLA, but they should.


                  • Any examples or references?
                • by Nutria ( 679911 )
                  Until we can really make it work politically, we'll be running racks and racks of servers that mostly run 90+ percent idle.

                  And geeks wonder why MSFT is gaining market share and has 100 jillion dollars in the bank...

            • by Heembo ( 916647 )
              I'm sure there are many industries in which MS does not have the majority of the server market, but large financial groups are not among them.

              I hate to agree with your logic, but you are right on. And I would go even further than that, MS rules the big finance companies on the desktop as well in more ways than one.

              I have a client, a high-end billion-dollar equity fund, who is paying me out the nose to write VBA for Excel! I let them know I'm a J2EE guy, I'm a web applications architect, I
            • Compare that to the pharma, biotech, and chemical industries where dependence on Windows is the kiss-of-death. Certain segments of the telecom sector are the same way today. The financial industry is Microsoft's last bastion (and even then, most of the exchanges have moved to Linux or UNIX).
            • I work for a very LARGE bank. I guarantee you we have more boxes running MS Server 2003 than all others combined.

              It can't be a very large bank if one person knows what OS is installed on all the bank's servers.

              I have enough trouble keeping track of the servers required to run our one application, let alone keeping track of every server used by every application in each of dozens of subsidiaries across 80 operating countries.
    • Re: (Score:2, Funny)

      by Virtex ( 2914 )
      But will it run Vista?
      Sadly, no. By the time Vista comes out, AMD and the rest of the world will have long since moved beyond quad-core processors.
    • Comment removed (Score:4, Insightful)

      by account_deleted ( 4530225 ) on Saturday September 02, 2006 @07:50PM (#16031127)
      Comment removed based on user account deletion
      • by hackstraw ( 262471 ) * on Saturday September 02, 2006 @11:04PM (#16031554)
        "processor power factor" (think Oracle)

        I'm not sure if Oracle still does this, but they used to have almost voodoo math to figure out how much you owe Oracle. It was something like X per CPU, then that value multiplied by a scale for the type of CPU (at the time, RISC vs. CISC), and then another multiplier for the amount of RAM on the box.

        When I heard that, I always suggested under-speccing the box and then silently upgrading it after the Oracle guys left. I believe the technical term is "sliding scale," which means the more you can afford, the more you will pay.
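
        A back-of-the-envelope sketch of that kind of formula in C (every name and number here is made up to match the description above - this is not Oracle's actual price list):

        #include <stdio.h>

        /* Hypothetical "voodoo math" license estimator, loosely following
         * the scheme described above: a per-CPU base fee, scaled by a
         * factor for the CPU type (RISC vs. CISC at the time) and by the
         * amount of RAM on the box. All figures are invented. */
        double license_fee(int cpus, double cpu_type_factor, int ram_gb)
        {
            const double base_per_cpu = 10000.0;    /* made-up base fee  */
            double ram_factor = 1.0 + 0.1 * ram_gb; /* made-up RAM scale */
            return base_per_cpu * cpus * cpu_type_factor * ram_factor;
        }

        int main(void)
        {
            /* 4-CPU RISC box with 8 GB of RAM, RISC weighted 1.5x */
            printf("estimated fee: $%.2f\n", license_fee(4, 1.5, 8));
            return 0;
        }

        Every term in that product is something you can shrink on paper, which is exactly why "under-spec the box, upgrade after the Oracle guys leave" works.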

        • the more you can afford, the more you will pay.
          You're looking for price discrimination.
        • I believe the technical term is sliding scale which means the more you can afford, the more you will pay.

          Damn Socialists are everywhere nowadays I tell you!
    • Re: (Score:3, Interesting)

      by gmack ( 197796 )
      I'm going to guess not very well at first. With all 4 cores having the ability to run at different clock speeds, I doubt this qualifies as SMP anymore.

      Most SMP code is tested on CPUs of equal clock speeds, so odds are this is going to bring out all sorts of fun race conditions in Vista, Linux, and *BSD, and I'm personally not so sure I'm going to touch this until the resulting dust settles.

      I'm not saying it's a bad idea... it looks like a good one, but it will take time for the software to mature.
      • it's better actually (Score:4, Informative)

        by r00t ( 33219 ) on Saturday September 02, 2006 @08:10PM (#16031185) Journal
        AMD now provides a TSC (cycle counter) that doesn't vary in speed when the core speed changes. This greatly helps with timekeeping.

        As for race conditions: that is pretty well taken care of already. SGI has Linux on a 2048-way system now.
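
        For the curious, reading the TSC from user space looks something like this (a minimal sketch using the __rdtsc() intrinsic from GCC/Clang's x86intrin.h; on cores without a constant-rate TSC, the delta is meaningless across clock speed changes):

        #include <stdio.h>
        #include <x86intrin.h>   /* __rdtsc() - GCC/Clang, x86 only */

        int main(void)
        {
            unsigned long long start = __rdtsc();
            /* ... work being timed goes here ... */
            unsigned long long end = __rdtsc();

            /* Only a valid time measurement if the TSC rate is invariant,
             * i.e. it doesn't slow down when the core clock scales. */
            printf("elapsed: %llu TSC ticks\n", end - start);
            return 0;
        }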

        • Re: (Score:3, Interesting)

          by gmack ( 197796 )
          But were the 2048 way systems running at different clock speeds? Or rather were the processors in each node running at different clock speeds? Last time I saw someone on linux-kernel mismatch processors it brought all sorts of interesting issues out.
          • Re: (Score:3, Informative)

            But were the 2048 way systems running at different clock speeds?

            Different clock speeds should not present much of a problem. In terms of performance it should be a non-issue - right now you can get variable performance out of the same code depending on other factors like memory contention, cache pollution by other processes, etc., so whether a cycle takes 1.00ns or 0.50ns isn't going to be anything new.

            Only the scheduler is going to care about frequency differences, and considering that we already have the abili
      • by maraist ( 68387 ) * <michael.maraistN ... m ['AMg' in gap]> on Saturday September 02, 2006 @08:26PM (#16031217) Homepage
        Most SMP code is tested on CPUs of equal clock speeds

        You're kidding, right? I can't imagine any software which depends on the timing of cooperative CPUs. MPI and general divide-and-conquer work clusters couldn't care less about the performance level of peer threads/co-processes. Hell, process interrupts due to pre-emptive multitasking are enough to guarantee a lack of symmetry.

        Now perhaps you're referring to scheduling problems in the kernel... I'm sure that AMD would be generous enough to provide kernel patches as necessary.
        • by pchan- ( 118053 ) on Sunday September 03, 2006 @02:25AM (#16031841) Journal
          Now perhaps you're referring to scheduling problems in the kernel.. I'm sure that AMD would be generous enough to provide kernel patches as are necessary.

          I find that the two processors in my dual-core Athlon X2 run at slightly different speeds (according to AMD, this is expected). That in fact did cause the Linux kernel some problems, since it was trying to balance handling the interrupts between the two. The problem happened when the timer interrupt bounced between the two, as within an hour or two of startup their tick counts became significantly separated. This made the system clock start running forward at a rapid rate. A Linux patch fixed this issue. So I can definitively say that Linux does run on SMP cores at different speeds.

          I'm not sure how Windows will do it, but they'll probably figure it out if they haven't yet. The real challenge is a new scheduling algorithm for variable CPU capabilities (although we do have that to some extent with frequency scaling on single CPUs).
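
          A toy simulation of the failure mode described above (purely illustrative - not the actual kernel code): two per-CPU tick counters whose real rates differ by 0.1%, as with the mismatched X2 cores.

          #include <stdio.h>

          int main(void)
          {
              double rate[2]  = { 1.000, 1.001 };  /* ticks per real ms */
              double ticks[2] = { 0.0, 0.0 };
              long ms;

              for (ms = 0; ms < 7200000; ms++) {   /* ~2 hours of uptime */
                  ticks[0] += rate[0];
                  ticks[1] += rate[1];
              }
              /* If the timer interrupt bounces between CPUs, "what time
               * is it" depends on which CPU answered last: */
              printf("real: %ld ms  CPU0: %.0f ms  CPU1: %.0f ms  skew: %.0f ms\n",
                     ms, ticks[0], ticks[1], ticks[1] - ticks[0]);
              return 0;
          }

          Two hours of uptime turns a 0.1% rate mismatch into more than 7 seconds of skew - plenty to make a naive system clock lurch forward, as described above.
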
        • CPU scheduling assumes the processors are equally fast. This was one of the problems with hyper-threading: the second virtual CPU sucked, and the scheduler didn't always know, leading to poor performance.

          Making the scheduler efficient on a multi-speed machine is less than trivial.
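
          To make "less than trivial" concrete, here is about the simplest capacity-aware placement heuristic imaginable (invented for illustration - not Windows' or Linux's actual algorithm):

          #include <stdio.h>

          /* Each core advertises a relative capacity (1.0 = full-speed
           * core, 0.3 = slow hyper-threaded sibling) and a current load;
           * place the task where spare capacity is largest. */
          struct core { double capacity, load; };

          int pick_core(const struct core *c, int n)
          {
              int best = 0;
              for (int i = 1; i < n; i++)
                  if (c[i].capacity - c[i].load >
                      c[best].capacity - c[best].load)
                      best = i;
              return best;
          }

          int main(void)
          {
              struct core cores[] = {
                  { 1.0, 0.9 },  /* fast core, nearly busy */
                  { 0.3, 0.0 },  /* slow sibling, idle     */
              };
              /* Picks core 1 - even though the task might finish sooner
               * queued behind the work on the fast core. Real schedulers
               * must also weigh affinity, migration cost, and fairness. */
              printf("place task on core %d\n", pick_core(cores, 2));
              return 0;
          }
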
      • Re: (Score:3, Informative)

        by adisakp ( 705706 )
        SMP means the processors are similar (i.e., they can run identical binary code). They do not need to run in lock-step synchronization to be SMP. Indeed, it is currently possible to halt a single processor in a dual-processor system (two sockets), so a similar case already exists in current SMP systems.
        • by morcego ( 260031 ) *
          Actually, SMP means the processors are symmetric, which has deeper implications than simply similar.

          Let's also remember that SMP is Intel's way of doing multiprocessing. AMD's is called simply MP, and is a very different beast, already having different code in the Linux kernel.

          That being said, as long as we are already outside the SMP concept (we are talking about AMD here), I doubt different clock speeds will be much of an issue. Especially since much of the MP code in the Linux kernel already uses spinlocks t
          • Re: (Score:3, Informative)

            by be-fan ( 61476 )
            Symmetric doesn't really have any deeper implications. It just implies the processors are similar. There's no underlying implications about synchronization. When writing code for SMP machines, not only is it not possible to accidentally depend on the processors being synchronized, it's impossible to explicitly depend on synchronization.

            Spin-locks or not, cores running at different clock speeds aren't going to expose any more race conditions than regular usage. Even on a current SMP system, the processors wil
            • by morcego ( 260031 ) *
              Care to show me a 3-way SMP system with similar processors?

              But I agree the "other" implications of symmetric are mostly hardware-related, even though some of them reflect on software (OS) design.
              • Care to show me a 3-way SMP system, with similar processors ?

                Upgrade one CPU in a two-socket single-core SMP machine to a dual-core. (Quite possible with Opterons.)

                --paulj
                • by morcego ( 260031 ) *
                  And since Opterons are MP, and not SMP, what exactly is your point?
                  • Despite all the blather in this thread, Opterons are SMP machines, by general definitions anyway. Many of the posters in other threads appear to be confused between AMP and ccNUMA.
        • by catacow ( 24626 ) * <chris @ c h ris-edwards.org> on Sunday September 03, 2006 @12:59AM (#16031724) Homepage
          SMP means the processors are similar (i.e. can run identical binary code). They do not need to run in lock-step synchonization to be SMP.

          The Symmetric in SMP refers to the fact that each processor can run the same tasks. In an asymmetric setup, there may be a processor dedicated to the kernel or other tasks.

          It most definitely has nothing to do with speed.
          • by adisakp ( 705706 )
            FWIW, I program low-level hardware on video game consoles, so I do lots and lots of MP programming (on custom chips). I've written code on multi-CPU designs from the Sega Saturn (2 CPUs), the Atari Jaguar (3 CPUs - 1 CISC / 2 RISC), and the PS2 (2 RISC CPUs EE/IOP + 2 DSPs VU0/VU1), not to mention tons of weird custom chips and hardware in various systems. I am also developing on XBOX360 (SMP) and PS3 (which is primarily asymmetric if you don't count the "hyperthreading" on the PPU).

            The Symmetric in SMP refers
  • by adisakp ( 705706 ) on Saturday September 02, 2006 @07:13PM (#16031038) Journal
    The article is very light on details, but the one picture implies power control at the core level. For example, if core-1 is running a 100% workload and core-2 has a 50% workload, cores 3 and 4 can be halted, resulting in a power load of only 45% of the total 4-core max load.
    • From this article [tgdaily.com]:

      The key to achieve this goal is AMD's single-die architecture and its ability to individually adjust the clock speed of each processor core. For example, if the full processing power of all four cores isn't needed, the architecture is able to reduce the clock speed of individual cores. One core running at full speed and three cores at one third of their maximum clock speed would drop power consumption by 40%. AMD can even completely shut down individual cores for even greater reduction
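
      For what it's worth, the quoted 40% is consistent with a simple model where part of each core's draw is static (leakage) and the rest scales with clock. The 20% static split below is an assumption chosen to make the arithmetic land on AMD's number, not an AMD figure:

      #include <stdio.h>

      /* Back-of-the-envelope: per-core power = static part + dynamic part
       * scaling linearly with clock speed. */
      int main(void)
      {
          const double s = 0.20;              /* assumed static fraction */
          double full  = 1.0;                 /* one core at full clock  */
          double third = s + (1.0 - s) / 3.0; /* a core at 1/3 clock     */
          double total = full + 3.0 * third;  /* 1 fast + 3 slow cores   */

          printf("power vs. 4-core max: %.0f%% (a %.0f%% drop)\n",
                 100.0 * total / 4.0, 100.0 * (1.0 - total / 4.0));
          return 0;
      }

      This prints a 40% drop; a purely frequency-proportional model with no static power would predict 50%.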

    • AMD already can halt the two cores in an X2 setup independently.

      This saves power, but better yet would be if you could (halt and) power down the cores independently. AMD cannot do this yet, but Intel can.
  • But will it... (Score:3, Insightful)

    by corychristison ( 951993 ) on Saturday September 02, 2006 @07:25PM (#16031064)
    ... run OS/2?

    Joking aside, lately I've been pondering AMD's next move in the everlasting Intel vs. AMD chess game.

    I'm hoping they can pull ahead again and force Intel to do the same.
    Always remember: competition is good!
    • Yes, and the real take-home message from this "announcement" is that AMD has no answer to Intel for the next year. That is a long time to be coasting! Nobody is going to put off an upgrade due to such a long-term milestone, so I wonder why they even bothered putting it out there.
      • Yes, and the real take-home message from this "announcement" is that AMD has no answer to Intel for the next year.

        Or it means that AMD's true answer is not yet announced.

        Nobody is going to put off an upgrade due to such a long-term milestone, so I wonder why they even bothered putting it out there.

        Exactly. If you make an announcement that causes your customers to put off purchases, you lose current sales. (The paradigm being Osborne, whose preannouncement of the next version was so successful tha

        • by TheLink ( 130905 )
          Well, the problem AMD has is that if they don't announce, people could go "Core 2 Duo".

          I suggest it's better for AMD to lose some sales to their "future" than to Intel.

          Especially given that so far historically you are more likely to be able to upgrade an AMD system meaningfully than an Intel system. So if AMD announces a nice shiny future, people might still buy an AMD _now_ that is slower than a Core 2 Duo, in hope of being able to upgrade to the next AMD stuff.

          Whereas Intel's new stuff just tends to not work wi
  • 64K code+data.

    Not 32/32.

    Tom [not official...]
  • Amazing (Score:2, Funny)

    ...they are still letting people in on what some of the new features will be.

    A company announcing upcoming features in order to create hype for their product? Who'd have thought of that?

  • by Aaron England ( 681534 ) on Saturday September 02, 2006 @08:09PM (#16031184)
    is to reduce the distance between their transistors from 90nm to 65nm. Intel started shipping their 65nm chips nearly a year ago (Oct 2005), while AMD has yet to ship any. AMD isn't expected to be fully converted to the 65nm process until mid-2007, and by then Intel is expected to start shipping their 45nm chips. AMD is playing catch-up these days and it's hurting them badly.
    • That, of course, is if you ignore current leaks...

    • Re: (Score:2, Interesting)

      by Anonymous Coward
      Intel have publicly stated that they will not be shipping 45nm chips until 2008.

      Intel shipped a few 65nm processors in 2005, but didn't really get started until 2006, and full conversion might not have happened yet, although all the important plants should have migrated by now.

      AMD have been behind on the process node, but that's not the only issue when it comes to making chips, although it is the most major. SS + SOI are other technologies that AMD is far ahead of Intel on, and they help reduce power significantly - hence AMD's low power 90nm processors compared to Intel's 90nm, and even Intel's 65nm P4s, and AMD aren't doing too badly in terms of performance/Watt right now either.
      • by Aaron England ( 681534 ) on Saturday September 02, 2006 @10:42PM (#16031518)
        Intel have publicly stated that they will not be shipping 45nm chips until 2008.

        Nah. Here's what Intel's 45nm page [intel.com] says: This important milestone demonstrates that we are on track for 2007 to manufacture chips on 300mm wafers using the new 45nm (P1266) process, in accordance with Moore's Law.

        Intel shipped a few 65nm processors in 2005, but didn't really get started until 2006, and full conversion might not have happened yet, although all the important plants should have migrated by now.

        Even if true, AMD has yet to ship a SINGLE 65nm processor. By this measure alone, I'd say the claim that they are a year behind is quite accurate. But given the speed at which AMD is producing fab plants, I'd argue that they are, or soon will be, an entire chip generation behind.

        AMD have been behind on the process node, but that's not the only issue when it comes to making chips, although it is the most major. SS + SOI are other technologies that AMD is far ahead of Intel on, and they help reduce power significantly - hence AMD's low power 90nm processors compared to Intel's 90nm, and even Intel's 65nm P4s, and AMD aren't doing too badly in terms of performance/Watt right now either.

        Traditionally Intel has won the absolute performance title. As you said yourself, it is the biggest factor when it comes to performance/Watt statistics. If you've been following any of the Core 2 Duo reviews, Intel is now dominating in that arena too.

    • I doubt that AMD has launched a new process within 6 months of Intel in the last decade. It's almost a basic fact of life: Intel has the money and experience to manufacture at smaller dimensions sooner than AMD. Sometimes 6 months, this time about a year.

      It's not necessarily "hurting them [AMD] bad" though. The Opterons are still quite competitive in power use versus the latest Core 2's (certainly not as bad as the P4 vs the Opteron). Plus, there are time and cost benefits to letting Intel work o
  • Power is always key (Score:5, Informative)

    by bblboy54 ( 926265 ) on Saturday September 02, 2006 @08:23PM (#16031212) Homepage
    Well, at least if you are in a data center.

    There are two huge concerns in a typical data center environment: heat and power. These two areas are key because of the density of servers today. We're cramming more processing and storage into 48U than people 10 years ago could have even dreamed of. Delivering enough power to run 48 servers can be difficult if each server is pulling 4 amps (that's 192 amps total). Considering most circuits are 20 or 30 amps, that's a lot of circuits to fit in one rack (spelled out in the sketch at the end of this comment).

    This was always the biggest reason why Dell servers were not as popular with the companies I have worked with. Quite simply, AMD was kicking Intel's ass on heat and power. I heard many people say they'd start ordering Dell servers by the pallet if they sold AMD processors (looks like they finally listened).
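
    The circuit arithmetic from that comment, spelled out (the 80% continuous-load derating is a common electrical-code rule of thumb - my addition, not the poster's):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double amps_per_server = 4.0;
        int    servers         = 48;
        double breaker         = 20.0;          /* amps per circuit */
        double usable          = breaker * 0.8; /* 80% continuous-load rule */

        double total    = amps_per_server * servers;  /* 192 A */
        int    circuits = (int)ceil(total / usable);  /* 12    */

        printf("%.0f A total -> %d x %.0f A circuits per rack\n",
               total, circuits, breaker);
        return 0;
    }
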
  • by Sloppy ( 14984 ) on Saturday September 02, 2006 @08:33PM (#16031229) Homepage Journal

    I'm sceptical that this technique will be very useful. (Of course, AMD is full of smart people and I'm just some net.moron.) I don't think it will be very common for the load on a 4-core processor to be somewhere in the middle, like 1.5. It's either going to be mostly idle (load close to 0), so you might as well power down the whole chip, or going full blast with the load as high as I think will give me the most throughput. For example, when compiling (and that's when I wish I had more cores) I'm gonna "make -j n" and my load is going to be about n, and that number is going to be chosen to be one more than the number of cores I have (or something like that). If I have a 4-core machine, do you think I'm going to make -j 2? No way.

    I can't think of many situations where I would have one core running at 100% and another at 50% and the others idle, for any significant length of time. I can imagine a desktop user clicking on something and maybe for a few milliseconds that load is somewhere around that, but then the work gets done and you're idling again. Or the user asked it to do something "hard" so all cores are near 100% (except maybe while waiting for I/O) for a "long" time.

    Am I wrong? What kinds of things does your computer work on, which are a little parallelizable but not very much?

    • by Mr. Hankey ( 95668 ) on Saturday September 02, 2006 @08:57PM (#16031288) Homepage
      One of my computers (well, 'one' and 'my' being relative terms in this case) is a 40-node cluster, completely SMP with some of the newer nodes being dual-core as well. Often they're fully utilized or more; imagine a 5.00+ loadavg per node for weeks on end. When a job is winding down, though, the remaining jobs will probably finish up one at a time, and you often have a few CPUs with just one job running for a few days. It's good for the electric bill to allow the CPU cores to power down.

      Another of my computers is basically used to play games. Most games don't seem to do much on the SMP side, so I doubt it would much matter how many cores there were as far as the game's concerned. They do tend to peak one CPU pretty much all the time though, while another core might end up servicing OS calls. Again, it couldn't hurt to let those sleeping processors/cores power down while they're not doing much of anything.
    • Um, are you kidding? I have a 2P 285 box [dual-core 2.6GHz] and in certain applications [e.g. video encoding] only a single core is used [damn you mencoder!!!]. Why would I clock up the other cores? Now, cpufreqd does clock down the other PROCESSOR, but the other core on the same processor as the mencoder process is clocked up.

      You are right that in certain high load applications you may not need it. But remember for every live server in the world there are dozens of test boxes which take power just the sa
    • Re: (Score:3, Interesting)

      by Jeremi ( 14640 )
      Am I wrong? What kinds of things does your computer work on, which are a little parallelizable but not very much?


      How about games and media playback? In those applications, you have X amount of work that needs to be done every (say) 33ms... there might be more work to do than one core can handle, but not so much that you need all 4 cores.

    • Re: (Score:2, Interesting)

      by Alkivar ( 25833 ) *
      Games... at least for the moment. Very few companies producing games are parallelizing their code. I can see the 100% being the core the game is running on, and the 50% core being the overhead of the interaction between the CPU and GPU. At least that's the only scenario I can currently think of...
      • by Ant P. ( 974313 )
        Sound processing. Simulating 50 or so sound effects in a 3D environment without a high-end sound card with hardware OpenAL takes a huge amount of CPU time.
    • Think servers, not desktops. They're going to be the target for both the processing power of four cores and the adaptive power features AMD describes.

      Just looking at one of our web clusters, we vary from ~ 150 to ~ 3500 requests per second, with fairly smooth build-up. Certainly not all or nothing.
    • I think generally cores 2-4 can probably be turned off or put to sleep most of the time. I have a dual processor system and only rarely do I see a need to have the second processor on at all.

      For games, I can see core one doing the graphics and core two doing I/O, system overhead, network and audio. Cores 3 & 4 can probably go to sleep unless they are doing some video transcoding or something like that.
  • by ErroneousBee ( 611028 ) <neil:neilhancock DOT co DOT uk> on Saturday September 02, 2006 @08:33PM (#16031232) Homepage
    How does this tally with a previous story about multi-core architectures being ideal for realtime ray tracing in games? Is anyone working on a ray-tracing equivalent of OpenGL?
    • Is anyone working on a ray-tracing equivalent of OpenGL?

      Actually, yes: OpenRT [openrt.de]. Some games which use it can be found here [uni-sb.de], although nothing you can try at home, unfortunately; only videos are available for download.
  • by grammar fascist ( 239789 ) on Saturday September 02, 2006 @09:11PM (#16031318) Homepage
    AMD Says Power Efficiency Still Key

    I'll be happy with these new processors as long as I can still efficiently heat my apartment with them.
    • Re: (Score:3, Interesting)

      by Anonymous Coward
      I'm afraid that I'm being a thermodynamics fascist, but you really can't efficiently heat your apartment with CPUs. Typically, for every BTU of heat delivered to you by electrical power, 2 more BTUs go up the power station smokestack or are lost in transmission. In contrast, for every BTU delivered from a modern gas furnace, less than 0.1 BTU goes up the chimney.
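
      Worked through with round numbers (the ~33% grid efficiency and 90% furnace efficiency below are illustrative figures consistent with the poster's ratios, not measurements):

      #include <stdio.h>

      /* Source-fuel BTUs burned per BTU of heat delivered to the room.
       * Resistance heat is 100% efficient at the outlet, but generation
       * and transmission losses happen upstream. */
      int main(void)
      {
          double grid_eff    = 1.0 / 3.0; /* ~2 BTU lost per BTU delivered */
          double furnace_eff = 0.90;      /* modern gas furnace            */

          printf("electric: %.1f BTU fuel per BTU delivered\n", 1.0 / grid_eff);
          printf("gas:      %.1f BTU fuel per BTU delivered\n", 1.0 / furnace_eff);
          return 0;
      }
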
    • Re: (Score:2, Funny)

      by Sabre0591 ( 999821 )
      You've got that right. I heat my spare room with a pair of 64bit AMDs running in two desktop units. Now if I can just figure out where to put the other four maybe I can shut down the furnace this winter.
  • Power efficiency was the reason cited by Apple for dumping IBM/PowerPC for x86. If AMD can clean Intel's clock with coolness, I wonder how long until Apples are Intel's biggest competition again.
  • Modern companies believe the features of their products are merely an excuse for their marketing campaign statements. Like with HD DVD / Blu-ray.

    Speed is apparently no longer something you need to care about: all CPUs are fast enough for most uses.
    Cores... I swear a modern OS (Vista included) can simply make no use of more than 2 cores.
    Power efficiency: they are all more or less the same, unless you have a Pentium 4 / Celeron (P4-based) in a laptop/desktop system, in which case you may upgrade.

    64-bi
    • Cores... I swear a modern OS (Vista included) can simply make no use of more than 2 cores.

      Then you use two OSes. Sure, the market of people who run Linux and Windows simultaneously so they have one OS for gaming and one for everything else is small, but that's just one possible application.

      For most home users more than two cores ain't that great, but for certain power users and for professional IT (think "servers") it might be interesting, especially with things like per-core power management.
  • "AMD Says Power Efficiency Still Key" .. funny, I thought customers tell to vendors what is key. When did this turn the other way around?
    • Re: (Score:3, Informative)

      by Phil John ( 576633 )
      Customers are demanding better power efficiency. Due to the rising cost of energy, a lot of data centres are now charging based on energy consumption and heat output (more heat needs more AC, which needs more energy). It's becoming a real problem, especially for some data centres which for varying reasons cannot increase their electricity supplies. That's why Google, Microsoft, and others have started building data centres near a hydro-electric dam in the States: cheap and plentiful electricity.
    • When it started to cost more for the electricity to power a server for a year than it cost for the server.
