AMD Announces Quad Core Tape-Out

Gr8Apes writes "The DailyTech has a snippet wherein AMD announced that quad core Opterons are taped out and will be socket compatible with the current DDR2 Opterons. In fact, all AM3 chips will be socket compatible with AM2 motherboards. For a little historical perspective, AMD's dual-core Opteron was taped out in June 2004 and then officially introduced in late April 2005. AMD also claims that the new quad processors will be demo'd this year. Perhaps Core 2 will have a very short reign at the top?" From the article: "The company's press release claims 'AMD plans to deliver to customers in mid-2007 native Quad-Core AMD Opteron processors that incorporate four processor cores on a single die of silicon.'"
This discussion has been archived. No new comments can be posted.

  • Someone care to explain what that means?

    Does it have something to do with the design being finalized, or the manufacturing facility being prepared to start making them (like a game "going gold")?

    • Per TFA, "completion of the design". I was also confused by this phrase in the summary.
      • What's confusing about 'completion of the design'? AMD already said earlier this year that they were working towards a smaller process but Intel got the jump on them by going to 65nm while AMD was still using 90nm. With a reduced process size, they'll be able to squeeze a quad-core design into a dual-core space, matching the current AM2/AM3 die size.

          I guess if you weren't up to speed on which manufacturers were using which process, it'd be confusing. :)
        • I'm not sure if you're pointing out my grammar error or whether it genuinely confused you. By "this phrase" I was referring to "taped out".
    • Re:Taped out? (Score:2, Informative)

      by ZPWeeks ( 990417 )
      Tapeout is basically when the processor design process is completed and the final plans are written down to be sent off for manufacturing. They call it "taping out" because they used to write the specification data to magnetic tape.
      • Re:Taped out? (Score:4, Informative)

        by Chris Burke ( 6130 ) on Tuesday August 15, 2006 @02:50PM (#15912086) Homepage
        The plans themselves being the masks used to create the various layers in the silicon. These mask sets were in times past designed by placing colored pieces of tape onto paper. I'm not certain, but I think the term "tape out" actually refers to those bygone days of literally "taping out" the mask set.

        • I've heard from people in the industry that it has to do with getting the magnetic tape (mentioned by the grandparent) out of the door on its way to the fab, and is not directly related to the time the masks are created.
          • Re:Taped out? (Score:3, Insightful)

            by Chris Burke ( 6130 )
            I've gotten conflicting answers from people in industry, often seemingly related to how old they are (and thus whether they'd have been around for the actual-tape-mask phase), which is why I said I wasn't sure. Since "tape out" with magnetic tape would still be somewhat of a euphemism whereas "tape out" with real tape is literal, I'm still not convinced it refers to magnetic tape.
          • Re:Taped out? (Score:5, Informative)

            by JesseL ( 107722 ) on Tuesday August 15, 2006 @03:11PM (#15912286) Homepage Journal
            I don't think that magnetic tape was involved in the original process at all. As Chris Burke said, the masks were laid out with colored tape on a drafting table. When they were completed they were photographically reduced to the size needed to be transferred to silicon for etching. My father used to design printed circuit boards the same way.
          • Re:Taped out? (Score:5, Informative)

            by chewedtoothpick ( 564184 ) <chewedtoothpickNO@SPAMhotmail.com> on Tuesday August 15, 2006 @03:15PM (#15912322)
            Having designed a couple of consumer devices where we had to burn silicon, I can say the grandparent post is the correct one. "Taping out" is a reference to the stamp having been successfully created. This used to be accomplished by using lengths of electrical tape on sheets of glass, but the templates I have taped out (and I assume the rest of modern templates) are done by silk-screening the pathways directly onto high-temp plexiglass.
        • Bzzzt. Nope. (Score:3, Insightful)

          by JayBat ( 617968 )
          Back in the dawn of time, when dirt was new, we "taped-out" by writing GDSII to a 1/2" 9-track 1600bpi magtape.

          Back before the dawn of time, when we didn't have dirt yet, we "cut rubies" (used Exacto knives and straightedges to cut Rubylith). People still use Rubylith [ehow.com] to do fabric silkscreening and such. No colored tape on paper, not dimensionally stable and not enough contrast for camera-reduction.

          -Jay-

    • Re:Taped out? (Score:5, Informative)

      by imsabbel ( 611519 ) on Tuesday August 15, 2006 @02:42PM (#15912000)
      It's sort of like going gold, with the exception that the latency is MUCH longer.
      So even if a perfect, working design tapes out, it will take at least 3 months until happy little chips come out the other end of the factory. Of course, failures, bad yields, or bugs that only manifest themselves in the physical design can delay this further.
      • Re:Taped out? (Score:4, Informative)

        by Grave ( 8234 ) <awalbert88@nOspAm.hotmail.com> on Tuesday August 15, 2006 @03:36PM (#15912530)
        Rarely has a CPU gone from tape-out to production in three months. In fact, I'm pretty sure it's never happened. GPUs do it from time to time, but the thing about any new piece of highly complex silicon (especially a quad-core CPU) is that it will take time to get the process correct, even if there are no bugs or glitches in the design. GPUs, while big, are relatively simple by comparison. On average it takes 9-12 months from tapeout to retail availability, though it has been known to happen in as little as six months.
    • by crabpeople ( 720852 ) on Tuesday August 15, 2006 @02:43PM (#15912013) Journal
      It's probably when they tap the chip assembly tray out so they fall onto the counter. Just like muffins.

    • Re:Taped out? (Score:4, Interesting)

      by stevesliva ( 648202 ) on Tuesday August 15, 2006 @02:57PM (#15912138) Journal
      Does it have something to do with the design being finalized, or the manufacturing facility being prepared to start making them (like a game "going gold")?
      Generally it would mean that the physical design data has been released, allowing the creation of masks. (Masks being the "stencil" of each design layer used for lithography) Once the masks for the first design layers are prepared, manufacturing can begin.

      Tapeout, a.k.a. RIT (Release-In-Tape), is just an old term, similar to RTM (Release to Manufacturing), which is becoming obsolete for software. It seems that semiconductor design terminology has a much longer life than the chips -- we still call design rule checking programs "DRC decks." Why a "deck"? Remember punch cards? Speaking of cards, that's a netlist.

      My favorite's "kerf," the area between chips on a wafer that is lost when they're diced. The term was borrowed from sawmills.
      • Re:Taped out? (Score:3, Informative)

        by Chr0nik ( 928538 )
        Actually the term "kerf" is indirectly drawn from saw blade manufacturers, wherein they termed the width of a blade the kerf. Mills originally called the material lost to blade width "kerf loss," which eventually just got shortened to kerf. The material generated, of course, being sawdust. Nowadays, mountains of sawdust are turned into presto logs and fire starters, for as much of a return as the wood itself in some cases, sometimes more, depending on the wood, so kerf loss is a thing of the pa
  • Software Licensing (Score:5, Insightful)

    by graphicartist82 ( 462767 ) on Tuesday August 15, 2006 @02:42PM (#15911999)
    I'm interested to see if software companies who license their software by CPU will continue to define a "CPU" as a physical socket, or a core. Right now Microsoft and VMWare (and lots of others) define a CPU as a physical socket, not a core. So a dual core processor only counts as one CPU for licensing purposes.

    It will suck if they start realizing how much more money they could be making by defining a core as a CPU for licensing...
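    A back-of-the-envelope sketch of that gap (a purely hypothetical illustration -- the core counts and prices below are made up, not anyone's actual license terms):

        // Hypothetical numbers only: compares per-socket vs. per-core licensing cost.
        public class LicenseMath {
            public static void main(String[] args) {
                int sockets = 2;            // a dual-socket server
                int coresPerSocket = 4;     // quad-core parts
                int pricePerUnit = 3000;    // invented per-"CPU" list price, in dollars

                int perSocket = sockets * pricePerUnit;                 // 2 * 3000 = 6000
                int perCore = sockets * coresPerSocket * pricePerUnit;  // 8 * 3000 = 24000

                System.out.println("Per-socket licensing: $" + perSocket);
                System.out.println("Per-core licensing:   $" + perCore + " (4x more)");
            }
        }

    Same box, same software -- only the definition of "CPU" changes.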
    • It will get interesting when AMD and Intel wage licensing wars against each other. You know the type, where you license your design to another company... I can't wait.
    • I've never really understood the idea of licensing software per CPU. It seems a bit crazy/arbitrary to me. Why not charge per DIMM of RAM, or per byte of L2 cache?
      • by doh123 ( 951318 )
        Back in the day, most apps could not use multiple processors, and if you wanted the specialized version that could use more, you had to pay more because of the extra development costs and low sales.

        Now they do it because people are used to it; it's accepted as the norm... It doesn't cost them more, but they can charge more just because people expect it. It's simple corporate greed.
      • I think the answer is: because they can.

        Here is a very interesting article on the subject of product pricing:
        http://www.joelonsoftware.com/articles/CamelsandRubberDuckies.html [joelonsoftware.com]
      • by Amouth ( 879122 ) on Tuesday August 15, 2006 @03:13PM (#15912300)
        Back in the day, the number of CPUs meant the number of threads that could be processed, meaning the amount of work that could be done. In essence, one nice 4-CPU server could do the work of 4 smaller servers. Software writers realized that if they licensed it per computer, large companies would buy one license and run it on a big box instead of on several smaller boxes, getting the same work done but with the software company not getting as much money. So they made it so that you have to pay per CPU, OR pay a whole lot for one server no matter the number of CPUs.

        And your question about RAM: some things do have it, but it is more about how much can be stored. For example, the only real difference between Exchange 2000 Enterprise and Standard editions was that the Standard edition had a limit on how large the store could be and how much memory it would use for caching.
      • by drix ( 4602 ) on Tuesday August 15, 2006 @03:15PM (#15912319) Homepage
        Once upon a time, the only people who had SMP machines had spent a huge amount of money on them. Licensing per CPU was simply a smart way to discriminate your customer base and figure out who had a high willingness to pay. Maximize producer surplus and all that. SMP became more and more commonplace in the 90s and now, with the advent of dual core, every grandma on AOL will be running on two or more CPUs in a matter of years. Since performance gains seem to be oriented towards more parallelism and not more MHz nowadays, this effectively means that software that runs on only one CPU has reached a performance plateau compared with everything else. My guess is the software industry will wake up to this fact and stop licensing by CPU, unless they want to field all sorts of questions about why theirs runs twice as slow as the next guy's.
    • Windows XP (Pro) is already limited to two processors -- I wonder if Microsoft will remove this limitation given the likely escalation of multi-core CPUs on home computers. Given the bloat of the Vista beta, this is certainly not an unreasonable expectation.
      • by Amouth ( 879122 )
        It is limited to the number of physical CPUs, not cores or thread paths.

        This is why you can have, say, dual P4 Xeons with HT enabled and XP will show 4 thread paths, but you are not violating the license -- that, and XP really doesn't care, as it has no idea and isn't hard-coded to die.
    • I doubt this will happen. There is not a lot of technical reason to charge per processor or core -- after all, why not charge per issue slot? What should we do with hyperthreading? However, there is a strong business reason to do so: it allows them to segment the market and charge more money to people who can afford it. Dual core CPUs are targeted at desktop use, while multi-socket hardware is usually much more expensive and used in high-performance workstations and servers by people who can afford
    • by kabocox ( 199019 )
      It will suck if they start realizing how much more money they could be making by defining a core as a CPU for licensing...

      They'll wait until we all have 4 to 8 cores. Then they'll hit us for a 4-8x hit in licensing cost. They don't want to kill off multi-core processing for mainstream use before it really begins.
  • AMD++ (Score:5, Interesting)

    by andrewman327 ( 635952 ) on Tuesday August 15, 2006 @02:42PM (#15912001) Homepage Journal
    This is a big bonus to AMD. With the competition between AMD and Intel so close now it will be interesting to see how the two companies change their tactics. I wonder what the power consumption of this new quad core will be. Power consumption and heat production are becoming increasingly important.


    I am glad to see AMD making progress on its quad core chip. No longer can megahertz bring mega bucks. Moore's law doesn't mean Moore money. (Ok, I'll stop now.) We have seen more chip innovation over the past 4 years than I thought was possible.


    In case you are wondering what the differences are between AMD and Intel in quad core designs, this comes from TFA:"Intel has recently accelerated its quad-core plans; the company recently announced that quad-core desktop and server chips will be available this year. Intel's initial quad-core designs are significantly different than AMD's approach. The quad-core Intel Kentsfield processor is essentially two Conroe dice attached to the same package. AMD's native quad-core, on the other hand, incorporates all four cores onto the same die."


    I cannot wait for comparative benchmarks. I wonder how much ground Intel will gain by being first to market.

    • Re:AMD++ (Score:3, Interesting)

      The quad-core Intel Kentsfield processor is essentially two Conroe dice attached to the same package.

      How can you take an article about processors seriously when they can't properly pluralize "die" as "dies."

      I cannot wait for comparative benchmarks. I wonder how much ground Intel will gain by being first to market.

      I suspect for the desktop market we'll all see that having four cores does not improve most application performance significantly because it does not ameliorate the normal bottlenecks outsid

      • "I suspect for the desktop market we'll all see that having four cores does not improve most application performance significantly because it does not ameliorate the normal bottlenecks...

        Yes, but very few people buy Opterons for the desktop market. I'm sure that a good number get sold for the "workstation" market (CAD, rendering, etc.), but not as desktops.

        The real strength of the Opterons has been in DBMS servers, where the massive memory bandwidth from having 2, 4, or 8 memory controllers has reall

      • How can you take an article about processors seriously when they can't properly pluralize "die" as "dies."

        How can you take a Slashdot post seriously when the writer can't properly punctuate a question (ignoring jokes that "you can't take one seriously anyway," of course)?

        • How can you take a Slashdot post seriously...

          If you don't see the difference between a punctuation error in a casual, unedited post and failing to properly spell the main subject of the article you have written, when you're a professional writer, then I think you're missing the point. I don't expect posts here or even summaries from the editors to be correct or proper. I don't expect the articles themselves to have perfect spelling. I do expect them to at least know how to pluralize the main topic. It's

    • Power consumption will indeed be interesting -- it's not immediately obvious that it will be awful. Remember, the dual-core Athlons are clocked slightly slower than the comparable single-core ones, and therefore actually use less power than their single-core counterparts. The leakage current goes up mighty fast as the switch speed increases...

    • I think Intel's approach has advantages over AMD's. While AMD will have all the cores talking and sharing a nice cache space by being on a single die, Intel is going to attach two dies to one package, meaning they will more than likely have a lower defect rate in quad-core CPUs, as they don't have the compounding effect of one bad core/die forcing them to throw away a good one. I am willing to bet that AMD is going to have yield issues with this method, and while performance will be better, the cost will be higher due to t
      • Will AMD actually do shared L2 in their new processors? I have not heard it stated yet at least.

        Which makes things more interesting still; AMD has a better interconnect for all their four cores in the form of very local HyperTransport, but Intel has a far better interconnect within the pairs (shared L2) but a worse interconnect between the pairs (the FSB, which is not that shabby, but still no HyperTransport).

        Will be fun to see at any rate, if nothing else it would be rather interesting if Apple announc

    • Heck, to be honest, I can't wait for the benchmarks between a 2P Woodcrest machine and a 2P dual-core Opteron box. The quads are drool-worthy, but I probably won't get one of those for a couple of years yet. I'll wait for the initial premium prices to drop.

      Unfortunately, the AM2 chips appear to be for single-CPU boards only; Socket F is the new Opteron socket. But the way I'm going, just being able to drop in a new quad CPU in a year or two will meet my needs for the next couple of years, at least. (I don
  • by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Tuesday August 15, 2006 @02:43PM (#15912002) Homepage Journal
    Isn't AMD depending on additional cores to beat Intel's performance similar to how Intel's Prescott depended on additional MHz to beat AMD's performance?

    Sounds like the shoe's on the other foot. I hope AMD brings back the kind of engineering innovations that brought it support among those in the know back in 1999 and 2000 (like focusing on a superscalar architecture with the K7).

    Four cores is a fine concept, but they mustn't forget to increase the capabilities of the individual cores.
    • Um, no, because both are forcing more cores into the chips. The comparison you make is actually rather poor, because when Intel did that, AMD was not pushing clock speed along with them. If you bothered to RTFA, or even read any article on the subject of multi-core processing, you would know that both companies are working on, and have been working on, quad-core (and I think 8-core) designs.

      AMD's grasp at hanging on was the 4x4 socket, which is an interesting idea if nothing else. I really think you should read up
    • by eebra82 ( 907996 ) on Tuesday August 15, 2006 @02:51PM (#15912091) Homepage
      I doubt that the release of a quad core CPU has anything to do with Intel getting desperate. AMD has stolen a lot of market share from Intel in the server area, so it is only natural for them to extend the current line-up with even more, faster CPUs. You know, there is actually a market for quad core CPUs, as many server applications will benefit strongly from such an architecture.

      Additionally, AMD gets to claim the quad core market before Intel, just like it got to 1000 MHz before Intel did. It's not only positioning, but also marketing.

      Last but not least, you can bet on an entirely new architecture from AMD coming next year. As with all new CPU designs, this is a difficult, expensive, and time-consuming project, so it's not like Intel and AMD are rolling out new CPUs too often. Instead, they try to improve current technology and make the most out of it.
    • by Gr8Apes ( 679165 ) on Tuesday August 15, 2006 @03:05PM (#15912208)
      In a word, no.

      In brief, AMD is putting together 4 cores on a single die, like their current dual core design. Intel just got to the 2 cores per die stage. Their 4 core design is 2 dual cores slapped together.

      This story is about the fact that the next gen of AMD's chips is design-complete. More importantly, AMD claims it is going to have a working prototype this year. The importance of this is that if AMD succeeds, they will be able to display a working copy of their next-generation CPU when Intel intends to ship their first quads. It could do untold damage to Intel's ability to sell those quads if AMD's quad solution blows it away, as I strongly suspect it will. So do IBM, HP, Sun, and Dell, as all have signed on for AMD to power their servers.

      This puts the shoe firmly back on Intel's foot. I'm sure Intel was hoping to not wear it for at least a little while. ;)
      • As a poster above me stated, there is every chance that Intel could win this little battle. Sure, the four cores on a single die do allow for better communication between the cores and better use of the caches. BUT, because of the increase in space on the wafer for a four-core processor, the yield rates may drop dramatically. Intel's solution side-steps the yield problem by simply joining two dual-core processors into a single package. So, while AMD's processors may have a slight performance edge over
        • This is the same argument proffered in the first round a couple of years ago, when AMD went with 2 cores on a single die vs. Intel's 2 separate cores slapped together. Did we forget the outcome of that battle so quickly? (Refresher: Intel got their ass handed to them.)

          While I don't disagree with your point about the potential for increased failure rates of 4 cores on a die vs 2 cores, also note that we're at least one more generation advanced in fab facilities, which one hopes will help ameliorate the failure rates
          • No, AMD's response took several years. Do you think they taped out their two-core processors and then sat around on their butts for a year? They started right into their next design, just like every other processor company does when they finish a project. Besides, Intel's old two-core processor design, the one with two single cores joined into a single package, was a thrown-together response to AMD's dual-core processors. Core 2 has been designed from the beginning to be a dual-core product and so they d
          • There is no major architecture change in the AMD 4 cores, so it won't blow the Intel solution away. I strongly suspect the reverse: Intel's glued-together cores will still carry the day.

            The difference from the last time Intel did this packaging move is that Intel was starting out with inferior cores and its loss was assured. This time Intel has the superior core, and that will swamp the very small packaging differences.

            But I don't really care. I am just looking for this move to lower prices on dual cores. M
      • This puts the shoe firmly back on Intel's foot. I'm sure Intel was hoping to not wear it for at least a little while.

        No, this puts the shoe back on AMD's foot.

        It may put the ball into Intel's court, but that's another game entirely.

        Hint: A shoe is a beneficial object that most people would like to have.
        An example: your abusive boss of many years is demoted and you end up in charge of him. Now the shoe is on the other foot -- your foot.

        Putting the ball into someone else's court puts them under pressure to resp

    • Manufacturing technology has progressed to where many more transistors can be cost-effectively packed onto a chip than years ago. Yet, all of the low-hanging fruit in CPU design was picked long, long ago, and so was the medium-hanging fruit. It takes vastly more to get a significant improvement out of a new architecture now than it did ten years ago.

      So, packing more cores onto a chip allows you to fill your die with working transistors, and doesn't cost you billions in R&D.

      Interestingly enough, Intel
        The thing is, though, that it will take Intel an estimated 3-5 years to get to where AMD is today without infringing on any of AMD's IP, on the assumption that hell would freeze over before AMD would agree to cross-license their HyperTransport IP. That estimate was from an article on CPU design about a year ago, so the time factor may or may not have diminished -- the main point is that AMD enjoys a huge first-to-market advantage that won't go away before mainstream adoption is underway.

        Intel's basically screwe
  • JESUS.
    That means I can get a 32-way system in that Tyan 5U case.
    That's just friggin' ludicrous.

    IIRC, 32 cores is the limit for the current generation of HyperTransport. I think it has a 6-bit address, and half of them are reserved for memory controllers. And that doesn't include I/O MCPs. So the practical limit is 28 (7x4).
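    Spelling that arithmetic out, taking the figures in this post at face value (they are the poster's recollection, not verified HyperTransport specs):

        // Reproduces the arithmetic from the comment above; numbers come from the post itself.
        public class HtCoreLimit {
            public static void main(String[] args) {
                int addresses = 1 << 6;          // 6-bit address -> 64 IDs
                int coreLimit = addresses / 2;   // half reserved for memory controllers -> 32
                int ioReserved = 4;              // one quad's worth of IDs left for I/O MCPs
                int practical = coreLimit - ioReserved;
                System.out.println(practical + " cores = " + (practical / 4) + " quad-core sockets");
                // prints: 28 cores = 7 quad-core sockets
            }
        }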

  • by Ancient_Hacker ( 751168 ) on Tuesday August 15, 2006 @02:46PM (#15912047)
    I bet nearly nobody knows what "taped out" means. Or why it's so funny.

    Way back in the 1960s, the way you designed a printed circuit board, or an integrated circuit, was to get a big piece of clear plastic and lay out the lines with red tape. They used red tape so you could see through it, in order to align the tape exactly over the layer below (most PC boards use at least two layers, ICs at least 5 layers). As you can imagine, it was a rather tedious, error-prone process.

    When you were done with the tape and X-Acto knives, you'd hand the plastic over to the foundry guys, who would photographically reduce each layer to the appropriate microscopic masks.

    Sometime in the mid 70's, computers and optical printers got cheap and good enough so you could actually design the lines and layers on a COMPUTER SCREEN. Sales of red tape went way down. Nobody missed the red-tape days.

    Nowadays just about everything in this process is computerized. There's never a plastic sheet or tape or paper stage -- the bit images go directly from the design program to the foundry.

    But they still say the design got "taped out."

    • by Eric Smith ( 4379 ) * on Tuesday August 15, 2006 @02:53PM (#15912110) Homepage Journal
      The next step after using mylar and rubylith was using CAD, and sending a nine-track magnetic tape of the data to the foundry. So "tapeout" came to mean writing the final magnetic tape.

      Nowadays, of course, the data is usually transferred over the internet, so no tape of any kind is involved (not even duct tape). But it is still called tapeout for historical reasons.
      • Nowadays, of course, the data is usually transferred over the internet, so no tape of any kind is involved (not even duct tape).

        So, since it's over the internet, I guess these days you could say it was "tubed out"?
    • Nowadays just about everything in this process is computerized. There's never a plastic sheet or tape or paper stage -- the bit images go directly from the design program to the foundry.

      But they still say the design got "taped out."

      It's not unusual at all for 'heritage' terminology to survive past the technology or system that inspired it -- because the understanding of the term is still widely held.

      For example, the 'Christmas Tree,' the section of a submarine's ballast control panel tha

    • I clicked on the "read more..." just to see this comment, because I knew it'd be there. Never fails that I hear someone asking what taped-out means whenever it's used... I learned about it at a hardware talk while an intern at Apple years ago.
    • Like 'Dialing' a phone, or better yet 'booting' a computer (from the old tall tales about a man lifting himself into the air by his bootstraps)

      also consider the adage: Never underestimate the bandwidth of a station wagon loaded up with backup tapes.
  • by martinbogo ( 468553 ) on Tuesday August 15, 2006 @02:47PM (#15912054) Homepage Journal
    It took AMD a very long time to create a low-wattage version of the dual core 280. With four cores burning away on the new chip, I wonder how efficient putting a quad-core chip on a server board will be. Right now, most servers are running more than 80W per chip, making for a massive thermal dissipation problem. There's a lot of heat to shunt away from the chip, after all.

    I'd rather have an ultra-efficient dual core chip, sayyyy .. 25W .. over having a quad core monster at over 140W!

    • by thebdj ( 768618 ) on Tuesday August 15, 2006 @02:54PM (#15912115) Journal
      It is called the 65nm process. While these might not be on 65nm yet (the information seems vague but leans toward it still being 90nm), by the time the quad cores reach desktops, I would suspect 65nm will be a lot more common, and should help considerably in improving the power consumption. (This is part of what helped Intel keep their CPUs under control for a while.)
      • Inside the linked stories, they mention how Deerfield (the 65nm process chips) have dropped from the roadmap. They extrapolate that to mean that these will be the only 65nm chips.

        Another decrease in power consumption can be obtained by lowering voltages, which I understood from another article to be handled on K8L by introduction of another new tech - but I don't have that link at the moment.

        And lastly, it's not just pure power consumption you're worried about these days, but power consumption per computati
    • Well, I would bet that with quad cores will also come a die shrink, so they will put out less heat per MIPS than the current chips. Going to quad cores may shift the advantage to AMD. AMD currently has an advantage thanks to HyperTransport once you hit 4-plus cores.
  • Soon there will be news of your server room melting into the Earth's core from the heat.

    This will happen 4 times before they plan to do a recall, thus the name "quad core".
  • by djcinsb ( 169909 ) on Tuesday August 15, 2006 @02:56PM (#15912134) Homepage
    Citing similarities between blade counts in razors and processor counts in servers, Gillette began acquiring shares of AMD in a hostile takeover bid.
  • Wonder how long it will take for compilers and languages to catch up with the concurrency challenges [www.gotw.ca]. Till then, applications will run slower than ever.

    [On the desktop, multimedia players, browsers, compilers, IDEs, how many of them will use those cores? Servers seem to be ready though.]
    • Java (Score:2, Informative)

      by Gr8Apes ( 679165 )
      Write your code in Java. Concurrency utilities are built right into 1.5 on up. With these processors, it should no longer be an issue...

      Now I know I just lost any karma this story might have gained me.... ;)
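      For what it's worth, here's a minimal sketch of the kind of thing those utilities buy you. ExecutorService, Callable and Future are part of java.util.concurrent as shipped in 1.5; the lambda syntax below is newer and only used to keep the sketch short, and the workload itself is invented:

          import java.util.ArrayList;
          import java.util.List;
          import java.util.concurrent.*;

          public class ParallelSum {
              public static void main(String[] args) throws Exception {
                  final int cores = Runtime.getRuntime().availableProcessors();
                  ExecutorService pool = Executors.newFixedThreadPool(cores);

                  // Split a dummy workload (summing 0..n-1) into one task per core.
                  final long n = 100000000L;
                  List<Callable<Long>> tasks = new ArrayList<Callable<Long>>();
                  for (int t = 0; t < cores; t++) {
                      final int offset = t;
                      tasks.add(() -> {
                          long sum = 0;
                          for (long i = offset; i < n; i += cores) sum += i;
                          return sum;
                      });
                  }

                  long total = 0;
                  for (Future<Long> f : pool.invokeAll(tasks)) total += f.get();  // invokeAll blocks until all tasks finish
                  pool.shutdown();
                  System.out.println("total = " + total);
              }
          }

      The pool spreads the Callables across however many cores the box reports, which is exactly the case a quad-core Opteron is aimed at.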
      • If you don't fundamentally understand parallelism, Java isn't going to help you. I mean, so it's got a "synchronized" keyword. So what? You've still got to know at what granularity you want to synchronize stuff, you've still got to avoid deadlocks and race conditions, etc.

        The only thing hyping Java as a magic silver bullet will do is encourage the creation of a lot of buggy threaded code.

    • It isn't an issue with two or four cores. Most of the code that is written at my company is multi-threaded now.
      Think about how many tasks are running on a standard desktop just when you are surfing the net. You have the GUI, TCP/IP stack, and the browser. That is three threads at a minimum. And yes, I know that I have way oversimplified it. Most PCs and servers have dozens of tasks, processes, and/or threads running at any one time.
      Yes, dual or quad cores are close to useless if you are running DOS, but for most use
      • That is perfectly true, but of course that will only really make a difference when you have 3-4 processes that are constantly consuming CPU time. You will still run into slowdowns when one process starts blocking others, like when the httpd has multiple requests waiting on a db transaction to complete, etc. Also, as another poster recently pointed out, I think it may be hard to provide enough memory and I/O throughput to keep all 4 cores going at full steam.
    • Languages and compilers that do multi-core processing are already here. I'm hoping that this is the start of more and more cores in desktop systems. I remember the days of the transputer and the Connection Machine. Chuck Moore of Forth fame has a new company that is in the process of producing a 'sea of processors': simple stack-based cores that are linked.

      More and more cores means that existing languages are less and less efficient. You want to control task distribution on a 32-core machine? What ha
  • by Pluvius ( 734915 ) <pluvius3&gmail,com> on Tuesday August 15, 2006 @03:05PM (#15912204) Journal
    And when will Gillette-Intel come out with its five-core Fusion system with the patented "Serving Surface" for a close and comfortable network solution?

    Rob
  • Buy for tomorrow (Score:5, Insightful)

    by spyrochaete ( 707033 ) on Tuesday August 15, 2006 @03:07PM (#15912230) Homepage Journal
    In fact, all AM3 chips will be socket compatible with AM2 motherboards

    This is precisely why I recently purchased an Athlon 64 X2 instead of a Core Duo despite glowing reviews of the latter. The Duo is on Intel's ancient 478/775 sockets whereas X2 is on AMD's new AM2 socket. How many more processors can Intel jimmy into those tight little PGAs? AM2 will have legs for years to come while early adopters of Duo will be buying new motherboards with their next CPU upgrades.
    • #include "how-often-do-you-upgrade-just-your-cpu.h"
    • Re:Buy for tomorrow (Score:3, Interesting)

      by xenocide2 ( 231786 )
      Was this nugget of insight any less valid regarding the now-defunct Socket 939, or the soon-to-be-released AM3 socket?
  • Next node (Score:4, Insightful)

    by TopSpin ( 753 ) * on Tuesday August 15, 2006 @03:17PM (#15912346) Journal
    Wake me up when AMD has 65 nm scale cores. The vast majority of Dou Core 2 Duo Conroe Core whatever performance and efficiency gains are due to the differences between 90 and 65 nm features. Smaller scale means more execution units and more sophisticated cache logic on the same die. Until AMD does 65 nm their products will be either too hot or too slow.

    We've been at 90 nm for so long people almost forgot what a massive improvement a smaller node size can make. Various AMD 65 nm engineering samples are floating around Asia and AMD has made announcements about various 65 nm models appearing Q4 06, early 2007. This is the real battle. However, no mention of what these quad-core parts are supposed to be using...

  • Before long these guys [fernstrum.com] are going to be moving in on the CPU cooler business.
  • by EricBoyd ( 532608 ) <(moc.oohay) (ta) (dyobcirerm)> on Tuesday August 15, 2006 @03:21PM (#15912390) Homepage
    Honestly, can you use 4 cores in any of your current applications? I think the time is coming when the 30 year trend in faster CPUs will end. If you can't increase the mega-herts, and extra cores don't actually improve application performance, what will Intel and AMD do to keep improving their products? I wrote an essay with some possible ideas: Computers in 2020 [digitalcrusader.ca]
    • Difficult to take you seriously when you misspell Hertz.

      Poor Heinrich.

      Yes, the megahertz wars have flattened for the time being. Now compilers are going to be perfected for multicore development, and some new ways to use multiple processors will emerge.
      Then we will hit some limit to the max cores, and speed will be the name of the game once again.

      At this point, I think we would be better off if computers couldn't get faster until some breakthrough 10-12 years from now. Code optimization would come back in a mean wa
    • Actually, yes. Four cores would do very nicely for several of the applications developed by the company I work for.

      We produce real-time data acquisition and analysis systems for multi-channel data in the audio bandwidth and above. Some of our programs have several threads per channel, and on a 128-channel system I believe we have seen over 500 threads running...

      Anything that can allow our software to do more real-time analysis on the captured data without compromising the low-latency display update rates de
    • by NerveGas ( 168686 ) on Tuesday August 15, 2006 @06:10PM (#15914502)
      Photoshop can max out the four cores in my dual dual-core Opteron setup. Admittedly, I don't do that often, but that's still one app which *can*, and that's just a desktop app. Most server-oriented applications, however, are designed to take advantage of multiple CPUs.

      steve
  • Half-Assed... (Score:4, Informative)

    by Nom du Keyboard ( 633989 ) on Tuesday August 15, 2006 @04:14PM (#15913119)
    Perhaps Core 2 will have a very short reign at the top?

    It's half-assed of the /. summary to say the above without even a mention of Kentsfield, which will probably beat AM3 to market with 4 cores in a single package. Next time, give us the whole ass.

  • Asymmetric cores... (Score:3, Interesting)

    by Prof.Phreak ( 584152 ) on Tuesday August 15, 2006 @05:00PM (#15913758) Homepage
    What I'd really like is asymmetric cores... something like a really power-efficient, simple 1MHz core, but when needed, a more powerful 2MHz core steps in... then a 4MHz core, then an 8MHz core... The box can have like 32 cores, each one 2x as fast as the last... (oh, I wish!) while 99.9% of the time you're only using the simple 1MHz one (i.e., how much CPU power does it really take to update the clock?).

    (It doesn't have to start at 1MHz... it could start at 100MHz, jump to a 500MHz 2nd core... a 1GHz 3rd core... and a 2GHz 4th core -- so an idle CPU would use very little power.)

    Besides, most of the time, you won't use the cores equally anyway. You'll likely run 1 "heavy" app (some game), and a few very light ones.
  • by IronChef ( 164482 ) on Tuesday August 15, 2006 @05:10PM (#15913864)
    All this multi-core stuff is great, but is software keeping pace? It's nice to multitask more quickly, but unless I am mistaken that extra core doesn't help when you are playing a 3d game.

    (I read that Unreal's upcoming "Gemini" rendering engine will be multi-threaded on the PS3. Hopefully that'll mean it supports multiple procs on the PC too.)
  • by lcsjk ( 143581 ) on Tuesday August 15, 2006 @06:24PM (#15914635)
    A board design was nearly ready for production when the taping was completed, i.e., taped-out. The same process was used for early IC designs in the 60's and 70's. (Probably also in the late '50's, but I am too young for that.)

    Back in the mid-'60s, people were using black crepe-paper tape (like masking tape but black and stretchy) for laying out PC boards. Being 'stretchy' allowed it to bend around corners. Large sheets of clear film were used and aligned front to back by punching holes in the sheet corners and using 1/4-inch-diameter pins to keep them lined up. Then the board pattern was taped onto the sheets of film, topside on one layer and bottomside on another. A few designs used more layers. Mostly these were 4X actual size. These taped sheets were then reduced in a photo darkroom and used to make a glass photomask of actual size.

    However, alignment remained a problem, so some company came up with the process of using red and blue plastic tape for the front and back sides of the board, and these were both put on the same large piece of 4X plastic sheet. That way the front and back were always in alignment. A red or blue filter was used in the photo lab to expose only one of the colors for each layer.

    The same processes were used for large ICs well into the '70s, and pictures appeared on the covers of various publications when the 6800, 6500, and 8085 processors hit the market. I was not in the semiconductor industry, but I have never read any article that said a board was "taped out" when it was put on magnetic tape for manufacturing. It was nearly always used to tell management that the physical board layout was nearly complete and ready. Sometimes the taping took weeks.

    When large high-resolution computer monitors became available, the red-blue process became obsolete and the board design went straight to magnetic tape for the Gerber plotter. However, I never heard any person refer to this as being "taped out."

  • Naming (Score:3, Funny)

    by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday August 15, 2006 @06:29PM (#15914672) Homepage
    AMD will really miss an opportunity if they call the new chip anything other than Tetrathlon.
