
Intel To Produce 65-Nanometer Chips In 2005 187

Ridgelift writes "In keeping with Moore's Law, Intel will begin mass-producing chips using 65-nanometer process technology in 2005, according to a ZDNet article (additional coverage at EE Times and The Inquirer). Intel recently produced a Static Random Access Memory (SRAM) cell measuring 0.57 square microns, compared to 1 square micron on the 90-nanometer process. 'You can get a 40 to 50 percent increase in clock speed with no further improvements,' says Intel director Mark Bohr."
This discussion has been archived. No new comments can be posted.


  • Intel culture (Score:4, Insightful)

    by BiggerIsBetter ( 682164 ) on Monday November 24, 2003 @07:12PM (#7552349)
    What a beautifully telling Intel quote that is: "You can get a 40 to 50 percent increase in clock speed with no further improvements". Just keep ramping it up, boys.
    • Is that enough? (Score:3, Insightful)

      by nnnneedles ( 216864 )
      50%, hmm.

      Doesn't Moore's Law require a 100% increase every 18 months? Yeah, I know Moore's Law isn't really about speed, but still.

      • For the most part, clock speed != performance.

        Yes, it accounts for a large part of it within the same processor family, but it doesn't scale 1:1.
      • Originally Moore's law stated that transistor density would double every 12 months. That was fairly quickly changed to say that it would double every 18 months. The current "law" states that transistor density doubles about every 24 months.

        Long story short, we haven't really been following Moore's law for a little while, though we do continue to double the amount of bits we can stuff onto a piece of silicon at a fairly rapid pace. Intel's plan to bring out 65nm chips before the end of 2005 continues thi
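The doubling periods discussed above can be made concrete with a quick sketch; the 42-million-transistor baseline is an assumed illustration (roughly a 2000-era Pentium 4), not a figure from the thread:

```python
# Project transistor counts under the three common statements of Moore's law.
# The starting count is assumed purely for illustration.

def projected_transistors(start_count, years_elapsed, doubling_period_years):
    """Exponential growth: the count doubles every doubling_period_years."""
    return start_count * 2 ** (years_elapsed / doubling_period_years)

base = 42e6  # assumed ~2000-era baseline
for period in (1.0, 1.5, 2.0):  # 12-, 18-, and 24-month doubling variants
    count = projected_transistors(base, 5, period)
    print(f"doubling every {period:g} yr -> {count / 1e6:.0f}M transistors after 5 yr")
```

The three "versions" of the law diverge quickly: after five years the 12-month variant projects eight times as many transistors as the 24-month one.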
      • "Yeah I know Moore's law isn't really about speed, but still."

        But still what? You negated your entire point, and then said "but still." What are people supposed to say to you regarding that but still? I am at a loss as to what to say to that. You're comparing 50% speed increase, to Moore's law which describes a 100% increase in the number of transistors, and you acknowledge that this is an apples to oranges comparison, yet you invite further comments on the topic.

        And how on earth did that post get mar
      • Personally I find that GPU/RAM/bus speeds count more and more (or is that moore) compared with simply the CPU.

        Speed does not necessarily drive responsiveness for desktop users.



    • PC Toaster (Score:2, Insightful)

      by DigiShaman ( 671371 )
      With a large enough heatsink, I could put a few slices of bread between the fins. Not only will this new CPU toast your data, but your breakfast too.
    • Isn't that close to what they said about moving to .90? That, uh, didn't happen. The Prescott is coming in at over 100 watts - CASES will need to be redesigned to handle the heat output.

      Intel bet the farm on being able to ramp up clock speed as opposed to making a more efficient chip (a la Opteron), and they're finding it harder to keep up. Take a look at the efficiency of even a Pentium M at 1.3 GHz and you'll see why this is important - at least from a technical standpoint.

      But I guess if your whole mar
  • Bohr? (Score:2, Funny)

    by Anonymous Coward
    Bohr? I wonder if he really knows where the manufacturing plans are or where they're going.
  • Reduce Power? (Score:5, Interesting)

    by brandido ( 612020 ) on Monday November 24, 2003 @07:12PM (#7552358) Homepage Journal
    According to the article,
    Reducing the size of the chip improves performance, reduces costs and can potentially cut energy consumption. In a nutshell, electrons have a shorter commute in 65-nanometer chips, so performance goes up. The gate length--the distance electrons travel to get from the source to the drain on a transistor and thereby flip the transistor on--drops from 50 nanometers to 35 nanometers in 65-nanometer chips.
    However, it was my understanding that power consumption will often go up with smaller geometries, as leakage current increases with the smaller gates. Can anyone elaborate on this?
    • I was going to ask about heat dissipation. When devices get smaller you have a smaller area to shed heat. This issue (which is pretty serious - ask any Athlon owner) wasn't covered in the article.

      Chip H.
      • It's not that they produce a smaller die; they just jam more transistors onto it. Heat dissipation will be about the same, so heatsinks and fans (or water, Peltier coolers, liquid nitrogen, etc.) still apply.
    • Re:Reduce Power? (Score:5, Interesting)

      by John Courtland ( 585609 ) on Monday November 24, 2003 @07:18PM (#7552418)
      There are all sorts of problems when you get that small and fast: EMF interference, gate jumping, electromigration. The thing basically is a small radio transmitter, and starts causing itself problems just by running so fast. They need to really start designing more intelligently, rather than (as a previous poster put it) just "ramping it up".
    • Re:Reduce Power? (Score:5, Informative)

      by addaon ( 41825 ) <addaon+slashdotNO@SPAMgmail.com> on Monday November 24, 2003 @07:20PM (#7552442)
      The relative importance of leakage increases at smaller geometries, but for all geometries on the near horizon, the increase isn't enough to outweigh the decrease in 'normal' (switching) power usage. This will probably change around 40 nm, but at 65 nm we're still making serious improvements.
    • by mbessey ( 304651 ) on Monday November 24, 2003 @07:23PM (#7552467) Homepage Journal
      On an individual-gate basis, smaller gates use less power, since there's less capacitance at the gate to charge or discharge. Of course, smaller gates mean more components in a given area, which increases power consumption.

      These two effects should just about cancel out, since gate capacitance increases with the square of the feature size, and the number of gates drops at the same rate.

      Which leaves you with the other effects (including leakage), which are all worse with smaller gates. So, a maximum-size part will have a higher power consumption on a smaller process, but if you took an existing design (like a Pentium 4) and rebuilt it on a smaller process, you should get a lower power consumption (and smaller/cheaper die size).
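The cancellation argument above can be sketched numerically; the quadratic scaling exponents are the poster's first-order model, not real process data:

```python
# First-order model from the comment above: per-gate switching power tracks
# gate capacitance (~ feature_size^2), while gate density tracks
# 1 / feature_size^2, so total switching power for a fixed die area is flat.

def relative_die_switching_power(feature_nm, reference_nm=90.0):
    scale = feature_nm / reference_nm
    per_gate_power = scale ** 2        # capacitance ~ (feature size)^2
    gate_density = 1.0 / scale ** 2    # gates per area ~ 1 / (feature size)^2
    return per_gate_power * gate_density

print(round(relative_die_switching_power(65.0), 6))  # → 1.0: the effects cancel
```

Under this model the shrink is power-neutral for a full-size die, which is why the leakage and other second-order effects the poster mentions end up deciding the outcome.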

      • I think the issue, which addresses the originally quoted remark, was that all things being equal, making an equivalent to the P4 on the smaller process would create the performance advantage.

        The problem becomes, however, that as they can shrink the processor, they can also pack in even more features. Thus, you have processors which are faster, have hyperthreading, predictive caching, etc, but are somewhat the same size as the 486 (relatively speaking).

        Which comes back to the issues posted in the

    • I think it's the opposite: since electrons have less distance to travel, they are less likely to "leak," and thereby have lower power consumption and generate less heat.
      • Re:Reduce Power? (Score:3, Informative)

        by mlyle ( 148697 )
        Nah, it's actually the opposite of that.

        Since electrons have less distance to travel, the resistance of the dielectric is less, and more will leak. In extreme cases, for very small geometries, quantum tunnelling becomes an issue as electrons disappear on one side of the gate and appear on the other.

        But as other posters said, leakage is currently still fairly insignificant compared to the huge WOOOSH of power that goes into the chip when things switch. Although leakage is now becoming more important for
    • Re:Reduce Power? (Score:3, Interesting)

      by batura ( 651273 )
      While leakage is a big problem, it's not as big as the power used per switching transistor, which is P = C * F * Vdd^2. This is the power consumed when the transistor goes from its high to its low state, and reducing the distance between gates reduces the capacitance of the wire. At really high frequencies, you can make any wire look like a capacitor, so it's important to reduce the length of wire you're using.
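That formula is easy to check with a quick sketch; the capacitance, frequency, and voltage values below are made-up illustrations, not real chip figures:

```python
# Dynamic (switching) power from the formula quoted in the comment above:
# P = C * F * Vdd^2. All input values are illustrative.

def switching_power(capacitance_farads, frequency_hz, vdd_volts):
    return capacitance_farads * frequency_hz * vdd_volts ** 2

# Halving the effective switched capacitance (e.g. shorter wires after a
# shrink) at the same frequency and voltage halves the switching power:
p_before = switching_power(20e-9, 3.0e9, 1.5)
p_after = switching_power(10e-9, 3.0e9, 1.5)
print(round(p_before, 3), round(p_after, 3))  # → 135.0 67.5
```

Note the quadratic voltage term: dropping Vdd buys more than dropping C or F by the same factor, which is why the process-node voltage reductions discussed elsewhere in this thread mattered so much.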
    • Re:Reduce Power? (Score:3, Informative)

      by rsmith-mac ( 639075 )
      At this point, without some sort of additional chip technology (SOI, tri-gates, etc.), it seems very likely that power consumption will stay even, if not go up. Every new scale of technology is a bigger problem to make work, as the (known) laws of physics aren't moving with it, posing a very absolute barrier. Whereas 350nm, 250nm, and even 180nm went off without a hitch (with companies even managing to stick some Cu in there), 130nm was a big problem for AMD and TSMC (makers of Nvidia's GPUs
      • Re:Reduce Power? (Score:3, Informative)

        There are a number of things going on here, but a few important points to keep in mind.

        First off, with previous shrinking of the manufacturing process you could run the processor at a lower voltage. Most 500nm chips ran at 3.3V, 350nm chips ran at 2.8V, 250nm chips ran at 2.0V, 180nm chips ran at 1.75V and 130nm chips now run mostly at 1.55V. As you can see pretty quickly though, the difference in voltage isn't as much as it used to be, and with 90nm production, that difference is pretty much zero, most 90nm
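Taking the per-node voltages quoted above at face value, a quick sketch shows how much switching power the Vdd² term alone saved at each shrink (switching power scales with Vdd², per the formula quoted elsewhere in this thread):

```python
# Relative switching-power saving from the voltage drop alone (P ~ Vdd^2),
# using the per-node supply voltages quoted in the comment above.

node_voltages = [(500, 3.3), (350, 2.8), (250, 2.0), (180, 1.75), (130, 1.55)]

for (node_a, v_a), (node_b, v_b) in zip(node_voltages, node_voltages[1:]):
    saving = 1.0 - (v_b / v_a) ** 2
    print(f"{node_a}nm -> {node_b}nm: {saving:.0%} less switching power from Vdd alone")
```

The per-shrink saving falls from roughly half (350nm to 250nm) to about a fifth (180nm to 130nm), which illustrates the comment's point: with the 90nm voltage drop near zero, that free power saving is gone.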
  • Moore's Law (Score:5, Insightful)

    by worst_name_ever ( 633374 ) on Monday November 24, 2003 @07:13PM (#7552366)
    In keeping with Moore's Law

    Well, more like "keeping Moore's Law a self-fulfilling prediction for yet another generation of processors". ;)

  • by Timesprout ( 579035 ) on Monday November 24, 2003 @07:15PM (#7552392)
    The gate length--the distance electrons travel to get from the source to the drain on a transistor and thereby flip the transistor on--drops from 50 nanometers to 35 nanometers in 65-nanometer chips.

    For all those lazy or out of condition electrons out there, they only have to travel 35 nanometers now to get some work done.
  • by Selecter ( 677480 ) on Monday November 24, 2003 @07:15PM (#7552393)
    But it seems to me a rather premature announcement, supposing these chips will be out when Intel says they will be. I think Intel is starting to feel the heat from quarters they didn't expect, like AMD and Apple via the good graces of IBM. Athlon 64 looks like a winner and so does the IBM-made G5. IBM and AMD both have great-looking roadmaps for the future.

    This smells like another smear piece by Intel to me, kinda like paper-launching the P4 Emergency Edition on AMD's rollout day for the Athlon 64.

    Boo. Hiss.

    • Why is it premature? They announced they had managed to get the 65-nanometer process working and intend to put it into full-scale production in a couple of years. Perfectly normal behaviour, and the timescales seem rational. I'm sure Intel shareholders and the IT community in general like to be kept up to date with what Intel is working on.

      What do you expect them to do, develop the process and then scrap it? Or maybe keep schtum for a couple of years and then suddenly start rolling 65 nm chips out the
      • by Selecter ( 677480 ) on Monday November 24, 2003 @07:35PM (#7552563)
        I think it's premature to build a few test SDRAM cells and then magically announce they are going to build chips using that tech in possibly less than a year and a month. It's far more likely they will not meet that target, given the real hurdles to overcome in fully implementing that process. A few SDRAM cells does not a P5(6?) make.

        Also, someone is not telling the truth.

        "The 65-nanometer chips will not include the IBM-touted silicon-on-insulator technology, either. "We have not seen any significant performance advantages with SOI," Bohr said."

        Well, who is it? IBM and AMD are going with it. Who's wrong, Intel or IBM/AMD? I'd like to know.

        • It might not be a matter of who is right or who is wrong. Things aren't always that black and white. Sometimes different approaches can yield comparable results.

          Of course, it's fashionable to bash Intel around here. If it were AMD announcing this, the fanboys would be lining up for their fr1st pr0st proclaiming it the Second Coming ;)
        • by Erich ( 151 ) on Monday November 24, 2003 @07:52PM (#7552681) Homepage Journal
          SOI has a much different design methodology. If you are Intel and have a really great design flow for non-SOI, it may not be as simple as "just go to SOI."

          Also, for complete systems, SOI has a problem in that memory density tends to be much lower... so your caches have to be smaller if they are on-chip.

        • "a few SDRAM cells"? 4 million is more than a few. And they're SRAM cells, not SDRAM. Kind of stuff you need for cache. It does give a pretty good indication of how well along the process is.

          BTW, this is apparently being done at the fab known as D1D in Hillsboro - this isn't a small scale research lab, it's a full size production fab. That it is being done there indicates it isn't as far away as you might think.

          As for your comment about SOI, why does it need to be so black and white? It's always a judge

    • This smells like a another smear piece by Intel to me

      ...and you smell like a troll.
      Intel has announced they've made an advancement in technology and all you can do is bash them. Someone had to break the 65nm "barrier" and you're such a fanboy that you can't stand that it was Intel. Grow up.
      I'm quite sure the Opteron will eventually benefit from a die shrink as well.

      Who modded this idiot up? Shame on you.
  • by Anonymous Coward on Monday November 24, 2003 @07:15PM (#7552397)
    "You can get a 40 to 50 percent increase in clock speed with no further improvements" says Intel director Mark Bohr."

    Yeah, I get those "40 to 50 percent increase" emails all the time...I've been deleting them as fast as they come in.

    Ohhhhhh...wait.... He said CLOCK, not COCK
    nevermind :-)
  • Terrific (Score:2, Funny)

    by ActionPlant ( 721843 )
    So does this mean, with 60nm tech, the die can be four times as large with an increase of 500% power? If we're moving from 90nm to 60nm, in the same die size that effectively puts us at a 30% efficiency increase. Times four (heck, just add more layers if you need more circuits!)...well, I'm hoping this means we see 20Ghz chips in time for Longhorn's launch. Watch it crash in 1/5 of the time!!

  • That was a gauntlet to the face and no mistake. AMD have just announced a new Fab in Dresden, remember, at 90nm....

    • Not really. (Score:4, Insightful)

      by TCaM ( 308943 ) on Monday November 24, 2003 @07:28PM (#7552506) Homepage
      From all I have read, the new AMD fab, like most any other, will start out at a given process size, likely 90nm in this case, but will be ramped down, so to speak. Do you really think they are buying nearly a billion dollars' worth of equipment that isn't in any way upgradeable? Do you think Intel builds entirely new fabs for each new process and just takes the wrecking ball to the old ones?

      Also, given that Intel still isn't shipping anything in quantity at 90nm, I take the 65nm claims with a grain* of salt.

      *the process size of said grain may vary
      • I worked at Intel - and it is a known fact (at least it has been previously) that it is cheaper to build a completely new fab than to re-tool an existing one.

        Fabs cost multiple billions, but it costs even more to dismantle and re-tool a fab for completely new machines, hardware and processes in production.

        • Much of this depends on the initial design of the fab, I would think. Also, large corporations quite often have a "throw away and start fresh because it is cheaper" mentality - not because it actually is cheaper, but because of things like tax loopholes and depreciation. Also factor in that, as the jumps between process sizes become more frequent, companies are looking further forward and designing fabs with these transitions in mind.

          It is expected that AMD's new fab in Dresden will work with 300mm wafers on 65nm and
      • You know, with better design, it shouldn't matter. If all Intel is planning to do is scale their P4 another generation then I say, 'Big Whoop!'

        Look at how the Opteron is kicking ass at only 2.2 GHz! Or for an even more painful example, look at the Pentium M at 1.3 GHz. Unbelievable performance if you want it. But Intel seems hell bent on clock frequency and that's exactly what you get with the P4 designs.

        Keep in mind though, ATi totally ruled Nvidia this year with their 9800 Pro design and you know, it's
    • Re:Ouch! (Score:4, Insightful)

      by WinterSolstice ( 223271 ) on Monday November 24, 2003 @07:31PM (#7552534)

      The plant in Dresden will actually work, producing actual chips. This bit from Intel is just vapor at this point.

      Besides, Intel will have to re-tool, debug, and market anyway. It's not like AMD will be any different.

      • I'd hate to burst your bubble, but Intel happens to already be manufacturing the chips successfully.
        • Don't think so. 2005?

          Right now, they probably have at best development versions that are extremely expensive and seriously low yield. They have a long way to go before it will work well enough to make money. I'm sure AMD will be along about 6 mos after Intel (if the chips sell well) and then by late 2005 there will be a new IBM PowerPC chip anyhow.

  • by Not_Wiggins ( 686627 ) on Monday November 24, 2003 @07:19PM (#7552435) Journal
    But you'll also be incurring greater magnetic-field interference. Heck, the thing will also generate more heat, as driving current through smaller traces creates more "friction"; the chip might break itself simply under thermal load.

    Just because you can make it smaller, doesn't mean it'll function properly. There's a theoretical limit to how small traces can go before the interference makes signaling impossible.

    I can't wait to see how many processors get "down-binned" once they ramp up production with this tech. 8/
    • Pretend you're an electron in a hydrogen atom. Now imagine how large the proton is that you orbit all day. It's so huge you could probably live on its surface, along with 1000 other electrons, in your own electron village. But protons don't like that, so they push all you lazy dirty electrons away and make you get a job.

      Anyway, to make a long story short, electrons push each other around in a wire, so things like AC work. And they're really tiny, so the wires can be made really, really small. In fact, mo
    • You've just got to make the wires really straight so the electrons can go ballistic...

      Oh, we aren't that small yet? Well, give them time. :-)
  • by Anonymous Coward on Monday November 24, 2003 @07:19PM (#7552436)
    If they were really thinking ahead, they should have tried for 64 nanometers. Then, when the chip size halves every few years according to Moore's law, it can stay a whole number of nanometers for a few more years yet.
  • Moore's "Law"? (Score:3, Insightful)

    by nacturation ( 646836 ) <nacturationNO@SPAMgmail.com> on Monday November 24, 2003 @07:20PM (#7552446) Journal
    I've always wondered why it's called Moore's Law. After all, it's not something which is mathematically provable. You'd figure computer scientists and systems engineers would be a bit more rigorous and call it Moore's Theorem, Moore's Axiom, or Moore's Postulate (I'm not sure what the best terminology is for this kind of conjecture). Granted, it has been approximately held, but there's no underlying reason why processor speed couldn't increase by an order of magnitude in a few months given the right implementation.
    • I've always wondered why it's called Moore's Law.

      It's called Moore's Law because the guy at CompUSA would get funny looks if he said Moore's Theorem. Oftentimes you must dumb down your speech and use improper or vague terms to be understood.

      Sad and true, a winning combination!

    • by taradfong ( 311185 ) * on Monday November 24, 2003 @07:39PM (#7552594) Homepage Journal
      Similarly, "Murphy's Law" was supposed to be called "Murphy's Axiom" but something got screwed up.
    • I'd be more comfortable with them calling it "Moore's Observation."

      Or, to put it more directly, "Moore's Observation of a Small Sample of the Overall Computing Power Increase In A Specific Timeframe... Limited Application!" 8)

      Maybe then it can go away... as it should.
      • I'm fairly sure they've been using Moore's Law as an excuse to delay the release of technologies long since working in the lab for quite a while now. Do you really think Intel flies by the seat of its pants, praying AMD won't discover technology that boosts chip speeds to 200 or 300GHz? Yeah right, both sides are already WAY beyond what's being released; they are only ramping up what they have to. They compete at the high end, sure, but it's in terms of how long they can make their predeveloped technology la
    • Certainly not Theorem - the definition [wolfram.com] of a theorem is that it can be proven to be true. And not Axiom either - an axiom [wolfram.com] is typically a statement that provides the base for a mathematical system (like "There exists an empty set"). Postulate is, AFAIK, just a synonym for axiom.

      One could call it "Moore's conjecture", "Moore's observation", or "Moore's prediction" if one wants to be strict. The use of the term "law" is not completely wrong, however; there are other examples of the term being used for things
    • The term is correct -- see here [auckland.ac.nz] for details. Within the realms of science, a law is specifically a generalization that may be made based *on observation*. Moore was making a generalization based on past observation. He was not making a theoretical claim about the future. All this is quite proper for a law. As a matter of fact, if he was hypothesizing that processor speeds will stop doubling in the next twenty years, that would be a hypothesis, since it's not a generalization based on a body of observat
    • You'd figure computer scientists and systems engineers would be a bit more rigorous and call it Moore's Theorem, Moore's Axiom, or Moore's Postulate (I'm not sure what the best terminology is for this kind of conjecture)

      Axiom: Unprovable assumption - basic assumption from which you build others? No.
      Theorem: Result based on axioms, through a rigorous proof? No.
      Postulate: Generally used about an assumption made in a proof. Like, if we postulate that result A is true, this leads to result B. No.
      Law: Typicall
    • If any corporation in a capitalist system discovers a way to make a CPU run at Y GHz, it would do its best to slowly increase clock rates from the current X GHz up to Y so it can maximize profits.

      There is no incentive to rush unless you have competition. And we're talking top of the line CPUs here. Competition would need several billion dollars and some really smart people to even dream of competing.

      Just look at the Alpha. Where would we be today if we could have prevented DEC from being boug
    • I've always wondered why it's called Moore's Law.

      Because it rhymes.

    • Well, what's a Law?

      Newton's Laws of Motion are only true within measurement errors at low speeds and relatively low masses.

      Boyle's Law only applies to a nonexistent ideal gas; it does not apply to any gas in actual existence. Ohm's Law requires an ideal conductor.

      Bode's Law breaks down at Neptune (if you count Ceres, the largest asteroid, as a planet), and only works approximately. Zipf's Law holds true in vast numbers of things (commonality of words, city sizes, web traffic . . .), but there doesn't s
  • by mackman ( 19286 ) on Monday November 24, 2003 @07:27PM (#7552498)
    "Leakage, the unintentional dissipation of electricity, among other phenomena, can also inadvertently raise memory consumption." I would have to disagree, unless they're watching Johnny Mnemonic.
  • Cool, but... (Score:4, Insightful)

    by EverDense ( 575518 ) on Monday November 24, 2003 @07:27PM (#7552499) Homepage
    Wouldn't Moore's Law have failed by now without AMD competing for market share?
    • by moehoward ( 668736 ) on Monday November 24, 2003 @07:46PM (#7552634)
      No. Intel is always competing with itself. They want to make their products obsolete as soon as possible so that people upgrade.

      Please mod parent back down, as I have made him look foolish.
      • Why is the parent modded "funny"?

        Obsoleting their own product is EXACTLY what Intel's business plan is. Why else would someone buy a new chip if their current chip is good enough?
    • Wouldn't Moore's Law have failed by now without AMD competing for market share?

      No, it wouldn't have. They have the biggest competitor there is: their own product from last year. They need to keep improving and speeding up so that people will see a reason to get rid of their older, perfectly usable computer and replace it with the new model. This is the premise the entire industry is built on.

      Admit it, the machines that most people already have are, in most cases, fast enough to do the job they got them for
    • What about IBM and Motorola? RAM manufacturers are also looking for smaller process sizes, not to mention graphics-processor makers like NVIDIA and ATI.
    • Re:Cool, but... (Score:2, Insightful)

      by dustinmarc ( 654964 )
      Wouldn't Moore's Law have failed by now without AMD competing for market share?

      I don't think this is because of AMD. I would attribute it more to the fact that Gordon Moore, the creator of Moore's Law, is a co-founder of Intel and currently chairman of the board. It's probably more a matter of Intel employees trying not to upset the boss by keeping up with what he obviously feels is the appropriate rate of growth in the number of transistors on a chip.
  • Translation: (Score:4, Insightful)

    by raehl ( 609729 ) <raehl311@yahoo.GAUSScom minus math_god> on Monday November 24, 2003 @07:41PM (#7552611) Homepage
    "You can make an 80% to 100% price increase without any further improvements."
  • Don't you mean a 65mm wafer size? Oh wait, I'm thinking of another article [slashdot.org].
  • by Alomex ( 148003 ) on Monday November 24, 2003 @07:45PM (#7552633) Homepage

    The superconductor industry has detailed plans which are known and set several years in advance.

    If 65nm technology is possible, actual design specs have already been approved and work has already started on the design of a fab facility. So there is no speculation in the report.

    • I'm virtually certain that you meant semiconductor.

      I'm also virtually certain that IBM has press-releases concerning nanotube-based transistors, which I'm actually certain has nothing to do with design rules and designing fabs. This smells like nothing more than a paper release, similar to earlier releases/predictions that the semiconductor industry would be standardized on 300mm wafers by now, which has failed to materialize.

      • I'm virtually certain that you meant semiconductor.

        Yup. Brain typo.

        I'm also virtually certain that IBM has press-releases concerning nanotube-based transistors, which I'm actually certain has nothing to do with design rules and designing fabs. This smells like nothing more than a paper release,

        Slow down, cowboy! I'm not saying it is true. My point is that IF 65 nm mass-produced chips for 2005 are at all possible, then by this time this technology has to be well past the speculative stage and well i
  • Questions. (Score:3, Insightful)

    by Veramocor ( 262800 ) on Monday November 24, 2003 @07:57PM (#7552719)
    1. Approximately how many silicon atoms are in a nanometer?

    2. What's the likely minimum number of atoms you need for a transistor? Would switching materials affect that limit?

    Given these two, it should be easy to predict the smallest transistor size, and thus when Moore's Law has to end.
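A rough stab at both questions, using the silicon lattice constant of about 0.543 nm (a standard figure not given in the thread); the ten-plane "minimum gate" cutoff below is purely an assumed illustration:

```python
# Back-of-the-envelope answers to the two questions above.

SI_LATTICE_NM = 0.543  # silicon cubic unit-cell edge (standard textbook figure)

# 1. Unit cells per nanometer; along <100> the atomic planes sit a/4 apart,
#    so there are only a handful of atomic planes per nanometer of gate length.
cells_per_nm = 1.0 / SI_LATTICE_NM
plane_spacing_nm = SI_LATTICE_NM / 4.0

# 2. If we assume (illustration only) that a working gate needs ~10 atomic
#    planes, the floor on gate length is:
min_gate_nm = 10 * plane_spacing_nm

print(round(cells_per_nm, 2))   # → 1.84 unit cells per nm
print(round(min_gate_nm, 1))    # → 1.4 nm floor under these assumptions
```

With the article's 35 nm gate length, that assumed floor is still about 25x away, which gives a crude sense of how many shrinks remain before atomic granularity bites.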
  • Useless metrics (Score:2, Interesting)

    by tttonyyy ( 726776 )
    Hmm... I'm having trouble visualising 0.57 microns square. Let's see - even with these reduced cell sizes, you'd need 3600 square meters (half the size of a football pitch) of SRAM to have one bit per person in the world [osearth.com].

    Assuming a constant 50W/sqr.mm [overclockers.com], that'd be 180GW of heat. Someone find me a heatsink for that baby!

    • Re:Useless metrics (Score:2, Informative)

      by Bender_ ( 179208 )
      Hmm... I'm having trouble visualising 0.57 microns square. Let's see - even with these reduced cell sizes, you'd need 3600 square meters (half the size of a football pitch) of SRAM to have one bit per person in the world.

      Yes, I know it's the fault of the metric system; everything would have been easier with mils, Angstroms and square feet.

      But the correct result is 0.0036 m^2. Does a gigabyte of DRAM (= 8 billion bits), which is obtainable in today's technology, take up a football pitch? No!
    • by FuzzyDaddy ( 584528 ) on Monday November 24, 2003 @08:41PM (#7553056) Journal
      1 square meter = 1 meter*1 meter = (10^6 micron * 10^6 micron) = 10^12 square microns.

      1 square meter is NOT 10^6 square microns.

      But bonus points for being the first one to make this mistake in this thread; someone always does.
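Redoing the grandparent's arithmetic with the correct conversion (the 6.3-billion world-population figure is an assumed rough 2003 estimate):

```python
# 1 m = 1e6 microns, so 1 m^2 = 1e12 square microns -- the conversion above.

CELL_AREA_UM2 = 0.57       # 65 nm SRAM cell area from the article, square microns
WORLD_POPULATION = 6.3e9   # assumed rough 2003 figure

total_um2 = CELL_AREA_UM2 * WORLD_POPULATION
total_m2 = total_um2 / 1e12
print(round(total_m2, 4))  # → 0.0036 m^2, not 3600 m^2
```

The original estimate was off by exactly the 10^6 factor at issue: square meters to square microns is (10^6)^2, not 10^6.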

  • by David Jao ( 2759 ) * <djao@dominia.org> on Monday November 24, 2003 @08:26PM (#7552947) Homepage
    Look, I am not a chip fabrication expert. I am merely a sideline observer. But based on my observations, Intel will probably not make it to 65nm in 2005.

    My position is based on nothing more than simple counting:

    • Intel achieved 250nm process technology (deschutes) in January 1998 [com.com]
    • ... 180nm (coppermine) in October 1999 [com.com], although availability was scarce until January.
    • ... 130nm (northwood) in January 2002 [zdnet.co.uk]
    • ... 90nm (prescott) is not out yet, although it is supposed to be out in fourth quarter 2003 [com.com]. I'm going to go out on a limb here and predict January 2004.
    Their track record is clear: the average time between circuit size improvements is two years. Based on their history, 2005 would be a stretch, with the most likely release date falling somewhere in early 2006.
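The counting argument above, made explicit (90 nm is entered as January 2004, per the poster's own prediction, not a confirmed date):

```python
# Average gap between Intel process shrinks, from the ship dates listed above.

ship_years = [1998.0, 1999.75, 2002.0, 2004.0]  # 250, 180, 130, 90 nm

gaps = [later - earlier for earlier, later in zip(ship_years, ship_years[1:])]
average_gap = sum(gaps) / len(gaps)

print(average_gap)                   # → 2.0 years between shrinks
print(ship_years[-1] + average_gap)  # → 2006.0, the extrapolated 65 nm date
```

The individual gaps (1.75, 2.25, and 2.0 years) cluster tightly around two years, which is what makes the "early 2006" extrapolation more plausible than Intel's 2005 target.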
    • I believe that the first application Intel will use on the 90nm process is the Prescott-core CPU, the replacement for the Pentium 4. My guess is that Intel will call this new chip the Pentium 5 when it is officially unveiled early in 2004.

      Maybe this is the reason why we haven't seen Service Pack 2 for Windows XP or Service Pack 5 for Windows 2000--they will incorporate new code that will take full advantage of the additional multimedia instructions offered by the Prescott-core processor.
  • by doormat ( 63648 ) on Monday November 24, 2003 @08:44PM (#7553067) Homepage Journal
    Look at all the problems they are having with the 90nm process right now. That thing is leaking current like you wouldn't believe. Power dissipation is 90-100W. Heat is a big issue. I'm thinking something is going to have to happen to lower current big time. Remember, that's 100W at 1.3V or so, for 77A, whereas the current P4s use 70W or so at 1.5V, for 47A.
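The current figures in this comment follow directly from I = P / V:

```python
# Supply current implied by the power and voltage figures quoted above.

def supply_current_amps(power_watts, vdd_volts):
    return power_watts / vdd_volts

print(round(supply_current_amps(100, 1.3)))  # → 77 A for the 90 nm part
print(round(supply_current_amps(70, 1.5)))   # → 47 A for the current P4
```

It also shows the comment's underlying worry: lowering Vdd cuts switching power, but at fixed power it pushes supply current up, and 77 A through a CPU socket is a serious power-delivery problem in itself.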
  • So.. When Doom III or any other OpenGL based game comes out, will it be listed?

    I'm half tempted to see what games it might list for me right now, but it doesn't seem to be available with Mozilla Firebird..
  • I dunno how many people really appreciate the incredible contributions that Intel has made.

    I recently learned that their 3GHz processors have a 1.2nm (12 Angstrom) gate oxide thickness. I'm not exactly calibrated, but it can't be more than a Si atom connected to an oxygen connected to a Si atom connected to an oxygen along the thickness direction. And this is *consistently* done across a 300mm wafer (~1 foot!). It's just insane!
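A sanity check on the "few atoms thick" claim, using a textbook Si-O bond length of about 0.16 nm (an assumed figure, not from the comment):

```python
# How many Si-O bond lengths fit across a 1.2 nm gate oxide?

SI_O_BOND_NM = 0.16  # approximate Si-O bond length (assumed textbook value)
OXIDE_NM = 1.2       # gate oxide thickness quoted in the comment above

bonds_across = OXIDE_NM / SI_O_BOND_NM
print(round(bonds_across, 1))  # → 7.5 bond lengths, i.e. only a few atomic layers
```

Roughly seven or eight bond lengths across: a little thicker than the comment's two-bond picture, but still only a handful of atoms, which supports the broader point.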

  • I'm sure as hell not going to buy one. The heat issue makes me nervous, and electricity costs money. Am I going to have to call up an electrician to install a dedicated 240-volt circuit just to run a computer? I don't think so. I just don't need it that bad.

    Do not make the cores any more complicated; just shrink them and run them at a lower voltage. Now put 8 to 16 cores spaced out in one package. Same power consumption, more computational power. And since you don't need to run the chips at higher v

  • It's a great accomplishment that 65 nm chips are sampling, but projecting out into the future has always struck me as silly. "Will produce in 2005" is rather like:

    "Cisco to roll out Gigabit WiFi in 2009",

    "AMD to sell x86-96 chips in 2010",

    "Microsoft Longhorn Will Read Your Mind in 2008"

    "Dell Credit-Card Server to Eliminate Cash"
