
Intel Finds Moore's Law's Next Step At 10 Nanometers (ieee.org) 182

An anonymous reader writes: Sometime in 2017, Intel will ship the first processors built using the company's new, 10-nanometer chip-manufacturing technology. Intel says transistors produced in this way will be cheaper than those that came before, continuing the decades-long trend at the heart of Moore's Law -- and contradicting widespread talk that transistor-production costs have already sunk as low as they will go.

In the coming years, Intel plans to make further improvements to the design of these transistors. And, for the first time, the company will optimize its manufacturing technology to accommodate other companies that wish to use Intel's facilities to produce chips based on ARM architecture, which is nearly ubiquitous in modern mobile processors.


Comments Filter:
  • When will someone throw down and bring down or at least challenge Intel's monopoly...
    • by Anonymous Coward

      Intel doesn't have a monopoly; there is AMD. I would suggest buying all-AMD chips to help support AMD's efforts.

    • Not sure I would say Intel has a monopoly, but there is a huge capital cost involved in adopting each new generation of fabrication facilities, to the point where very few companies can take a seat at that table. That is why most chip-design companies outsource their fabrication to one of the companies with the required technical facilities.

    • When will someone throw down and bring down or at least challenge Intel's monopoly...

      Being several process-node generations ahead of one's competitors, like TSMC or GSMC or other fabs, is not a monopoly. Intel has consistently invested its money in state-of-the-art fabs, and it happens to have a big volume driver without becoming a commodity business, like memory. This is what the old US businesses used to do: invest cash into enhancing their company's value

      If you are talking about the x86/x64 ISA and related patents that allow a company to make x86 CPUs,

    • Samsung, which has a stake in GlobalFoundries, will break Intel's monopoly over expensive chip fabrication. In fact, AMD's new Ryzen x86 chip is built by them, and so are its 14 nm GPUs and even Nvidia's. Cell phones beat Intel as a result :-)

      So the monopoly ride is soon over. ARM is the new king nowadays anyway

  • One cannot imagine how freaking tired I am of hearing about Moore's Law. There is no law; there never has been one. There was merely an observation that the number of transistors doubled every 18 months or so.

    Whoever decided to call this observation a law must forever be held up to shame, along with everyone who keeps repeating this nonsense.

    • Re:No Moore's Law (Score:5, Insightful)

      by radarskiy ( 2874255 ) on Monday January 02, 2017 @10:28AM (#53592067)

      That's what a scientific law is: a relation between measured observations. It can be purely empirical.

      There's a law for centrifugal force, and it isn't even a real force!

      • Moore's Law is as much fiat as observation. "Transistor density of integrated circuits shall double every 18 months. Make it so!"

        • When it costs on the order of 10^10 USD to put up just one factory, you're going to heavily incentivize any equipment supplier who is a one-month outlier on the schedule.

    • Murphy will eventually supersede Moore.
      Just sayin...

    • I attended a course by Yale Patt of UT Austin, who is one of the "popes" of computer-architecture research (see this ranking [sigmicro.org], for example), where he discussed Moore's Law.

      He argued that Moore's Law is neither a physical law nor a technological or market-driven one, and yet it is a real law and has had a very large impact.

      Instead, he argued very accurately that Moore's Law was actually a psychological Law: given that it provided the baseline for the expected performance (or transistor count) increase, every company

    • You could say the same about Ohm's law - it was just the empirical observation that current through something was directly proportional to the voltage across it. And that's not always true, but it's true widely enough for it to be a useful law.

      • by dbIII ( 701233 )
        No you can't. One is physics and the other was a guy called Moore setting a long term goal.
        • The original post was "there's no law, there's never been one. There was a mere observation".

          Ohm's law was also a mere observation at the time it was made. There was no theoretical understanding behind it. That didn't stop it being called a "law".
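The empirical character of Ohm's law is easy to demonstrate: measure voltage/current pairs for an ohmic device and the ratio V/I comes out constant, and that constant is all the "law" asserts. A minimal sketch (the measurement values below are invented for illustration):

```python
# Ohm's "law" as a pure empirical observation: for an ohmic device,
# the ratio V / I of measured pairs is constant, and we name it R.
# The measurement values here are invented for illustration.
measurements = [(1.0, 0.010), (2.0, 0.020), (5.0, 0.050)]  # (volts, amps)

ratios = [v / i for v, i in measurements]
resistance = sum(ratios) / len(ratios)
print(f"R = {resistance:.0f} ohms")  # prints "R = 100 ohms"
```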

    • So I am sure you are tired of hearing about Newton's laws too then.
  • Yeah right... (Score:3, Insightful)

    by klingens ( 147173 ) on Monday January 02, 2017 @09:25AM (#53591869)

    Moore's Law isn't dead; that's why Intel already has its third 14 nm CPU family and is planning another one, Coffee Lake, at 14 nm before moving on to 10 nm. Intel isn't making four different CPU families at 14 nm because the process works so well and is so cheap.

    The first 14 nm part, Broadwell, shipped in 2014, abysmally late and badly underperforming, and the first 10 nm part is expected in 1H 2018. They may sample a few trial wafers in 2017, but no chips will be sold. Four years between nodes is not what Moore's Law promised back then, and the tick-tock model is dead and buried as well.

    IEEE Spectrum reads more like Popular Mechanics with this much paid cheerleading bullshit.

    • Re:Yeah right... (Score:5, Informative)

      by Tough Love ( 215404 ) on Monday January 02, 2017 @10:22AM (#53592047)

      Moore's law has been decelerating for a long time but is far from dead. What's really surprising is how far 193 nm deep-UV lithography has been pushed, when everybody thought EUV would be needed long ago. Feature sizes are now way below the exposure wavelength, a nice trick, and even below the 13.5 nm EUV wavelength. EUV will probably be used for 5 nm nodes, and nanoimprint might take over when EUV reaches its limits. And that is while staying with silicon: a 1 nm [arstechnica.com] transistor (gate size) has already been demonstrated, and it won't stop there.
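The point about printing features far below the wavelength can be made concrete with the Rayleigh resolution criterion, half-pitch ≈ k1 · λ / NA. A rough sketch using commonly cited figures for 193 nm immersion lithography (the NA and k1 values are assumptions, not Intel's numbers):

```python
# Rayleigh criterion: printable half-pitch ~ k1 * wavelength / NA.
# Assumed figures: 193 nm ArF source, water-immersion NA of 1.35,
# and an aggressive k1 of 0.28 -- typical published values, not Intel's.
wavelength_nm = 193.0
numerical_aperture = 1.35
k1 = 0.28

half_pitch_nm = k1 * wavelength_nm / numerical_aperture
print(f"single-exposure limit: ~{half_pitch_nm:.0f} nm half-pitch")  # ~40 nm
```

Anything finer than that single-exposure limit, such as a 10 nm node's tightest pitches, requires multi-patterning, which is exactly why the industry kept expecting EUV to take over.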

    • What is the diameter of a silicon atom in nm? Anybody know?

      • first page in a google search says "The atoms used in silicon chip fabrication are around 0.2nm"
      • Google says ~0.11 to 0.15 nm, depending on if you use atomic radius or covalent radius. That's 1.1 to 1.5 Angstroms.
      • by dbIII ( 701233 )
        That's not going to be the number, because each junction is more than a single atom.
        Around 1998 a guy in a university lab I worked at made a diode out of a single atomic layer of gallium arsenide on a silicon substrate, and he was nowhere near the first; but making a lot of junctions in the right places is a hell of a lot harder than putting a very thin coating on something.
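Dividing feature sizes by the roughly 0.2 nm atomic diameter quoted above gives a crude atoms-across-a-feature count; as the comment notes, real junctions involve many atoms, so this is strictly back-of-the-envelope:

```python
# Crude atoms-across-a-feature estimate using the ~0.2 nm atomic
# diameter quoted above. Node names no longer match any single
# physical dimension, so treat this as illustration only.
atom_diameter_nm = 0.2
atoms_across = {size: size / atom_diameter_nm for size in (14, 10, 1)}

for size_nm, atoms in atoms_across.items():
    print(f"{size_nm:>2} nm feature ~ {atoms:.0f} atoms across")
```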
  • It is speed that matters. In the old days, when we made transistors smaller, they got faster. That hasn't happened in a long while now.
    • by Mashiki ( 184564 )

      That's because we're hitting multiple problems. We have heat, die size, and electrical limitations(bleed over in the substrates). It means in the end, that having multiple physical cores on one chip is the only direction that things will be going until those other problems can be solved. There's also the other issues with memory across the system bus being too slow and causing problems. HBM solves some of those issues, but it's still too cost prohibitive to use on CPU's at least right now. Where in the case of GPU's it's not.

      • That's because we're hitting multiple problems. We have heat, die size, and electrical limitations(bleed over in the substrates). It means in the end, that having multiple physical cores on one chip is the only direction that things will be going until those other problems can be solved. There's also the other issues with memory across the system bus being too slow and causing problems. HBM solves some of those issues, but it's still too cost prohibitive to use on CPU's at least right now. Where in the case of GPU's it's not.

        GPU RAM is different from CPU RAM for a good reason. The data path is fast but narrow. A CPU needs wide loads and RAM optimized for that. GDDR5 is great for the few things where the GPU goes massively parallel, but it would easily cripple your i7, which needs wide bandwidth and lower latency.

    • Nothing "goes faster". The distance that must be travelled decreases, so less time elapses between the input being available and the result being available. Saying that hasn't happened in a long time is ridiculous; it happens with every shrink, and in fact there is no way to avoid it.
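The distance argument can be quantified: assuming on-chip signals propagate at roughly half the speed of light (a rule-of-thumb figure, not a measured one), the distance reachable in one clock cycle is fixed by physics:

```python
# Distance a signal can cover in one clock cycle, assuming it
# propagates at ~0.5c on-chip (a rule-of-thumb assumption).
SPEED_OF_LIGHT_MM_S = 3.0e11          # mm per second
signal_speed = 0.5 * SPEED_OF_LIGHT_MM_S

reach_mm = {ghz: signal_speed / (ghz * 1e9) for ghz in (1, 4)}
for ghz, mm in reach_mm.items():
    print(f"{ghz} GHz: signal reaches ~{mm:.1f} mm per cycle")
```

At 4 GHz the per-cycle reach is already in the tens of millimetres, comparable to a die edge, which is why shorter wire distances translate directly into achievable clock speed.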
  • But (Score:2, Insightful)

    by Anonymous Coward
    Thanks to "trusted computing" and all the other innovations and backdoors they've brought to chips, I don't want a new Intel processor anymore.
  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Monday January 02, 2017 @01:01PM (#53592643) Homepage Journal

    the decades-long trend at the heart of Moore's Law

    According to this law, our computers are 1024 times more powerful today than they were 15 years ago. And they are.

    But the user-experience still sucks. Web-browsers are still bloated and slow — and need an occasional restart. You still can't talk to computers reliably [cnbc.com] — Alexa is considered the best [slashdot.org], yet it is pathetic. Being able to reliably show something to a computer will take another 15 years, if not more.

    Spammers may be able to generate spam faster, but reliably detecting and blocking their crap — without occasionally blocking real e-mails — remains elusive.

    The fanciest UIs — be they by open source or commercial projects — would just stupidly hang or otherwise behave erratically every once in a while.

    Hardware-makers may be doing their jobs, but the software-engineers aren't doing theirs... Not well enough, anyway.
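The 1024x figure above is straightforward arithmetic: 15 years at one doubling every 18 months is ten doublings, and 2^10 = 1024:

```python
# 15 years at one doubling every 18 months = 10 doublings = 1024x.
years = 15
months_per_doubling = 18
doublings = (years * 12) // months_per_doubling
print(f"{doublings} doublings -> {2 ** doublings}x")  # prints "10 doublings -> 1024x"
```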

    • by djinn6 ( 1868030 )

      The fanciest UIs — be they by open source or commercial projects — would just stupidly hang or otherwise behave erratically every once in a while.

      Hardware-makers may be doing their jobs, but the software-engineers aren't doing theirs... Not well enough, anyway.

      You can't prove that those issues aren't because the processor decided 1 + 1 = 3 for one particular instruction.

    • by Raenex ( 947668 )

      According to this law, our computers are 1024 times more powerful today, than they were 15 years ago. And they are.

      Bullshit. I lived through the exponential increases before the 2002 wall, and it was glorious. If that had continued, it would make today's computers look like ancient relics.

      Sure, today we have more cores, obscene amounts of RAM, and you can fit a decent computer in a mobile phone, but when it comes to general-purpose computing the exponential increases in performance died a long time ago. There are young adults alive today who have never experienced what it was like.

    • Moore's law was never about processing power, just transistor count. It ignores all the other things that make a computer "powerful", such as clock speed, IOPS, FLOPS, multiprocessors/concurrency, bandwidth, on-chip cache size, etc...
  • Intel says transistors produced in this way will be cheaper than those that came before, continuing the decades-long trend at the heart of Moore's Law -- and contradicting widespread talk that transistor-production costs have already sunk as low as they will go.

    Err, what now? I thought smaller transistors were desirable for performance reasons. Has the marginal per-transistor cost been what's holding us back all these years?

    I was under the impression that the costs for microprocessor fabrication had to do with their design and then building the foundry. The per-unit cost (and thus per-transistor cost) is utterly negligible, right?

    This is a salient point because it implies that in decades to come we're eventually going to see a steep drop-off in prices for not just CPUs, but also RAM and flash memory once enough patents expire and enough high-output fabs come online, which promises to be an utterly world-changing solution-in-search-of-a-problem. (Specifically, I predict this will be the point at which AI really takes off.)

    • Moore's law is about cost. Chip cost is based on area, so making things smaller reduces the cost per transistor. The drop-off in cost has already happened; that's why, for a day's wages, you can get a phone with more computing power than the entire USA had 30 years ago.
      • Moore's law is about cost

        No it isn't. It's about transistor count.

        Chip cost is based on area, so making stuff smaller reduces the cost per transistor.

        ... explain that, if you would. Chip cost is not driven by the cost of raw materials, yes?

        Point #1: Calculators can be bought at dollar stores and have been sold there for at least a decade, if not two. (Pocket calculators used to cost hundreds if not thousands of dollars.) Correct me if I'm wrong, but I do not think these calculators use the latest sub-90 nm technology; I suspect they use very old fab technology.

        Point #2: I don't have

    • by Agripa ( 139780 )

      Intel says transistors produced in this way will be cheaper than those that came before, continuing the decades-long trend at the heart of Moore's Law -- and contradicting widespread talk that transistor-production costs have already sunk as low as they will go.

      Err, what now? I thought smaller transistors were desirable for performance reasons. Has the marginal per-transistor cost been what's holding us back all these years?

      I was under the impression that the costs for microprocessor fabrication had to do with their design and then building the foundry. The per-unit cost (and thus per-transistor cost) is utterly negligible, right?

      This is a salient point because it implies that in decades to come we're eventually going to see a steep drop-off in prices for not just CPUs, but also RAM and flash memory once enough patents expire and enough high-output fabs come online, which promises to be an utterly world-changing solution-in-search-of-a-problem. (Specifically, I predict this will be the point at which AI really takes off.)

      Moore's law has *always* been primarily of economic importance. Decreasing the cost per transistor is what makes later fabrication node economically feasible.

      • Again, I'm talking about marginal costs here. "Decreasing the cost per transistor" is a fairly strange way of putting it at best. The raw materials needed to create the transistors haven't been significant for a long time now. The point is how many transistors you can make with a given die as well as the properties of those transistors.
        • by Agripa ( 139780 )

          Costs are roughly proportional to area but if more transistors can be placed into the same area, then the cost per transistor is less and that is what primarily drives investment into new process generations even at the expense of performance.

          Intel's William Holt gave a recent lecture on the subject - Moore’s Law: A Path Forward [vimeo.com].
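The cost-per-area argument can be sketched numerically: if the processed-wafer cost is roughly fixed and a shrink halves die area, the same wafer yields twice the dies and therefore twice the transistors. All dollar figures below are invented for illustration; real per-node costs are not public:

```python
# Simplified cost-per-transistor model: processed-wafer cost is
# roughly fixed, so halving die area doubles dies (and transistors)
# per wafer. All dollar figures are invented for illustration.
WAFER_COST_USD = 10_000.0
WAFER_AREA_MM2 = 70_000.0   # usable area of a 300 mm wafer, roughly

def cost_per_transistor(die_area_mm2, transistors_per_die):
    dies_per_wafer = WAFER_AREA_MM2 / die_area_mm2  # ignores edge loss/yield
    return WAFER_COST_USD / (dies_per_wafer * transistors_per_die)

old = cost_per_transistor(die_area_mm2=100.0, transistors_per_die=2e9)
new = cost_per_transistor(die_area_mm2=50.0, transistors_per_die=2e9)
print(f"shrink cuts cost per transistor by {old / new:.0f}x")  # prints "... 2x"
```

The model deliberately omits yield, edge loss, and rising wafer-processing cost per node; the argument in the thread is precisely about whether those rising processing costs have started to outweigh the area gain.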

          • Costs are roughly proportional to area

            That's what I assumed. "Cost per transistor" is technically accurate, but the primary factors in cost are in the design of the chip, the creation of the die, the cost of the foundry and the time it takes 'em to create the chip (which is a function of die size, sure. Among other things.) As I've speculated elsewhere, if a government were to drop hundreds of billions of dollars on large, high-output fabs (while either not caring about patents or buying out the patent holders or waiting until they expire or

            • by Agripa ( 139780 )

              Watch that video again. At the end, Holt shows the cost of *not* investing in the next node with the intention of lowering the cost per transistor: it amounts to spending 3 times as much money on production 10 years later just to keep even with competitors that *did* invest, and who spent a fraction of that amount.

              At some point, when the investment is much greater, it will not pay off; but he gives the numbers showing just how much money that would be, and it is a lot.

              • It amounts to spending 3 times as much money in production

                "Spending money on production" is not the same thing as "spending money on silicon wafers". If the machine time is limited[1] and/or expensive and the die size is held constant, then obviously shrinking your transistors lets you do more with the same amount of machine time. Summarizing that gain as "we made the transistors cheaper!" misses the point in my view. The focus should be on the cost, output, speed, efficiency, availability (if third party) and IP status of the fab machinery. Shrinking the transi
