
Intel Finds Moore's Law's Next Step At 10 Nanometers (ieee.org)
An anonymous reader writes: Sometime in 2017, Intel will ship the first processors built using the company's new, 10-nanometer chip-manufacturing technology. Intel says transistors produced in this way will be cheaper than those that came before, continuing the decades-long trend at the heart of Moore's Law -- and contradicting widespread talk that transistor-production costs have already sunk as low as they will go.
In the coming years, Intel plans to make further improvements to the design of these transistors. And, for the first time, the company will optimize its manufacturing technology to accommodate other companies that wish to use Intel's facilities to produce chips based on ARM architecture, which is nearly ubiquitous in modern mobile processors.
monopoly (Score:1)
Re: (Score:1)
Intel doesn't have a monopoly; there's AMD. I would suggest buying AMD chips to help support AMD's efforts.
Re: (Score:2)
Ryzen could be a winner. If it gets even close to Intel's performance, the value will be there.
Re: (Score:1, Informative)
What's the difference between AMD and Intel? Intel believes that information should be patented and sold. AMD believes that it should be patented and used as an industry standard, while giving that patented information away. Keep in mind that it was Intel who was also sued successfully multiple times by AMD for antitrust violations, price fixing, operating an illegal monopoly, and fixing benchmarks (both simulated and real-world). Pretty sure there's a few others I'm forgetting; just an FYI, Intel has lost
Re:monopoly (Score:4, Insightful)
This tends to follow typical trends of "industry leader" vs "also-ran". What would an industry leader have to gain by establishing well-defined standards? In contrast, standards are critical for the also-rans to compete.
Don't think for a minute that AMD wouldn't do the same were they in Intel's shoes. They play nicer because they're the underdog right now.
Re: (Score:2)
Don't think for a minute that AMD wouldn't do the same were they in Intel's shoes. They play nicer because they're the underdog right now.
Their past history seems to show that this is their corporate policy. If you're young, then you don't remember the AMD of the '80s and '90s, when their CPUs were king and Intel was the one on the verge of bankruptcy.
Re: (Score:2)
Re: (Score:2)
AMD in 90s? Once i586s came out AMD was nowhere to be found.
Thanks for showing you're very young and have no idea what you're talking about. AMD were the ones who created a compatible CPU at half the cost that worked on the same boards as Intel... in the '90s. Then there was the Slot 1 and Slot A bit.
Re:monopoly (Score:4, Insightful)
Go Google AMD Zen, aka Ryzen. Intel has a worthy competitor, if it's true. It's not out yet, but it was demoed last week beating an Intel Broadwell 8-core CPU :-)
Also, I am 40. In my 20s AMD made superior x86 chips to Intel's. No, you did not misread that. Google the Slashdot Pentium IV vs. Athlon XP threads from early last decade for a laugh. The Pentium IV sucked! It was hot, single-core, and had inferior performance to the Athlon XP. AMD also invented 64-bit computing for x86. Intel wanted the horrible, proprietary Itanic to replace x86. Intel crippled the Pentium IV by keeping it just 32 bits and extolled the virtues of EPIC or whatever funny architecture it was for servers.
Thanks, AMD, for saving x86 and bringing 64-bit computing to mere mortals outside of MDF rooms. Now go kick Intel's ass again.
PS: I still support Intel, as my virtualization stuff is tuned for their chips. In 2 years that may change, with KVM and Hyper-V supporting Ryzen, if they hit it big.
Re: (Score:2)
There are hints that Intel actually sees Ryzen as a threat and is having to adjust. First they moved their investor meeting from November to February, then Intel just killed KBL-H (mobile workstations) for the 2017 roadmap. Note that wccftech screws up some of the arch sizes, but the headline is correct. http://wccftech.com/intel-kaby... [wccftech.com]
Intel is not going to have the best Q2-Q4 in 2017, and maybe into 2018. Intel has nothing really new on the roadmap, just die shrinks.
Re: (Score:2)
Re: (Score:2)
Intel's Itanium was an HPC (high-performance computing) architecture meant to compete with IBM's Power and Sun's SPARC. Unfortunately, delays and underperformance turned the product into a joke in the chip industry. There was never a roadmap or intention to eventually bring it to the desktop as an i386 replacement. AMD saw an opportunity to score a marketing victory by extending i386 to 64-bit before Intel did. The main benefit of 64-bit is being able to address more memory than the 4 GB that 32-bit addresses are limited to. When the Opteron (the first AMD64 processor) was released in 2003, the average amount of memory shipped with desktops was just 500 MB. Despite x86-64's underwhelming performance compared to HPC architectures, its low price and ready availability made it popular in the enterprise market, eventually killing Intel's own Itanium, all but killing SPARC, and relegating IBM's Power to a niche player.
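(A quick back-of-the-envelope check of that 4 GB figure; a minimal Python sketch, purely illustrative:)

    # Flat address space reachable with 32-bit vs. 64-bit pointers
    print(2**32 // 2**30)   # 4  -> 4 GiB addressable with 32-bit addresses
    print(2**64 // 2**60)   # 16 -> 16 EiB addressable with 64-bit addresses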
Actually, some Pentiums mysteriously had 64-bit support not too long afterwards. Hmmm.
64 bits also means more registers and better performance for workloads like SQL databases; it is not just about the memory limit, since Intel had already added PAE to address more than 4 GB.
The reason for this is simple. HP and Intel wanted Itanium to win at all costs, and even crippled the superior Alpha chip and prematurely killed it. It would have been better, but the idiots, including Fiorina, have no concept of sunk costs. The shit hit the fan.
Re: (Score:2)
I remember some of the hype about Itanium being the future - I even have textbooks here that mention it ("The 80x86 Family: Design, Programming, and Interfacing, Third Edition" by John Uffenbeck (ISBN 0-13-025711-7) lists the P7 family as the Intel Itanium, compared to the P6 [Pentium Pro | Pentium II | Celeron A | Pentium III | Xeon], P5 [Pentium | Pentium MMX], P4 80486, etc.; but there is no mention of the Intel Pentium IV in the book at all). I've also seen the Itanium listed as the Intel 786 family (compared
Re: (Score:2)
There was never a road-map or intention to eventually bring it to the desktop as a i386 replacement.
That is not what Intel's actions and marketing indicated. They said Itanium would replace the last generation of x86, which was represented by the Pentium 4, and Itanium included hardware support for executing x86 code.
Re: (Score:2)
Hell, the Pentium III (Tualatin) beat the Pentium IV in performance. I had a Pentium III-S workstation (clocked at about 1.5GHz) with 2GB of RAM, and my coworkers running Pentium IV and Pentium IV Hyper-Threading workstations were jealous of how well it performed (which lasted until we rolled out Core 2 workstations, of which I got one of the first). Then again, I also upgraded the GPU, whereas they had the Intel i9xx GPUs, and I also added a SATA controller towards the end of its run that made
Re: (Score:2)
I've been building my own computers for nearly 20 years, and I don't understand why anyone buys Intel over AMD.
What's the point of paying double/triple the price? Better performance?
Intel's compilers and libraries only take advantage of features found in Intel processors. AMD is great if you want your software to ignore the various instruction set extensions. So the extra cost for Intel CPUs is actually a licensing cost for their compilers and libraries to pay for Intel's programmers to go out of their way to cripple performance on AMD; that does not come cheap.
Re: (Score:1)
Odd, NCAR's WRF mesoscale weather forecast model performs much better with AMD processors than with Intel at the same clock speed. Runtimes are roughly 5 minutes shorter on an AMD than on an Intel for a 300x300 km, 2 km resolution, 6-hour forecast.
Re: (Score:2)
Surely (depending on your simulation software) the CPU would be mostly irrelevant when it comes to strongly parallel, floating-point-heavy math?
Sure coding for OpenCL is certainly more restrictive in a lot of ways, but done properly where appropriate the performance gains can be immense (understatement).
Re: (Score:3)
Not sure I would say Intel has a monopoly, but there is a huge capital cost involved in adopting each new generation of fabrication facilities, to the point where there are very few companies that can take a seat at that table - that is the reason why most chip design companies outsource their fabrication requirements to one of the companies with the desired/required technical facilities.
Re:monopoly (Score:5, Informative)
People don't realize this. Even without patents, no one else is close to 10 nm yet.
You mean besides the three companies that have already (Samsung) or will shortly (TSMC, Toshiba) beat Intel to 10nm?
... not understanding that Intel invented lying about node size... and hasn't even produced a true 22nm yet.
Intel fumbled the ball on this node. Their process advantage is gone, and combined with their vertical integration disadvantage, they will fall farther and farther behind. That's why they have recently done massive layoffs and are now blanketing press releases about a new "cloud strategy."
Intel knows that they are now in a bad position. Their competitors also know it. Contrary to popular Slashdot belief, the list of Intel's main competitors does not include AMD or even ARM. Intel is a fabrication company. Its main competitors are TSMC, Samsung, Toshiba, and GlobalFoundries, plus dozens of smaller competitors, and all of them are now eating into Intel at all node sizes. Samsung arrived at 10nm mass production first, and TSMC is following closely behind.
Of course some anonymous coward will now say that Samsung isn't producing true 10nm.
Rate these things on transistor density and you will see that Intel is behind Samsung now, and will soon also be behind TSMC and Toshiba.
Re: (Score:1)
Contrary to popular Slashdot belief, the list of Intel's main competitors does not include AMD or even ARM. Intel is a fabrication company. Its main competitors are TSMC, Samsung, Toshiba, and GlobalFoundries, plus dozens of smaller competitors, and all of them are now eating into Intel at all node sizes. Samsung arrived at 10nm mass production first, and TSMC is following closely behind.
Who is making x86 chips that can out-perform Intel's?
Re: (Score:3)
x86 is dying, and 2017 will be the year of Linux on the desktop, Netcraft confirms.
Re: (Score:3)
x86 is dying, and 2017 will be the year of Linux on the desktop, Netcraft confirms.
I appoint you king of 2017. Make it so.
Re: (Score:2)
Designing a chip and fabbing them are two different things. You're combining something Intel has an advantage on with what is being discussed here, and claiming the outcome is the best. In reality the best outcome would be an Intel designed x86 chip fabbed in someone else's foundry. But for the sake of vertical integration we are now stuck with a sub par product given the current state of technology.
But yay for your comment.
Re: (Score:3)
Re: (Score:3, Informative)
Re: (Score:2)
The important criterion is the cost per transistor, which must decrease to make the next fabrication node economical. This comes even at the cost of reduced performance.
2016 Plenary Session 1 - Moore’s Law: A Path Forward [isscc.org]
Re: (Score:1)
I can remember when TSMC was mostly known as an 8255 cloner.
Re: (Score:1)
Re: monopoly (Score:1)
As others have pointed out, this is simply not true. Intel's 14nm isn't TSMC's 10nm. Intel invented lying about feature size in the first place to keep their stock price up.
Being the best (Score:2)
When will someone throw down and bring down, or at least challenge, Intel's monopoly...
Being several generations of process nodes ahead of one's competitors, like TSMC or GSMC or other fabs, is not a monopoly. Intel has consistently had a policy of investing the cash it has into state-of-the-art fabs, and it happens to have a big volume driver without it becoming a commodity, the way memory did. This is what the old US businesses used to do - invest cash into enhancing their company value
If you are talking about the x86/x64 ISA and related patents that allow a company to make x86 CPUs,
Re: Samsung to challenge (Score:2)
Samsung, which has a stake in GlobalFoundries, will break Intel's monopoly on expensive chip fabrication. In fact, AMD's new Ryzen x86 chip is built by them, and so are their 14 nm GPUs and even Nvidia's. Cell phones beat Intel as a result :-)
So the monopoly ride is soon over. ARM is the new king nowadays anyway.
Exclusive rights have a purpose (Score:5, Informative)
At least part of the problem is that the reason those markets are lucrative is because of patents.
If I spend X billion dollars developing Y, I need to be able to (at least) make X billion dollars back.
You, on the other hand, with Y in hand because of weak patents, and no need to have spent X billion dollars to get there, will be selling Y under the price that I can afford to, because you didn't spend X on developing it. So I go out of business. Which means next time you need methodologies, you won't be getting them from me. Because you killed me by entering the market without paying the same costs I did.
These problems are very serious when you're talking about very expensive development and/or manufacturing. They affect drug companies, chip manufacturers, vehicle manufacturers, etc. Some types of development and/or manufacturing require big costs to bootstrap, and no, bottom line, it's not reasonable to allow the next-in-line operation to bypass those costs at the early entry entity's expense.
Patents have a limited term, either 14 or 20 years, depending on the type; this sets fairly discrete bounds on what you can, and can't, do. Unlike copyrights, patent law hasn't (yet) fallen off the edge of the earth into the blatantly unreasonable.
In the US, this all stems from Article I, Section 8, Clause 8 of the Constitution (emphasis mine):
Re:Exclusive rights have a purpose (Score:4, Insightful)
Fast Tech (Score:3, Insightful)
Let's assert this is uniformly true.
So let's say we reduce the patent term to 2.5 years so it's not old news. After 2.5 years, you can do whatever you like with any invention.
This, in turn, means that the time the creator has to recoup their inventing costs is 2.5 years; no longer. Because after that, a competitor will enter the market having spent nothing to get where the creator spent all that money.
This will considerably reduce the amount
Re: (Score:2)
The whole patent thing is for disclosing the tech. At 5 years, given the speed of the court system, I doubt companies would bother with patents; they will still do R&D and simply keep their cards to themselves.
The system needs to change. All the IP laws need their own court system, and there needs to be a body of (not sure what you would call them) people who actually know and review technology; instead of rewarding statically, they should dynamically reward inventors. Rewards should be merit-based. Wi
Re: (Score:2)
Which they will do as soon as you take their development treasure and run away with it.
Re: (Score:2)
Re: (Score:3)
In technology, 14-20 years is effectively 100 years. Technology is old news in 5 years and almost useless in 10 years. Since we're talking about CPU companies, let me know how competitive a 14-year-old CPU is. Patents are great for innovative breakthroughs. They are bad for evolutionary next steps. Instead of allowing lots of quick steps that evolve technology rapidly, they create artificial gaps between each step and slow things down.
But that doesn't change the financial mechanism, or the time required to recoup one's costs. The tech may be old, but if $X billion has been spent developing Y, it's not going to be recouped in 5 years just because Y is obsolete in 5 years. So they'd either have to hike the price of Y, which would make it harder to sell and take even longer to recoup $X billion, or they can charge the patent costs and split those costs upfront
Re: (Score:2)
In technology, 14-20 years is effectively 100 years. Technology is old news in 5 years and almost useless in 10 years. Since we're talking about CPU companies, let me know how competitive a 14-year-old CPU is. Patents are great for innovative breakthroughs. They are bad for evolutionary next steps. Instead of allowing lots of quick steps that evolve technology rapidly, they create artificial gaps between each step and slow things down.
But that doesn't change the financial mechanism, or the time required to recoup one's costs. The tech may be old, but if $X billion has been spent developing Y, it's not going to be recouped in 5 years just because Y is obsolete in 5 years. So they'd either have to hike the price of Y, which would make it harder to sell and take even longer to recoup $X billion, or they can charge the patent costs and split those costs upfront
In the case of high-performance logic semiconductors, achieving the same performance on a 10-year-old process costs 3 times more than if you had developed a new process following Moore's Law, so there is a great incentive to pay for research and development; if you stop, then you will not be competitive, and this will remain true as long as the cost per transistor continues to decrease.
process shrinks at this stage (Score:2)
This used to be the case when the industry was going from 1 micron to the sub-micron scales - 0.65, 0.5, 0.35. After it got to 0.18, that has been less true, as fabs have migrated from 200mm to 300mm wafers, and the proposition has grown a lot more expensive for migrating from 300mm to 450mm wafers. Just going from 200 to 300, factories had to be re-tooled, and the changes are even more disruptive going from 300mm to 450mm. So the add-ons you are estimating in staying with an old process is over
Re: (Score:2)
Intel's William Holt gave a recent lecture on the subject - Moore’s Law: A Path Forward [vimeo.com].
Re: (Score:2)
Re: (Score:2)
In technology, 14-20 years is effectively 100 years. Technology is old news in 5 years and almost useless in 10 years.
That's only true for now because of how young the computing industry is. Once things have matured a bit and tech has gotten closer to the fundamental physical limits, you'll stop seeing such a break-neck pace of advancement.
Re: (Score:1)
Copyrights on entertainment can be longer than patents on technology, because it's just entertainment, and people can be repeatedly entertained by copyrighted information/recordings/writings, whereas technology is often obsolete, and thus no longer commercially viable, in a much shorter time frame.
Re: (Score:2)
NVIDIA could have gotten around it by acquiring one of the Taiwanese companies (I think it was VIA) that had acquired Cyrix and Centaur, and then making 64-bit CPUs that were successors of theirs
Re: (Score:2)
Their architecture is a very highly scalable one - from 1 core up to 72 cores. That's some definition of 'shit'. Recall that this architecture was designed by AMD: Intel was trying to go EPIC/VLIW with Itanium, but failed, since EPIC did not deliver the die-size savings promised by the VLIW approach of tossing everything to the compiler. In fact, Itanium 3 is more of a RISC than a VLIW CPU, with RISC concepts like register renaming (which in VLIW is handled by the compiler), and there had been better RISC CPUs
Re: (Score:2)
It's not just the law. It's reality as well. The law says that you can't resort to unfair means to eliminate your competition. But it says nothing about making superior products and making your competition unattractive that way.
No Moore's Law (Score:2, Insightful)
One cannot imagine how freaking tired I am of hearing about Moore's Law - there's no law, there's never been one. There was a mere observation that the number of transistors doubled every 18 months or so.
Whoever decided to call this observation a law must forever be held up to shame. And the ones who keep repeating this nonsense.
Re:No Moore's Law (Score:5, Insightful)
That's what a scientific law is: a relation between measured observations. It can be purely empirical.
There's a law for centrifugal force, and it isn't even a real force!
Re: (Score:2)
Moore's Law is as much fiat as observation. "Transistor density of integrated circuits shall double every 18 months. Make it so!"
Re: (Score:2)
When it costs billions of dollars to put up just one factory, you're going to heavily incentivize the equipment supplier who is a 1-month outlier on the schedule.
Re: (Score:2)
Murphy will eventually supersede Moore.
Just sayin...
Re: (Score:2)
I attended a course by Yale Patt from UT Austin, who is one of the "popes" of Computer Architecture research (see this ranking [sigmicro.org], for example), where he discussed Moore's Law.
He argued that Moore's Law was not a physical law, nor a technological or market-driven law. But it was a real law, and one with a very large impact.
Instead, he argued, quite accurately, that Moore's Law was actually a psychological law: given that it provided the baseline for the expected performance (or transistor count) increase, every company
Re: (Score:2)
You could say the same about Ohm's law - it was just the empirical observation that current through something was directly proportional to the voltage across it. And that's not always true, but it's true widely enough for it to be a useful law.
Re: (Score:2)
Re: (Score:2)
The original post was "there's no law, there's never been one. There was a mere observation".
Ohm's law was also a mere observation at the time it was made. There was no theoretical understanding behind it. That didn't stop it being called a "law".
Re: (Score:2)
Re: (Score:2)
The problem with that analogy is that the definition of a scientific theory is just that.
Re:No Moore's Law (Score:5, Funny)
Cole's Law: When making cole slaw for a large group, there will be 50% left over even when you account for Cole's Law.
Re: (Score:1)
It's more of a guideline, really.
Re: (Score:1)
This isn't a month-long Usenet thread, which is what Godwin's Law applies to.
Slashdot discussions automatically extinguish in about three days so Godwin's Law really can't ever apply to them.
Yeah right... (Score:3, Insightful)
Moore's Law isn't dead; that's why Intel already has its third 14nm CPU family and is planning another one, Coffee Lake, on 14nm before moving on to 10nm.
Intel isn't making 4 different CPU families on 14nm because the process works so well and is so cheap.
The first 14nm part, Broadwell, was released in 2014, abysmally late and underperforming, and the first 10nm part is expected in 1H 2018. They may sample a few trial wafers in 2017, but there won't be a chip sold. Four years is not what Moore's Law promised back then, and the tick-tock model is totally dead and buried as well.
This IEEE Spectrum rag sounds more like Popular Mechanics with that much paid cheerleading bullshit.
Re:Yeah right... (Score:5, Informative)
Moore's law has been decelerating for a long time but is far from dead. What's really surprising is how far conventional optical lithography has been pushed, when everybody thought EUV would be needed long ago. Now the feature size is _way_ less than the wavelength - nice trick, that. It is even less than the EUV wavelength. EUV will probably be used for 5nm nodes. Nanoimprint might take over when EUV reaches its limits. This is all while staying with silicon. A 1 nm [arstechnica.com] transistor (gate size) has already been demonstrated, and it won't stop there.
Atom scales (Score:3)
What is the diameter of a silicon atom in nm? Anybody know?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Around 1998 a guy in a university lab I worked at made a diode out of a single atomic layer of gallium arsenide on a silicon substrate, and he was nowhere near the first, but making a lot of junctions in the right places is a hell of a lot harder than putting a very thin coating on something.
Size does not matter anymore. (Score:1)
Re: (Score:3)
That's because we're hitting multiple problems. We have heat, die size, and electrical limitations (bleed-over in the substrates). It means, in the end, that having multiple physical cores on one chip is the only direction things will go until those other problems can be solved. There's also the other issue of memory across the system bus being too slow and causing problems. HBM solves some of those issues, but it's still too cost-prohibitive to use on CPUs, at least right now. Where in th
Re: (Score:3)
That's because we're hitting multiple problems. We have heat, die size, and electrical limitations (bleed-over in the substrates). It means, in the end, that having multiple physical cores on one chip is the only direction things will go until those other problems can be solved. There's also the other issue of memory across the system bus being too slow and causing problems. HBM solves some of those issues, but it's still too cost-prohibitive to use on CPUs, at least right now. Where in the case of GPUs it's not.
GPU RAM is different from CPU RAM for a good reason. GDDR5 is tuned for fast, streaming transfers and is great where the GPU goes massively parallel, but a CPU needs RAM optimized for its own access patterns. GDDR5 would easily cripple your i7, which needs lower latency.
Re: Size does not matter anymore. (Score:1)
Re: (Score:3)
Moore's Law isn't about performance, it's about economy.
Actually, it is about density.
Re: (Score:2)
So the rule of thumb about increasing density was for the purpose of economy.
But (Score:2, Insightful)
User experience still sucks (Score:3, Informative)
According to this law, our computers are 1024 times more powerful today than they were 15 years ago. And they are.
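(A quick sanity check of that 1024x figure, assuming the classic doubling every 18 months; illustrative Python only:)

    # 15 years at one doubling per 18 months
    doublings = 15 * 12 / 18     # = 10.0
    print(2 ** doublings)        # 1024.0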
But the user-experience still sucks. Web-browsers are still bloated and slow — and need an occasional restart. You still can't talk to computers reliably [cnbc.com] — Alexa is considered the best [slashdot.org], yet it is pathetic. Being able to reliably show something to a computer will take another 15 years, if not more.
Spammers may be able to generate spam faster, but reliably detecting and blocking their crap — without occasionally blocking real e-mails — remains elusive.
The fanciest UIs — be they by open source or commercial projects — would just stupidly hang or otherwise behave erratically every once in a while.
Hardware-makers may be doing their jobs, but the software-engineers aren't doing theirs... Not well enough, anyway.
Re: (Score:1)
The fanciest UIs — be they by open source or commercial projects — would just stupidly hang or otherwise behave erratically every once in a while.
Hardware-makers may be doing their jobs, but the software-engineers aren't doing theirs... Not well enough, anyway.
You can't prove that those issues aren't because the processor decided 1 + 1 = 3 for one particular instruction.
Re: (Score:1)
Re: (Score:2)
Touchy
That's spelled touché.
Re: (Score:2)
According to this law, our computers are 1024 times more powerful today than they were 15 years ago. And they are.
Bullshit. I lived through the exponential increases before the 2002 wall, and it was glorious. If that had continued, it would make today's computers look like ancient relics.
Sure, today we have more cores, obscene amounts of ram, and you can fit a decent computer in a mobile phone, but when it comes to general purpose computing the exponential increases in performance died a long time ago. There are young adults alive today that will never have experienced what it was like.
Re: (Score:2)
You can run virtual machines on a modern desktop so that you've got a whole cluster of 2002 era desktops at your fingertips.
As I said, we have more cores and obscene amounts of ram. That's good for parallel computing and doing stuff like running a bunch of VMs (talk about software bloat). It's not the same as the exponential general purpose sequential computing we had experienced up till then.
Re: (Score:2)
Few tasks require serial performance — most desktop stuff is, in fact, parallelizable. It just is not done — not done right anyway.
Consider Firefox for just one example — it has gone from event-driven (Netscape) to multi-threaded and now to multi-process [mozilla.org]. Because loading and rendering even one page offers ample opportunities for parallelizing — you can load multiple elements of
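(A minimal sketch of that idea, fetching a page's sub-resources concurrently; plain Python as a stand-in, not how Firefox actually does it:)

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Stand-ins for a page's elements (scripts, styles, images)
    urls = ["https://example.com/"] * 3

    def fetch(url):
        # Network-bound work, so threads overlap nicely
        with urlopen(url) as resp:
            return len(resp.read())

    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        print(list(pool.map(fetch, urls)))   # sizes of all elements, fetched in parallel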
Re: (Score:2)
Few tasks require serial performance — most desktop stuff is, in fact, parallelizable.
It's much harder to write concurrent code, and there's also Amdahl's law [wikipedia.org]. It really would be amazing if trends had kept up and my computer ran 1,000 times faster for the general purpose serial case. Sadly, it did not.
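(For reference, Amdahl's law caps the speedup from N cores at 1 / ((1 - p) + p / N), where p is the parallelizable fraction; a quick illustration:)

    def amdahl_speedup(p, n):
        # Best-case speedup with parallel fraction p on n cores (Amdahl's law)
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallelized, 1000 cores top out around 20x
    print(amdahl_speedup(0.95, 1000))   # ~19.6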
Re: (Score:2)
Of course! And that's my point — software engineers aren't keeping up with the hardware advances.
Re: (Score:2)
And my point is that the hardware advances didn't keep up the way they used to. It really isn't a difficult or controversial point.
Re: (Score:2)
And your point is wrong. Moore's Law never promised serial speed-ups. It promised a greater number of elements (transistors) on the same-size chip — and that keeps working, according to TFA. We just don't feel it like we used to.
Where the increase could be translated into serial speed-ups, no effort is required from software folks — the same program would run faster automatically. But when the advances provide for larg
Re: (Score:2)
And your point is wrong. Moore's Law never promised serial speed-ups. It promised a greater number of elements (transistors) on the same-size chip — and that keeps working, according to TFA. We just don't feel it like we used to.
No, my point is perfectly on target. You are the one who brought in performance. While Moore's law is technically about transistor density, it so happens for a very long time, many other things went along for the ride, resulting in exponential serial performance that lasted for decades.
But when the advances provide for larger caches, RAM, new processor-instructions, more and wider IO-pipes, and multiple threads of execution, a rewrite may be necessary.
Bits and pieces have been made faster or grown in size. Your general claim that "computers are 1024 times more powerful today, than they were 15 years ago" is bullshit. Not that we have 1000 core desktops anyways, but even if
Re: (Score:2)
Performance is not just serial speed. And what I "brought in" is user experience anyway.
That depends on how one defines "powerful", does it not? I didn't say serially faster; I deliberately said more powerful — because power is about more than serial speed.
Re: (Score:2)
That depends on how one defines "powerful", does it not?
You didn't define it in any way. You referenced Moore's law, but that only talks about transistor density and not "power". Lacking any specifics at all, it's only fair to compare it with traditional improvements in "power" that everybody recognized as going along with Moore's law -- those that occurred for decades before the 2002 wall.
But the improvements, which we continue to get, can still be put to a good use improving usage.
You made the hefty claim that computers today are 1,024 times more powerful than 15 years ago, and placed blame on software for not following along. That's bullshit.
For one example, try make -jN — a Unix kernel build with make really scales with the number of CPUs. A build will finish about N times faster, provided you have N CPUs.
So where i
Re: (Score:2)
"Transistor costs" (Score:2)
Intel says transistors produced in this way will be cheaper than those that came before, continuing the decades-long trend at the heart of Moore's Law -- and contradicting widespread talk that transistor-production costs have already sunk as low as they will go.
Err, what now? I thought smaller transistors were desirable for performance reasons. Has the marginal per-transistor cost been what's holding us back all these years?
I was under the impression that the costs for microprocessor fabrication had to do with their design and then building the foundry. The per-unit cost (and thus per-transistor cost) is utterly negligible, right?
This is a salient point because it implies that in decades to come we're eventually going to see a steep drop-off in prices for
Re: (Score:2)
Re: (Score:2)
Moore's law is about cost
No it isn't. It's about transistor count.
Chip cost is based on area, so making stuff smaller reduces the cost per transistor.
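(A toy illustration of that arithmetic with made-up numbers; only the ratios matter:)

    wafer_cost = 5000.0            # dollars per processed wafer (made-up figure)
    dies_per_wafer = 500
    transistors_per_die = 1e9

    # Cost per transistor at the current node
    print(wafer_cost / (dies_per_wafer * transistors_per_die))        # ~1e-8 dollars

    # Shrink so twice as many transistors fit in the same die area:
    # wafer cost is roughly unchanged, so cost per transistor roughly halves
    print(wafer_cost / (dies_per_wafer * 2 * transistors_per_die))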
... explain that, if you would. Chip cost is not driven by the cost of raw materials, yes?
Point #1: Calculators can be bought at dollar stores and have been sold in dollar stores for at least a decade, if not two. (Pocket calculators used to cost hundreds if not thousands of dollars.) Correct me if I'm wrong, but I do not think that these calculators are using the latest sub-90nm technology. I suspect they're using very old fab technology.
Point #2: I don't have
Re: (Score:2)
First, define "raw materials". Sand is cheap, but producing pure silicon from it is not (and that's one of the things you just buy at a foundry), so there is a significant material cost.
Definitely significant. More than, say, 5% of the retail price (CPUs, GPUs, memory), or more than 20% of the wholesale price?
The process is very mature; that also does its part via very high yields (very few of the chips produced are defective).
What does that have to do with Moore's law and the shrinking of transistors? If smaller scale plants happen to have better yields as a side effect of being newer, I don't see how that's relevant. I don't think you meant to imply that smaller transistors are inherently more durable?
because it is more expensive
Well I'm definitely up against the limits of my knowledge here, but I would assume that
Re: (Score:2)
Intel says transistors produced in this way will be cheaper than those that came before, continuing the decades-long trend at the heart of Moore's Law -- and contradicting widespread talk that transistor-production costs have already sunk as low as they will go.
Err, what now? I thought smaller transistors were desirable for performance reasons. Has the marginal per-transistor cost been what's holding us back all these years?
I was under the impression that the costs for microprocessor fabrication had to do with their design and then building the foundry. The per-unit cost (and thus per-transistor cost) is utterly negligible, right?
This is a salient point because it implies that in decades to come we're eventually going to see a steep drop-off in prices for not just CPUs, but also RAM and flash memory once enough patents expire and enough high-output fabs come online, which promises to be an utterly world-changing solution-in-search-of-a-problem. (Specifically, I predict this will be the point at which AI really takes off.)
Moore's law has *always* been primarily of economic importance. Decreasing the cost per transistor is what makes each later fabrication node economically feasible.
Re: (Score:2)
Re: (Score:2)
Costs are roughly proportional to area, but if more transistors can be placed into the same area, then the cost per transistor is lower, and that is what primarily drives investment into new process generations, even at the expense of performance.
Intel's William Holt gave a recent lecture on the subject - Moore’s Law: A Path Forward [vimeo.com].
Re: (Score:2)
Costs are roughly proportional to area
That's what I assumed. "Cost per transistor" is technically accurate, but the primary factors in cost are the design of the chip, the creation of the die, the cost of the foundry, and the time it takes 'em to create the chip (which is a function of die size, sure, among other things). As I've speculated elsewhere, if a government were to drop hundreds of billions of dollars on large, high-output fabs (while either not caring about patents, or buying out the patent holders, or waiting until they expire or
Re: (Score:2)
Watch that video again. At the end, Holt shows the cost of *not* investing in the next node with the intention of lowering the cost per transistor. It amounts to spending 3 times as much money on production 10 years later just to keep even with competitors that *did* invest, spending a fraction of that amount.
At some point, when the investment is a lot greater, it will not pay off, but he gives the numbers showing just how much money that would be, and it is a lot.
Re: (Score:2)
It amounts to spending 3 times as much money in production
"Spending money on production" is not the same thing as "spending money on silicon wafers". If the machine time is limited[1] and/or expensive and the die size is held constant, then obviously shrinking your transistors lets you do more with the same amount of machine time. Summarizing that gain as "we made the transistors cheaper!" misses the point in my view. The focus should be on the cost, output, speed, efficiency, availability (if third party) and IP status of the fab machinery. Shrinking the transi