Intel Is in an Increasingly Bad Position in Part Because It Has Been Captive To Its Integrated Model (stratechery.com)
Once one of the Valley's most important companies, Intel is increasingly finding itself in a bad position, in part because of its major bet on an integrated model. Ben Thompson, writing for Stratechery: When Krzanich was appointed CEO in 2013 it was already clear that arguably the most important company in Silicon Valley's history was in trouble: PCs, long Intel's chief money-maker, were in decline, leaving the company ever more reliant on the sale of high-end chips to data centers; Intel had effectively zero presence in mobile, the industry's other major growth area. [...] [Analyst] Ben Bajarin wrote last week in Intel's Moment of Truth. As Bajarin notes, 7nm for TSMC (or Samsung or GlobalFoundries) isn't necessarily better than Intel's 10nm; chip-labeling isn't what it used to be. The problem is that Intel's 10nm process isn't close to shipping at volume, and the competition's 7nm processes are. Intel is behind, and its insistence on integration bears a large part of the blame.
The first major miss [for Intel] was mobile: instead of simply manufacturing ARM chips for the iPhone, the company presumed it could win by leveraging its manufacturing to create a more-efficient x86 chip; it was a decision that evinced too much knowledge of Intel's margins and not nearly enough reflection on the importance of the integration between DOS/Windows and x86. Intel took the same mistaken approach to non-general-purpose processors, particularly graphics: the company's Larrabee architecture was a graphics chip based on -- you guessed it -- x86; it was predicated on leveraging Intel's integration, instead of actually meeting a market need. Once the project predictably failed, Intel limped along with graphics that were barely passable for general purpose displays, and worthless for all of the new use cases that were emerging. The latest crisis, though, is in design: AMD is genuinely innovating with its Ryzen processors (manufactured by both GlobalFoundries and TSMC), while Intel is still selling variations on Skylake, a three-year-old design.
simply? (Score:5, Interesting)
"instead of simply manufacturing ARM chips for the iPhone"
What's simple about it? Intel's ARM was Xscale [wikipedia.org], which was based directly on DEC's StrongARM (which they purchased.) It was the fastest ARM core at the time, but while it [x]scaled up, it didn't [x]scale down. It had the highest power consumption at low clock rates of all the ARM cores.
Intel did not have an ARM-based product which would have been a viable core for the iPhone.
Re: (Score:2, Interesting)
Intel was a strong early player in the ARM market. During the height of the PDA era they made the best chips in the most popular Pocket PC devices.
Intel has beaten out nearly all other chip makers in the laptop, server, workstation, and desktop space, professional and home. History is littered with a dozen dead CPU architectures from dozens of makers, all of which were 'serious' processors next to Intel's 'toy' or 'consumer' offerings with their 'inferior' architecture.
Intel has the most advanced chip fabrication technology
Re: (Score:2)
Intel was a strong early player in the ARM market. During the height of the PDA era they made the best chips in the most popular Pocket PC devices.
They made the fastest chips. I had the fastest one IIRC, the PXA255, in (also IIRC) an iPaq H2215. Battery life was abysmal. "Best" is defined both by performance and battery life, and they only had one of those things.
Re: (Score:2)
Intel had the most advanced fab technology. Right now, Intel is struggling to ship engineering samples of 10nm parts, and they aren't expected to go into volume production at 10nm until next year. Meanwhile, TSMC and Samsung have been doing volume production at 10nm for a year or more.
Worse, TSMC has already started volume production at 7nm, and is expected to be doing 5nm by next year. So barring
Re:simply? (Score:5, Interesting)
Much like I've said about Microsoft being not a "software" company but a "Windows" company, Intel is not a microprocessor company; it is an x86 microprocessor company.
This isn't to say that Microsoft doesn't make software for other platforms, because it does, but its focus isn't on software, it is on Windows. Likewise, Intel makes other chips besides x86, but its primary focus is x86.
I learned a long time ago in school how railroad companies got themselves into a similar bind: they never realized they were in the transportation business because they were so focused on being in the railroad business.
And they have all shortchanged themselves in the long run with their myopic outlook. And to be honest, ARM is in the ARM business, and will likely go down the same path in about 20 years.
Re: (Score:2)
Intel tries to be in the processor business, it just turns out that they can't do anything but x86.
The x86 itself saved Intel's ass when the iAPX432 crashed and burned hard. They salvaged a few features as they advanced from 8086 to 80386. But the real advancements in architecture stopped at the '386. Everything since has been all about making a faster '386 rather than fundamental architecture improvements.
They did try to move past that with the Itanium, but it turned out to be Itanic instead.
Don't forget t
Re: (Score:2)
x86 is largely tied to Windows. Including the 386/486 base for all their future designs. Itanium failed because it was largely made for Windows, but nobody wanting Windows wanted to try a new architecture. Itanium was a RISC processor, but it was still largely a subset of x86 (iirc) with 64 bit extensions. It also failed partly because PowerPC chips by IBM (also RISC) were outperforming it out of the box.
Re: (Score:3)
Itanium had an entirely new instruction set. But it turned out that it was practically impossible for a compiler to produce an instruction stream that would get decent performance. It didn't help that Itanic was priced north of $10K. It could run Windows in an emulator, but that was much slower than Windows running native on a 32 bit processor. Linux could run natively on Itanic, but it ran faster on a 32 bit processor. Intel kept saying "just wait till next year". Then AMD came out with x86_64 and nobody w
Re: (Score:3)
x86 is largely tied to Windows. Including the 386/486 base for all their future designs.
That is literally backwards: ITYM "Windows is largely tied to x86". x86 isn't tied to anything; everyone uses/supports it and there's nothing Windows-specific about it whatsoever. Microsoft has helped guide its development by asking for specific features, but those features are useful to anyone doing what they do.
Re: (Score:2)
Much like I've said about Microsoft being not a "software" company, but a "Windows" company,
The Microsoft that makes all of its money through cloud services and office applications? Is that the "Windows" company you're talking about? You know that, at present trends, the Xbox division is going to overtake the Windows division in profits before the end of the decade, right?
You're on point about Intel though. The entire company there really is based on one product, unless their memory division actually starts delivering on its promises.
Re: simply? (Score:2)
Re: (Score:2)
You're not even going to try to normalize for silicon area, or number of transistors delivered?
If neither the silicon area nor the number of transistors matters, and it's only about the raw numbers, how about let's just concede the whole show to those tiny little flutter filters (capacitors) that are ten to a small chip ... and more to a l
Re: (Score:2)
Not yet (Score:5, Interesting)
I think this kind of analysis is quite premature. Presently, there is no mobile-worthy x86 option -- for lots of reasons. Until there is, I don't think you can judge Intel for their direction.
Presume, for a moment, that in a few years, Intel successfully produces an x86 proc for mobile specifications. It's distinctly possible, indeed even probable, that ARM becomes useless, and the entire mobile market moves to x86. What a boon for Intel to have not wasted time and effort during these middle-ground years.
We've lived through this before. I refer you to WAP. How many web developers spent how many hours fumbling through WAP-limited options, before the entire mobile market moved to full web technologies? What a wasted investment for any small company. And what a horrible experience it was for consumers.
We'll wait and see.
Re: (Score:2)
ARM has been successful because they license their design, and the customer then integrates the ARM core with all the peripherals, memory interface, local memory, and possibly other cores into a SoC.
Do you expect Intel to adopt a licensing program?
Re: (Score:2)
Also, they are enabling an arms race (pun unavoidable). TI, for example, bowed out of the market because there were just too many competitors; the choice was either to leave the market or go cash-flow negative to stay in.
So the functional benefits are nice, but more critically they enabled super dirt cheap chip vendors.
Re: (Score:2)
For years, industry watchers have debated which would come first: Intel lowering power consumption enough to create viable mobile chips, or ARM increasing performance enough to create viable desktop and server chips.
IF Intel wins, why do you think "the entire mobile market moves to x86"? If anything, the legacy software shoe is on the other foot.
Re: (Score:2)
If you're going to quote partial sentences, please include the primary predicate. I said "distinctly possible".
My comment wasn't about predicting the future. My comment was about Intel's choice being a valid business gamble, given a distinctly possible future.
What actually winds up happening has absolutely nothing to do with my comment.
Re: (Score:3)
I doubt very much that in a couple of years the mobile industry is likely to change architecture and instruction set just to jump on x86.
Re: (Score:2)
I doubt very much that ten years from now, mobile devices won't be able to run any software that exists today.
Re: (Score:2)
Note that I have two Intel based devices and they both can pretty much run all the ARM applications.
They both suck terribly at anything that is vaguely demanding, but in *theory* an x86 based future would be able to run today's software.
Practically speaking I don't see any way for Intel to have some promise of value for x86 architecture in mobile form factor that would overcome the current market situation, but they at least did do their homework and made it technically possible.
Re: (Score:2)
Aside from power consumption, and by that we mean battery life, there's no problem with intel in mobile. So we're really just waiting for much better batteries. Maybe all Intel needs to do is to wait.
Re:Not yet (Score:5, Insightful)
"Can you image intel graphics on a tablet? It's already been done, hint, they suck, have poor performance, and are power hungry. Guess what, I want to be able to watch more than one video in HD before the battery dies, or the tablet becomes hotter than the surface of the sun."
Meanwhile, my Chromebook does 16 hours of 1080p video using a Celeron N3150.
Apparently your definition of high performance means "emulate the universe" when in reality the performance issue is with the people coding their applications and web pages.
Re: (Score:2)
My Bay Trail Celeron ChromeBook has really reset my preconceptions about Intel in the low-power area.
My N2830 can play a 60fps Steam game stream from my PC for about 8 hours straight (using hardware decoding, of course). Idle web browsing- 15+ easily.
That Intel GPU may not be an amazing piece of hardware, but it does run Plasma 5 fluidly, and has precisely zero driver-related issues unlike my AMD full-size laptop. Really, this ChromeBook is the be
Re: (Score:3)
Must be really sad when the people with mod points are essentially defending me, a felon, by modding your ass down. How do you like that, APK? You're so hated that people will defend a felon before they defend your ass.
Re: (Score:2)
I don't think the death of geocities had anything to do with WAP and everything to do with being replaced by MySpace.
The point about having to embrace dead-end transitional technologies is valid, but WAP just didn't matter (it was too crappy to deliver the value of websites and also restrained to the high end of the cellular phone users with the devices and the plans to even get those pathetic chunks of data).
Here I think it's a huge leap to consider use of ARM on handsets a WAP-like fad. WAP was just so n
Re: (Score:2)
Mobile's not mature at all. It's still fraught with daily problems. Battery life doesn't fill a day. Displays are too small. It's too big to hold. It's too thin to hold. It can't do anything more than one thing at a time. It can't project. It can't transfer peer-to-peer. It breaks very easily.
In oh so many ways, current mobile is much much much worse than my 486, or even my AT from thirty years ago. Let's compare all of the things that make my AT from 1985 better than the iPhone.
- a flopp
Intel lost mobile due to performance-per-watt (Score:5, Interesting)
For all of you bashing Windows for ARM (Score:5, Interesting)
Re: (Score:3)
Re: (Score:2)
ARM just isn't there yet. Benchmarks of the CPUs look good, but the entire system just isn't up to par with an Intel or AMD based hardware stack. My 8-core 2.45Ghz ARM laptop feels like a 1.5ghz dual-core Celeron.
A large part is the ARM chip lacks the high speed, low latency RAM subsystem. Those things draw a lot of power, so they'd start to seriously lose the power advantage.
The main disadvantage of x86, namely the high-complexity instruction decoder, has become an increasingly small part of the power budget.
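A rough sense of that trade-off (the numbers below are purely illustrative assumptions, not measurements): a fixed decode cost that dominates a ~2 W phone budget barely registers in a desktop-class budget.

```python
# Illustrative only: how a fixed instruction-decode overhead shrinks as a share
# of the total power budget. All figures are assumed, not measured.
decode_overhead_w = 0.3  # hypothetical fixed cost of complex x86 decode, in watts

for label, total_budget_w in (("phone SoC", 2.0), ("laptop chip", 15.0), ("desktop chip", 65.0)):
    share = decode_overhead_w / total_budget_w
    print(f"{label:12s}: decode ~{share:.0%} of a {total_budget_w:.0f} W budget")
```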
Re: (Score:2)
Is that true even with a light Linux distro?
Sadly, I don't know. You can't run Linux yet on an HP Envy x2, as stupid as that sounds.
I am curious, there isn't much information available other than on platforms like the RaPi which have bad IO congestion holding them back.
Agreed... And I suspect that's the real cause of the perceived slowness. Not I/O specifically, but just chunks of the hardware stack in general that just don't have the throughput full OS stacks designed to run on x86 machines are accustomed to.
Now, I've never thrown a full Linux DE on my RPi 3, but I have put it on an Exynos big.LITTLE SBC I got from hardkernel, and while it performs decently enough... It's the same st
Re: (Score:2)
From my perspective the ironic part is that if it was not for Microsoft screwing up Windows in an attempt to leverage their desktop monopoly into the tablet and PDA market, the desktop market would be stronger. Microsoft is in a position to kill the x86 desktop market but Intel has not taken any steps to save it.
Intel and Microsoft (Score:3, Interesting)
This was clear a long time ago. Intel was making x86 mobile chips for Intel to gain market share, not because the phone makers wanted x86 chips. It was Intel-focused, not customer-focused. Microsoft did similar things with Windows 8 and that Metro junk.
Recently Intel has branched out into lots of other growth businesses though, buying Movidius, Altera, and Mobileye. They're making silicon photonics chips for optical networks, DOCSIS chips for cable modems, and 3D XPoint memory to bridge the gap between DRAM and NAND. They integrated an AMD GPU and they are building a new GPU of their own.
It's ironic that articles like this gain traction after Intel has already turned around and started to regain momentum.
x86 is memory-optimised (Score:2, Interesting)
i've pointed this out here on slashdot a number of times, dating back at least... six years possibly more. the first really clear signs were when ARM came out with the first dual-core ARM Cortex A9 side-by-side demonstration of running a web browser (linux desktop OS) side-by-side with a 1.6ghz intel Atom. it kept up and in some cases loaded pages before the intel processor. at the end of the demo they showed the clock rate of the ARM chip: only 600mhz.
intel was a memory company. they're proud of their
Re:x86 is memory-optimised (Score:5, Informative)
Any time someone throws out the word RISC in the context of modern superscalar processors, they invariably have no fucking idea what they're talking about.
The distinction between RISC and CISC existed because, once upon a time, CISC processors had richer instruction sets at the cost of more cycles per instruction.
These days, all processors (relevant to this discussion) are essentially CISC, and run at more than 1 instruction per cycle. The terms RISC and CISC are dead terms.
All superscalar ARMs have instruction decoders that break them into smaller micro-operations, a la microcode.
Re: (Score:2)
This could be my ignorance showing, but there is something I've never understood about Intel's architecture strategy. It's well known by now that Intel chips don't execute x86 instructions. Rather, they decode x86 instructions into more RISC-like micro-ops and execute the micro-ops. Why not expose the micro-ops to compilers? Let programs bypass x86 and get closer to the hardware. That would allow software companies to transition gradually away from x86, rather than jumping feet first into an unfamiliar
Re: (Score:2)
Why not expose the micro-ops to compilers?
Micro ops take up more space to perform the same operation, which would worsen the memory bottleneck.
Also, by exposing the micro ops, you lose all backwards compatibility.
Thirdly, the translation to micro ops can be optimized dynamically based on context.
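A toy sketch of the code-size point: one architectural read-modify-write instruction stands in for several RISC-style micro-operations, so exposing micro-ops as the ISA would need more instruction bytes (and more fetch and cache bandwidth) for the same work. The micro-op "encodings" below are invented for illustration, not real x86 internals.

```python
# One CISC-style read-modify-write instruction: add rax into the value at [rbx].
x86_instructions = ["add [rbx], rax"]

# Roughly what a decoder might split it into internally (names are invented).
micro_ops = [
    "uop.load   t0 <- mem[rbx]",
    "uop.add    t0 <- t0 + rax",
    "uop.store  mem[rbx] <- t0",
]

print(f"{len(x86_instructions)} architectural instruction vs "
      f"{len(micro_ops)} micro-ops for the same work")
```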
Re: (Score:2)
decoding those instructions takes time. you now have to run the clock at twice the speed of a RISC core in order to decode those "compact" instructions into the same equivalent RISC ones.
It used to work like this, but hasn't for a long time.
Desktop x86 CPUs have high maximum clock rates because they don't need to worry as much about heat, not because of complex instruction decode. CPUs are pipelined, and instruction decode is just one extra piece of that long pipeline. Needing to dispatch and break down so many instructions definitely increases CPU size, but it really doesn't have any bearing on clock speed.
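A minimal model of that point (the pipeline depths below are hypothetical): in an ideal, stall-free pipeline, extra stages such as a complex decoder add startup latency but leave steady-state throughput at one instruction per cycle.

```python
def cycles_to_retire(n_instructions: int, pipeline_depth: int) -> int:
    """Cycles for an ideal, stall-free pipeline to retire n instructions."""
    # The first instruction takes `pipeline_depth` cycles to reach the end;
    # after that, one instruction retires every cycle.
    return pipeline_depth + (n_instructions - 1)

for depth in (5, 14, 20):  # hypothetical pipeline depths
    total = cycles_to_retire(1_000_000, depth)
    print(f"depth {depth:2d}: {1_000_000 / total:.4f} instructions per cycle")
```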
Re: (Score:2)
Intel moved too soon, it was too expensive, there was no transition plan, and they didn't get enough industry partners to buy in. It wasn't the first time nor will it be the last when a major industry player thought they could strong-arm everyone into a new platform only to be shown that they can't.
Re: (Score:2)
Intel moved too soon, it was too expensive, there was no transition plan, and they didn't get enough industry partners to buy in. It wasn't the first time nor will it be the last when a major industry player thought they could strong-arm everyone into a new platform only to be shown that they can't.
The problem was that the architecture solved the wrong problem: it was supposed to improve the ratio of computational power to transistor count. But transistors are (and were) cheap, so it didn't really matter, and it came at the expense of scalability and optimization at runtime, because they thought the compiler could do the job offline.
But the runtime optimization in the CPU has more information about how the code is executed. For example, branch prediction is easier when you can see how the conditional branch was executed in t
Re: (Score:2)
Now that I've had some time to churn through the memory banks, I recall at the time we were bumping into the 4GB memory limit regularly. We customers said to Intel that we needed 64bit memory addressing in x86. Intel told us that we couldn't have it; I forget all the excuses for why not. Instead they tried to sell us a new platform without software compatibility. Then AMD said we could have it, with backward compatibility, and gave us AMD-64. Intel had to eat crow when they followed up about 2 years la
Re: (Score:2)
If they were smart, Intel would buy up all the Mill Computing [millcomputing.com] IP and base a new architecture off of that. They should think about it whilst they're still sitting on a decent pile of cash.
Intel dependant on proprietary Windows apps (Score:2)
Part of the picture... (Score:3)
Well, 'integration' isn't the word I'd pick; they have some lock-in if they make a market x86-dependent, and so that was their goal. The assumption would be that if the mobile market became mostly x86, then sure, Android would have ARM compatibility, but x86 would be optimal and no one would tolerate the crappy non-native experience. Of course, the glaring flaw is that Intel would have to *live* in that unacceptable non-native experience to begin with, and Intel was right about one thing: no one would put up with such a crappy experience.
Larrabee was hubris: the idea that a lot of sort-of-x86 cores would blunt the demand for GPU acceleration, because even if you couldn't be quite as quick as Nvidia, you could use a familiar programming model. The problem was that Phi *also* required developers to be more careful and picky, so it wasn't like programming ordinary x86. By the time Intel could have possibly made it easier, the world was just so used to CUDA that the market was slim. They may have better luck with AVX512 in Xeon Skylake, but who knows. It was always doomed as a GPU because they have no competency there.
Another problem is being in denial, taking a long time to change gears from 'no competitor' to 'oh, AMD is competitive again'. In the datacenter, Intel had crazy high core counts. On the desktop? Quad-core, because there was no competitive pressure. When Zen was rumored, Intel was skeptical, and when Ryzen came out they were slow to change. Compared to Intel's desktop offerings, AMD was so much better. On the server side, things are a bit more mixed (there Intel actually *has* continued to invest in meaningful advances): desktop core counts and clock speeds were stagnant, with no AVX512, while the server chips kept improving on all of those fronts. AMD still has more PCIe lanes and memory channels, but with a caveat: it's more like 4 processors with 2 memory channels and 16 PCIe lanes each rather than 1 processor with 48 PCIe lanes. This is a distinction that doesn't matter for many workloads, but for a few it matters (the memory performance of a single-threaded application is much better on an Intel server than on an AMD server).
In trouble (Score:3, Informative)
As of 11:37 EST, Intel's stock price is $50.16, and AMD's stock price is $14.61.
Re:In trouble (Score:4, Informative)
Market cap : Net income
INTC: $235.5B : $4,450M
AMD: $14B : $81M
Re: (Score:2)
Apple has been pulling off that move for a while now.
Re: (Score:2)
Tell that to the CEOs of Intel and AMD. In fact, tell that to any CEO in the world, or anyone who owns stock, or who follows business, or has an interest in the economy.
By the way, the DOW dropped another 300 points today, due to too much #WINNING.
Re: (Score:2)
You are welcome to take market cap into account. A helpful commenter added that information below my comment.
x86 was not the big issue with Larrabee (Score:2)
The use of the x86 instruction set wasn't the big issue with Larrabee. Larrabee would have been a bad idea no matter whether it used ARM or MIPS opcodes instead. Using x86 didn't help, but that was just one among many issues of that architecture. The issues with the Larrabee architecture are things such as no fixed-function hardware for z-buffering or rasterization, not enough hardware threads to hide the memory latency, a memory interface with not that much bandwidth but expensive but not that often
Too much of integration (Score:2)
Reengineering Work: Don't Automate, Obliterate (Score:2)
You're kidding right? (Score:2)
Intel dumped ARM (Xscale) over 10 years ago (2006), and it's not clear even with hindsight that it would have been a successful strategy for Intel to use that ARM license. It seems doubtful that an Apple-Intel alliance around Xscale would have been possible given that iPhone's development (2006-2007) likely began when Intel still had Xscale. I can assume it was explored by Apple or Intel, even if only on a whiteboard, but history shows us that Xscale wasn't used by Apple. (probably price, performance, and l
Warning: 1990's pop culture reference (Score:2)
Intel could have been first in mobile space except (Score:2)
March 1994: "IBM has a LOTUS NOTES
Dec 1996: "we have a conference call with them (intel) re NetPC [edge-op.org] today at 9
It's not that they can't do anything else (Score:2)
They hold all of the keys to x86 (you need a license from them), why would they give that up?
I highly recommend people go read about x86 on Wikipedia; it tells you all you need to know about why Intel is not going to give up on x86. And before anyone says the patents expired, that is true for the original instruction set; however, there have been quite a few improvements since the patent expired. SSE, MMX, PAE, virtualization and a whole host of others have co
x86 architecture is now a liability (Score:2)
Intel cannot postpone a crash with a truth that has been in the air for the last 20 years: x86 architecture is a beast of the past.
At one point, accumulated expertise on it let it win over new designs, but that is not the case anymore.
Re:The End of an Era (Score:5, Interesting)
This just marks the end of an era. Moore's Law is dead (and has been dead for quite some time). Intel will need some other way to innovate. All they have been doing is adding cores and trying to push up clock speeds.
Moore's law is about transistor counts. Adding cores adds transistors.
Even this is running into a dead end: because of physics.
They can still add cores for some time, if they can improve yields. First there is a process shrink and cores shrink, then the process is improved and cores grow again. Then we get a new process...
Re:The End of an Era (Score:5, Insightful)
Moore's law is about transistors per unit area. Adding cores increases both. Only new manufacturing techniques to cram in more transistors will let the trend continue, and they are indeed pushing the limits of what's physically possible.
At 7nm we're talking features that are only about three dozen atoms wide. The current roadmap has 5nm production in a few years. This kind of thing is well outside my knowledge but I'm pretty confident you can't make devices smaller than a single atom, so they are rapidly approaching a wall one way or another!
=Smidge=
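A back-of-the-envelope check of that figure, assuming a rough ~0.2 nm atomic spacing for silicon (and remembering that node names like "7nm" are marketing labels rather than literal feature sizes):

```python
# Rough estimate of how many atoms span a given feature size.
atom_spacing_nm = 0.2  # assumed approximate spacing between silicon atoms

for node_nm in (7.0, 5.0):
    print(f"{node_nm:g} nm feature ~= {node_nm / atom_spacing_nm:.0f} atoms wide")
```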
Re: (Score:2)
Moore's law is about transistors per unit area. Adding cores increases both. Only new manufacturing techniques to cram in more transistors will let the trend continue, and they are indeed pushing the limits of what's physically possible.
At 7nm we're talking features that are only about three dozen atoms wide. The current roadmap has 5nm production in a few years. This kind of thing is well outside my knowledge but I'm pretty confident you can't make devices smaller than a single atom, so they are rapidly approaching a wall one way or another! =Smidge=
Let's break this down and make it simple. Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years. Simply put, this is the foundational definition. Also, Moore's prediction proved accurate for several decades, and has been used in the semiconductor industry to guide long-term planning and to set targets for research and development. Advancements in digital electronics are strongly linked to Moore's law: quality-adjusted microprocessor prices, m
Re: (Score:2)
Well except if you read the actual paper [google.com] (PDF warning) that Moore wrote which created the whole concept, everything seems to be framed in terms of square area and component size.
=Smidge=
Re: (Score:2)
I have often seen Moore's Law formulated as transistors per unit cost, which I think is a useful measurement. If we hit a wall on feature size, this could still allow continually improved performance (and to a lesser extent, performance per watt).
Re: (Score:2)
They can add cores, but it will not matter much. Most workloads are not core-limited these days.
Re: (Score:2)
Moore's law is about transistor counts. Adding cores adds transistors.
Moore's law is about cost per transistor whether it is achieved through density or area.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Re: (Score:3)
Guys like you are delusional and think that CPUs are continually going to get faster and faster.
Moore's Law does not say CPUs will get "faster", only that transistor density will increase.
Re: (Score:2)
Re:The End of an Era (Score:4, Insightful)
Guys like the poster think that is going to continue forever. But it won't (and hasn't).
Nobody said it will "continue forever". He just said it is not dead yet, and it isn't. Single thread CPU speed has stalled, but transistor density is still increasing.
Why do you have so much invested in insisting that "Moore's Law is dead" anyway? You go to every discussion about this topic and post the same nonsense over and over. You do the same thing in every discussion about AI or machine learning, insisting that AI isn't "real" because it doesn't match what you see in the movies. Maybe you should see a psychiatrist and find out what is driving your weird obsessions.
Re: (Score:2)
CPUs are getting faster, not GHz wise but every new generation has better benchmark numbers than the previous. I'd call that "faster".
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Funny how you claim to know more than the man who came up with the law. He predicts it will end around 2025. Time will tell https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:3)
I don't think there is anything they can do. Eventually, they will share the x86 market with AMD about even and maybe even some additional manufacturers. And they will come under increasing pressure from ARM, or at least Intel will. AMD does make ARM chips, even if not at volume at the moment.
It is the typical way giants fail: By sleeping, sleeping, sleeping, until they find that they are not very relevant anymore.
Re: (Score:3)
I don't think there is anything they can do. Eventually, they will share the x86 market with AMD about even and maybe even some additional manufacturers. And they will come under increasing pressure from ARM, or at least Intel will. AMD does make ARM chips, even if not at volume at the moment.
It is the typical way giants fail: By sleeping, sleeping, sleeping, until they find that they are not very relevant anymore.
Intel has not been sleeping. They have engaged in one failed project after another. There's a difference.
Re: (Score:2)
You mean like Microsoft? You have a point.
Re: (Score:2)
Intel is itself a victim of Intel's success. Just like PCs find it difficult to shed the x86 model because of compatibility requirements, even Intel can't compete with x86: newer chips that aren't part of the x86 line have been flops. They couldn't even get their first 64-bit alternative to catch on; it failed to gain popularity because it wasn't compatible enough with 32-bit x86, so they got beaten at it by AMD.
And when a product is not a big initial success, Intel loses interest in it and considers it a failure
Re: (Score:2)
I agree. Competition is good. And the only things I have tied to a specific CPU architecture is MS Office (for working on customer documents) and games. Hopefully that will change as well.
Re: (Score:2)
Re: (Score:3)
It won't die, but it will need to find new markets in order to continue growing like investors demand.
Intel has a PE below 15. The S&P average is over 25. So investors aren't expecting much growth.
Reducing market (Score:2)
They won't die. No.
The problem is that their market is shrinking:
there are plenty of people who still need or want high performance processors.
Yup, earning millions on specific contracts to build giant HPC centers every few months seems lucrative.
(Which is what will eventually become of Intel, according to this trend.)
Until you realize that there are billions of people on this planet, and thus billions of pockets to fill with a smartphone.
("Pocket computer" metaphor in full force. In some regions, (older, second-hand) smartphones are the only computers that people will ever come into contact with.)
Int
Re:Wait - I thought this was an article about Inte (Score:5, Interesting)
Re: (Score:2)
Eagle was earlier than Compaq - until the founder drove his brand-new Ferrari over a cliff.
Re:Wait - I thought this was an article about Inte (Score:5, Interesting)
When IBM couldn't force MCA as the standard it became plain that it had lost control over the direction of the design, and that we were now in a commodity hardware multivendor world.
Re: (Score:2)
Historically, Intel was huge for Silicon Valley in the early days. It had the first commercial CPU on a semiconductor chip. Intel was prominent amongst the early semiconductor companies that gave Silicon Valley its name, founded by two of the original Traitorous 8. Intel kept a big hold when it became the chip used in the PC, moving microcomputers out of the hobbyist realm, thus keeping itself highly influential and relevant even when Silicon Valley started being more about software than silicon.
Re: (Score:2)
Please let me know if I can help you with anything else.
Re: Wait - I thought this was an article about Int (Score:2)
Every single tech company in Silicon Valley depends on the success of other tech companies, in or out of the Valley.
Re: (Score:2)
Not the ones that tap into the consumers themselves. (Google. Apple. Facebook. Uber. Etc.) Those companies have such market presence that they can line up quality, even name vendors (e.g., Foxconn, Intel) around them, and replace them like interchangeable commodities as needed.
Re: (Score:2)
Re:It's not that bad. (Score:4, Interesting)
Agree they have these, but both of these are working against Intel right now: the inertia is what drove them into the ditch while more nimble chipmakers were passing them by, and their ample resources are blinding them to the danger, because they assume they can always write checks to get back on the right track if they ever figure it out.
See "Sears"...
Re: (Score:2, Insightful)
They've done worse, like spending $7.7 billion on buying McAfee in 2010. To put into perspective how stupidly expensive that was, Disney paid $4 billion for Marvel in 2009 and $4.04 billion for Lucasfilm in 2012. Instead of spending all that money on crappy cybersecurity software, which is more of a problem because of Windows, Intel should have spent that money developing more user-controllable security measures in the CPU and chipset, such as a secure enclave, the ability to write-protect the hard drive through
Re: (Score:2)
Why do people here insist on bringing up the SJW-altright in every article?
Re: (Score:2, Insightful)
Why do people here insist on bringing up the SJW-altright in every article?
Why do people insist on looking the other way when a swath of political culture is causing harm to the rest of society, but really believe "they're not the baddies." Let's be honest here, western society was making a pretty hotass line towards a colour blind belief system through the 1980's, 90's and mid-00's. Then progressives and feminists decided that the most important thing in the world wasn't skill and ability, but your sex organs, declared sexual partners, and/or the colour of your skin. Then they
Re: (Score:3)
You sound like vegans. Sorry, but I just don't care about the drama of sex frustrated feminists and mgtows.
Re: (Score:2)
Yep, but you forgot the punchline: All that zero-tolerance leftwing angst on social issues demonized allies while simultaneously throwing away hard fought ground on hard issues. Respecting pronoun choices was far more important than food on the table or not going bankrupt by medical bills. The result of all that is Trump in the White House and everything that brought about.
Re: (Score:2)
I thought Intel was in a bad position because it decided to dump $300m at diversity initiatives and fire a bunch of engineers
Yes well you would think that, because you're a plonker.
Re: (Score:2, Insightful)
And the hostility towards diversity here on Slashdot is misguided and idiotic. When I see white boys from upper middle class families complain about being oppressed and how it's a meritocracy in technology, I find it hysterical that they can't see outside their little bubble and realize that they had all of their opportunities handed to them.
Yes, us half-asians(or asians), who are penalized in US(don't live in the US anyway) university admissions akin to whites and are born to poor working class families, where name brand kraft dinner was a luxury sure are 'white boys from upper middle class families.' Nothing like finding out that a university specifically penalizes you because of your race, instead of making the selection based on best candidates? Yeah, those of us who climbed up from the bottom really do like meritocracy, because we know th
Re:Hysterical (Score:5, Insightful)
Yeah, those of us who climbed up from the bottom really do like meritocracy, because we know that people got there on skill, ability, and competence.
I am also one of those people. And I know why so many of us are so bitter about seeing handouts. We struggled, and we want to see other people struggle too. It's some kind of messed up desire for fairness, when really, we should be trying to make sure nobody else has to go through the bullshit we did to succeed. The part you missed in the above formula is actually the largest factor- luck. Get over yourself, asshole.
Re: (Score:2)
And there's nothing wrong in trying to be a good corporate citizen - or even to appear to be one.
It's the 'appear to be one' that feels like the big problem in the industry. Trying to 'look good' ends up with less qualified employees *and* being patronizing toward classes of people all at the same time.
Someone of a particular minority who is very skilled and has really worked hard finds the position filled by the first minority hire that came along before him, because the organization is biased but knew it had to hire 'a' minority and wasn't too picky. This is frankly insulting to that min
Re: (Score:2)
full of empty air and wasted space
More space = more powerful convective cooling.