Intel Is in an Increasingly Bad Position in Part Because It Has Been Captive To Its Integrated Model (stratechery.com) 238

Once one of the Valley's most important companies, Intel is increasingly finding itself in a bad position, in part because of its major bet on an integrated model. Ben Thompson, writing for Stratechery: When Krzanich was appointed CEO in 2013 it was already clear that arguably the most important company in Silicon Valley's history was in trouble: PCs, long Intel's chief money-maker, were in decline, leaving the company ever more reliant on the sale of high-end chips to data centers; Intel had effectively zero presence in mobile, the industry's other major growth area. [...] As [analyst] Ben Bajarin wrote last week in "Intel's Moment of Truth," 7nm for TSMC (or Samsung or GlobalFoundries) isn't necessarily better than Intel's 10nm; chip-labeling isn't what it used to be. The problem is that Intel's 10nm process isn't close to shipping at volume, and the competition's 7nm processes are. Intel is behind, and its insistence on integration bears a large part of the blame.

The first major miss [for Intel] was mobile: instead of simply manufacturing ARM chips for the iPhone the company presumed it could win by leveraging its manufacturing to create a more-efficient x86 chip; it was a decision that evinced too much knowledge of Intel's margins and not nearly enough reflection on the importance of the integration between DOS/Windows and x86. Intel took the same mistaken approach to non-general-purpose processors, particularly graphics: the company's Larrabee architecture was a graphics chip based on -- you guessed it -- x86; it was predicated on leveraging Intel's integration, instead of actually meeting a market need. Once the project predictably failed Intel limped along with graphics that were barely passable for general purpose displays, and worthless for all of the new use cases that were emerging. The latest crisis, though, is in design: AMD is genuinely innovating with its Ryzen processors (manufactured by both GlobalFoundries and TSMC), while Intel is still selling variations on Skylake, a three-year-old design.

  • simply? (Score:5, Interesting)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday June 25, 2018 @10:50AM (#56842176) Homepage Journal

    "instead of simply manufacturing ARM chips for the iPhone"

    What's simple about it? Intel's ARM was XScale [wikipedia.org], which was based directly on DEC's StrongARM (which they purchased). It was the fastest ARM core at the time, but while it [x]scaled up, it didn't [x]scale down. It had the highest power consumption at low clock rates of all the ARM cores.

    Intel did not have an ARM-based product which would have been a viable core for the iPhone.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Intel was a strong early player in the ARM market. During the height of the PDA era they made the best chips in the most popular pocketPC devices.

      Intel has beaten out nearly all other chip makers in the laptop, server, workstation, and desktop space. Professional and home. History is littered with a dozen dead CPU architectures from dozens of makers, all of which were 'serious' processors next to Intel's 'toy' or 'consumer' offerings with their 'inferior' architecture.

      Intel has the most advanced chip fabrication t

      • Intel was a strong early player in the ARM market. During the height of the PDA era they made the best chips in the most popular pocketPC devices.

        They made the fastest chips. I had the fastest one IIRC, the PXA255, in (also IIRC) an iPaq H2215. Battery life was abysmal. "Best" is defined both by performance and battery life, and they only had one of those things.

      • by dgatwood ( 11270 )

        Intel has the most advanced chip fabrication technology and makes the fastest chips in the world. Full stop.

        Intel had the most advanced fab technology. Right now, Intel is struggling to ship engineering samples of 10nm parts, and they aren't expected to go into volume production at 10nm until next year. Meanwhile, TSMC and Samsung have been doing volume production at 10nm for a year or more.

        Worse, TSMC has already started volume production at 7nm, and is expected to be doing 5nm by next year. So barring

    • Re:simply? (Score:5, Interesting)

      by Archangel Michael ( 180766 ) on Monday June 25, 2018 @11:36AM (#56842452) Journal

      Much like I've said about Microsoft being not a "software" company, but a "Windows" company, Intel is not a Microprocessor company, it is an x86 microprocessor company.

      This isn't to say that Microsoft doesn't make software for other platforms, because it does, but its focus isn't on software, it is on Windows. Likewise, Intel makes other chips besides x86, but its primary focus is x86.

      I learned a long time ago in school how railroad companies got themselves into a similar bind: by focusing on being in the railroad business, they failed to realize they were in the transportation business.

      And they have all shortchanged themselves in the long run with that myopic outlook. And to be honest, ARM is in the ARM business, and will likely go down the same path in about 20 years.

      • by sjames ( 1099 )

        Intel tries to be in the processor business, it just turns out that they can't do anything but x86.

        The x86 itself saved Intel's ass when the iAPX432 crashed and burned hard. They salvaged a few features as they advanced from the 8086 to the 80386. But the real advancements in architecture stopped at the '386. Everything since has been about making a faster '386 rather than fundamental architecture improvements.

        They did try to move past that with the Itanium, but it turned out to be Itanic instead.

        Don't forget t

          • x86 is largely tied to Windows. Including the 386/486 base for all their future designs. Itanium failed because it was largely made for Windows, but nobody wanting Windows wanted to try a new architecture. Itanium was a RISC processor, but it was still largely a subset of x86 (IIRC) with 64-bit extensions. It also failed partly because IBM's PowerPC chips (also RISC) were outperforming it out of the box.

          • by sjames ( 1099 )

            Itanium had an entirely new instruction set. But it turned out that it was practically impossible for a compiler to produce an instruction stream that would get decent performance. It didn't help that Itanic was priced north of $10K. It could run Windows in an emulator, but that was much slower than Windows running native on a 32 bit processor. Linux could run natively on Itanic, but it ran faster on a 32 bit processor. Intel kept saying "just wait till next year". Then AMD came out with x86_64 and nobody w

          • x86 is largely tied to Windows. Including the 386/486 base for all their future designs.

            That is literally backwards: ITYM "Windows is largely tied to x86". x86 isn't tied to anything; everyone uses/supports it and there's nothing Windows-specific about it whatsoever. Microsoft has helped guide its development by asking for specific features, but those features are useful to anyone doing what they do.

      • Much like I've said about Microsoft being not a "software" company, but a "Windows" company,

        The Microsoft that makes all of its money through cloud services and office applications? Is that the "Windows" company you're talking about? You know that, at present trends, the Xbox division is going to overtake the Windows division in profits before the end of the decade, right?

        You're on point about Intel though. The entire company there really is based on one product, unless their memory division actually starts delivering on its promises.

    • What they are saying is that Intel doesn't have any of their own ARM chips that can compete in mobile, but they did have modern chip foundries and could manufacture other ARM chips. Samsung has that business model, where they make their own designs and manufacture other people's designs too. Your premise is that Intel could only use its XScale design, which was never adequate. Granted, Intel may not want to be a chip foundry for hire, but mobile chip numbers have easily eclipsed desktop/laptop chip numbers, so Intel
      • by epine ( 68316 )

        Granted Intel may not want to be a chip foundry for hire but mobile chip numbers have easily eclipsed desktop/laptop chip numbers so Intel is missing out on a huge market.

        You're not even going to try to normalize for silicon area, or number of transistors delivered?

        If neither the silicon area nor the number of transistors matters, and it's only about the raw numbers, how about let's just concede the whole show to those tiny little flutter filters (capacitors) that are ten to a small chip ... and more to a l

        • Dude, what the hell are you ranting about? I merely said that Intel could be a chip foundry if it wanted to be as they have 14nm fabs.
  • Not yet (Score:5, Interesting)

    by holophrastic ( 221104 ) on Monday June 25, 2018 @10:53AM (#56842192)

    I think this kind of analysis is quite premature. Presently, there is no mobile-worthy x86 option -- for lots of reasons. Until there is, I don't think you can judge Intel for their direction.

    Presume, for a moment, that in a few years, Intel successfully produces an x86 proc for mobile specifications. It's distinctly possible, indeed even probable, that ARM becomes useless, and the entire mobile market moves to x86. What a boon for Intel to have not wasted time and effort during these middle-ground years.

    We've lived through this before. I refer you to WAP. How many web developers spent how many hours fumbling through WAP-limited options, before the entire mobile market moved to full web technologies? What a wasted investment for any small company. And what a horrible experience it was for consumers.

    We'll wait and see.

    • ARM has been successful because they license their design, and the customer then integrates the ARM core with all the peripherals, memory interface, local memory, and possibly other cores into a SoC.

      Do you expect Intel to adopt a licensing program?

      • by Junta ( 36770 )

        Also, they are enabling an arms race (pun unavoidable). TI, for example, bowed out of the market because there were just too many competitors; the choice was either to leave the market or to go cash-flow negative to stay in.

        So the functional benefits are nice, but more critically they enabled super dirt cheap chip vendors.

    • For years, industry watchers have debated which would come first: Intel lowering power consumption enough to create viable mobile chips, or ARM increasing performance enough to create viable desktop and server chips.

      If Intel wins, why do you think "the entire mobile market moves to x86"? If anything, the legacy-software shoe is on the other foot.

      • If you're going to quote partial sentences, please include the primary predicate. I said "distinctly possible".

        My comment wasn't about predicting the future. My comment was about Intel's choice being a valid business gamble, given a distinctly possible future.

        What actually winds up happening has absolutely nothing to do with my comment.

    • by sjames ( 1099 )

      I doubt very much that in a couple of years the mobile industry will change architecture and instruction set just to jump on x86.

      • I doubt very much that ten years from now, mobile devices won't be able to run any software that exists today.

        • by Junta ( 36770 )

          Note that I have two Intel based devices and they both can pretty much run all the ARM applications.

          They both suck terribly at anything that is vaguely demanding, but in *theory* an x86 based future would be able to run today's software.

          Practically speaking I don't see any way for Intel to have some promise of value for x86 architecture in mobile form factor that would overcome the current market situation, but they at least did do their homework and made it technically possible.

            • Aside from power consumption, and by that we mean battery life, there's no problem with Intel in mobile. So we're really just waiting for much better batteries. Maybe all Intel needs to do is wait.

  • by JoeyRox ( 2711699 ) on Monday June 25, 2018 @10:53AM (#56842194)
    And that performance-per-watt disadvantage vs. ARM predates Intel's integration strategy and also their current process-size disadvantage. I don't see any evidence to the contrary in the linked story.
  • by Solandri ( 704621 ) on Monday June 25, 2018 @11:06AM (#56842278)
    This is why Microsoft is doing it. They realized they are not beholden to Intel. They made Windows RT (a port of Win32 to ARM) so that if the Intel x86-64 ship ever sank, it wouldn't take Windows down with it. They don't need it to sell like hotcakes; heck, they don't need it to sell at all. They just need it to be there and ready if ARM overtakes Intel. It's insurance - a hedge against Intel imploding. If that should happen, they'll just transition to Windows for ARM, and all the software companies making Windows apps will (more or less) simply recompile their programs for ARM64, and Windows will carry on as if Intel never existed.
    • Agreed. But I own an ARM laptop running Windows 10. I bought it because I'm a long-term ARM aficionado, and have been dreaming of the day they were available. ARM is nowhere near overtaking Intel. I can see why ARM ChromeBooks and laptops simply don't sell vs. their low-power Intel equivalents. ARM just isn't there yet. Benchmarks of the CPUs look good, but the entire system just isn't up to par with an Intel or AMD based hardware stack. My 8-core 2.45GHz ARM laptop feels like a 1.5GHz dual-core Celeron.
      • ARM just isn't there yet. Benchmarks of the CPUs look good, but the entire system just isn't up to par with an Intel or AMD based hardware stack. My 8-core 2.45GHz ARM laptop feels like a 1.5GHz dual-core Celeron.

        A large part of it is that the ARM chips lack a high-speed, low-latency RAM subsystem. Those things draw a lot of power, so they'd start to seriously lose the power advantage.

        The main disadvantage of x86, namely the high complexity instruction decoder has become an increasingly small part of the power budge

    • by Agripa ( 139780 )

      From my perspective, the ironic part is that if it were not for Microsoft screwing up Windows in an attempt to leverage their desktop monopoly into the tablet and PDA market, the desktop market would be stronger. Microsoft is in a position to kill the x86 desktop market, but Intel has not taken any steps to save it.

  • by Kohath ( 38547 ) on Monday June 25, 2018 @11:14AM (#56842326)

    This was clear a long time ago. Intel was making x86 mobile chips for Intel to gain market share, not because the phone makers wanted x86 chips. It was Intel-focused, not customer-focused. Microsoft did similar things with Windows 8 and that Metro junk.

    Recently, though, Intel has branched out into lots of other growth businesses, buying Movidius, Altera, and Mobileye. They're making silicon photonics chips for optical networks, DOCSIS chips for cable modems, and 3D XPoint memory to bridge the gap between DRAM and NAND. They integrated an AMD GPU and they are building a new GPU of their own.

    It's ironic that articles like this gain traction after Intel has already started to turn things around.

  • by lkcl ( 517947 )

    i've pointed this out here on slashdot a number of times, dating back at least... six years possibly more. the first really clear signs were when ARM came out with the first dual-core ARM Cortex A9 side-by-side demonstration of running a web browser (linux desktop OS) side-by-side with a 1.6ghz intel Atom. it kept up and in some cases loaded pages before the intel processor. at the end of the demo they showed the clock rate of the ARM chip: only 600mhz.

    intel was a memory company. they're proud of their

    • by DamnOregonian ( 963763 ) on Monday June 25, 2018 @01:22PM (#56843114)
      This is nonsense.
      Any time someone throws out the word RISC in the context of modern superscalar processors, they invariably have no fucking idea what they're talking about.
      The distinction between RISC and CISC existed because once upon a time, CISC processors had richer instruction sets at the cost of more cycles per instruction.
      These days, all processors (relevant to this discussion) are essentially CISC, and run at more than 1 instruction per cycle. The terms RISC and CISC are dead terms.
      All superscalar ARMs have instruction decoders that break instructions into smaller micro-operations, a la microcode.
    • This could be my ignorance showing, but there is something I've never understood about Intel's architecture strategy. It's well known by now that Intel chips don't execute x86 instructions directly. Rather, they decode x86 instructions into more RISC-like micro-ops and execute those. Why not expose the micro-ops to compilers? Let programs bypass x86 and get closer to the hardware. That would allow software companies to transition gradually away from x86, rather than jumping feet first into an unfamiliar

      • Why not expose the micro-ops to compilers?

        Micro ops take up more space to perform the same operation, which would worsen the memory bottleneck.

        Also, by exposing the micro ops, you lose all backwards compatibility.

        Thirdly, the translation to micro ops can be optimized dynamically based on context.

    • decoding those instructions takes time. you now have to run the clock at twice the speed of a RISC core in order to decode those "compact" instructions into the same equivalent RISC ones.

      It used to work like this, but hasn't for a long time.

      Desktop x86 CPUs have high maximum clock rates because they don't need to worry as much about heat, not because of complex instruction decode. CPUs are pipelined, and instruction decode is just one extra piece of that long pipeline. It definitely increases CPU size to need to dispatch and break down so many instructions, but really doesn't have any bearing on clock speed.

  • Since open source applications can simply be recompiled to any processor architecture.
  • by Junta ( 36770 ) on Monday June 25, 2018 @11:30AM (#56842436)

    Well, 'integration' isn't the word I'd pick; they have some lock-in if they make a market x86-dependent, and so that was their goal. The assumption would be that if the mobile market became mostly x86, then sure Android would have ARM compatibility, but x86 would be optimal and no one would tolerate the crappy non-native experience. Of course, the glaring flaw is that Intel would have to *live* in that unacceptable non-native experience to begin with, and Intel was right about one thing: no one would put up with such a crappy experience.

    Larrabee was hubris: the bet that a lot of sort-of x86 cores would blunt the demand for GPU acceleration, because even if you couldn't be quite as quick as Nvidia, you could use a familiar programming model. The problem was that Phi *also* required developers to be more careful and picky, so it wasn't like programming ordinary x86. By the time Intel could have possibly made it easier, the world was so used to CUDA that the market was slim. They may have better luck with AVX512 in Skylake Xeons, but who knows. It was always doomed as a GPU because they have no competency there.

    Another problem is being in denial: it took a long time to change gears from 'no competitor' to 'oh, AMD is competitive again'. In the datacenter, Intel had crazy high core counts. On the desktop? Quad-core, because there was no competitive pressure. When Zen was rumored, Intel was skeptical, and when Ryzen came out they were slow to change. Compared to Intel's desktop offerings, AMD was so much better. On the server side, things are a bit more mixed (there Intel actually *has* continued to invest in meaningful advances). For example, desktop core counts were stagnant, as were clock speeds, and there was no AVX512, while server chips moved on and improved on all of those. AMD still has more PCIe lanes and memory channels, but with a caveat: it's more like 4 processors with 2 memory channels and 16 PCIe lanes each rather than 1 processor with 48 PCIe lanes. This is a distinction that doesn't matter for many workloads, but for a few it matters (the memory performance of a single-threaded application is much better on an Intel server than an AMD server).

  • In trouble (Score:3, Informative)

    by PopeRatzo ( 965947 ) on Monday June 25, 2018 @11:39AM (#56842476) Journal

    As of 11:37 EST, Intel's stock price is $50.16, and AMD's stock price is $14.61.

    • Re:In trouble (Score:4, Informative)

      by Logger ( 9214 ) on Monday June 25, 2018 @12:33PM (#56842828) Homepage

      Market Cap : Net Income
      INTC: 235.5B : 4,450M
      AMD: 14B : 81M

  • The use of the x86 instruction set wasn't the big issue with Larrabee. Larrabee would have been a bad idea whether it used ARM or MIPS opcodes instead. Using x86 didn't help, but that was just one among many issues with that architecture. The issues with the Larrabee architecture are things such as no fixed-function hardware for z-buffering or rasterization, not enough hardware threads to hide the memory latency, a memory interface with not that much bandwidth but expensive but not that often

  • The CEO's love life and work life are too closely integrated, I heard.
  • from 1990 Harvard Business Review... https://hbr.org/1990/07/reengi... [hbr.org] Equally don't follow what has been successful if you want to be disrupted...
  • Intel dumped ARM (Xscale) over 10 years ago (2006), and it's not clear even with hindsight that it would have been a successful strategy for Intel to use that ARM license. It seems doubtful that an Apple-Intel alliance around Xscale would have been possible given that iPhone's development (2006-2007) likely began when Intel still had Xscale. I can assume it was explored by Apple or Intel, even if only on a whiteboard, but history shows us that Xscale wasn't used by Apple. (probably price, performance, and l

  • Intel's trying to shoehorn x86 everywhere reminds me of that scene in The Brady Bunch Movie [imdb.com] where Mike Brady (played by Gary Cole) keeps designing every project as a clone of his house.
  • Intel could have been first in the mobile space, except for Microsoft repeatedly leaning on them to stick to making chips for the IBM PC - that would be IBM, 'the PC company,' as they referred to them in internal emails:

    March 1994: "IBM has a LOTUS NOTES .. We have entered another round of "partnership" talks with the PC company [edge-op.org] and mentioned this as an issue, but they claim they can't fix this for us."

    Dec 1996: "we have a conference call with them (intel) re NetPC [edge-op.org] today at 9 .. yup, it would be crazy to In
  • It's that they don't want to do anything else.
    They hold all of the keys to x86 (you need a license from them); why would they give that up?

    I highly recommend people go read about x86 on Wikipedia; it tells you all you need to know about why Intel is not going to give up on x86. And before anyone says the patents expired: that is true for the original instruction set, but there have been quite a few improvements since. SSE, MMX, PAE, virtualization, and a whole host of others have co
  • Intel cannot keep postponing a reckoning with a truth that has been in the air for the last 20 years: the x86 architecture is a beast of the past.

    Once, accumulated expertise made it win out over new designs, but that is no longer the case.
