Intel's Revival Plan Runs Into Trouble. 'We Had Some Serious Issues.' (wsj.com)

Rivals such as Nvidia have left Intel far behind. CEO Pat Gelsinger aims to reverse firm's fortunes by vastly expanding its factories. From a report: Pat Gelsinger is keenly aware he must act fast to stop Intel from becoming yet another storied American technology company left in the dust by nimbler competitors. Over the past decade, rivals overtook Intel in making the most advanced chips, graphics-chip maker Nvidia leapfrogged Intel to become America's most valuable semiconductor company, and perennial also-ran AMD has been stealing market share. Intel, by contrast, has faced repeated delays introducing new chips and frustration from would-be customers. "We didn't get into this mud hole because everything was going great," said Gelsinger, who took over as CEO in 2021. "We had some serious issues in terms of leadership, people, methodology, et cetera that we needed to attack."

As he sees it, Intel's problems stem largely from how it botched a transition in how chips are made. Intel came to prominence by both designing circuits and making them in its own factories. Now, chip companies tend to specialize either in circuit design or manufacturing, and Intel hasn't been able to pick up much business making chips designed by other people. So far, the turnaround has been rough. Gelsinger, 62 years old and a devout Christian, said he takes inspiration from the biblical story of Nehemiah, who rebuilt the walls of Jerusalem under attack from his enemies. Last year, he told a Christian group in Singapore: "You'll have your bad days, and you need to have a deep passion to rebuild." Gelsinger's plan is to invest as much as hundreds of billions of dollars into new factories that would make semiconductors for other companies alongside Intel's own chips. Two years in, that contract-manufacturing operation, called a "foundry" business, is bogged down with problems.

  • by zenlessyank ( 748553 ) on Tuesday May 30, 2023 @10:02AM (#63560953)

    Instead of engineers. Maybe it's not too late.

    • by SomeoneFromBelgium ( 3420851 ) on Tuesday May 30, 2023 @10:10AM (#63560981)

      Anyone see a link with Boeing?
      I can still see the face of that CEO telling everyone (including families of the deceased) that their planes were OK.
      At least this one knows they screwed up!

      • They don't care if they screw up and tank the company. All the higher ups get paid no matter what happens.

      • by rickb928 ( 945187 ) on Tuesday May 30, 2023 @10:24AM (#63561033) Homepage Journal

        Boeing has another layer of responsibility and delivery: life safety.

        Yes, CPUs are part of many products that have life-safety impacts, but commercial aircraft have life safety as an integral part of their purpose, and Boeing has stumbled on that lately, the 737 MAX being a salient example.

        From all I know about Intel's problems, they did not keep up with the state of the art, and whether that comes down purely to management decisions is possible but not yet obvious. Perhaps there was a lack of urgency in development and research, but blaming the 'suits' is simplistic. Not every chipmaker can be expected to excel. Some will fall behind from time to time. How they react determines their ultimate success.

        Me? I'm watching the RISC-V development. New architectures and the open-source paradigm might propel it beyond what ARM has accomplished, leapfrogging the old-school CISC/x86 hegemony along the way. We are in for even more interesting times.

        • ...Some will fall behind from time to time. How they react determines their ultimate success...I'm watching the RISC-V development...

          I remember my Mac being stuck at 500MHz because... Motorola. PowerPC was RISC, wasn't it?

          • by rickb928 ( 945187 ) on Tuesday May 30, 2023 @10:56AM (#63561105) Homepage Journal

            PowerPC was the Motorola/IBM-flavored RISC chip. It exceeded 500MHz, eventually reaching 1.5GHz and beyond, but not in time for Apple. Interest petered out after the x86 architecture developed so rapidly. But radiation-hardened PPC chips were the mainstay of space probes, and held the majority of the auto ECU market for a long time. NeXT was another RISC challenger that fell by the wayside.

            I think the PowerPC, 88000, was Motorola's effort to replace the 68000, Apple's primary performance chip at the time.

            IBM adopted it for their AIX line, where they did a few really interesting things; one was the huge plasma terminal everyone coveted. I knew a few guys who happily used that with emulators to access mainframes and midrange systems without the 'clunky' CRTs of the day. That plasma terminal was one of many, many innovations derived from the UIUC PLATO project.

            • by _merlin ( 160982 ) on Tuesday May 30, 2023 @11:16AM (#63561165) Homepage Journal

              The 88000 was Motorola's RISC architecture. It was unrelated to PowerPC. It powered some UNIX workstations, and was used on graphics accelerator cards for PC VR headsets, but it never ended up in an Apple product. (I believe there were internal prototypes using it, but Apple never sold anything with an 88000 in it.) It was over-complicated, and had a bunch of 68000 holdovers in the FPU in particular.

              PowerPC was really IBM's baby. It evolved from the ROMP, RSC and POWER architectures. PowerPC removed some of the POWER instructions that were expensive in terms of the amount of silicon required, and got rid of the dedicated multiply/divide registers.

              Motorola bought in because they were struggling to compete at the time and didn't have the money to keep developing their own architectures entirely in-house. The 68000 was turning into a dead end. The '040 was an absolute beast for integer performance, but it removed a pile of FPU instructions. The '060 removed even more instructions to make a multi-issue design more practical, but it still ran hot and delivered limited performance improvement - it wasn't going to compete with the Pentium. The 88000 was proving to be over-complicated and at the very least the MMU and FPU would need a complete redesign. With all the CPU architectures hitting the market at the time (HP-PA, Alpha AXP, SPARC, MIPS, PowerPC, ARM), it was going to be hard to get a software ecosystem around the 88000.

              IBM lost interest in PowerPC on the desktop when it became increasingly difficult to compete with Intel and AMD. For embedded applications and game consoles, they could sell far higher volumes over a longer time period. For servers, they could set much higher prices and didn't have to worry so much about power, cooling and noise.

              • by MysteriousPreacher ( 702266 ) on Tuesday May 30, 2023 @11:56AM (#63561261) Journal

                Yep. Apple was looking increasingly to portables, and it didn't make much sense for IBM to focus on the low-power designs needed for that. While the G5 was great on the desktop, there was no hope of getting it into portables. That, and with essentially one customer for these designs, it made little financial sense.

                • PowerPC was a short-term fix for Apple's chip architecture issues. The main problem with it was that Apple needed constant (sometimes yearly) chip improvement, which neither IBM nor Motorola wanted to do for just one customer. Even though Apple was ordering millions of chips, they were always going to be a small but expensive (R&D-wise) customer.

                  Incidentally, the Xbox 360 processor was a more ideal product for IBM to make. The chip design itself did not change during the entire 11 years of manufacturing. The only c

                  • by _merlin ( 160982 )

                    And the Wii U CPU was a derivative of the 750 architecture, introduced in late 1997. That's a 20-year lifetime.

                  • And it is perhaps Apple's fault that there weren't any other customers. IBM and various partners were working on a platform that was going to support multiple operating systems; it went by a few different names over the years, but the best known was PReP (PowerPC Reference Platform). The goal, among other things, was for it to become the future of Mac OS hardware, replacing Apple's own proprietary designs. Versions of Unix and Windows NT were also planned, and of course Linux would have been ported to it becaus

                    • by _merlin ( 160982 )

                      PReP wasn't what anyone wanted. I had a PReP machine. It could boot special versions of AIX, Windows NT and Solaris. There was no application software for any of them. There was a version of OS/2 in development for it, but it was never released.

                      PReP was essentially an ISA PC with a PowerPC CPU replacing x86. It even had an x86 emulator in ROM to run the VGA BIOS. The motherboards were wired for little endian operation, which is the opposite of what Apple needed and the majority of IBM's RS/6000 POWER
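
                      (A quick aside for anyone who hasn't hit the endianness distinction: a minimal, hypothetical C++ sketch of what "wired for little endian" means in practice. The value and names are illustrative only.)

```cpp
#include <cstdint>
#include <cstdio>

// The same 32-bit value is laid out in memory least-significant-byte-first
// on a little-endian machine (x86, the PReP wiring) and
// most-significant-byte-first on a big-endian one (classic Mac PowerPC).
int main() {
    std::uint32_t value = 0x01020304;
    const unsigned char *bytes = reinterpret_cast<const unsigned char *>(&value);
    std::printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    // Little endian prints: 04 03 02 01
    // Big endian prints:    01 02 03 04
    return 0;
}
```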

                    • Licensing Mac OS for clones was a market success; at the time when Apple abruptly killed it, clones had taken over half the market by volume. (Somewhat less by dollars because Apple's own hardware still dominated the high end; most of the clones targeted the mass market.) What it was not was a financial success for Apple, though perhaps that could have been corrected with adjustments to the license fees. PowerPC systems for running Windows NT were caught in the usual problem of "there's no software because

                • by dgatwood ( 11270 )

                  Yep. Apple was looking increasingly to portables, and it didn't make much sense for IBM to focus on the low-power designs needed for that. While the G5 was great on the desktop, there was no hope of getting it into portables. That, and with essentially one customer for these designs, it made little financial sense.

                  There actually was a single-core PPC 970FX [wikipedia.org] that was at least ostensibly designed for portables, but besides being a power pig compared with the 74xx series, it would not have been much faster than the G4 and its idle consumption was too high (not to mention that as the Wikipedia article notes, Apple would have had to redesign their Northbridge, but solving that problem was entirely Apple's job).

                  Nobody at IBM cared about reducing idle power for portable applications, because their market is big iron (which d

                • Apple was looking increasingly to portables, and it didn't make much sense for IBM to focus on the low-power designs needed for that.

                  But that's exactly where PowerPC ended up! It became a low-power, embedded processor...

              • In its heyday the 68000 was a HUGE seller and was used in a wide variety of different things. It also outperformed x86 designs for much of its lifetime.

                Trying to move your customers to a new incompatible processor architecture is a recipe for disaster. It didn't work for Motorola with 68k->88k/PPC and it didn't work for Intel with x86->IA64.
                When Motorola stopped 68k development, most customers jumped ship. Apple stayed and moved to PPC, but although PPC was significantly faster, a lot of software ran under emulation, so users often didn't see the benefit. The first PPC Macs were a lot slower in typical use cases than the higher-end m68k ones.
                Intel at least were smart enough to continue x86 development alongside IA64.

                At the time x86 couldn't keep up with the new RISC architectures either, m68k wouldn't have been any worse off given similar development effort and likely could have retained a significant proportion of their existing customers. Some of the RISC architectures (HPPA, SPARC, MIPS) were actually created by former 68k customers looking for a migration path.
                Most customers moved before the end of the line, for instance Sun never used the 68040 or 68060 and Apple never used the 68060.

                • by _merlin ( 160982 )

                  Oh sure, the 68k line performed better than x86 all the way from when it was launched to the '040. For integer performance and I/O throughput, an '040 with 33MHz bus clock and 66MHz ALU clock could easily beat a '486DX4-100 (33MHz bus, 100MHz core). However, in the early days a 68k system was far more expensive due to needing 16 bit RAM and using more of it. RAM was very expensive.

                  SPARC easily outperformed 68k clock-for-clock, and scaled up to multi-issue designs more easily. The '060 wasn't competitive

                  • People seem to forget that AMD uses a RISC-style core internally, driven by microcode, to support their extended x86 instruction set (Intel was not interested in 64-bit)

                    On top of that, AMD has always been in a competitive battle with Intel, and seeks huge generational improvements, along with a shortened generation cycle (as compared to IBM/Motorola)

                    Those reasons are why Apple moved away from 68k/PowerPC, and they continue to drive Apple's choice to design their own silicon and lease fab space from TSMC

                    Competition matters, rate of change matters, capa

                  • by Bert64 ( 520050 )

                    What also didn't help was that the 040 was marketed using the bus clock, whereas the 486DX2/4 was not, which made people perceive the 040 to be slower.

                    The 68008 came out a bit later than the 68000 and could be used with cheaper 8-bit memory, but this would only have mattered very early on with very low-end systems. Even relatively cheap systems like the Amiga were using 16-bit memory from the start.

                    The 060 was too little too late, they had basically abandoned it and lost the vast majority of their customers by

                    • by _merlin ( 160982 )

                      The 68008 had terrible performance though. The 68k instruction set is less space efficient than the x86 instruction set, so it suffers a lot more when it's limited to an 8 bit bus. That actually made a significant difference early on. The 8088 can do better with limited memory size and width than 68k can. Of course, memory prices came down to the point where that didn't matter.

            • NeXT was another RISC challenger that fell by the wayside.

              NeXTs were 68k-based and then the OS went to x86. Maybe you were thinking of something else?

              • Yeah, Jobs' big plan to migrate to PA-RISC ran out of time, and the migration to other architectures also sort of died on the vine. NeXT computers were 68000-based, and the move to PA-RISC and SPARC did not save them.

                Weirdness and confusion back then.

                • The most interesting things about NeXT:
                  1. Based on BSD with a Mach kernel
                  2. Object-oriented development environment
                  3. PostScript-based GUI display
                  4. Became OS X/macOS

                  So yes, it still lives on multiple chip platforms, so maybe Jobs did get what he wanted: NeXT on RISC (a rose by any other name)

        • It's a rather familiar story for juggernauts. Large companies with such vast reach that they effectively control whole segments of a market will inevitably start coasting on the inertia developed from their dominance. IBM was like that for decades, a monster that basically held the business computing market, from PCs all the way up to mainframes, in the palm of their hands. The saying in the enterprise world at the time was "nobody ever got fired for buying IBM". And then, in the late 1980s as their relatio

          • I'm not sure Intel coasted. They seem to have lost the technological edge to competitors.

            IBM surely didn't coast during the Fat Lou Gerstner era. That was a time when you could be an SMB investigating a new system. Your S/34 was tapping out. You got pitched the AS/400 (the S/38 being a steppingstone), and lo, someone from 'mainframe' would pitch the low end of that to you from a different division. OMG, some salesperson from the AIX group would explain their systems. The AS/400 team would counter that Windows v

            • Intel "maximized shareholder value" which is MBA-speak for reducing expenditures on research to pay out to shareholders

              This is further demonstrated by their recently announced layoffs, which is yet another MBA fantasy of cutting your way to profit

              These short term strategies have gutted Intel and SHOULD be taught to future MBAs as the pitfalls they are

              • Yes, layoffs are often the path to reduced performance and failure. They are often a reaction to losses, reduced revenue, and future risk of shortfalls.

                I was just 'displaced', the result of a major reorganization. After 2-3 years of hiring for growth, especially in technology and developing new products, my employer is cutting back the excess hiring and reassessing operations, intending to become leaner and proactively prepare for a recession. Even if that doesn't come true, reduced expenses without sacrif

      • Re: (Score:3, Insightful)

        by Anonymous Coward
        Boeing was fine until they got raped by the McDonnell Douglas management after the merger.
      • by klubar ( 591384 )

        See also GE (and other companies that were previously engineering-driven and are now run by financial types).

    • the whole industry is taking a beating. Nvidia's sales are plummeting too. Their stock price is high because of AI hype and the expectation that their GPUs will be used in data centers. But it's pretty cheap to build ASICs these days so it's very possible their lunch will get eaten.

      If anything Intel waited too long to get into the GPU / Parallel computing market. I will give you that. But they were getting a lot of pressure from AMD with Ryzen along with demand for CPUs collapsing up until COVID lockdow
      • Re: (Score:3, Interesting)

        by buck-yar ( 164658 )

        Nvidia doesn't care about consumer GPUs any more. Why should they, the H100 sells for $25k. Cloud time training AI is $37k/hr. AI is a goldmine for them. Businesses are throwing money at them. Avg poverty gamer can barely scrape up $500 for a video card, Nvidia knows where the money's at (business and govt).

      • by sjames ( 1099 )

        It doesn't help that Intel still tries to price their products like they are the undisputed king of performance. That after years of faking up benchmarks to maintain the illusion.

    • Instead of engineers. Maybe it's not too late.

      Just like Boeing did. When the bean counters make the technical decisions, the outcome is always the same.

      For people who will tell you that they are the smartest people in the company, they seem to never learn the lessons provided.

      • Just like Boeing did. When the bean counters make the technical decisions, the outcome is always the same.

        That is what happened to Honda a while back. The folks with their fingers on the purse strings essentially dictated how the cars should be built. With the inevitable results. At least that's changed [carbuzz.com].
    • Instead of releasing some neat features to accelerate machine learning, core load balancing, and memory profiling for optimizing database loads to compete with AMD/Nvidia et al., they paywalled those features on their already more expensive processors, all but eliminating any cost advantage over simply buying more AMD cores.

      Actually, that doesn't even make business sense from a suit's perspective. "Here, pay a bunch more to make our CPU as fast as the cheaper AMD version!"

    • by KiloByte ( 825081 ) on Tuesday May 30, 2023 @10:19AM (#63561011)

      Building a new factory takes a few years at least. The pipeline for a new CPU is even longer. So there's no way results could be seen yet, and they won't for quite some time.

      The big risk is that idiot next-quarter shareholders may overrule Pat and make Intel go back to the practice of selling off yet another piece of the company to fund nothing but stock buybacks. Otherwise, Intel is a large company that can survive a downturn.

      And letting suits run amok happens to most companies these days. Red Hat/Fedora are practically dead now -- all while generating fat revenue the way the Solaris and AIX businesses do.

    • Re: (Score:3, Interesting)

      by buck-yar ( 164658 )

      "A generally well-liked San Francisco native with an M.B.A. from Berkeley, Mr. Otellini was considered a break from Intel’s norm when he became chief because he was not formally trained in engineering." nytimes
      During Intel's best era, Core 2, the non-engineer businessman Paul Otellini was in charge. He instituted the tick-tock roadmap cadence: an architecture change, then a fab-process change. When he left, they lost all focus, buying eyewear companies, getting into drones... all sorts of bizarre stuff.

      • by _merlin ( 160982 ) on Tuesday May 30, 2023 @11:04AM (#63561127) Homepage Journal

        Otellini's biggest achievement happened before he became CEO. It was changing the general perception of CPUs from a commodity to a brand during the Pentium era. As stupid as we all think "Intel Inside" stickers are, it was part of Otellini's successful push to make normal people actually think about the CPU inside their computer.

        I think he screwed up a bunch of things as CEO. Coming from the x86 processor group, he was too single-mindedly focused on x86. He sold off their ARM assets to Marvell just before the smartphone revolution happened, convincing himself that they could make x86 competitive in that space. We all know how much of a failure that was - the Atom was never a real contender. He tried forcing x86 into the GPU space with Larrabee, which ended up using too much power for too little performance. Yes, it had a second life powering a generation of supercomputers as Xeon Phi, but they've lost that space to NVIDIA now. He oversaw the mismanagement of multiple acquisitions.

        Intel was ahead on desktop and server CPUs during Otellini's tenure as CEO because AMD was floundering and IBM had given up on that market, leaving them without much competition. AMD got ahead of Intel during the NetBurst era, but that was as much because Intel were headed down a dead end path. NetBurst got high clock speeds at the cost of actual performance - it was the perfect opportunity for AMD to seize, but they failed to follow it up. Now the roles are reversed, and it's Intel with no worthy successor to the *bridge and *well microarchitectures, allowing AMD to pull ahead now that they've actually produced a decent design.

        It'll be interesting to see how things go from here. Will Intel turn themselves around? Will AMD get stuck in a rut again? Only time will tell. But it would be more interesting if upstarts appear. I was excited about P.A. Semi, but then Apple bought them, ensuring their CPUs only end up in Apple products.

        • From the outside, when Intel was king they assumed they would always be at the leading edge of technology. Then they spent 5 years trying to get decent yields out of 10nm while the likes of TSMC and Samsung caught up and passed them. AMD has surpassed them thanks to TSMC and now Samsung.

          Back then, Intel was asked if they would open their foundries to manufacture other companies' CPUs. While they answered publicly that they would, insiders said Intel never took that seriously, as they set their prices so high it was not worth t

        • AMD got ahead of Intel during the NetBurst era, but that was as much because Intel were headed down a dead end path. NetBurst got high clock speeds at the cost of actual performance - it was the perfect opportunity for AMD to seize, but they failed to follow it up.

          AMD had a hard time in those days, not because they failed to seize the opportunity, but because Intel was busy bribing / threatening major CPU customers like Dell, etc.

          Intel had multiple legal actions taken against it after that, all over the world. Had to pay out a bunch of money, but the times had changed and it was too late for AMD to seize that opportunity.
          https://en.wikipedia.org/wiki/... [wikipedia.org].

          Let's see if Intel attempts similar things now that AMD is ahead again.

    • Running a big company is not an engineering task

    • Engineers are good at designing things - they're not necessarily good at running companies. The issue isn't who's running it; it's the communication between them and the engineers.

  • Solid leadership (Score:3, Insightful)

    by AmazingRuss ( 555076 ) on Tuesday May 30, 2023 @10:14AM (#63560995)
    "he takes inspiration from the biblical story of Nehemiah, who rebuilt the walls of Jerusalem under attack from his enemies" Let's build a strategy around a myth that has absolutely no relevance to our situation! Jesus will fix it!
    • Nehemiah

      I'm glad I'm not the only one that thought that was incredibly bizarre.

    • by kbrannen ( 581293 ) on Tuesday May 30, 2023 @10:53AM (#63561097)
      Instead of mocking him, maybe go read the story and consider whether there are "concepts" or "inspiration" in it that might apply, like having a leader who uses both wisdom and practicality to get a job done. People take inspiration from many stories from many places.
      • "a leader who uses both wisdom and practicality to get a job done" This needs to be a lesson?
        • "a leader who uses both wisdom and practicality to get a job done"

          This needs to be a lesson?

          Yes, and we see examples of the need all the time. Lots of people - including CEOs - have trouble keeping their eye on the Big Picture.

      • Yes, it doesn't matter where your inspiration comes from. But still, he's described as a "devout Christian", a.k.a. at least a little delusional. Time will tell if that works out for Intel.
    • Re:Solid leadership (Score:5, Informative)

      by Anubis IV ( 1279820 ) on Tuesday May 30, 2023 @12:58PM (#63561419)

      Let's build a strategy around a myth

      A) The majority of historians disagree with you that it's a myth.

      That particular book in the Bible is largely made of firsthand accounts from Nehemiah—a known historical figure—that are considered to be historically reliable by the majority of scholars. As confirmed in records from Persia, Nehemiah was a real person who served in the court of Artaxerxes I. Later, as recorded in both the Biblical account and Persian records, he became governor of the then-Persian province of Yehud Medinata [wikipedia.org] where Jerusalem is located. His firsthand accounts largely deal with his activities while serving in an official capacity as a Persian governor, but also deal a bit with some of the conversations he had while in the court of the king.

      B) Are you telling me that you've never referenced the Ship of Theseus, advised people to not "cry wolf", or otherwise mentioned any of the lessons that come from our pool of historical and mythological stories?

      that has absolutely no relevance to our situation!

      Had you even skimmed a summary you'd know that to be untrue. The story is about how the walls have fallen into disrepair and how the people need to rebuild them while facing threats of attack from enemies. It's about how they buckled down, did the hard work, and got the job done in record time, despite the challenges along the way. If you don't see immediate parallels to Intel's current situation in which they've allowed their technology to stagnate, need to rebuild their lead, and are facing attack from competitors on all sides, then I don't know what to do for you.

      Jesus will fix it!

      You've actually got it exactly backwards. In the story, they didn't trust in God to fix the wall for them. They went and did the hard work themselves. They didn't trust that God would keep them from attack. They literally had swords strapped on while they worked, and they set up lookouts on the wall so they could call the workers to the aid of anyone who came under attack. This isn't a lame-brained "Jesus, take the wheel" sort of faith that everything will magically work out. It's a "boots on the ground" belief that there's important work to do and that you need to do the hard work to make it happen.

      Also, Jesus didn't arrive on the scene for another few centuries.

  • by evanh ( 627108 ) on Tuesday May 30, 2023 @10:21AM (#63561017)

    ... "both designing circuits and making them in its own factories." Many many companies did exactly that.

    Intel came to prominence because everyone wanted an IBM PC clone. Throughout the 1980's, Intel got shitloads of money for little work ... to Intel's credit it wisely spent that on supporting the PC's growth generally. Any growth of the one ISA grows Intel's bottom line as well.

    • by Anonymous Coward

      Totally agree.
      If Intel wants to remain relevant they need to focus on competing in the currently relevant markets:
      - ARM CPUs - every phone and tablet uses ARM; go after a bigger slice of this
      - AI - they are now David to Nvidia's Goliath; they need to catch up in the GPU/AI market. OpenVINO is great, but not without the Intel Arc hardware to use it well
      - x86 - they may still be bigger than AMD, but AMD is nipping at their heels with better execution; they need to at least keep up here
      - networking and embedded chips

    • Intel came to prominence because everyone wanted an IBM PC clone.

      Intel retained dominance (once gained) through a combination of underhanded dealing and superior process technology. For many years, theirs was the best. Now it isn't.

    • by sjames ( 1099 )

      They also started buying into their own hype. They forgot that they backed into the success with x86. The 8086 was supposed to be a support chip for a truly elephantine platform centered on the iAPX 432. Their idea was a VERY complex CISC processor; the 8086 was to be an I/O channel processor. But then it was noticed that code actually ran much faster on the I/O channel processor than on the CPU itself.

      The 8086 was salvaged from that project, and the 8088 was tacked on as an 8-bit-bus version.

      Meanwhile they forgot

  • Intel gave up on their 64-bit ISA around 2012, and I imagine that gutted a lot of their engineering. Whether the engineers actually left or were just super jaded doesn't matter... licensing AMD's x86-64 ISA for all their future chips had to hurt.

    For those that don't know: Intel's earlier ISAs (IA-8, IA-16, IA-32) were the standard for Windows PCs. Intel's IA-64 was called "Itanium" and was really neat from an engineering perspective. IA-64's biggest flaw was that it ran Windows IA-32 code poorly compared to AMD64.
    • by _merlin ( 160982 ) on Tuesday May 30, 2023 @11:45AM (#63561233) Homepage Journal

      IA-64's biggest flaw was that it ran Windows IA-32 code poorly compared to AMD64.

      Itanium's biggest flaw was that it performed poorly on anything besides hand-optimised assembly code.

      Itanium had a ridiculously complicated architecture. You had to manually mark where dependencies between instructions could happen, and the processor wouldn't reorder instructions to keep the pipeline full. It didn't just have register windows, but also rotating groups within register windows. Register stack spill/fill happened implicitly in the background, which could make latency unpredictable. It had non-faulting speculative load instructions that required you to check whether the result was "not a value" and retry. And it had only a single "global pointer relative" addressing mode, which meant you needed function descriptors that included a GP value as well as the code address, and you had to save/restore GP across calls that weren't guaranteed to be local.
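
      For the speculation point above, here is a rough, hypothetical C++ sketch of what "non-faulting speculative load plus check" asked compilers to do: hoist a load above the branch that guards it, defer any fault, then check and recover at the original site. The function, names, and null-pointer stand-in are illustrative only; real IA-64 did this with ld8.s/chk.s instructions and NaT bits in hardware, not with null checks.

```cpp
#include <cstdlib>

// Sketch of IA-64 "control speculation", modeled in portable C++.
// Original, unspeculated shape:
//     if (guard) return *p + 1;
//     return 0;
long guarded_read(const long *p, bool guard) {
    // The compiler hoists the load above the guard to fill an earlier
    // issue slot (the role of the non-faulting ld8.s instruction).
    long speculative = 0;
    bool nat = true;            // "Not a Thing" until a load succeeds
    if (p != nullptr) {         // stand-in: ld8.s defers faults instead
        speculative = *p;
        nat = false;
    }

    if (guard) {
        if (nat)                // stand-in for chk.s: recovery path
            speculative = *p;   // redo the load non-speculatively
        return speculative + 1;
    }
    return 0;
}

int main() {
    long v = 41;
    return guarded_read(&v, true) == 42 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```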

      In particular, the lack of any kind of out-of-order issue or automatic dependency analysis meant that code needed to be optimised for the specific microarchitecture it would run on or it would perform very poorly. Intel's marketing people said this wouldn't matter because soon all code would be JIT-compiled, like Java and .NET CLR languages, so it would automatically be optimised for the CPU it was running on. This obviously didn't happen. On top of that, trying to optimise for the ridiculously complex architecture was very expensive, making JIT compilers targeting Itanium inefficient at the best of times.

      Compilers that could make the most of Itanium never materialised. You always needed to hand-optimise performance-sensitive code. If you had people who were good at that, it performed very well on integer-intensive code. But you could get 80% of the performance on POWER just using a good optimising C compiler with a lot less development effort. Itanium made more sense if you thought of it as a kind of overgrown DSP that you'd use for running carefully optimised algorithms. It never made sense as a general-purpose CPU.

      • Itanium wasn't a great design in many ways, but it did have really neat little bits like VLIW, which was supposed to make JIT compilation work. Intel never delivered on JIT for Itanium, but Transmeta had a really performant way of running IA-32 code on their 128-bit VLIW processor during the same period.

        We seem to be on the threshold of JITed regular programs being the norm. macOS runs AMD64 code on M1/M2 ARM CPUs faster than Intel/AMD CPUs (check out Apple's "Rosetta 2", which lets x86-64 apps run on M1/M2). Windows
        • by _merlin ( 160982 )

          Rosetta 2 isn't primarily a JIT - it statically recompiles the executable the first time you use it, and only uses a JIT for problematic parts: self-modifying code, and cases where the x86-64 code is generating code itself. (The Rosetta used for running PowerPC code on x86 was a JIT.) AArch64 is a relatively simple architecture for the compiler to optimise for, and Apple's M series CPUs have some features to make running recompiled x86-64 code simpler (e.g. they can operate with a total store order memory access mod
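
          To unpack the memory-ordering point: x86-64 hardware guarantees that stores become visible in program order (total store order), while stock AArch64 does not, so naively translated x86-64 code would need a barrier after nearly every store. Below is a minimal, hypothetical C++ sketch of the pattern that breaks; the relaxed atomics model the plain stores an x86 compiler emits, and the names are illustrative.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

// Classic message-passing idiom that TSO hardware makes safe "for free".
std::atomic<int> data{0};
std::atomic<int> flag{0};

void producer() {
    data.store(42, std::memory_order_relaxed); // store #1
    flag.store(1, std::memory_order_relaxed);  // store #2
}

void consumer() {
    while (flag.load(std::memory_order_relaxed) == 0) {
        // spin until the flag store becomes visible
    }
    // On TSO hardware (x86, or an M-series core in its TSO mode), store #1
    // is visible before store #2, so this prints 42. On weakly ordered
    // AArch64 the stores may be observed out of order and it can print 0.
    std::printf("data = %d\n", data.load(std::memory_order_relaxed));
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```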

          • Every modern JIT heavily relies on ahead-of-time optimizations for speed (including Rosetta 1). Rosetta 2 takes any AMD64 binary and runs it immediately (including AMD64 programs that are themselves JITs emitting AMD64 code which is then executed). I would be very surprised if you could find a commonly used JIT that doesn't do ahead-of-time optimizations. Also, very, very few programs are dynamically generated, so in the real world the performance of the truly dynamic part of a JIT doesn't matter.
      • Itanium also just wasn't particularly fast in the best case, and PC processors were continuing to get faster so rapidly that the writing on the wall was clear, legible, and available in a dozen languages. It was just too expensive for what it did, which wasn't even that exciting.

      • The first iteration of Itanium, Merced, had crap performance both in X86 emulation mode and in its own native IA-64 mode. The second iteration, McKinley, had much better performance. But it got rid of X86 emulation. And it had huge power consumption and footprint. It was never going to scale down into the desktop or workstation market.

    • by sjames ( 1099 )

      IA-64 was interesting, but even running native code, its performance never matched the theory. For one, it turns out that compilers suck at doing the parts that Intel decided the compiler should do.

      In spite of x86_64 being able to kick its ass, they priced it around $8000 each. It was widely known as Itanic and frequently inspired analogies like "kicking a dead whale down the beach" and "strapping JATOs to a pig".

      As a self-inflicted coup de grace, their business deals and hype with partner companies that already pro

  • and Intel hasn't been able to pick up much business making chips designed by other people.

    Duh.

    Intel is a vertically integrated company and cannot possibly be a rent-a-fab because of it (nothing good comes from paying your competitors to produce your stuff)

    It's the entire reason Intel is fucked and has been fucked for 6 years and counting, and because Intel still hasn't floated breaking the company up as a valid solution, it will clearly only do so as a last resort, i.e., when it's too late.

    • "Intel is a vertically integrated company and cannot possibly be a rent-a-fab"

      Samsung Electronics is vertically integrated and also runs a successful foundry operation.

  • is a bad idea because Jesus walked everywhere and never learned to drive.
  • Don't believe a word of it; investment firms have been spreading this narrative for the last couple of years to try and get Intel to spin off its fab business so they can get themselves a nice big fat cash payout. They actually know jackshit about the industry, and many of the claims made in this article are complete bullshit. I wish they'd just stop, because it's just infuriating to read.
    • by sapgau ( 413511 )
      Clearly they need to successfully complete their upgrade plans.
      Sleep on the factory floor, whatever it takes!
      But they should be applying their knowledge and resolving issues quickly; another delay will just be more bad news for their future.
  • One of the reasons companies like Nvidia and AMD are surpassing Intel is that in a weak market they have made cutbacks and streamlined their operations; now this fool thinks he can grow Intel profitably by investing into a weak market instead of cutting back. They over-invested at the height of the market, and now, to back that up, they are planning to over-invest in a trough. And he's talking about serious issues in leadership? Well, Intel still has them.

    • Austerity won't save Intel, because they are behind, and it will make them fall farther behind.

      It's not clear that spending money can save them either, but not spending money certainly will not.

  • Gone AWOL? Never existed?

  • Thank you! I've been wanting a reason to move to AMD. That was it.
