
AMD Grabs Xilinx for $35 Billion as Chip Industry Consolidation Continues (techcrunch.com)

The chip industry consolidation dance continued this morning as AMD has entered into an agreement to buy Xilinx for $35 billion, giving the company access to a broad set of specialized workloads. From a report: AMD sees this deal as combining two companies that complement each other's strengths without cannibalizing its own markets. CEO Lisa Su believes the acquisition will help make her company the high performance chip leader. "By combining our world-class engineering teams and deep domain expertise, we will create an industry leader with the vision, talent and scale to define the future of high performance computing," Su said in a statement.


Comments Filter:
  • The value of property is depressed right now, so if you've got money, you buy. It's the same as in 2008, and it's still just as bad for consumers.
    • by Junta ( 36770 )

      Of course, in the tech sector this has been going on for years with no pandemic to encourage it.

      • by AmiMoJo ( 196126 ) on Tuesday October 27, 2020 @11:55AM (#60654622) Homepage Journal

        I'm actually surprised it has taken this long. I thought it might happen sooner after Intel bought Altera.

        Having relatively small programmable logic cores on CPUs could provide some massive performance improvements in certain tasks. Maybe the problem is that they are too specialized, and not of much use in the kind of things consumers care about. As such peripheral FPGAs make more sense.

        • Having relatively small programmable logic cores on CPUs could provide some massive performance improvements in certain tasks

          IIRC, it was also going the other way. Some Xilinx parts have embedded ARM or PowerPC cores in the middle of a sea of gates. Sometimes you can boost the performance of your FPGA with some dedicated circuitry.

        • by Junta ( 36770 )

          Intel tried to release a set of CPU SKUs with an integrated FPGA, but the product isn't *that* interesting, so there wasn't much to drive adoption.

          The problem is that, in both performance and power, a traditional chip will always run circles around an FPGA. So you need a scenario where the flexibility of an FPGA is critical, and/or you have a bunch of Verilog designs tailored to a suite of applications that you need to switch between.

          A handful of places get some value out of playing in Verilog and getting an

          • by Anonymous Coward

            >a traditional chip will always run circles around an FPGA in both performance and power
            What's a traditional chip? A CPU or an ASIC?
            If you're saying a CPU will always run circles around an FPGA, the answer is no, that's not always true. You can heavily parallelize and pipeline computation on an FPGA in ways that a CPU simply can't. It really depends on the problem.
            But for ASICs, yes.
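
            To illustrate the kind of parallelism and pipelining meant here, a minimal Verilog sketch (purely hypothetical, not from any real design): a pipelined adder tree that sums 16 bytes every clock cycle, work a CPU would do with a loop or a SIMD pass.

            module adder_tree_16 (
                input  wire         clk,
                input  wire [127:0] bytes_in,  // 16 bytes, packed
                output reg  [11:0]  sum_out    // worst case 16 * 255 = 4080
            );
                // Each pipeline stage performs all of its additions in parallel;
                // a new 16-byte word can be accepted on every clock edge.
                integer i;
                reg [8:0]  stage1 [0:7];
                reg [9:0]  stage2 [0:3];
                reg [10:0] stage3 [0:1];

                always @(posedge clk) begin
                    for (i = 0; i < 8; i = i + 1)  // 8 additions in the same cycle
                        stage1[i] <= bytes_in[16*i +: 8] + bytes_in[16*i+8 +: 8];
                    for (i = 0; i < 4; i = i + 1)
                        stage2[i] <= stage1[2*i] + stage1[2*i+1];
                    for (i = 0; i < 2; i = i + 1)
                        stage3[i] <= stage2[2*i] + stage2[2*i+1];
                    // One result emerges per clock once the 4-stage pipeline fills.
                    sum_out <= stage3[0] + stage3[1];
                end
            endmodule

            A CPU has to serialize that work or spend SIMD lanes on it; the fabric simply lays all the adders out side by side.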

            • by Junta ( 36770 )

              The issue is that for most problems, an off-the-shelf processor is already pretty well mapped to the need. Sure, ASICs are faster if you have a specific application, but the chance that you can Verilog up something to put in an FPGA and outperform solving the same problem on a CPU or GPU is slimmer than most would admit. The processors are advanced, carry a variety of capabilities (big vector instructions, many cores), and are really good nowadays at idling currently irreleva

          • by ncc74656 ( 45571 ) *

            The problem is that, in both performance and power, a traditional chip will always run circles around an FPGA.

            So that's why cryptocurrency mining on FPGAs was never a thing, and never led to the development of mining ASICs...oh, wait.

            Not every task will benefit from offloading to programmable logic or custom silicon, but that doesn't mean that no such tasks exist.

            • by Junta ( 36770 )

              I mean, you had it right in the comment: FPGAs prototyped the ASICs. If you could either put the mining design into an FPGA or buy an ASIC of that design... well, you'd lose out trying the FPGA.

              My point was that there are precious few high-performance scenarios where you'd go FPGA and *not* go ASIC, and that most people don't have it in them to do the work needed for either and are served well enough by general CPU or GPU programming.

              • by ncc74656 ( 45571 ) *

                By "traditional chip," I assumed you meant "CPU," since you were referring to CPUs with some integrated FPGA fabric (?), not chips in general.

                • by Junta ( 36770 )

                  Yeah, I tried to cover too much with too few words. Basically, either you aren't ready to do Verilog and/or your problem is well covered by CPUs, or you are invested and you head on through to an ASIC after the FPGA. That leaves FPGA as the long-term approach only if you have a library of FPGA payloads you need to switch between, or if it's a relatively low-power general chip on a board, basically there to 'soft-rework' mistakes in your board layout and/or handle some little housekeeping responsibilities...


    • No, this is a hedge against Intel with Altera and their reconfigurable server market.

  • by account_deleted ( 4530225 ) on Tuesday October 27, 2020 @10:19AM (#60654158)
    Comment removed based on user account deletion
    • Is there not also the possibility of FPGA chiplets?

      Most of AMD's lineup has space to spare under the lids because they are only 1- or 2-chiplet designs, but the primary I/O die and layout allow for 4 chiplets. A highly parallel logic cascade across even a relatively small input space, like a 128-byte buffer, is quite useful.
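
      As a rough, hypothetical Verilog sketch of what a "highly parallel logic cascade across a 128-byte buffer" could look like (no relation to any actual AMD or Xilinx design): compare every byte against a target value in a single cycle.

      module buffer_match_128 (
          input  wire          clk,
          input  wire [1023:0] buffer_in,  // 128 bytes, packed
          input  wire [7:0]    target,
          output reg  [127:0]  hit_map,    // one bit per byte position
          output reg           any_hit
      );
          // All 128 comparators sit side by side in the fabric, and the
          // OR-reduction is just a tree of gates, so the whole "scan" takes
          // one clock instead of a 128-iteration loop.
          integer i;
          reg [127:0] hits;

          always @* begin
              for (i = 0; i < 128; i = i + 1)
                  hits[i] = (buffer_in[8*i +: 8] == target);
          end

          always @(posedge clk) begin
              hit_map <= hits;
              any_hit <= |hits;
          end
      endmodule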
    • As I think I've said here before, another aspect of this is that FPGA fabric, which is highly repeatable in places and easily polyfused out if faulty, is very good for testing new process nodes whilst still getting a reasonable yield. FPGA companies get early access to help debug nodes because of this, so perhaps another less obvious win for AMD there.

      AMD has been out of IC fabrication for 11 years now. The old AMD manufacturing assets were spun off as a separate company called GlobalFoundries.
      In 2018 AMD abandoned GlobalFoundries for 7nm parts, making GloFo abandon all 7nm development and drop out of the race (they were on track until AMD abandoned them).

      So no, no win for AMD, obvious or not so obvious. The only company that would have a win like you describe is Intel...

      • Actually you have it backwards. It's not an advantage for Intel/Altera, because they roll their own fabs, so they already get first access anyway.

        Colourspace's logic would be a benefit for AMD: they would get in on the foundries' new processes sooner, since foundries like to prove process nodes on FPGAs.

      • Comment removed based on user account deletion
    • by tlhIngan ( 30335 )

      As I think I've said here before, another aspect of this is that FPGA fabric, which is highly repeatable in places and easily polyfused out if faulty, is very good for testing new process nodes whilst still getting a reasonable yield. FPGA companies get early access to help debug nodes because of this, so perhaps another less obvious win for AMD there.

      Not so easy actually - FPGA fabrics aren't identical - some blocks connect to other blocks and such, but you can't always substitute one block for another. Fl

  • .. we joked it should be Xilinx to buy AMD. Things have changed a lot in 5 years.

  • Ouch to everyone else
  • I admit I don't understand the numbers on this one. However, I am a believer in Lisa Su. She has a BS, MS and a PhD in electrical engineering from MIT and has a proven track record. It is very rare to see a competent and qualified person as the CEO of a chip company. Intel has been struggling with leadership challenges. Several of their recent CEOs were complete duds.
    • by Anonymous Coward on Tuesday October 27, 2020 @11:21AM (#60654456)

      Blink twice if you are in danger

    • There is a reason the technical among us (me included) are not usually good executive managers. Such work would definitely not be in my natural wheelhouse because I just don't think that way. I suppose I could do it, but I don't hold any illusions that it would be easy, or that I'd be somehow good at it. I know I laugh at the "bean counters" and "MBAs" that are always getting in my way, but deep down I know what they are doing is necessary even if I don't understand why.

      Engineers solve problems, some of u

      • Generalizations usually don't work. Consider Intel: Noyce, Moore, and Grove were all engineers, and all did well. Barrett was iffy; Otellini was good, then not so good. Krzanich was awful. Of those, only Otellini was non-technical.
    • I agree. Lisa Su, whatever her degrees, has steered AMD nigh perfectly since she took over in 2014. She's certainly one of the best CEOs in tech. If only I had realised when AMD shares were at 16 USD...
    • There is a reason the technical among us (me included) are not usually good executive managers. Such work would definitely not be in my natural wheelhouse because I just don't think that way. I suppose I could do it, but I don't hold any illusions that it would be easy, or that I'd be somehow good at it.

      I had the privilege of seeing her speak in person in front of a few hundred people about 4 years ago. She was clearly a PhD engineer speaking to us. She did not have the polish yet. She has clearly been tryi

  • by dskoll ( 99328 ) on Tuesday October 27, 2020 @11:11AM (#60654384) Homepage

    I wonder if the combined company will release a version of the Ultrascale+ with AMD core(s) instead of ARM? That could be pretty interesting for data centre applications and high-powered AI/ML workloads.

    • by SWPadnos ( 191329 ) on Tuesday October 27, 2020 @12:02PM (#60654666)

      I wonder if the combined company will release a version of the Ultrascale+ with AMD core(s) instead of ARM? That could be pretty interesting for data centre applications and high-powered AI/ML workloads.

      Yeah, I was wondering about something like this - using the low power cores in the Epyc 3000 [amd.com] series.

      Also, there's a great opportunity to make a GPU that actually creates a hardware processing pipeline on the fly. The Xilinx Versal [xilinx.com] is a lot like what I'm thinking of, just with Zen and GPU cores instead of ARM and DSP slices.

      The Versal also gives some clues as to the value of Xilinx to AMD - they have PCIe Gen5 and several other really high-speed communications interfaces (600G ethernet, anyone?). The on-chip networking tech may help improve Infinity Fabric. It'll be interesting to see what they come up with.

  • by aRTeeNLCH ( 6256058 ) on Tuesday October 27, 2020 @01:12PM (#60654962)
    Very exciting times. I'm looking forward to what comes out of this. There should be lots of synergies to be had with two such complementary companies. Reconfigurable CPUs and APUs? Scaling interconnects? Reconfigurable cache connectivity?
    • Intel bought Altera five years ago, and as far as I've seen, the synergies are pretty close to zero so far. I wouldn't expect anything different from this disaster.

  • AMD will get the synergy from Xilinx that Intel should have gotten from Altera!
    • You jest, but you might be partly right.
      By just having a complete FPGA ecosystem in-house, they are countering Intel's Altera move. Perhaps they see something around the corner/have real plans or they could be just hedging their bets. Either way, Intel won't have a leg up on them in the FPGA business any more.

