Arm Takes Aim at Intel Chips in Biggest Tech Overhaul in Decade (bloomberg.com) 57

Arm unveiled the biggest overhaul of its technology in almost a decade, with new designs targeting markets currently dominated by Intel, the world's largest chipmaker. From a report: The Cambridge, U.K.-based company is adding capabilities to help chips handle machine learning, a powerful type of artificial intelligence software. Extra security features will further lock down data and computer code. The new blueprints should also deliver 30% performance increases over the next two generations of processors for mobile devices and data center servers, said Arm, which is being acquired by Nvidia. The upgrades are needed to support the spread of computing beyond phones, PCs and servers, Arm said. Thousands of devices and appliances are being connected to the internet and gaining new capabilities through the addition of more chips and AI-powered software and services. The company wants its technology to be just as ubiquitous here as it is in the smartphone industry.
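
The summary's "30% performance increases over the next two generations" is ambiguous: it could mean 30% in total, or roughly 30% per generation. Under the per-generation reading (an assumption, not something the summary confirms), the gains compound rather than add. A minimal sketch of that arithmetic in C:

/* Sketch of how per-generation gains compound. The 30%-per-generation reading
   is an assumption; the summary's phrasing could also mean 30% in total. */
#include <stdio.h>

int main(void) {
    double per_gen = 1.30;                 /* assumed +30% per generation */
    double after_two = per_gen * per_gen;  /* gains multiply, not add */
    printf("after two generations: %.2fx (+%.0f%%)\n",
           after_two, (after_two - 1.0) * 100.0);  /* 1.69x, i.e. +69% */
    return 0;
}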
  • This is no surprise; it's entirely expected given NVIDIA. It's the best path for ARM CPUs that will compete with Apple's.
    • by rgomezc ( 992326 )
      This is my ignorance speaking... but: aren't Apple's ARM chips just designs built on the ARM "core"? Or do they license some parts but develop other parts of the CPU themselves? Or what's the deal there? I have no real idea how the ARM licensing deal and the different designs based on ARM relate to each other...
      • by guacamole ( 24270 ) on Tuesday March 30, 2021 @03:46PM (#61218218)

        Apple designs its own ARM cores. That's why they beat ARM Holdings' own designs. I believe the only thing that Apple licenses is the instruction set.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        Some ARM chips are basically stock cores. Samsung gave up on customizing for instance and many of their newer ARM chips are closer to a "stock" design.

        Apple has added custom instructions to their chips to accelerate JavaScript and video encoding/decoding, for instance. They had a team that built their own GPU for it. It's pretty custom at this point in some ways and quite stock in others. This is also why Apple wins some benchmarks over other ARM chips. They put in the effort to improve the earlier design

      • by perpenso ( 1613749 ) on Tuesday March 30, 2021 @03:54PM (#61218236)
        My understanding is that Apple has a perpetual license. I do not know if that covers only the ARM designs at the time of the signing or anything in the future.

        At worst it would seem to be something like: Apple has a perpetual license to the arm32 and arm64 instruction sets. They would seem to have a license to whatever reference designs ARM was offering at the time. So on day zero they had working hardware. However, for every day since then they have been able to update or replace elements of that design, while maintaining instruction set compatibility. The M1 is the result of that process after many years.

        Various other companies have done similar things. For example we have the Broadcom ARM cpus in the Raspberry Pi and the Qualcomm ARM cpus in various non-Apple phones and tablets. Apple seems to have quite a lead over these two.

        Now enter NVIDIA, a company that may be more competitive with Apple than the others with respect to design due to relevant experience and money.
        • by ShanghaiBill ( 739463 ) on Tuesday March 30, 2021 @03:59PM (#61218258)

          My understanding is that Apple has a perpetual license.

          Yes, Apple bought a perpetual license from ARM in 2008. They can do anything they want with the design, make as many chips as they want, and pay no royalties, ever.

          The designs have diverged for 13 years. About the only thing still in common between ARM and Apple CPUs is the instruction set.

          • Not just perpetual but also an architectural license, which means they design what they want from an ARM core instead of accepting the core as it is. That means being able to tweak things like machine learning.
          • Does it even matter though? Apple is rapidly moving towards a closed, processor-agnostic developer environment. If they wanted to extend the ARM instruction set, or add their own extensions, it would be pretty trivial to support. It seems to me they just use ARM because even if you start from scratch now, you're pretty much just going to end up with a RISC instruction set that looks a lot like ARM (compare with RISC-V). Since they have the licences (they were shareholders if I recall correctly) they have no

            • Does it even matter though?

              It matters that it doesn't matter.

              The more that software moves to processor and architectural agnosticism, the better. CPU designers can be free to innovate both for performance and efficiency.

              We have been stuck in an architectural cul-de-sac for 35 years. The x86 arch is responsible for more than a gigatonne of annual CO2 emissions.

              • It matters that it doesn't matter.

                The more that software moves to processor and architectural agnosticism, the better. CPU designers can be free to innovate both for performance and efficiency.

                It matters a bit, and there's still inertia. Back in the olden days, porting to a new arch was a pain. I worked (interned) for a CAD company back in the 90s, and they of course supported all the major chips: x86, MIPS, SPARC, Alpha, HPPA, PPC, and making sure it built everywhere was a huge pain.

                It's much better now. Th

            • by Koen Lefever ( 2543028 ) on Tuesday March 30, 2021 @07:18PM (#61218852)

              they have the licences (they were shareholders if I recall correctly)

              ARM was originally "Acorn RISC Machine", designed as the CPU for Acorn's 1987 Archimedes computer series.

              When Acorn spun off ARM as "Advanced RISC Machine Ltd." in 1990, it was structured as a joint venture between Acorn Computers, Apple Computer, and chip manufacturer VLSI Technology.

              The 1992 ARM6 was designed by Acorn and Apple together (the ARM610 being used in Apple's 1993 Newton PDA).

          • Apple did not buy anything from ARM.
            They owned ARM.

            When ARM got sold they retained a license.
            Simple.

            • This is incorrect (as usual).
              Apple purchased their perpetual license in 2008, long after they had divested from Arm.
              They purchased the license after acquiring PA Semi so that they could leverage that purchase to build custom processors for the iPhone.

              Furthermore, Apple has never been anything but a minority holder of Arm.
              At their peak, they owned 30% of Advanced RISC Machines Ltd., with VLSI and Acorn owning the remaining 70%.
              As of 1999, they only owned around 14%.
          • They can do anything they want with the design

            Close to anything, but not quite. Every core you design has to pass Arm's compatibility test suite.

        • by rgomezc ( 992326 )
          Ok, so they won't be able to automatically benefit from these new features. They would have to license them again, if it even makes sense... as they have their own designs for ML and that kind of stuff.
          • Ok, so they won't be able to automatically benefit from these new features. They would have to license them again, if it even makes sense... as they have their own designs for ML and that kind of stuff.

            Most likely. Sort of like once upon a time AMD had an Intel license and was a second source manufacturer, but today we have wildly different designs from both.

            So we might be moving from an Intel v AMD type of world to an Apple v NVIDIA sort of world. Purely speculation but fun to think about. Nothing like competition to get us cooler "stuff". :-)

        • I think you're mistaken about nVidia vs. Apple being more competitive due to money. nVidia isn't even in Apple's class regarding money.
          • I think you're mistaken about nVidia vs. Apple being more competitive due to money. nVidia isn't even in Apple's class regarding money.

            Yes and no. Apple has more money but it has to spread it over more things. NVIDIA is more focused.

            But as I wrote in another comment we can still draw parallels despite financial disparity: "So we might be moving from an Intel v AMD type of world to an Apple v NVIDIA sort of world. Purely speculation but fun to think about. Nothing like competition to get us cooler "stuff". :-)"

            • And I think Apple is RAZOR SHARP focused on ARM chip design. They have the fastest (consumer) chips, with the lowest power draw, and now they power all their hardware (iPad, iPhone, Apple Watch, Apple TV & Macs). ARM design has to be their number one focus, to stay on top.
              • And I think Apple is RAZOR SHARP focused on ARM chip design. They have the fastest (consumer) chips, with the lowest power draw, and now they power all their hardware (iPad, iPhone, Apple Watch, Apple TV & Macs). ARM design has to be their number one focus, to stay on top.

                I am not saying Apple's SoC (System On a Chip) engineers are any less focused than NVIDIA's SoC engineers. What I am saying is that Apple has to spread the money over a lot more than SoC design compared to NVIDIA.

          • I don't think either company is likely to be limited by money. They are both very rich. The difference between spending $1 billion or $10 billion (or $100 billion) probably doesn't make as much of a difference as making good use of the resources. Where they will both be limited is in management talent (i.e. people who will be making the decisions) and top-level engineering talent (people advising on what is possible) - probably a similar number of people at both companies. The lower-level engineering and management

      • by teg ( 97890 )

        This is my ignorance speaking... but: aren't Apple's ARM chips just designs built on the ARM "core"? Or do they license some parts but develop other parts of the CPU themselves? Or what's the deal there? I have no real idea how the ARM licensing deal and the different designs based on ARM relate to each other...

        No, Apple has a very large in-house CPU design team. They have licensed (perpetually) the ARM instruction set, so they make their own chips that are ARM compatible - but with additions, like their Neural Engine. That's why Apple's mobile CPUs are 1-2 years ahead of the competition, which hasn't caught up with 2019's A13 Bionic [wikipedia.org] yet, much less 2020's A14. One reason here is that Apple uses its CPU across the entire phone range, while the CPU vendor used by the competition (Qualcomm) sells multiple ranges -

        • But we aren't comparing apples and oranges. Apple the company creates SoC solutions where everything is co-located, rather than spread out on a large mobo.
          • by teg ( 97890 )

            But we aren't comparing apples and oranges. Apple the company creates SoC solutions where everything is co-located, rather than spread out on a large mobo.

            All of them do that... and then they add one or more ARM cores in different configurations to the SoC, or make their own (as Apple does). Sometimes, things that are part of the SoC are split off to allow vendor choice - e.g. 4G vs 5G, which Qualcomm (at least for a while) put on a separate chip. This allows the phone manufacturers to choose the solid and well-performing 4G, or the "new, marketing friendly, but still hot and power hungry" 5G with the same main SoC. And then later, they can put it back [anandtech.com].

      • Apple did things that are known to improve speed, like locating memory and so on closer to, and on the same chip as, the CPUs.
        • by willy_me ( 212994 ) on Tuesday March 30, 2021 @07:20PM (#61218860)

          Apple did things that are known to improve speed, like locating memory and so on closer to, and on the same chip as, the CPUs.

          People keep on saying this but it is not exactly true. Locating the memory closer to the CPU has only a minimal impact on speed. Latency is slightly reduced and bandwidth is not affected at all. It takes about 0.7 ns for a signal to travel 10 cm. It takes about 10 ns to access a word in memory. So there is a slight improvement (~10%, rough numbers sketched below), but this improvement is further minimized by the CPU cache. I could see that as being a contributing factor in Apple's decision, but not the main reason.

          The real improvement with the Apple design is that they can achieve high bandwidth with low power consumption. They can operate the memory at a lower voltage and do not need to drive the memory lines as hard while still maintaining high speeds. So it is a great design choice considering where the M1 is being deployed.

          It will be interesting to see how Apple handles their Pro machines, where power is less of a concern. The Ryzen CPUs demonstrate that external memory can go just as fast as the M1's. Apple might opt for an approach using more traditional memory (DIMMs) and some extra CPU cache to balance it out. Using the same solution as the M1 really does not make much sense in their Pro machines - except possibly their laptop. If they do insist on soldering in memory, they should use HBM2 memory - expensive, but with massive bandwidth.
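
          A rough sketch of the arithmetic above. The ~0.7 ns per 10 cm and ~10 ns figures come from the comment itself; the trace lengths and the round-trip assumption are illustrative guesses, not measurements:

          /* Back-of-the-envelope check of the latency argument above.
             From the comment: ~0.7 ns of propagation per 10 cm of trace, ~10 ns to
             access a word of DRAM. Trace lengths below are illustrative guesses. */
          #include <stdio.h>

          int main(void) {
              const double prop_ns_per_cm = 0.07;  /* ~0.7 ns per 10 cm */
              const double dram_access_ns = 10.0;  /* first-word access latency */

              double far_cm  = 10.0;  /* DIMM several centimetres from the CPU */
              double near_cm = 1.0;   /* RAM in the same package as the CPU */

              /* the request goes out and the data comes back, so count the trace twice */
              double far_ns  = dram_access_ns + 2.0 * prop_ns_per_cm * far_cm;
              double near_ns = dram_access_ns + 2.0 * prop_ns_per_cm * near_cm;

              printf("far: %.2f ns  near: %.2f ns  saving: %.0f%%\n",
                     far_ns, near_ns, 100.0 * (far_ns - near_ns) / far_ns);
              /* ~11%, and only on the first word of a burst; cache hits see none of it */
              return 0;
          }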

          • Well, the more the electrons need to travel, it all adds up. A few nanos here and there. It probably also means DMA between cpu memory and grfx is also that much faster. I can't help but wonder, with old machines like the Amiga, if you look at the mobo, how much of their slowness was due to the lines between all the chips being tens of cm long.
            • But it does not really add up. The latency is only for the first word - not every bit of data that is transferred. It would be different if it were cache and directly accessed, but that is not the case. Memory access is different and has a minimum latency applied to the initial request. That latency is an order of magnitude greater than the time it takes the signal to travel between chips. Halving the trace length does not double the speed - not even close.

              It probably also means DMA between cpu memory and grfx is also that much faster.

              Not really that much faster. First, the M1 is designed to eliminate the need for such transfers, but that is beside the point.

              • Will need some kind of support for video cards (not USB-based ones).
                Also without forcing people who need PCIe slots or just CPU power to pay for a high-end video card setup like the Mac Pro ass tray.

                Maybe a video card on a high-bandwidth bus with its own RAM? Multiple CPUs / GPUs on cards? With their own RAM?

                Nice to have real M.2 slots.

              • > The latency is only for the first word - not every bit of data that is transferred.
                That's because the systems are designed to operate within those time constraints; the buses and delays are what they are, so the CPUs work within those bounds. What I'm trying to say is that if the latency were lower, then stuff would be running at higher frequencies.
                > Not really that much faster. First, the M1 is designed to eliminate the need for such transfers, but that is beside the point. Transferring a block or
                • And the 486 and Intel machines have a different architecture compared to machines like the Amiga. The consoles, which had higher performance, and the 486 all went the route where the graphics had their own memory, separate from the CPU.
              • So I stick by my statement that the speed benefits originating from the M1 memory design are minimal at best. Let us wait to see what Apple does with their Pro hardware - it will be interesting.

                PoP allows for two potential speedups:
                lower initial latency (important for cache-unfriendly code) and higher clocks on the memory bus.
                As for the latter, I don't think that's actually done, since they're still just using "off-the-shelf" DDR RAM, I assume.

                Apple's design isn't quite PoP (it's worse, really- it's "RAM off to the side, but in the same package") but the possible performance improvements are the same.

                While I disagree that lower latency is minimal at best, it is certainly not "huge" by any means.

          • There are two choices really.

            The obvious one is to move to socketed RAM and a DDR controller on board. That's what AMD and Intel did, obviously. The other option is to go for multiple CPUs and have a high-bandwidth interconnect between them, which is what Fujitsu does. Both will cost power and reduce overall bandwidth.

        • PoP DRAM has been standard on ARM SoCs for longer than Apple has owned an architectural license.
  • That's what I see ML used most for. I do not see it in my everyday life beyond maybe DDG.
  • I expect AI needs will grow beyond chips, and "AI databases" will be the next frontier, as the factors needed to get to the next level will grow beyond RAM's size. The "brain" will need to be networked and be somewhat asynchronous. We don't know if dedicated neural network chips will be a good fit for this change.

  • ARM runs Windows 10 and any and all applications without any kludge software or hardware that slows it down.
    • ARM runs Windows 10 and any and all applications without any kludge software or hardware that slows it down.

      Good news, we are there. Actually, we have been there for a little while. Of course I am excluding software that includes assembly language source code (that would include various Intel intrinsics in C/C++).
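
      A minimal sketch of the exception being described: hand-written intrinsics only compile for the ISA they target, so an ARM build has to take a portable fallback (or a hand-written NEON equivalent). The function below is illustrative, not from any particular codebase:

      /* The SSE path only exists on x86; everywhere else the portable loop runs. */
      #include <stddef.h>

      #if defined(__SSE2__)
      #include <emmintrin.h>
      #endif

      float sum_array(const float *a, size_t n) {
          float total = 0.0f;
          size_t i = 0;
      #if defined(__SSE2__)
          __m128 acc = _mm_setzero_ps();
          for (; i + 4 <= n; i += 4)
              acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));  /* four floats per step */
          float lanes[4];
          _mm_storeu_ps(lanes, acc);
          total = lanes[0] + lanes[1] + lanes[2] + lanes[3];
      #endif
          for (; i < n; ++i)  /* portable tail, and the whole loop on non-x86 */
              total += a[i];
          return total;
      }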

    • Microsoft is in a hard push to move desktop computing into the cloud, and they have had initiatives to support ARM for over a decade now. I doubt that renting an ARM virt in the cloud will ever be as expensive as a desktop computer, and it will likely take the place of office computer fleets before Windows 10's ARM support is effective enough for real work.

    • But Windows is already done.

      The corpse just takes some time to stop moving and cool off.

  • Whatever happened to the Chinese mutiny?

  • by ytene ( 4376651 ) on Tuesday March 30, 2021 @06:41PM (#61218756)
    Over the last say 20-25 years, the evolution of the microprocessor has been driven by a combination of design decisions, competition and market forces. We know, for example, that Intel proposed massively parallel solutions to Microsoft back when it looked like the Inmos Transputer could gain traction, only for Bill Gates to ask Intel for clock speed on the basis that at the time we didn’t have compilers with the ability to develop efficient and massively parallel code.

    The world has moved on considerably from that time. Not only do we see some really amazing solutions with cluster-based super-computers - literally all the world’s top supercomputers today are clusters... but we’re also seeing some insane levels of multi-core scaling from the major players. What have AMD reached now? 64 Cores? More?

    This leads me to think about a series of design decisions that we’re making elsewhere in our compute fabric, especially at the workstation level... For example, we still design PCs largely around a single CPU on a motherboard, even when we know that we could get scalable power with a different design. Presumably this is an economic decision because it’s cheaper to go with what we know, especially while a single desktop processor meets all our power needs.

    But as has been pointed out higher up this thread, Intel CPUs are responsible for gigawatts of power consumption. A lot of the inefficiency comes from the waste heat generated by clocking things so fast. Maybe - just maybe - there are ways to bring together some of the design features used on Transputer clusters [distributed loops, for example] with larger numbers of less powerful but much more power-efficient cores? (Rough power arithmetic sketched below.)

    You can tell I'm not a system or CPU designer, right? But it just feels like we're inching towards being able to look beyond current design preferences. If we could get a seriously fast bus technology, we might be able to achieve similar compute power and far better efficiency with a larger number of less powerful cores.

    I don’t know if this would be possible, but it would be nice if Arm could try...
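
    A deliberately idealized sketch of the power trade-off described above, using the textbook dynamic-power relation P ~ C * V^2 * f and assuming voltage scales linearly with frequency, the workload parallelizes perfectly, and static/leakage power is zero; all of those assumptions are generous:

    /* Idealized sketch of the "many slower cores" trade-off.
       Assumptions (all generous): P ~ C * V^2 * f, voltage scales linearly with
       frequency, perfect parallel scaling, no static/leakage power. */
    #include <stdio.h>

    /* relative dynamic power of one core at a relative clock frequency */
    static double core_power(double rel_freq) {
        double rel_volt = rel_freq;             /* assumed V scales with f */
        return rel_volt * rel_volt * rel_freq;  /* so P scales roughly as f^3 */
    }

    int main(void) {
        double one_fast = 1.0 * core_power(1.0);  /* one core at full clock  */
        double two_slow = 2.0 * core_power(0.5);  /* two cores at half clock */
        printf("same idealized throughput: one fast core = %.2f, two slow cores = %.2f\n",
               one_fast, two_slow);  /* 1.00 vs 0.25 under these assumptions */
        return 0;
    }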
    • Well, with RAM attached to each socket and PCIe as well, you need to fill all sockets to get full PCIe and fill all RAM channels to get max RAM I/O.

  • Looks like AI is the new buzzword, overtaking 'Synergy'. In CS101, a CPU executes instructions. Simple. Add a GPU and you get graphics. Add an MMU and IO management. Add DRM and user lockout, and you get early obsolescence and non-upgradability. ARM has been loaded down with cruft. Apple went back to basics - CPU with all extras on chip, so we can own the handheld space, and tell leeches to back off. Maybe they will license Chinese 5G and add it to their chipset, and tell Qualcomm to take a hike.
  • A literally ridiculously weak knock-off!

    It's performing about 100 times worse than real neurons.

    And even real neurons still don't make it an AI. It needs autonomy for that.
    And no, pre-programming does not count. And "training" is pre-programming.
