
AMD Threadripper 1950X Trounces Core i9-7900X In Multithreading Benchmark (pcper.com)

dryriver writes: The Cinebench R15 benchmark is a popular tool for measuring how well CPUs cope with multithreaded compute loads. AMD's 16-core Threadripper 1950X, priced at $999 according to AMD, benchmarks 41% faster in Cinebench R15 than Intel's 10-core Core i9-7900X, also priced at $999. While Intel's Core i9-7900X scores 2186 points in Cinebench, AMD's Threadripper 1950X scores 3046 points. Even the cheaper 12-core, $799 Threadripper 1920X is over 200 points faster in Cinebench R15 than Intel's Core i9-7900X. Intel has its own 16-core Core i9-7960X in the works, with performance as yet unknown, priced at $1,699, so AMD's 16-core part currently appears to be a full $700 cheaper than Intel's MSRP. It remains to be seen who is faster in single-threaded performance -- Intel may take that crown -- and what the power consumption of a fully loaded Threadripper looks like compared to its Core i9 counterpart.
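
For anyone who wants to sanity-check the quoted gap, the percentage follows directly from the two Cinebench scores in the summary. By these numbers the 1950X lead works out to roughly 39%; the 41% headline figure presumably comes from a slightly different i9 run.

```python
# Cinebench R15 multi-core scores quoted in the summary
i9_7900x = 2186   # Intel Core i9-7900X, 10 cores, $999
tr_1950x = 3046   # AMD Threadripper 1950X, 16 cores, $999

lead = (tr_1950x - i9_7900x) / i9_7900x * 100
print(f"Threadripper 1950X lead: {lead:.1f}%")  # ~39.3% by these scores
```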
  • but it looks like my Xmas present to myself will be a new AMD box.

    • by Anonymous Coward

      There's no such thing as real-life single-threaded these days. The OS is always running a lot of threads and every service uses them. So whether Intel wins at a thing that doesn't exist in real life is kinda irrelevant.

      I also read a report that Intel is making a single core chip for embedded devices to compete on price. They need a wakeup call.

      The processor market is simple: it's 94% ARM-based and 6% AMD/Intel-based, and Intel's mobile Core i3s are slower than ARM's 8xA53 and A72s.
      They need to up their game a

      • Those ARM chips only cost a fraction of the AMD and Intel processors, though, and in any sort of real computation are only a fraction of the speed as well (and power consumption, to be fair). Intel and AMD should stick to the big, powerful, and expensive chips.

        Moreover, there are plenty of single-threaded workloads. Most modeling applications, for example. True, you wouldn't really want a single core processor - but you are far better off with a 4 core at high clock speed than an 8, 12, or 16 core. It will

  • by duke_cheetah2003 ( 862933 ) on Friday July 14, 2017 @10:38PM (#54812539) Homepage

    I surely hope it's servers. These processors would be silly in a desktop computer. We're not even fully loading down 2-8 core machines now. Gaming performance has and still is a single core endeavor, and even now, most of my stuff has trouble pegging any cores to 100% for any length of time.

    About the only thing I do that consumes a lot of cpu time is compiling. Not very many computer users compile stuff.

    Again, it's ultimately more of the same lackluster improvement. Throwing more threads/cores at stuff, when it's still who's got the FASTER single core that matters at the end of the day. At least in my opinion.

    For servers, however, running virtualization stuff, these CPUs should be great -- squeeze even more out of every physical server unit.

    • High-end workstations, which traditionally are built with server CPUs.

      I suspect the next generation of AMD's CPUs will be based on these.

    • I surely hope it's servers. These processors would be silly in a desktop computer.

      The quad-channel memory could help in a lot of situations. There will be plenty of applications that can benefit. But the 12/24 core model could be the better choice for many.

      • by koomba ( 2882339 ) on Friday July 14, 2017 @11:10PM (#54812639)
        The quad-channel memory most definitely will help, even more so than on Intel's new processors.

        There are many reasons, but one crucial one is the very architecture of these new Threadripper CPUs. These higher core count processors are literally multiple lower core Ryzen chips in one die.

        I won't get into any pros or cons of that aspect, but just mention it to explain the significance of the quad channel DDR4. The way AMD has designed these smaller "packages" to work together as one CPU is, to put it very simply, to have them communicate through the DDR4 bus.

        This is significantly different from Intel's so-called ring bus, or uncore. So it's a pretty big change for HEDT users, who have essentially been exclusively using Intel since around 2007.

        I don't claim to know every technical detail of TR/Ryzen, but I do know the end result of this is that your DDR4 memory speed with TR can have a large impact on performance. In particular, running higher than the official platform speed memory, or just overclocking above standard gives very nice increases in many scenarios.

        I'm on mobile and can't look it up right now, so don't hold me to this, but I remember seeing something like a 20-30%(?) improvement in some gaming benchmarks. And there were a couple of others whose numbers I only half remember, but they were around 30%, and even 40% in one case.

        So I think that could persuade a decent part of the HEDT community who aren't super hardcore, and who probably game as much as or more than they use the massive multithreaded advantages of a HEDT platform. So I think that's pretty exciting, and it gives TR at least a decent ranking in parts of the HEDT user base.
        • by AmiMoJo ( 196126 )

          Threadripper has some other nice features. Loads of PCIe lanes, great if you want multiple NVMe SSDs or RAID cards etc.

          For me one of the biggest is encrypted RAM. Ryzen has it too, to some extent. RAM is encrypted in real time with very minimal performance loss. VMs can also have their own private keys for RAM encryption to make them more secure (even the host OS can't spy on them).

        • by jon3k ( 691256 )

          The way AMD has designed these smaller "packages" to work together as one CPU is, to put it very simply, to have them communicate through the DDR4 bus.

          I thought they communicated via Infinity Fabric?

        • by red_dragon ( 1761 ) on Saturday July 15, 2017 @11:45AM (#54814585) Homepage
          "I won't get into any pros or cons of that aspect, but just mention it to explain the significance of the quad channel DDR4. The way AMD has designed these smaller "packages" to work together as one CPU is, to put it very simply, to have them communicate through the DDR4 bus."

          Wait... no, that's not right. The cores talk to each other via Infinity Fabric. To talk to cores on a separate module or to access memory managed by a different memory controller (Threadripper has two, Epyc has four), Infinity Fabric uses PCIe inside the package. Epyc dedicates 64 lanes for this purpose, so I assume that Threadripper uses 32 lanes. The memory buses never come into it.
    • by Anonymous Coward

      No, multicore is getting rather standard in gaming, at least in the last 5 years.

    • by Anonymous Coward

      I do 3D rendering which is very CPU heavy and would LOVE to get my hands on one of these.

    • by Craig Cruden ( 3592465 ) on Friday July 14, 2017 @10:48PM (#54812577)
      I managed to peg my 8-core Xeon at nearly 100% CPU usage for about 6 months straight - 7 days a week, 24 hours a day doing video transcoding on a library. But yes, any computer with more than 2 cores is really a niche computer these days, since 90%+ of people run computers with CPUs idling 90%+ of the time. The $100 Ryzen 3 will be more than enough power for the masses. The greatest "performance" boost for personal computers in the last few years -- for the masses -- has been flash-based SSDs...
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Some games do indeed use many cores. Some RTS games can use them (usually the types with lots of units) and even Battlefield 1 correctly loads all 12 of my threads - 6 cores at 80-90% and 6 threads at ~30% - just about the ideal load for a 6/12 CPU to achieve max throughput without starving any threads. (BF1 was maxing out all 8 threads on my old i7-2600, so it does like CPU; my current CPU is an i7-6850K @ 4.3GHz for reference.)

      Most of my games are not single threaded and those that are are not CPU bound (often not

      • People on a budget, especially hobbyists, want to run free software. Much free software, like SPICE and most GIMP plugins, is single-threaded. Waiting 20 minutes for a SPICE run to complete, then changing a component 10% and running again - and again - and again - is no fun. The same goes for a blind deconvolution filter for GIMP, where a single repetition can be several hours.

        That these programs could and should be made parallel or multi-threaded is irrelevant; and paying several thousand dollars for a com

    • Re: (Score:3, Interesting)

      > We're not even fully loading down 2-8 core machines now. Gaming performance has and still is a single core endeavor, and even now, most of my stuff has trouble pegging any cores to 100% for any length of time.

      1 - I load up my 8 core machine every day.

      2 - Gaming is not single threaded unless you're an idiot or living in 1993. At the very least, physics can run separate from display, and every modern game on the planet runs at least 1 frame lag for that same reason.

      3 - Vulkan is designed from the ground

      • by Anonymous Coward

        You seem very angry.

      • by Anonymous Coward

        2 - Gaming is not single threaded unless you're an idiot or living in 1993. At the very least, physics can run separate from display, and every modern game on the planet runs at least 1 frame lag for that same reason.

        Perhaps you're the idiot living in the early 2000's with cpu driven physics. Meanwhile the rest of us have moved on to GPU accelerated physics.
        Yes, games are multi-threaded, but sound, AI and other tasks only need so much cpu time. Most games are still dominated by a single thread.

        • by Anonymous Coward

          > Yes, games are multi-threaded, but sound, AI and other tasks only need so much cpu time. Most games are still dominated by a single thread.

          Multi-threaded rendering has been mainstream for at least a decade. CPUs aren't getting any faster, just more numerous. Game after game after game is using as many CPUs as you can provide it.

        • by aliquis ( 678370 )

          Battlefield 1 is fine using ~10 threads here:
          https://www.youtube.com/watch?... [youtube.com]

          Lots of games would run like complete garbage if you forced them to run on one core without hyper-threading.

      • The real game you play is 'uptime' and the leaderboard is 'top', amirite?

        Yer a real operator, son. We are very impressed.

      • 2 - Gaming is not single threaded unless you're an idiot or living in 1993. At the very least, physics can run separate from display, and every modern game on the planet runs at least 1 frame lag for that same reason.

        Gaming is not single-threaded, but multi-threaded games still only tend to fully load a single core. The other cores tend to only be very lightly used. Having more than 4 cores is simply not helpful for 99% of games, AAA or otherwise.

        I literally just finished playing a game that uses 100% GPU, 6 cores, and the other 2 I used for encoding the video recording.

        I'd love to know which game, because I find this extremely hard to believe.

    • by Khyber ( 864651 )

      "These processors would be silly in a desktop computer."

      You must not run multiple GPUs and multiple M.2 drives.

    • when it's still who's got the FASTER single core that matters at the end of the day.

      I'm not sure how much single-core performance even matters at this point. My work machine is a modern i7 clocked at some crazy high speed while my home machines are a ThinkPad X220 and an old-school dual Xeon X5690 setup. The work machine is actually worse than my old dual Xeon setup on multi-threaded stuff and, for a single core, is indistinguishable in real-life performance. Yes, it compiles a lot faster than my X220 but, if I weren't compiling stuff, I wouldn't know the difference between the machine

    • by Ramze ( 640788 ) on Saturday July 15, 2017 @12:11AM (#54812757)

      They're also useful for video encoding, animation, multimedia production, simulation, and AI.

      Have you ever tried to transcode MPEG2 video to x.265 or VP9 on a desktop PC? 2 hrs of VHS-quality video can turn into 10 hours of transcoding easily on a 4core/8thread PC. Transcoding 1080p or 4K from MPEG2 or MPEG4 to HEVC can take even longer. Lots of art school students use animation on their home laptops, plenty of people work with video encoding and online streaming at home, too.

      Gaming is mostly a GPU-bound task, but these also have a lot of PCIe lanes to help with that, and lots of games are being compiled for multi-cpu now.

      That's great if you can do everything you need with what you have. I'd say that's the case for most people. I know some who do everything they need at home on their cell phones and/or tablets, but other people have different use-cases.

      • by Anonymous Coward

        Have you ever tried to transcode MPEG2 video to x.265 or VP9 on a desktop PC? 2 hrs of VHS-quality video can turn into 10 hours of transcoding easily on a 4core/8thread PC. Transcoding 1080p or 4K from MPEG2 or MPEG4 to HEVC can take even longer.

        Yes, I have, and no, it's not that long of a process. I can encode a 2-hour 1080p x264 video to HEVC in ~20-30 minutes. ffmpeg + NVENC makes re-encoding to HEVC much faster; you just need an Nvidia video card that supports it.
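
For reference, the two approaches the thread describes look roughly like this with ffmpeg (file names and quality settings here are placeholders, and `hevc_nvenc` requires an Nvidia card with HEVC encode support):

```shell
# CPU-based HEVC encode with the x265 encoder (slow, loads all cores)
ffmpeg -i input.mpg -c:v libx265 -crf 28 -preset medium -c:a aac output.mkv

# GPU-assisted HEVC encode via NVENC (much faster, offloads the video
# encode to the card's dedicated hardware block)
ffmpeg -i input.mpg -c:v hevc_nvenc -preset slow -c:a aac output_nvenc.mkv
```

The trade-off both commenters are circling is that the CPU encoder generally gives better quality per bit, while NVENC trades some of that for a large speedup.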

    • We're a tiny shop and one of our products is a compression product. Lots of data is processed, and it's worth it to our clients to have beefy hardware, as they run jobs that take hours to complete.

      Our top test machine is an Ivy Bridge-era 2-socket workstation that we had Puget Systems custom-build for us: 2x Xeon E7-4650v2 CPUs - 20 cores / 40 threads. We spent over $7K on the CPUs alone.

      We're needing a couple more high thread count boxes for our newest product. We're waiting a couple months to see wha

    • by AaronW ( 33736 )

      I tend to do a lot of builds and can definitely benefit from something like this since I can take advantage of all of those cores. There are numerous workloads that can certainly take advantage of more cores. Even at home, they come in handy when batch processing photos.

    • I've got a four core, eight thread 4790K, which is one of the fastest stock CPUs around. I routinely use most of the cores while gaming. A lot of games are threaded in some way, and there's always the OS doing stuff in the background to consider. That said, Threadripper isn't for glorified internet appliances. These will be great for workstations and low-cost simulation/computation/VM systems for your office desk or home development environment. Personally I'm going to use these for some POV-Ray stuff unle
    • by aliquis ( 678370 )

      Gaming performance has and still is a single core endeavor

      lol, absolutely not.

      By now you can expect games to take advantage of being able to run four rather than two threads, at least.

      Sure, one of those may be the one with the highest load, and hence performance is limited by how fast that thread can be executed, but that DEFINITELY does NOT mean that games only use one core/thread. There are definitely advantages to having more, and being able to run only one thread at a time would completely destroy game performance in lots of games, well beyond what having a say 30% low

    • There is always someone that stresses the single-threaded performance, how most software is single-threaded....

      Everything that I run that I have to wait for is either multi-threaded or bottlenecked on SSD speed. All of it. I really don't give a crap about single-threaded performance.
    • The company I work for has a software product that takes roughly 10 minutes to compile. You don't always have to compile everything, but sometimes you do. The developers get the most cores per dollar they possibly can, as every core cuts down on compile time by a couple of minutes, which can save hours over the course of a month.

      12 cores for $800? Yes please.
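
The intuition that every extra core shaves minutes off a build can be sketched with Amdahl's law (the 10-minute total matches the comment; the serial fraction is hypothetical):

```python
# Amdahl's-law sketch of parallel build time: the serial portion
# (linking, configure steps, etc.) caps how much extra cores can help.
def build_minutes(total=10.0, serial_frac=0.10, cores=1):
    serial = total * serial_frac   # cannot be parallelized
    parallel = total - serial      # compiles many files at once
    return serial + parallel / cores

for cores in (1, 4, 12, 16):
    print(f"{cores:2d} cores: {build_minutes(cores=cores):.2f} min")
```

Going from 4 to 12 cores still saves real minutes per build here, which is where the hours-per-month figure comes from, though returns diminish as the serial portion starts to dominate.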

    • These processors would be silly in a desktop computer. We're not even fully loading down 2-8 core machines now.

      I'm not sure what you're saying. A desktop that is anywhere near being "loaded down" is a chore to use and unresponsive. I wish I had more cores for all the crap that my operating systems run against my will these days.

  • The top AMD chip has 40MB of cache, which is enough to run an entire Linux distro from cache alone. However, if you use its virtualization technology then you could have an entire Beowulf cluster on a chip.

    It turns out that AMD has been reading my weekly email demands this whole time! ;)

  • Going to be comical to see the Intel fanbois spending the next year or so justifying why they bought the slower, more expensive chip. LOL.
    • by GuB-42 ( 2483988 )

      Intel still has better single core performance, or so it seems.
      What is best depends on your workload.

      Anyways, I don’t think many people would really benefit from these top of the line CPUs and a lot of these will serve mostly as bragging points rather than actual performance considerations.

  • The article is silent about vectorization, and Intel has invested a lot in that lately. Do we know anything about the compilation flags of that copy of Cinebench? If not, the assessment could be extremely unfair. A newer set of vectorization instructions corresponds to a longer vector size for arithmetic operations that can be carried out concurrently. For example, in HPC applications, enabling the highest available level of AVX can lead to 2x gains compared to code compiled for legacy systems.
    • Do we know anything about the compilation flags of that copy of Cinebench? If not, the assessment could be extremely unfair.

      According to Tom's Hardware [tomshardware.com], Cinebench doesn't use AVX instructions at all. There's no source or discussion of the assertion though.

  • I don't think I will ever spend more than $300 on a CPU. It was always enough to buy the fastest for home. What happened?
  • On a related note, Tom's Hardware called the 7900X "a factory overclocked chip". It generates so much heat that it needs water cooling to run without throttling.
      http://www.tomshardware.com/re... [tomshardware.com]
