NVIDIA To Push Into Supercomputing

RedEaredSlider writes "NVIDIA outlined a plan to become 'the computing company,' moving well beyond its traditional focus on graphics and into high-profile areas such as supercomputing. NVIDIA is making heavy investments in several fields. Its Tegra product will be featured in several mobile devices, including a number of tablets that have either hit the market already or are planned for release this year. Its GeForce lineup is gaming-focused while Quadro is all about computer-aided design workstations. The Tesla product line is at the center of NVIDIA's supercomputing push."
This discussion has been archived. No new comments can be posted.

  • more nukes :/ (Score:3, Insightful)

    by magarity ( 164372 ) on Wednesday March 09, 2011 @11:47AM (#35431600)

    I just hope enough nuclear power plants come online before their first supercomputer customer turns on a new rig. The latest GPUs already use more power than the hungriest Intel or AMD x86 ever did.

    • by Anonymous Coward

      Um, yes, of course. Because they have 292 cores instead of 4/6/8. While the two designs do remarkably different things, the point remains: for the tasks that GPUs are well suited to, you cannot possibly beat them with an Intel/AMD chip.

      • by Xrikcus ( 207545 )

        No they don't, let's stop such silliness.

        The GTX580 has 16 cores. The GTX280 has 32. The AMD 6970 has 24. The AMD Magny-Cours CPUs can have up to 16 (ish, if you don't mind that it's an MCM).

        292 indeed. NVIDIA does an even better job of marketing than they do of building chips.

        • You're right. The new NVIDIA Teslas (C2070) have 448 cores, not 292. If you're doing work that a supercomputer needs to be doing, your software is massively parallel. Otherwise, run it on your laptop at home.
    • Re:more nukes :/ (Score:5, Insightful)

      by MightyYar ( 622222 ) on Wednesday March 09, 2011 @11:53AM (#35431672)

      The latest GPUs already use more power than the hungriest Intel or AMD x86 ever did.

      And when used for the types of tasks they were designed for, they pump out 10x the performance for maybe two or three times the power.

    • As far as I know, while present GPUs do use a lot of power, they also produce a massive number of FLOPS compared to general-purpose processors. This means they actually have a lower power cost per FLOP.

    • Would you rather have to power a supercomputer sporting 1024 Intel CPUs? Which is going to be a bigger power hog? Which will scale better?

      • Re:more nukes :/ (Score:4, Informative)

        by TheRaven64 ( 641858 ) on Wednesday March 09, 2011 @12:49PM (#35432530) Journal

        If all you're measuring is pure FLOPS, then here are some numbers: Cray X1: 250 MFLOPS. nVidia Fermi: 1 GFLOPS. ARM Cortex A8: 10 MFLOPS. Of course, that doesn't tell the whole story. Getting 250 MFLOPS out of the Cray required writing everything using huge vectors. Getting 1 GFLOPS from Fermi requires using vectors within independent concurrent processing tasks which access memory in a predictable pattern and rarely branch.

        GPUs are not magic; they are just optimised for different workloads. CPUs are designed to work well with algorithms that branch frequently (every 7 instructions or so, which is why they devote a lot of die area to branch prediction), have good locality of reference (cache speeds up memory in these cases), and have an integer-heavy workload. GPUs generally lack branch prediction, so a branch causes a pipeline stall (and, on something like Fermi, if two kernels take different branches then you drop to 50% throughput immediately). Their memory architecture is designed to stream large blocks of data in a few different orders (e.g. a texture cube, pixels in order along any axis). So, depending on your workload, either one may be faster than the other.
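
        A minimal CUDA sketch of the intra-warp divergence described above (kernel and variable names are made up for illustration; the exact penalty varies by card): threads in a warp that disagree on a branch are serialized, so the divergent kernel effectively runs both sides of the if.

        // divergence_demo.cu -- illustrative only; compile with nvcc
        #include <cstdio>

        __global__ void divergent(float *out, const float *in, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                // Neighbouring threads in the same warp take different paths,
                // so the hardware executes both paths one after the other.
                if (i % 2 == 0)
                    out[i] = in[i] * in[i];
                else
                    out[i] = in[i] + 1.0f;
            }
        }

        __global__ void uniform(float *out, const float *in, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                out[i] = in[i] * in[i];   // every thread takes the same path: no divergence
        }

        int main() {
            const int n = 1 << 20;
            float *d_in, *d_out;
            cudaMalloc((void **)&d_in, n * sizeof(float));
            cudaMalloc((void **)&d_out, n * sizeof(float));
            cudaMemset(d_in, 0, n * sizeof(float));   // contents don't matter for the sketch

            divergent<<<(n + 255) / 256, 256>>>(d_out, d_in, n);
            uniform<<<(n + 255) / 256, 256>>>(d_out, d_in, n);
            cudaDeviceSynchronize();

            cudaFree(d_in);
            cudaFree(d_out);
            printf("done\n");
            return 0;
        }

        Timing the two launches (e.g. with cudaEvent timers) is what would show the throughput loss the comment above refers to.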

        • GPUs run well over 1 GFLOPS: my AMD chip runs about 10 GF/core on linpack, while the GPU is around 1400 GF. Serious business.
          • by Bengie ( 1121981 )
            Yeah, my ATI card does about 2.5TF, but the Fermi blows the ATI cards away with DP flops.

            nVidia has some serious processing power.
        • Nvidia Fermi (GTX 400 series [wikipedia.org])
          GTX 470: 1088.64 GFLOPS (32-bit); 215 W (mfg. claim); $350; 3e9 transistors; 1280 MB GDDR5; 448 unified shaders : 56 texture mapping units : 40 render output units.
          GTX 480: 1344.96 GFLOPS (32-bit); 250 W (mfg. claim), 500 W tested max.; $500; 3e9 transistors; 1536 MB GDDR5; 480 unified shaders : 60 texture mapping units : 48 render output units.

          Tesla M2050: 1030 GFLOPS (32-bit), 515 GFLOPS (64-bit), 3 GB ECC GDDR5 (M2070 is the same but with 6 GB ECC GDDR5).
          IBM linpack test, May 2009 [hpcwire.com]: $7K Xeon, 48 GB: 80.1 GFLOPS, 11GFLP

    • Re:more nukes :/ (Score:4, Informative)

      by gupg ( 58086 ) on Wednesday March 09, 2011 @11:56AM (#35431722) Homepage

      5 of the top 10 greenest supercomputers use GPUs:
      Green 500 List [green500.org]

      Each GPU is very high performance, and so uses a lot of power. Performance per watt is what counts, and here GPUs beat CPUs by 4 to 5 times. This is why so many of the new supercomputers are using GPUs / heterogeneous computing.

    • They can also do matrix computations up to 40 times faster than a CPU, which is incredibly useful for scientific applications. I would use this if I had an Nvidia card, since several packages exist to use CUDA with Matlab; until then I have to teach myself OpenCL.
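
      A minimal sketch of the kind of matrix work that gets offloaded (illustrative only; this is the textbook naive kernel, not the poster's code, and the CUDA/Matlab bindings mentioned above call tuned libraries such as cuBLAS instead): each thread computes one element of C = A * B.

      // matmul_naive.cu -- illustrative only; compile with nvcc
      #include <cstdio>

      __global__ void matmul(const float *A, const float *B, float *C, int N) {
          int row = blockIdx.y * blockDim.y + threadIdx.y;
          int col = blockIdx.x * blockDim.x + threadIdx.x;
          if (row < N && col < N) {
              float sum = 0.0f;
              for (int k = 0; k < N; ++k)
                  sum += A[row * N + k] * B[k * N + col];   // dot product of one row and one column
              C[row * N + col] = sum;
          }
      }

      int main() {
          const int N = 512;
          size_t bytes = (size_t)N * N * sizeof(float);
          float *A, *B, *C;
          cudaMalloc((void **)&A, bytes);
          cudaMalloc((void **)&B, bytes);
          cudaMalloc((void **)&C, bytes);
          cudaMemset(A, 0, bytes);    // contents don't matter for the sketch
          cudaMemset(B, 0, bytes);

          dim3 block(16, 16);
          dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
          matmul<<<grid, block>>>(A, B, C, N);
          cudaDeviceSynchronize();

          cudaFree(A); cudaFree(B); cudaFree(C);
          printf("done\n");
          return 0;
      }
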
      • Today, parallelism seems mostly limited to "scientific" applications. But I think our computing model may well evolve towards more parallelism for lots of new applications that compute more like a brain, that is, massively parallel pattern matching. Of course we'll still use more direct algorithms where applicable, such as word processors and web browsers, but as computers integrate better with the natural world they'll need many more algorithms rooted in signal processing, pattern matching, and ge
      • Exactly: CUDA has become a major player in the field of supercomputing, just like IBM's PowerPC/BlueGene systems. With support for floats/doubles, amazingly fast math functions, and tons of data in matrices, the only other way to do all that math fast is an FPGA or a PowerPC chip.

        CUDA, Supercomputing for the Masses: Part 1/20
        http://drdobbs.com/cpp/207200659 [drdobbs.com]
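
        A small sketch of the "fast math functions" point above (illustrative only; __sinf/__expf are standard CUDA device intrinsics, everything else here is made up): single precision can use reduced-accuracy hardware intrinsics, while double precision goes through the regular device math library, which is where Fermi's improved DP throughput matters.

        // fastmath_demo.cu -- illustrative only; compile with nvcc
        #include <cstdio>

        __global__ void wave_sp(float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                float x = i * 0.001f;
                // Fast intrinsics: a few ULPs less accurate, much cheaper.
                out[i] = __sinf(x) * __expf(-x);
            }
        }

        __global__ void wave_dp(double *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                double x = i * 0.001;
                out[i] = sin(x) * exp(-x);   // full-precision double path, no fast intrinsics
            }
        }

        int main() {
            const int n = 1 << 20;
            float *f; double *d;
            cudaMalloc((void **)&f, n * sizeof(float));
            cudaMalloc((void **)&d, n * sizeof(double));
            wave_sp<<<(n + 255) / 256, 256>>>(f, n);
            wave_dp<<<(n + 255) / 256, 256>>>(d, n);
            cudaDeviceSynchronize();
            cudaFree(f); cudaFree(d);
            printf("done\n");
            return 0;
        }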

        • Frankly, it's the only thing that has me eyeballing Nvidia cards these days. I need to learn more about OpenCL, as I think it may be a player here shortly once AMD's Fusion line starts getting the better desktop processors. Then we can use the GPU on the die to speed up matrix calculations.
    • by Bengie ( 1121981 )
      My GPU has 1536 shaders and consumes ~220 watts. My i7 has 4 cores/8 threads and consumes ~130 watts.

      On several of my distributed tasks, the SSE2 version takes about 36 hours on one core, or about 6 hours per task on average if using all 8 threads (assuming an optimistic 50% scaling from hyper-threading). My GPU, on the same work units, takes only 1 min 40 sec. The GPU is about 216 times faster for about 1.7 times the power: roughly 70% more power draw for 21,600% better performance, or roughly 130x the performance per watt.

      I wouldn't compare GPU vs CPU for power draw
    • Comment removed based on user account deletion
  • I'm nowhere near as qualified as everyone here, but Nvidia seems to be pushing more for parallel supercomputing with rows of Tegra chips working in unison. They had talked about supercomputing when the Tegra 3 was announced.
    • by GC ( 19160 )

      Yes, but if you check out top500.org, the list of the 500 fastest known 'supercomputers', you'll see that they all achieve their benchmark results by parallelizing their tasks across multiple cores.

      I think it is safe to say that all modern supercomputers achieve their 'power' in this way; I've not seen any terahertz single-core/processor systems on the horizon, and don't expect to see them.

  • SGI (Score:5, Funny)

    by 0racle ( 667029 ) on Wednesday March 09, 2011 @11:55AM (#35431702)
    version 2
    • by gl4ss ( 559668 )

      no, it's sgi version 4, which is nvidia supercomputing version 3 (at least, or so).

      they outline a plan like this at least once every two years....

  • by Anonymous Coward

    The company that was set up by disgruntled Silicon Graphics gfx division employees, because the SGI gfx tech was suffering from toxic internal politics and the push into Big Iron and Storage... is now moving into 'Supercomputing'. Hope they bring back the Cube Logo :)

  • I doubt it would be truly useful, but I'd like to see a 2 million core processor. Arranged in, let's see, a 1920 x 1080 grid. The 8008 used 3500 transistors per core, so even before memory, it'd be a 7 billion transistor chip.

    More practical might be a 128 x 128 core processor, using a modified 386 or 68020 for the cores. That could be less than 5 billion transistors. Each processor is simple and well known enough that hand-optimized assembly begins to make sense again.

    Run the little bastard at just 1 GHz a

  • "Supercomputing" almost always means "massive Linux deployment and development." I will spare critics the wikipedia link on the subject, but the numbers reported there almost says "Supercomputing is the exclusive domain of Linux now."

    Why am I offended that nVidia would use Linux to do their Supercomputing thing? Because their GPU side copulates Linux users in the posterior orifice. So they can take, take, take from the community and when the community wants something from them, they say "sorry, there's n

    • by 0racle ( 667029 ) on Wednesday March 09, 2011 @12:39PM (#35432380)
      nVidia shuns Linux users? They may 'shun' those that cannot have any non-GPL code, but they do make a higher-performing and far more feature-rich driver for their cards for Linux, FreeBSD and Solaris, and keep it (for the most part) up to date. If you don't like it, there are alternatives.

      Gotta love the rabid GPL fans. The GPL doesn't mean freedom for everyone to do things the way you think they should be done.
      • I know they publish a driver for Linux. Trouble is, I can't use it because they won't tell us how to make it work through their "Optimus technology." I had high hopes for my newest machine only to have them dashed to bits with the words "we have no plans to support Optimus under Linux..."

        • by 0racle ( 667029 )
          You're the one who bought an unsupported device without researching, but nVidia is the bad guy here.
          • by EzInKy ( 115248 )

            Let's just let the market forces do their thing here. Personally, I tell anybody I hear thinking about buying NVIDIA to buy AMD instead. Sure, you might get a few more fps today, but tomorrow you may find your card unsupported by the manufacturer with no documentation available to end users on how to fix problems they may encounter in the future. NVIDIA dug their grave, let them sleep in it.

            • by 0123456 ( 636235 )

              I tell anybody I hear thinking about buying NVIDIA to buy AMD instead. Sure, you might get a few more fps today, but tomorrow you may find your card unsupported by the manufacturer with no documentation available to end users on how to fix problems they may encounter in the future.

              AMD no longer support my integrated ATI GPU; I had to manually patch the driver wrapper source to make it work after recent kernel changes and I'm guessing that before long it will be too rotted to work at all.

              There is an open source driver but it doesn't work with my monitor resolution and performance is awful. So my solution before I discovered I could patch the source was going to be buying the cheapest Nvidia card I could fit into the computer.

            • I bought an ATI card (HD 3800) and its Linux driver sucks; I can't use it for gaming or 3D art. (If I try to run Blender, it won't display some menu elements and looks totally broken.) It only works decently on Windows. So the funny thing is, I can't use open-source software (Blender) with a video card that's supposedly open-source friendly on an open-source operating system (Linux; I tried it with several distros).
              The funny thing is that only nVidia and Intel have decent drivers for Linux. So it's not a

            • by gl4ss ( 559668 )

              look, if you tell everyone something like that, you have to change your stance every few years as the companies' offerings change. s3's made sense at one point in time.

              who cares about tomorrow? tnt2's are as worthless as matrox millenniums.

        • Optimus is actually a great tech, if you have Windows. It switches automatically between an integrated graphics card and a discrete nVidia card, saving battery power when you don't need the heavy-duty GPU, but giving you the power when you need it. An Optimus-equipped laptop will run Linux. I know, as I am writing this from a Dell XPS 14 running Ubuntu. You will not, unfortunately, be able to use the discrete card. The integrated card, however, works fine. It has more than enough power to run Compiz, which
          • Shouldn't it be easy to have two xorg.conf's: one for the integrated graphics and one for the discrete card? You could start the discrete xorg.conf when you want to run a game and the integrated one when you don't.
            Maybe (I'm not a guru by any stretch, so I'm going out on a limb here) you could even have them running on different ttys and switch semi on-the-fly. Would the discrete card be shut down if you are on the integrated tty?
            Some explanation: I reserved some space on my nettop with ION2. Some day I might want
      • If we all buy AMD's products on the strength of their openness, it won't be long before AMD holds the upper hand on features and stability. I think they're heading in a good direction already.

        How much entrenched advantage does the inferior option need before you lock in? Your personal FIR filter on "what have you done for me lately" seems to have a unit delay of hours rather than years.

    • Comment removed based on user account deletion
    • I'm sorry if nvidia won't gut their business to satisfy your irrational request.

  • 3 of the Top 5 supercomputers are already using NVIDIA GPUs:
    NVIDIA press release [nvidia.com]

    Bill Dally outlined NVIDIA's plans for Exascale computing at Supercomputing in Nov 2010:
    Bill Dally Keynote [nvidia.com]

    • by Xrikcus ( 207545 )

      One thing I've been really keen to know is what the utilisation is like on those supercomputers. We know they can do LINPACK really fast and more efficiently than the CPUs do; that's what you get for having high ALU density, a few threads per core and wide SIMD structures. The question is: for the algorithms that people actually intended to run on those supercomputers, what level of efficiency are they hitting?

      Are they still a net gain over a standard Opteron-based machine? They may be, but I don't know th

  • eh.. (Score:2, Interesting)

    by Anonymous Coward

    I've been working with their GPGPU push for a couple of years now. What I notice is that they are very good at data parallelism with highly regular data access patterns and very few branches. While they are technically general-purpose, they don't perform well on a large portion of high-performance tasks that are critical even in scientific computing, which are generally compute-bound. This creates some really annoying bottlenecks that simply cannot be resolved. They can give tremendous speedup to a very limited s
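
    A minimal sketch of the "highly regular data access patterns" point (illustrative only, not the poster's workload; names are made up): adjacent threads reading adjacent words coalesce into a few wide memory transactions, while a strided or scattered pattern breaks that apart and leaves the ALUs waiting on memory.

    // access_patterns.cu -- illustrative only; compile with nvcc
    #include <cstdio>

    __global__ void copy_coalesced(float *dst, const float *src, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            dst[i] = src[i];                  // neighbouring threads touch neighbouring words
    }

    __global__ void copy_strided(float *dst, const float *src, int n, int stride) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            dst[i] = src[(i * stride) % n];   // scattered reads defeat coalescing
    }

    int main() {
        const int n = 1 << 22;
        float *src, *dst;
        cudaMalloc((void **)&src, n * sizeof(float));
        cudaMalloc((void **)&dst, n * sizeof(float));
        cudaMemset(src, 0, n * sizeof(float));   // contents don't matter for the sketch

        copy_coalesced<<<(n + 255) / 256, 256>>>(dst, src, n);
        copy_strided<<<(n + 255) / 256, 256>>>(dst, src, n, 32);
        cudaDeviceSynchronize();

        cudaFree(src); cudaFree(dst);
        printf("done\n");
        return 0;
    }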

  • The current lineup of AMD GPUs has far more stream processors than the NVIDIA models, and they run at roughly the same clock speed. Why would anybody buy the NVIDIA ones?

    • The number of "stream processors" doesn't necessarily scale as a linear performance metric. As an example (using dated lower midrange hardware, as it's what I still know), a Radeon 3850 sports 320 stream processors. My Geforce 9600GT advertises 64 SPs, yet pulls ahead of the 3850 in many benchmarks. It's not as simple as quoting a number used in marketing material as a universal metric, any more than a 3 GHz Pentium 4 is 50% faster in real-world performance than a 2 GHz Athlon64.

      As for the other issue, N

    • Because AMD's drivers suck?
    • by gmueckl ( 950314 )

      Because nVidia has CUDA firmly entrenched in the scientific community by now. And CUDA almost works now; that is, the most glaring bugs have been eradicated. Oh, and it even works on Linux!

      Does AMD have support for doubles on their chips by now? Honest question here. It's a practically useless feature for graphics, but it makes a lot of sense for scientific computing.

      • The HD 5870 GPU has very good DP performance. Auto-tuned DGEMM reaches about 65% device utilization in my experience. This agrees with benchmarks done by Dongarra's group, I believe.

        AMD hardware is powerful. But the software stack is relatively behind in supporting it. However, I don't think this is the dominating cost of adoption.

        If you come from the nVidia world, kernels for AMD look completely different. I don't have enough experience to say it is harder in the AMD world compared with nVidia. I can say i

      • by tyrione ( 134248 )
        You're out of date. Even Nvidia knows OpenCL will replace CUDA. At least AMD is more open about it and pushing it hard with their OpenCL 1.1 release in their 2.x SDK.
        • by gmueckl ( 950314 )

          Well, I'm still failing to see nVidia putting their money where their mouth is on that one. The last time I checked their OpenCL implementation, a lot of the demos that were ported over from CUDA ran slower, 10 times slower in the case of the volume rendering example. So this is not how you impress people who are solely concerned with performance. Oh, and unlike the CUDA compiler, the in-process OpenCL compiler even segfaulted on me within about 4 hours of playing with nVidia's OpenCL implementation (

          • by Anonymous Coward

            NVIDIA is dragging their heels with OpenCL. They have yet to publicly release an OpenCL 1.1-compliant driver, despite the fact that they have had a beta version for about 8 months. They are also slow to respond in their forums, and many problems/bugs that were reported at least a year ago still have not been fixed. They are throwing their weight behind CUDA, plain and simple. CUDA 4.0 just came out, and it has some phenomenal technologies that make me wonder if OpenCL has a fighting chance.

            I think it does, but it i

  • They chose not to release the specs necessary to allow others to utilize their hardware, the way Intel and, to a lesser extent, AMD did, and as the current smartphone trend has shown, being locked in is the same as being locked out.

  • ... when GPGPU was in its infancy and I was lusting to play with that stuff; that's about 5 yrs ago, at most.

    Alas, our semiconductor department was so content with its orthodoxy and its cluster running Fortran WTF hairballs... :`(
    Ah well, no point crying over that spilt milk... it just takes patience and pig-headedness... :>

  • Nvidia Linux support is getting fixed by nouveau anyway. They reckon the GTX 5xx/4xx series is already up to the same level as the 2xx/9xxx/8xxx cards for drivers. As more resources get spent implementing OpenGL features in Gallium and less on reverse-engineering the cards, feature parity with the closed drivers will be achieved. I reckon in 1-2 years Nvidia card open-source support will be at near parity with the closed-source drivers.
