Boost UltraSPARC T1 Floating Point w/ a Graphics Card?

alxtoth asks: "All over the web, Sun's UltraSPARC T1 is described as 'not fit for floating point calculations'. Somebody has benchmarked it for HPC applications and got results that weren't that bad. What if one of the threads could do the floating point on the GPU, as suggested here? Even if the factory setup does not expect a video card, could you insert a low-profile PCI-E video card, boot Ubuntu, and expect decent performance?"
  • by the_humeister ( 922869 ) on Saturday April 22, 2006 @05:12PM (#15181955)
    Especially since current GPUs don't implement double-precision floating point math. Heh, in that vein you could add a dual Opteron single-board computer into one of the expansion slots...
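
    For anyone who has not hit the single-precision wall, here is a minimal C demo of what those GPUs are limited to (nothing GPU-specific, just the 24-bit significand every float carries):

        /* A float's 24-bit significand cannot represent consecutive
           integers above 2^24; a double's 53 bits can. */
        #include <stdio.h>

        int main(void)
        {
            float  f = 16777216.0f;              /* 2^24 */
            double d = 16777216.0;

            printf("float:  %.1f\n", f + 1.0f);  /* still 16777216.0 */
            printf("double: %.1f\n", d + 1.0);   /* 16777217.0       */
            return 0;
        }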
  • by NekoXP ( 67564 ) on Saturday April 22, 2006 @05:17PM (#15181967) Homepage
    We produce an Open Firmware solution which includes an x86 emulator to bootstrap x86 hardware, specifically graphics cards and the like.

    PowerPC boards, PC graphics chips with x86 BIOSes, no driver edits required on the OS side... the card is just there, like it would be on a PC.

    http://metadistribution.org/blog/Blog/78A3C88E-1CE7-45B8-9C79-420134DD9B8E.html [metadistribution.org]
    http://www.genesippc.com/ [genesippc.com]
  • Re:No, you cannot (Score:3, Informative)

    by Jeff DeMaagd ( 2015 ) on Saturday April 22, 2006 @05:37PM (#15182021) Homepage Journal
    That problem was solved for Alpha computers around 1992. I was able to choose from any standard PCI video card, though driver support in the OS was a different issue. There may be some patent issues, though, so the approach might need to be different.
  • by Fallen Kell ( 165468 ) on Saturday April 22, 2006 @06:38PM (#15182192)
    All kinds of problems will arise with a setup like this. Performance might improve for certain workloads, but only if they are coded specifically for it, and code simply isn't written for a unique setup like this. Multi-threaded code is written under the assumption that all CPUs have approximately the same abilities; in other words, it does not split floating point ops into one thread and I/O and integer operations into other threads. Any given application thread will potentially mix floating point operations with everything else.

    Even if you custom-code an application to do all floating point work in a specific thread, you would need to completely modify the kernel's thread management subsystems. The threads themselves would need meta-flag data signifying what "kind" of thread they are, so that the "floating point" threads are queued to run on the GPU and not on the T1 (unless there are idle T1 cores and the GPU is already busy); a rough sketch of that idea follows at the end of this comment.

    And even with all of the above changed, this will only work for custom-made applications; in other words, you will need to rewrite anything and everything to take advantage of this setup. That isn't viable when you may be dealing with closed-source products like Matlab or Oracle, and even open source products would take MAJOR rework to implement a change like this.

    The T1 is what it was designed to be: a multi-core processor that makes a very good NFS data server, FTP server, or web host with highly efficient power usage. It is NOT a database, application, or HPC server core; those workloads involve too much floating point to run efficiently on the T1. In a pinch you can use it for them, but it will not shine.
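
    To make the meta-flag idea concrete, here is a minimal user-space sketch in C. Everything in it is hypothetical illustration: offload_to_gpu() is a stand-in for whatever GPGPU mechanism (Brook, Cg, etc.) would actually run the kernel, and a real version would need per-core queues plus the idle-core fallback described above.

        /* Tag each unit of work with a "kind" so a dispatcher can route
           floating point tasks to the GPU path and leave integer/IO work
           on the T1's own cores. */
        #include <pthread.h>

        typedef enum { TASK_INT_IO, TASK_FLOAT } task_kind;

        typedef struct {
            task_kind kind;          /* the meta flag described above */
            void (*run)(void *arg);
            void *arg;
        } task;

        /* hypothetical GPU path: upload data, run a stream kernel, read back */
        static void offload_to_gpu(task *t)
        {
            t->run(t->arg);          /* placeholder: just runs on the CPU */
        }

        static void *dispatch(void *p)
        {
            task *t = p;
            if (t->kind == TASK_FLOAT)
                offload_to_gpu(t);   /* FP work goes to the GPU queue   */
            else
                t->run(t->arg);      /* everything else stays on the T1 */
            return NULL;
        }

        static void fp_kernel(void *arg) { (void)arg; /* FP body elided */ }

        int main(void)
        {
            task t = { TASK_FLOAT, fp_kernel, NULL };
            pthread_t tid;
            pthread_create(&tid, NULL, dispatch, &t);
            pthread_join(tid, NULL);
            return 0;
        }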

  • by Anonymous Coward on Saturday April 22, 2006 @08:05PM (#15182493)
    >Combining a T1 and GPGPU offers "best of breed" economies of scale appropriate to each component, like installing 3rd party memory and HD rather than the expensive Sun brands.

    Combining a T1 and a GPU offers you jack, since GPUs use single-precision arithmetic.
  • by nukem996 ( 624036 ) on Saturday April 22, 2006 @10:16PM (#15182865)
    I've done some simple CAD stuff in school and all they use is AutoCAD and PTC. I guess I don't know too much about this stuff :\
  • by csirac ( 574795 ) on Saturday April 22, 2006 @10:45PM (#15182948)
    Most high-end CAD products that matter run on Solaris. Only in the last few years have most of them gained a Linux option, which is nice.
  • by Anonymous Coward on Sunday April 23, 2006 @10:27AM (#15184537)
    >Unfortunately, and as one of your links mentions, I seriously wonder if many of the current generation of programmers even knows about this issue, nevermind cares (Huh, I sound like a cranky old man now).

    Not cranky and old enough.

    >If you care about your answer, no matter how many bits the FPU supports, you do it in software. Period. You use GMP, and don't round until the final result... and while that might not always prove possible due to having finite memory, I highly doubt we'll ever see even a 1024-bit FPU, much less one using 1048576 bits.

    No, you don't. You do an error analysis, quantify the imprecision, and move on. Your point about all floating point ultimately being limited in precision anyway is a good one that the OP seemed to overlook in his advocacy for 80/128/whatever bit floating point as a "gold standard," but the idea that you'd do a black hole simulation completely in software is laughable.

    GMP doesn't solve the problem (incidentally, GMP isn't exactly a high-end scientific math library) because, guess what? You still can't express things like 1/(2 * pi), because pi is irrational. It can't be expressed exactly with any number of digits or amount of memory. So you're right back to doing error analysis, and what's more, your calculations are sucking up more cycles and memory to boot. No thanks.
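
    For what it's worth, here is roughly what the GMP route looks like in C (a sketch assuming libgmp; link with -lgmp). It also shows why the error analysis never goes away: 1/7 has no finite binary expansion, so even at 1024 bits a residual is left over to account for.

        #include <gmp.h>
        #include <stdio.h>

        int main(void)
        {
            mpf_t x, back, err;
            mpf_init2(x, 1024);          /* ask for 1024-bit mantissas */
            mpf_init2(back, 1024);
            mpf_init2(err, 1024);

            mpf_set_ui(x, 1);
            mpf_div_ui(x, x, 7);         /* 1/7, rounded to ~1024 bits */
            mpf_mul_ui(back, x, 7);      /* (1/7) * 7                  */
            mpf_sub_ui(err, back, 1);    /* whatever rounding remains  */

            gmp_printf("(1/7)*7 - 1 = %.5Fe\n", err);

            mpf_clear(x);
            mpf_clear(back);
            mpf_clear(err);
            return 0;
        }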

    There's a reason why supercomputers are rated in FLOPS, and not IOPS. All that expensive floating point hardware on those scads and scads of processors is there for a reason.

    If you absolutely need an exact answer, you either use a computer algebra system, which can do symbolic manipulation, or you stick to problems that can be solved using integer or rational arithmetic.
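
    And for the rational-arithmetic route, GMP's mpq type does give exact answers; a minimal sketch (again assuming libgmp, linked with -lgmp):

        /* Exact rational arithmetic: 1/3 + 1/6 is exactly 1/2,
           with no rounding anywhere. */
        #include <gmp.h>
        #include <stdio.h>

        int main(void)
        {
            mpq_t a, b, sum;
            mpq_init(a);
            mpq_init(b);
            mpq_init(sum);

            mpq_set_ui(a, 1, 3);         /* a = 1/3 */
            mpq_set_ui(b, 1, 6);         /* b = 1/6 */
            mpq_add(sum, a, b);          /* exact    */

            gmp_printf("1/3 + 1/6 = %Qd\n", sum);   /* prints 1/2 */

            mpq_clear(a);
            mpq_clear(b);
            mpq_clear(sum);
            return 0;
        }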

    If you need an answer that has more precision than the built-in floating point types, then arbitrary precision libraries become relevant. But they aren't a magical fix that can suddenly make the limitations of limited precision disappear.

    Otherwise, the best approach is to make sure your algorithms and design are sensitive to the issues involved. For example, avoid addition and subtraction wherever possible, especially between values of significantly different magnitude (or nearly equal values, where the leading digits cancel), since those operations lose precision in a way multiplication and division do not.
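
    A classic illustration in C doubles (link with -lm): (1 - cos(x))/x^2 tends to 1/2 as x goes to 0, but at x = 1e-8, cos(x) rounds to exactly 1.0, so the naive form returns 0 while the algebraically identical half-angle form keeps its digits.

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double x = 1e-8;
            double s = sin(x / 2.0) / x;

            double naive  = (1.0 - cos(x)) / (x * x);  /* cancellation: 0  */
            double stable = 2.0 * s * s;               /* ~0.5 as expected */

            printf("naive : %.15g\n", naive);
            printf("stable: %.15g\n", stable);
            return 0;
        }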

    Honestly, if you work it right, the ~15 decimal digits of precision that doubles offer are more than good enough for most scientific computations, as long as you keep track of the error tolerances. More is always better, of course, but only a lazy scientist would rely on quads suddenly getting the right answer where doubles weren't good enough, because there will always be problems where you want more precision, and the naive approach won't work.
