dancpsu's Journal: Functional Parallel Computing?

Computers have stopped getting faster. I had sensed something for years, but it wasn't until I saw the graph in The Free Lunch Is Over that it finally clicked. The funny thing about that graph is that it isn't Moore's Law that has stopped (transistor density is still increasing), but clock speed itself. Most people know about this, and they know it means what we are already seeing: multi-core processors. But instead of meaning we all need to learn multi-threaded programming, or run an operating system that can schedule across parallel processors, I think it means something else entirely. Instead of thinking about 2 cores, or 16, think of 1,000. How do you manage all of them? Will you have hundreds of threads running through your program? Will you have to manage each one individually, or in small groups?

Probably you won't. Instead, you will have to learn a new programming paradigm. I don't mean a new language, or some language extensions, but something that looks entirely different. One option is functional programming. From the recent spate of functional programming articles on popular sites such as digg, it seems this may gain some traction. Learning something like functional programming gives CS grads and the software field several advantages. First, a clean functional program is implicitly parallel: because pure functions have no side effects, the compiler is free to spread the work over as many processors as are available. Second, functional programs are far more amenable to mathematical proofs of correctness. The mystical idea of bug-free code would be within reach, and it would give the field more respect in the engineering sense: if you can document a proof of your software, you can sign off on it the way an engineer is expected to in other fields. Certifications could show that you can write an algorithm of a certain complexity and prove it correct. Third, functional programs tend to be much smaller than imperative ones (even in object-oriented languages).
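To make the first point concrete, here is a minimal sketch in Haskell using parMap from the Control.Parallel.Strategies library (expensive is just a made-up workload). Because the mapped function is pure, the runtime is free to evaluate each list element on a different core without changing the answer:

    -- Requires the "parallel" package; build with: ghc -threaded
    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- An arbitrary pure, CPU-heavy function (hypothetical workload).
    expensive :: Int -> Int
    expensive n = sum [i * i | i <- [1 .. n]]

    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [100000, 200000 .. 2000000]))

Run with +RTS -N and the same binary spreads the work over however many cores the machine has; nothing in the source pins it to a particular thread count.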

But lastly, and the point of this post, functional programming could take us all the way from the hardware up to a very high-level implementation. When a designer sets out to create a new IC, he doesn't sketch out a circuit; he codes it in a Hardware Description Language (HDL). The widely accepted HDLs are declarative languages precisely because of their implicit parallelism. So ICs themselves are generated as a set of functions in what amounts to a functional program. The irony is that this functional program usually produces a chip that runs an imperative (and procedural) assembly language. This is one reason why, while the programs a computer runs may be bug-ridden, the processor itself almost never contains an error.
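To give a flavor of how that works, here is a rough sketch in plain Haskell of a structural hardware description, in the spirit of Haskell-embedded HDLs such as Lava or Clash (the types are simplified; real HDLs wrap wires in a Signal type, and these function names are my own):

    type Bit = Bool

    -- Each gate is a pure function; wiring is function application.
    halfAdder :: Bit -> Bit -> (Bit, Bit)          -- (sum, carry)
    halfAdder a b = (a /= b, a && b)               -- an XOR and an AND gate

    fullAdder :: Bit -> Bit -> Bit -> (Bit, Bit)   -- (sum, carry-out)
    fullAdder cin a b =
      let (s1, c1) = halfAdder a b
          (s2, c2) = halfAdder s1 cin
      in  (s2, c1 || c2)                           -- OR the two carries

    -- A ripple-carry adder over LSB-first bit lists. A netlist
    -- generator would unroll this recursion into physical gates
    -- that all exist, and switch, at the same time.
    rippleAdder :: Bit -> [Bit] -> [Bit] -> ([Bit], Bit)
    rippleAdder cin [] [] = ([], cin)
    rippleAdder cin (a:as) (b:bs) =
      let (s, c)       = fullAdder cin a b
          (rest, cout) = rippleAdder c as bs
      in  (s : rest, cout)
    rippleAdder _ _ _ = error "operand widths must match"

There is no sequencing anywhere in that description: evaluation order and gate placement are entirely the tool's business, which is exactly the property that makes the style attractive for 1,000-core software too.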

Now, of course, it would not be easy to generate an IC that is itself programmed in a functional language, but such a chip would be able to exploit the space that Moore's Law is still granting, while taking advantage of parallelism in ways a pipelined processor can only dream of. Fortunately, such a device already exists: the Field Programmable Gate Array (FPGA). Sometimes an FPGA is used merely to implement an imperative processor, moving the switch from declarative (functional) to imperative one step up the stack. While this has advantages, such as custom instructions, programming the FPGA functionally could extract far more performance from the same system.
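As a toy illustration of the parallelism at stake, consider reducing a list with an associative operator. Written functionally, nothing forces the reduction to be sequential, so a hardware compiler is free to lay it out as a balanced tree of operators with logarithmic depth (treeFold is a hypothetical helper, not a library function):

    -- Reduce pairwise, layer by layer: n inputs take O(log n) circuit
    -- depth instead of the O(n) dependent steps a sequential CPU needs.
    treeFold :: (a -> a -> a) -> [a] -> a
    treeFold _ []  = error "treeFold: empty list"
    treeFold _ [x] = x
    treeFold f xs  = treeFold f (pairwise xs)
      where
        pairwise (a:b:rest) = f a b : pairwise rest
        pairwise rest       = rest   -- odd element passes through

    -- e.g. treeFold (+) [1..8] needs 3 layers of adders, not 7 steps.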

Quick Recap:

Traditional:

  • Design - Declarative
  • Hardware - Imperative
  • Hardware ISA - Imperative
  • Operating System - Imperative
  • Libraries - Imperative
  • Application Software - Imperative

Reconfigurable computing:

  • Design - Declarative
  • Hardware - Declarative
  • Hardware ISA - Imperative
  • Operating System - Imperative
  • Libraries - Imperative
  • Application Software - Imperative

Proposed:

  • Design - Declarative
  • Hardware - Declarative
  • Hardware ISA - Declarative
  • Operating System - Declarative
  • Libraries - Declarative
  • Application Software - Declarative/Imperative

Custom FPGA designs for scientific data analysis can speed up calculations by factors of 20 to 10,000. A general architecture that exploits the parallelism in functional programs could capture this speedup. So why aren't FPGAs displacing conventional processors? For one, FPGAs aren't built to constantly reconfigure themselves or to have general access to memory. Also, many programs are simply too big to fit on an FPGA. Implementations have been split across multiple FPGAs, but only with considerable effort. What a complete changeover needs is a way to split a program into parts and page in only the functional blocks currently required. Something like virtual memory needs to be created to handle the rewiring and to provide standard memory access for each block. Something ingenious needs to be invented here, just as the TLB was ingenious.
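I don't know what that invention will look like, but here is a purely speculative sketch, again in Haskell, of the bookkeeping half of it: a resident-function table that plays the role of a TLB for circuit blocks. Every name here (FuncId, Region, Bitstream, pageIn) is hypothetical; as far as I know, no such API exists:

    import qualified Data.Map.Strict as Map
    import Data.Word (Word8)

    type FuncId    = String
    type Region    = Int                 -- index of a reconfigurable slot
    type Bitstream = [Word8]             -- partial-configuration data

    data FpgaState = FpgaState
      { loaded :: Map.Map FuncId Region  -- functions currently resident
      , free   :: [Region]               -- slots still available
      }

    -- Page a function's circuit in, reusing it if already resident.
    -- On real hardware the Nothing branch would drive partial
    -- reconfiguration with the bitstream; here it only bookkeeps.
    pageIn :: FuncId -> Bitstream -> FpgaState -> (Region, FpgaState)
    pageIn fid _bits st =
      case Map.lookup fid (loaded st) of
        Just r  -> (r, st)               -- hit: the block is already wired
        Nothing -> case free st of
          (r:rs) -> (r, st { loaded = Map.insert fid r (loaded st)
                           , free   = rs })
          []     -> error "no free slots: an eviction policy goes here"

The hard, unsolved part is everything this sketch waves away: choosing what to evict, rewiring the connections between resident blocks, and giving each block standard memory access.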

Does anyone know if this is already being done?
