What About Functional Languages?

sdavies asks: "Functional languages like Scheme and Haskell are great! They give programmers new tools for elegance and abstraction. Unfortunately, to the legions of procedural programmers writing in languages like C/C++(/C#), Java, and VB, functional languages are considered obscure and impractical. What is your experience with functional languages, and what do you think is preventing them from being adopted into the mainstream?"
This discussion has been archived. No new comments can be posted.

  • Actually, I meant all OSS OS <etc>. Since gcc is the only OSS compiler out there with IA64 support, and the OSS OS I know of all depend on some special gcc features anyway, ... <shrug>

    I'm pretty sure most of the corporations out there writing closed source software for Linux are using gcc, because that's the only Real compiler available for Linux. That is, until Borland decides to show up.
  • A program tells you how to reach a goal. Anything else is not a program. This is the definition by which I stand (Prolog or no Prolog :).

    That you disagree with that pretty much sums up our differences.
  • So I was right. You meant all software using GCC. Which has a lot to do with whether it's open source software or not (by inter-relationship), but doesn't have any *necessary* correlation.
  • You can program perl as a functional language as well. The syntax might not look as "elegant" if you're used to Scheme, but anyone who hasn't used a functional language because of the parentheses might want to give it a try.

    By declaring all your variables at the top of your subroutines with my, you get lexical scoping throughout your program, which is an excellent feature of Scheme. But if you don't like it, then you don't have to use it; you can try the freaky 'dynamic scoping' instead, or get burned with global variables. (trust me--in a large app, you'll probably get burned; especially if the source is in multiple files)

    Since perl supports references as a first-class data type, closures are *almost* first-class data types, and you can return a reference to a subroutine. For real nested data, you can write it like this: ['abc',[1,2,[3,'b']]], which again uses references, and get it all back with the Data::Dumper module, or write some code to parse it.

    Also, if you don't like manipulating references, or didn't write functions to do it for you yet, list operations in perl are built-in; there's really no need to write car and cdr and cons if you're just using arrays, but it's really easy to do.

    Perl is generally interpreted, and capable of doing an eval, and has features of both functional *and* procedural languages. There's More Than One Way To Do It!

    A Functional Perl Example:


    my $count;
    $count = sub {
        ($_[0]) && (&$count($_[0]-1), @_);
    };
    print join " ", &$count(10), "\n";

    Compare to Scheme:


    (define descending
      (lambda (n)
        (cond
          ((zero? n) '())
          (else (cons n (descending (sub1 n)))))))

    (descending 10)


    Okay, okay, the list formatting and declaration is a lot cleaner in scheme, (but of course I could make a perl function like define that just did an =, basically, which would be like using set! to do the same in Scheme...) but the Perl is still pretty short and to-the-point just doing it the simple way.

    Also, you can make perl more expressive and pretty by doing, say, my $n = $_[0]; at the beginning, or writing a simple cons function that just does return @_;...
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • by Tom7 ( 102298 ) on Sunday July 16, 2000 @09:17AM (#929697) Homepage Journal

    I've always thought about submitting one of these "why not functional?" or "why not ML?" ask slashdots... but I think I know the answer.

    My favorite functional language is ML (standard). It isn't "purely functional" like Haskell (though we often write purely functional programs); it includes imperative features like assignment and arrays and IO, which are usually useful in real programs.

    I work on an ML compiler here at CMU called TILT [cornell.edu] (which I'd like to think is one of the most advanced research compilers around), so I am sort of biased. But I also know what I'm talking about...

    (Incidentally, the FoxNet Web Server [cmu.edu] is written entirely in standard ML, including the network stack (with ethernet, down to the hardware device driver)!)

    Anyway, back to the question. Why does functional programming matter?

    Programming functionally is closer to thinking in terms of math. Lots of algorithms and data structures are expressed more beautifully in a functional style. It's almost impossible to write gross hacks if you're programming functionally (most quick hacks actually turn out looking quite beautiful). Programming functionally has some direct advantages in this vein, and I find that I write better code faster when I write functionally. (I'll admit to hating it for a semester! But once I got used to it, I don't want to go back...)

    There are some awesome features of most functional languages, most notably Parametric Polymorphism and Higher Order Functions (more rarely, such gems as functors and higher-order continuations (aka callcc; think a typed and higher-order setjmp)). These all deserve their own posts to explain their incredible benefits. You are missing out if you've never written a program using these features.
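
    To make those two concrete, here's a toy SML sketch of both at once (my own example, nothing to do with TILT):

    (* 'map' is polymorphic: it works for ANY element types 'a and 'b,
       and higher-order: it takes the function f as an argument. *)
    fun map f nil = nil
      | map f (x :: rest) = f x :: map f rest

    (* the same function, reused at different types *)
    val doubled = map (fn x => x * 2) [1, 2, 3]      (* int list *)
    val lengths = map String.size ["fp", "rocks"]    (* int list *)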

    But mostly, functional programming is useful for its indirect benefits. Let me explain some of these:

    - Concurrency. Writing concurrent programs in a functional language is so much more natural. It's easier to avoid certain kinds of race conditions too, since you don't update variables in a functional language. SML/NJ [bell-labs.com] has an awesome concurrency package called CML [bell-labs.com] .
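
    A rough sketch of the CML style, from memory (double-check the CML documentation for the exact signatures, and note it has to run under CML's scheduler): a counter thread hands out numbers over a channel, so there is no shared mutable counter to race on.

    fun counter () =
        let
            val ch : int CML.chan = CML.channel ()
            (* this thread owns n; nothing else can touch it *)
            fun loop n = (CML.send (ch, n); loop (n + 1))
        in
            ignore (CML.spawn (fn () => loop 0));
            ch
        end

    (* each CML.recv on the returned channel gets the next number *)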

    - A powerful static type system and type safety. It's difficult to design a language (and many smart people have tried) that's imperative, type safe, and powerful. Features of functional languages like garbage collection and non-updatable values make it easier to define a language with a powerful type system. (in case you're still stuck in the 60s, type safety guarantees that your program CANNOT crash at runtime. No more uninitialized pointers, using memory after it's freed, bad casts, or other plagues of C++ programming).

    A powerful static type system gets you a lot:

    - Debugging. It's easier to find mistakes in your program. When I program in ML, I get a list of all the type errors in my program when I compile. I can go back and fix these before I have to run my program on test cases, etc. Debugging is so much easier. It's hard to explain how incredibly useful this is compared to programming in C++. Everyone who's used ML can attest to this fact: once your programs typecheck, they Just Work.
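
    A tiny made-up illustration of the kind of mistake that never survives past the compiler:

    fun average (xs : real list) =
        foldl op+ 0.0 xs / real (length xs)

    (* val oops = average [1, 2, 3]
       ...is rejected at compile time: an int list where a real list
       was expected. No test case ever has to trip over it. *)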

    - Your programs run faster. Java has a somewhat more mature type system than C++ (it guarantees safety, for instance), but most of this is dynamic. That means all your objects are tagged, and these tags are checked frequently to make sure you're not doing anything wrong! There's no way the compiler can optimize these out; mistakes in the definition of the language (array subtyping) make tags necessary for type-safety. In ML, we don't have to tag values or check them at runtime. Yet we guarantee our programs run safely because we verify all of the types at compile-time!

    - Compiler Technology Advances. Most research in programming languages and compilers these days is on languages with interesting type systems. We're seeing fewer and fewer improvements to C compilers, and lots of improvements on "advanced" programming languages. The type system allows you to make more optimizations, because the compiler has more information available to it. Some concrete examples:

    Aliasing - a big problem for C/C++/Java compilers. If you've ever looked at the machine code they produce, you've seen this effect. "Why is it fetching that address again??" ... because two pointers may have pointed to the same thing, and in order to preserve the semantics of the language, redundant work is done. When you're not doing updates (functional programming), the compiler doesn't have to worry about aliasing.

    Function Calls: Practically every C/C++ compiler treats functions as a black box. Languages with stronger type systems are able to optimize around function calls because much more information (types) is available.

    Our TILT compiler that I mentioned earlier does something rather new: Each compilation phase transforms not only the program but its type. We keep the type information around even when we are dealing with assembly language! This enables us to perform some unprecedented optimizations.

    - Machine independence. Making a type system usually means hiding away the details of the machine, and this usually means that the execution of your program is completely predictable. (Compare to C/C++ "undefined" behavior!) ML programs are extremely portable.

    - Modularity. I was able to understand and start working on the (100,000 line+) TILT compiler in a matter of days rather than weeks because of ML's strong modularity features. The most interesting of these are:

    Signature Ascription - This allows you to define abstract data types by naming a type and some operations on it (and their types). This is similar to header files in C (much more refined), but thanks to the type system, you can guarantee that the user can ONLY use your abstract data type the way you intended. They cannot cast, subclass, or use any other tricks to get at your datatype. (Some OO folks have solutions for this too, but they are not as elegant). This is awesome, because it helps you figure out where bugs are. I can attest that this really works; my project this summer is to change the way a very important module works... and so far, I have only experienced one observable effect of changing the representation!
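
    A minimal made-up sketch of signature ascription (obviously not the TILT sources): the signature is the only thing clients ever see.

    signature SET =
    sig
        type set
        val empty  : set
        val insert : int * set -> set
        val member : int * set -> bool
    end

    (* ':>' seals the structure: the representation is invisible *)
    structure Set :> SET =
    struct
        type set = int list
        val empty = nil
        fun insert (x, s) = x :: s
        fun member (x, s) = List.exists (fn y => y = x) s
    end

    (* clients can use Set.insert and Set.member, but no cast or
       subclassing trick will ever reveal that a set is a list *)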

    Functors - This is somewhat like C++'s template system (but more refined), allowing you to write programs which operate on modules. (Ie, you give me a module which implements sets, and I'll give you back a module which implements maps). This is very useful, and since all the work is done at compile-time, incurs no runtime cost.
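
    Again a toy sketch rather than real code: give the functor any ordered key type, and it gives you back a dictionary module for that key type, all resolved at compile time.

    signature ORD =
    sig
        type t
        val compare : t * t -> order
    end

    functor Dict (Key : ORD) =
    struct
        type 'a dict = (Key.t * 'a) list
        val empty = nil
        fun insert (k, v, d) = (k, v) :: d
        fun lookup (k, d) =
            case List.find (fn (k', _) => Key.compare (k, k') = EQUAL) d of
                NONE => NONE
              | SOME (_, v) => SOME v
    end

    structure StringDict =
        Dict (struct type t = string val compare = String.compare end)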

    - Proof-carrying code. You haven't seen this yet, but you will. What if you could download a program off the internet and run it, knowing that it won't do anything wrong? What if it wasn't subject to sandboxing (and slowdown) like Java apps? What if you didn't need to trust the source (certificates/signing)? Proof-carrying code carries a proof of its type-safety (and other safety metrics) with it; your computer verifies the proof and then runs the bare code! You can read more about this here [cmu.edu] .

    Now here are some answers to the question of why not functional?

    - RIGHT NOW, functional languages are slower (estimate 2x) than languages like C. Against a "modern" language like Java they fare rather well. Compiler technology is advancing and will fix this! I'd also argue that the other benefits (developer productivity, code maintainability) far outweigh the slowdown.

    - Functional is weird for a lot of people. It took me at least 6 months to figure out why it was good, and I consider myself a pretty good hacker. Most people are more comfortable with imperative languages (at first...), possibly because that's usually their introduction to programming.

    - There are not many commercial applications for functional programming yet (outside of Ericsson), and some people just program for money.

    I would like to see the hacker types of the world pick up some new, interesting languages. Most of these languages don't have powerful marketing engines like Sun or Microsoft behind them, but hacker types are (usually) smart enough to see past that stuff!

  • Sure, I'd love to! I think most of your negative experience comes from using Lisp/Scheme (a cute but not very modern functional language). ML is a whole different world.

    concurrent programs are written in C/C++

    Yes, most software is written in C, C++ and Java. I was just saying that it's easier to do in ML, partly due to the functional style and type system.

    -debugging Lisp/Scheme is ridiculously problematic compared with C/C++/Java. I don't know about the current status of ML.

    Absolutely true. Lisp and Scheme have no type system to speak of, so they allow lots of incorrect programs to run. Debugging could be easier with a good tool (one that showed the current expression and let you step through evaluation is invaluable). But it's easiest with a strong type system like ML's. Most simple programming errors (that nonetheless take forever to find with C/C++/Java/Lisp/Scheme/...) are caught at compile-time by the static type checking. I do program a lot in ML, including big programs, and I've found that most of my debugging needs are solved by:

    Typechecker (~97%)

    print statements (~2%)

    careful consideration of code (~1%)

    ... and trace elements of voodoo or grad students. ;)

    I hear that there are some good debuggers for ML (Harlequin MLWorks had one, for instance), but I've honestly never needed one.

    -and the programs run slower.

    Lisp programs certainly do. Modern ML compilers produce real machine code and are catching up to C++. (It's possible we may surpass them, due to reasons I outlined in my earlier post!) I'd consider the speed difference not important (except as a goal for us compiler hackers!) unless you're writing Quake 4... the other features of the language more than make up for it.

    The major novelty in the course I had taken was the transformations needed to implement continuation based programming.

    Doing CPS conversion by hand sounds painful. But it's the most appropriate way of compiling a functional language and it gets you higher-order functions, among other things. The compiler does it for you, though, so why is that a detriment to functional languages? I happen to think it's pretty elegant, but of course that's just my taste. =)

    Continuations are another very neat programming trick which isn't available in imperative languages. They're sometimes very hard to wrap your head around, but you can do some really nice stuff with them.
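
    For the curious, here's a little toy example (SML/NJ-specific, using its SMLofNJ.Cont library) of a non-local exit: abort a product the moment a zero is seen, instead of multiplying through the rest of the list.

    open SMLofNJ.Cont

    fun product xs =
        callcc (fn k =>
            foldl (fn (x, acc) =>
                      if x = 0 then throw k 0 else x * acc)
                  1 xs)

    val p = product [3, 4, 0, 5]   (* returns 0 without touching the 5 *)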

    Sorry: but applying the same operation to all elements of a set is not recursion.

    Application is not really recursive or functional (though it can be done very elegantly with recursion). But mapping an operation over a set to get a new set is certainly functional. Doing a set union is definitely recursive. Lots of data structures are recursive (balanced binary trees being the paramount example). The structure of a programming language is usually recursive, as are the operations you perform on it to parse/compile. Nature is quite recursive... remember the fractal craze of the early 90s?
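
    A toy SML example of what I mean by recursive data: the datatype and the code that walks it share the same shape.

    datatype tree = Leaf | Node of tree * int * tree

    fun insert (x, Leaf) = Node (Leaf, x, Leaf)
      | insert (x, Node (l, y, r)) =
          if x < y then Node (insert (x, l), y, r)
          else if x > y then Node (l, y, insert (x, r))
          else Node (l, y, r)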

    I can't understand what you mean by "bad math". How can math be bad?

  • The two goals - ease of parsing and ease of programming - are not necessarily opposed to each other, and in fact they're usually complementary.

    If the source code for your parser is 2 megabytes, that means that there are two megabytes worth of syntax rules that the computer needs to know to parse the language correctly- and if a programmer wants to write good code that uses all the features of the language, that programmer has to have 2 megabytes worth of syntax rules in his or her head. Even though I've been programming in C++ for a lot longer than I've been programming in Scheme, I still go to the reference manuals for C++ syntax far more frequently than I go to the manuals for Scheme syntax. (Incidentally, I once wrote a parser for a language that had the same syntax as Scheme, and it was about 300 lines long.)
    --
    -jacob

  • Why is it important for the language to mimic the machine? I find it much easier to understand something defined in terms of rules that are written down and precise, rather than designed in terms of "however the machine does it". (Though I can understand functional programming being uncomfortable to people used to programming imperatively. It took me 6 months to figure out...)

    I've written about 30,000 lines of SML code in the last two years, and I've never needed a classic debugger. The type-checker is the debugger. (There also do exist classic debuggers for SML, btw).

    I do agree that toolkits are lacking, and this is a clear reason why programmers who want to get something done wouldn't use a functional language.
    But lots of people hack just for fun; why not try something new, guys? (maybe you can make a needed toolkit! ;))
  • by Nicolas MONNET ( 4727 ) <nicoaltiva@gm a i l.com> on Sunday July 16, 2000 @09:25AM (#929715) Journal

    You sinner ... you must have written too much perl ... REAL programmers (that is, those at MIT) love to ignore the fact that human languages are above all context-sensitive (as Perl is to some extent), as opposed to Lisp variants which have completely context-free grammars.


  • Scheme is OK, but it's pretty scarce on features compared to languages like Haskell and ML. In particular, it has no type system worth mentioning.

    It's true that ML and Haskell are just as obscure (or more so), but you might find that they are powerful enough to warrant it. I do.
  • here are my reasons for liking functional languages:
    • elegance. abandoning side-effects is a huge step towards understandable code, and it makes math-like operations (such as applying a function over an array - cf. map) clear and easy to write.
    • compilation. functional languages lend themselves to much better optimization than imperative languages (why? no side effects!), and it lets you do really powerful things, like full-code optimization (as opposed to standard block-based optimization). which leads us to...
    • speed. granted, interpretation of functional languages can be pretty slow (then again, so it is with java :) ). but compilation is a different matter. ever tried to use the stalin [nec.com] scheme compiler? the damn thing compiles code that's faster than hand-written C!
    • minimalism. while not necessarily a feature of all functional languages (say, lisp), minimalism is often a design goal in new fp's such as scheme. the single concept of continuations, for example, subsumes a large number of unrelated concepts from other languages, such as longjmps, try..catch blocks, non-local exits, or gotos, and by doing that it makes explicit the similarities between them (and also leads to better optimizations, but i mentioned that already).
    • expressiveness. and finally, the most visible aspect of fp - increased expressiveness. fp's make things like higher-order procedures suspiciously easy (and to think it took the whole hoopla over patterns to make the oo community realize the usefulness of higher-order procedures!). or take applications of functions over large sets of data - would you rather hand-craft loops upon loops, or just say (map fn data)? (see the sketch after this list)

    any others?
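
    here's the kind of thing i mean, as a toy sml sketch (made up for this post): sum of the squares of the even numbers in a list, with no index variables and no mutation.

    val sumSqEvens =
        foldl op+ 0
            (map (fn x => x * x)
                 (List.filter (fn x => x mod 2 = 0) [1, 2, 3, 4, 5, 6]))
    (* => 56 *)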
  • Perl can construct programs "on the fly". So can TCL, and so can (I believe?) Python. Probably not exactly with the same ease, since you must do it in ASCII string form (as in, you can't manipulate the syntax tree directly).

    I however challenge your statement: "easier for the programmer to read". Human languages ARE *very* contextual. The real reason for Lisp being context-free is that it is simpler to write a parser for it, and consequently easier to make it bug-free. Saying that a maze of parentheses is easy to read is, at best, hmmm, a fallacy.

  • One functional programming language (and, for that matter, procedural programming language and rule-based programming language) that receives quite a bit of use today is Mathematica [wolfram.com]. While it is possible to write FORTRAN-like Mathematica code, it is rarely advantageous to do so, and functional programming is generally a more efficient way to write code in Mathematica. (Prior to executing a Mathematica statement the code is parsed into an equivalent functional form anyway, with this functional equivalent being what is fed to the interpreter. If one writes functionally, almost all of the extra stuff going on behind the scenes that can make Mathematica dog-slow may be avoided.)

    (* Using built-in Table[] function *)
    arr = Table[ (i^2 - 4), {i,1,10} ];
    (* Procedural *)
    Do[ arr[[i]] = (i^2 - 4), {i,1,10} ];
    (* Functional *)
    arr = #^2 - 4 & /@ Range[ 1, 10 ];
    (* Rule-based *)
    arr = Range[ 1, 10 ] /. {i_ -> (i^2 - 4)};
  • Arrogance. There is an annoying elitism among functional programming proponents, implying that all current Perl and Python and C++ programmers are wrong. Similarly, Linux zealots refuse to believe that multi-billion dollar corporations are being run using Windows 98 and NT.

    Fragmentation and infighting. Some FP proponents insist that functional laziness is the key. Others think that laziness is unpredictable and does more harm than good. Some FP proponents insist that static typing systems are The Only Way. Others are very productive with dynamic typing and ignore the first group. Some FP proponents think that uniqueness types or monads are the correct way to introduce the concept of state into a purely functional framework. Others don't care as much about purity, preferring to have imperative features readily available. Similarly, Linux zealots fight about distributions, text editors, and window managers.

    Ignorance of what the market wants. Many FP language developers would prefer to do research on new type systems, rather than create useful libraries. Many think that language choice is more important than tool choice, as if ML would somehow be better at GUI-driven applications than Delphi. Similarly, Linux zealots think that operating system choice is more important than application choice, and many would prefer to live in a backward 1970s terminal window world without trying to understand why many people don't want to.
  • (Professor Robert Harper is one of the creators of Standard ML, and a very neat guy. BTW, if you could feed me some info on the TILT project, I'd love to study the compiler.. TILT is where the Python LISP compiler was about 8 years ago.)


    I like SML a LOT, but there's a language which a lot of people aren't talking about. It's LISP. LISP has a public-domain compiler, an orphan of the Carnegie Mellon University lisp project from about 8 years ago. (CMUCL) [cons.org]


    The compiler (Python) is fast; it compiles down to raw machine code, and its performance is comparable to C, and has been for the last 5 years. (~30% slower at things like matrix multiplication, bench it yourself) [cons.org], which isn't bad for a compiler that's had a fraction of the effort of EGCS. It can use non-descriptor arguments and structures. [cons.org] It will also use type inference where it can (roughly, the monomorphic subset of the type system of SML).


    Now, the language Common Lisp is extremely nice. It has a variety of built-in things like lists, hash tables, structures, vectors, multidimensional arrays... It's got a lot of declarative things too. Loops, 'foreach', 'set'... Lisp programs can't crash because it does typechecks too. (Though if Python infers that they're unnecessary, it'll omit them.)


    It was the first object-oriented language to be standardized. CLOS (Common Lisp Object System) is amazing. You can have dispatch based on multiple arguments, unlike Java/C++, which are only polymorphic in the first argument. And you've got multiple inheritance. With the MOP, you can even write your OWN OO system on top of it.


    Because the syntax is simple, it's easy to have programmed transformations of code: 'macros'.
    A simple example is a 3-way if-then (:less, :greater, :equal).
    A slightly more complicated example is adding in C-style for-loops (done with the 'loop' facility).

    For a fairly complicated example, there's a package called 'SERIES' which adds the equivalent of pipes to the language. You 'pipe' data between routines and it transforms the code into minimum-sized loops and other iteration constructs.


    For example, say I have a list of triangles. My code looks like I first transform all of the triangles, then texture them, then transform them again. This requires creating lots of superfluous triangles. SERIES will automagically turn this into a single loop over each triangle: 'transform -- texture -- transform'. Except that it'll handle multiple-argument functions that return multiple results, and it'll handle conditionals in the functions. Not all loops can be merged, but it'll do what it can.


    This is much like one of the examples of aspect-oriented programming [stanford.edu]: a realtime handwriting recognition program. It needed to do edge detects, averaging, convolutions. Doing each operation in turn would have been horrific in time and space. The loops could be merged manually, but that obfuscated the core algorithms and made them difficult to modify. The overhead of doing this transformation manually was a 50x code increase: from 700 lines to 35000 lines!

    They implemented a new mini-language (adding 'primitive' things like pointwise, convolve, etc. to the language) and used macros to do that merging automatically, which made the core algorithm obvious and trivial to change. The result: the core algorithm required only 700 lines of code, plus another 1000 lines of code to do the merging and fusing of loops. 2000 lines of code to do what took 35000 lines of code to do manually!


    If you come from LISP, aspect-oriented programming is stupidly obvious. (If you don't, you think 'wow, look at the cool stuff they invented' and believe they created it.)

    As a much much more complicated example, CLOS itself was implemented through macro's. Can you imagine a language powerful enough that you could 'transparently' layer a high-performance and very flexible OO system on top, WITHOUT REWRITING the underlying layer? Aspect oriented programming will never get this good. :)

    Yet another plus of this is that you can runtime-generate and compile code. Want to compile that encryption inner loop to make a custom version for this key? It's as easy as:


    (defun twofish-make-fun (key)
      (compile nil `#'(lambda (block) (twofish-encrypt block ,key))))


    This works because the function 'twofish-encrypt' will be declared maybe-inline. Thus it'll be compiled as normal, but the source code will also be saved. Normally, a function call to it will invoke the unspecialized version. But if we compile a call to it that has known arguments, the compiler will fully specialize and inline it, creating specialized assembly code. (This is how CLOS is implemented.)

    There are some nice advantages to having a simple syntax. :)

    For hackers, there's the advantage that you can download ``Common Lisp The Language'' or the ``Common Lisp Hyperspec'' for a full specification of the language. No spending a hundred bux on a manual. (I'd give links, but I use my personal version so I don't know where to find them on the net anymore.)

    Common LISP still has the features of a functional language. It has first-class and higher-order functions (closures). Python has a strong type system and it makes fast code. Your claim that LISP runs slow is false. :) Like SML, it has interactive and incremental compilation. You can redefine functions without quitting. You can even redefine functions that are running in a different thread.

    In fact, LISP was found to be almost 50% faster than C/C++ on average. There was a study done about a year ago where they compared C++ and Java. [nasa.gov] Unlike other studies between languages, they had a dozen people implement the same program in C++ and Java and then compared the results. They found what you'd expect: Java was slow and sucked memory.


    These guys decided to repeat the study, only comparing LISP and Java. [nasa.gov] Although the fastest implementation was in C++, they found that Lisp programs, as a group, were over 50% faster than the C++ programs as a group. Also, development time was a fraction of that of C++ or Java, and the number of lines of code was half. Not only that, the variability in the number of lines of code and development times was significantly reduced.


    (Tom, I'll be back at CMU in a month, if you want to talk about this, or let me get my greedy hands on the TILT compiler. Send mail to crosby@qwes.math.cmu.edu if interested.)

  • "One could think of a new type of desktop environment . . ."

    One certainly could. There is one thing wrong with Objective Caml, though: it is written in C, or at least bootstraps through C. While this is good for "interoperability", it is not so good for surveyability. System 3 Oberon is the real thing. Oberon isn't a functional language, but the latest beta of S3 Native Oberon (runs as its own OS on x86) comes with a Scheme module written by Erich Oswald. S3 with the Gadgets desktop is a complete GUI -- a strange one, in that it is completely modeless.

    It is fun to install a new OS on its own partition, although you can also run a version on top of Linux -- the Oberon desktop in an X window.

    From http://www.oberon.ethz.ch/native/:

    "Native Oberon is written in the original Oberon language designed by Niklaus Wirth. The system is an evolution of the operating system
    co-developed by Niklaus Wirth and Jürg Gutknecht and published in the book Project Oberon: The Design of an Operating System and Compiler,
    Addison-Wesley, 1992. The system is completely modular and all parts of it are dynamically loaded on-demand. Persistent object and rich text
    support is built into the kernel. Clickable commands embedded in "tool" texts are used as a transparent, modeless, highly customizable and
    low-overhead user interface, which minimizes non-visible state information. Mouse "interclicks" enable fast text editing. An efficient multitasking
    model is supported in a single-process by using short-running commands and cooperative background task handlers. The basic system is small -
    it fits on one 1.44Mb installation diskette, including the compiler and TCP/IP networking. It is freely downloadable (with source code) for
    non-commercial use.

    An optional GUI component framework called Gadgets is available, with integrated WWW support (FTP, Telnet and HTTP on Ethernet, SLIP or
    PPP). Many useful applications are available, and the system has been used to build embedded systems. Portable applications can be developed
    that run on Native Oberon and the other versions of ETH Oberon hosted on other platforms, e.g., Windows, Linux (Intel x86 and PowerPC),
    Solaris, etc. The LNO version of Native Oberon runs on Linux, but is binary compatible with PC Native Oberon. It was created by replacing a few
    low-level modules of the system with Linux implementations. "
  • by kalifa ( 143176 ) on Sunday July 16, 2000 @09:34AM (#929740)
    There are tons of good reasons which can explain the failure of functional languages: timing, installed base, syntax familiarity, performance, fragmentation, worse-is-better (timing, again), marketing, etc...

    I don't think Scheme can change that. It has been around for 25 years now, hasn't really taken off, and is more fragmented than ever. Besides, many people are still reluctant toward this lisp-type syntax. I don't see why Haskell could change that either: it's nice, but it has a very small installed base, which is not growing very fast. It cannot be used for system programming and big projects, and it suffers serious competition on smaller projects from fast-growing procedural "elegant" high-level languages, especially Python. Finally, I don't see why Common Lisp should succeed now, after years of disappointments and decline.

    Here's the problem: try to imagine a functional language whose compilers bring performance far superior to Java compilers', and approximately as good as C++ compilers'. This language should be highly portable, suitable for Java-type applets, and have an object-orientation design at least as good as Java's and CLOS's. Its syntax should be more attractive than Lisp's; it should be interpreted and convenient to use and debug via a command-line interpreter just the way Python or Lisp dialects are, and at the same time, as mentioned above, compilable into very fast executable code. It should also be able to interoperate with C modules (and maybe others). And, besides all these qualities, it should also be much more than that, and bring other unique advantages.

    Such a language exists: it is called Objective Caml [inria.fr]. Only one thing is missing, the most important one: the installed base. So there is a need to create the ecosystem. Here's a suggestion:

    One could think of a new type of desktop environment which would be based on Objective Caml. Emacs users know that Emacs is an incredibly powerful and convenient Lisp environment, which is unfortunately limited to textual tasks, due to the limitations of Emacs Lisp (at the beginning, Emacs Lisp was supposed to be used solely as a macro language for an editor, and it has gone much beyond that). Imagine an environment in the spirit of Emacs (highly integrated, fully extensible, customizable, reconfigurable and reinterpretable as you use it, etc...), but whose scope would not be limited to textual tasks, and which could actually serve as a full "multimedia-hype-buzzword-whatever" universal desktop. To put it another way, try to imagine an Emacs-type environment which would cover all the functionality of a, say, MacOS X or Windows user environment. If this is doable, then it's doable in Objective Caml.

    Now, I know, I have a big mouth, and I should show some code. Anyway, comments appreciated.

  • by Animats ( 122034 ) on Sunday July 16, 2000 @09:35AM (#929742) Homepage
    Functional languages seem to lend themselves to formal correctness proofs.

    Not necessarily. I once spent three years building a proof-of-correctness system for a dialect of Pascal (see "Practical Program Verification" in ACM POPL '83). The big problem is capturing the exact semantics of the language, which is not too hard for Pascal, probably possible for Java, and hopeless for C++. We did this by working on the output of the first pass of the compiler, which was a tree of pseudocode operations annotated with declaration information. Once you get down to that level, most of the ambiguity is gone. (At that level, you have operations like "pop two 16-bit operands off the stack, perform a 16-bit twos-complement add, and push the result." That's unambiguous.) Most of the material the user had to write to help the proof system was in the form of additional source statements in the Pascal program. So this is more of a language front end issue than a fundamental problem.

    Moore's ACL2 is well-matched to LISP because he and Bob Boyer have an elegant formal mathematical system based on a subset of LISP (see their book "A Computational Logic" [amazon.com]). It's a truly constructive mathematics, like the Peano axioms, but machine-processable. Numbers are defined recursively, as (ZERO), (ADD1 (ZERO)), (ADD1 (ADD1 (ZERO))), etc. About four hundred machine-proven theorems later, basic mathematics has been established. Very heavy going, but if you're into this stuff, it's a must-read.
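
    For flavor, the same recursive construction of the naturals, sketched here as an ML datatype rather than in Boyer and Moore's actual LISP subset:

    datatype num = ZERO | ADD1 of num

    fun add (ZERO, n) = n
      | add (ADD1 m, n) = ADD1 (add (m, n))

    (* theorems like add (m, ZERO) = m are then proved by induction
       on this structure *)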

    It would be interesting to look at program verification again today. We have enough MIPS now. My verification runs on a thousand lines of Pascal used to tie up a VAX for 45 minutes. Today that would take about two seconds. You could work on proofs interactively. We were too early in 1982.

    A good Java verifier (not that stupid thing that checks types and stacks during class loading) is quite possible. I don't think it would get much use, though. It takes too much math background to use such tools. Only a tiny fraction of today's programmers have the theory background. It's just not user-friendly.

  • by hey! ( 33014 ) on Sunday July 16, 2000 @11:26AM (#929743) Homepage Journal
    I see this more as an academic issue.

    The thing that the academics have grasped that the marketing suits haven't is that sometimes less is more. Goto is powerful, and powerful is never bad in a marketing context, but it can be very bad from a maintainability standpoint.

    On the other hand, the thing that practicing programmers understand that academics don't understand is that sometimes, less is less.

    I think functional languages are cool, and people should learn them/about them in school. But there's a reason they haven't caught on for non-trivial real world applications.

    And, by the way, I don't consider scheme a functional language. You can program functionally in it, and it may even tend to encourage a more functional style, but I don't think that makes it functional.
  • I don't know what domain you program in, but if you write good Scheme or Lisp code, you use recursion instead of any looping construct (which, I would say, is applicable "in the real world"). True, you can indeed prove that any time you write a recursive function you could have written a loop, but if you're writing in a purely functional way the recursion will be easier to write and more legible. That is, after you get past the shock of it.
    --
    -jacob
  • I don't like functional programming languages. This is mostly because the ones I've looked at (scheme, haskell) are too subtle. Yes, scheme may be a great tool for partial enlightenment, but you just can't read the stuff and understand it right away.

  • by Straker Skunk ( 16970 ) on Sunday July 16, 2000 @02:00PM (#929755)
    As others have mentioned, there are some issues with industry support, popularity with programmers, etc. Considering FP in an open-source context, however -- where the tools are free and languages like Scheme are alive and kicking -- there are still technical issues that make for abnormally high activation energy.

    (These points apply to some variants of Haskell and ML I investigated a while back. I don't remember specifics, but these were the salient observations that resulted):

    For one, the compilers are incredibly complex beasts. This wouldn't be so bad, if they weren't so ambitious as to generate native binaries themselves. Myself, I'm leery of anything that generates native machine code, that doesn't use a GCC backend. Maintaining a native code generator is a lot of never-ending work (machine architectures evolve constantly) and IMHO, not using GCC for that is a lost cause, in the long term.

    More significantly, if the compiler's native code generator is the only way to get a native binary, you're basically requiring users to go through a MAJOR hassle (installing a full-blown compiler) just to run your program non-interpreted, if at all.

    (If you want to know what I mean, try building the CVSup program-- written in Modula3-- from scratch. I almost had to do this, on IRIX. Let me just say: was I ever *GLAD* I managed to find a precompiled binary)

    That's why I'm partial to compilers that generate C code (like SmallEiffel). It makes things easier for users, while still allowing them to generate binaries fully optimized for their architectures. And it works perfectly in a source package. You put in Makefile rules to convert Haskell/ML/etc. to C, and then the usual C->O rules, and distribute both the FP sources and C sources. Users will need the FP setup if they want to hack program code, but at least they need nothing more than a C compiler to get it up and running.

    On a related note, there's also the issue of run-time libraries. Requiring anything that isn't the C library, or that can't be statically linked (without making hugeass binaries) is a lost cause. Again, it's an activation-energy thing. If you can't just download a binary and run it, then you're asking your users to do too much.

    So, the ideal alternate language system would have to have at least the following bullets:
    • Native code generation through a GCC backend. That way, the compiler maintainers only have to track GCC, and not the three or four machine architectures they happen to have in their lab. Not to mention, I know my stuff will run on anything GCC runs on, which would be just about every computer architecture in the known universe <g>
    • Compilation to C. Java bytecode output a plus. Natively generated code could run faster, as the FP compiler would be better suited to the language features, but the C code should come in a close second.
    • No run-time library, or at most nothing too formidable. It *must* be statically-linkable.
    • Execution speed on the order of C/C++ code (not a problem for many implementations)
    • Non-ugly interface to C/C++ libraries.
    Not many alternate language implementations have all those features. SmallEiffel was one, I recall. I think there was one for Haskell, though I never got around to checking it out...

    (Disclaimer: I haven't actually gotten into FP yet. I like what I've heard of it, and I do intend to delve into it sometime. I just don't want my non-C code to be a PITA for users because of technical issues like this. If it gives them grief, it should be because they can't think functionally, not because they can't build the damn thing!)
  • Here is a web server and network stack written in Standard ML:

    http://foxnet.cs.cmu.edu/ [cmu.edu]

    There are at least two very large and complicated (and good) compilers for ML, written in ML.

    Yes, applications usually need to perform IO, and so you can't write them in a purely functional language. But for an application where behind-the-scenes processing is a major part of the code (a compiler, for instance), functional languages can be and are often an excellent choice.

  • I've thought about this issue a good amount over the last few years. Here's my take:

    1. While certain complex algorithms are much easier to express in Haskell, there are many, many times when you really just want to do something imperative. Haskell has a way of wrapping imperative operations into a functional framework, and though it may be theoretically beautiful it comes across as contrived. I think issues like this come up enough that it is easier to use an imperative language and struggle with the parts that should be written functionally, than to use a functional language and struggle with the parts that should be written imperatively.

    2. The developers of functional languages are living in a theoretical world in which many topics seem very important, topics that people doing actual programming see as minor issues. Static type systems come at the top of the list. Proof systems are another.

    3. Many functional language proponents have fooled themselves into thinking that imperative programming is so low-level and dangerous as to be all but impossible. Truth is, there have been people writing complex video games in hundreds of thousands of lines of 16-bit assembly code, games that shipped on consoles and had no known bugs (examples: Donkey Kong Country, Sonic the Hedgehog 3, NBA Jam, Cruis'n USA). So, yes, Haskell may let you write a Quicksort in three lines of code, but there's more to programming than that. You could equally fool yourself into thinking that cars are too dangerous to drive, because there's no safety mechanism preventing one from veering into another.

    4. In all honesty, relatively few people are doing classic programming any more. Most programmers do things like database interfacing, GUI tool building using Delphi or Visual Basic (data point: 60% of all new software is written in Visual Basic), CGI scripting, etc. Not too many programmers find themselves needing to do something algorithmically tricky, like handling red-black trees or dealing with weighted graphs.

    I like functional languages for certain types of problems, but it isn't too difficult to see why they haven't caught on in a bigger way.
  • The reasons why functional languages are not more widely used are the same reasons why other "minority" languages aren't more widely used: lack of training and lack of vendor adoption. Also, the creators of functional programming languages make adoption hard by picking somewhat unusual syntactic features.

    It's hard to explain in a paragraph or two why functional programming is so great. Suffice it to say that it allows for much more reuse than object-oriented programming, opens up whole new ways of abstracting out functionality, and prevents one of the most common sources of bugs--aliasing.

    Not all functional programming languages are purely functional. In fact, many programmers write code in functional programming languages the way they do in Perl or Python. That can be both bad and good. On the one hand, because functional programming languages are powerful even for procedural programming, such programmers may never be encouraged to learn how to take advantage of functional features. On the other hand, it may be a good way of getting work done.

    My recommendation for people wanting to use a statically typed, efficient functional programming language would be OCAML [inria.fr]. It has a full object system, yet also offers a full set of functional programming primitives. SML/NJ [bell-labs.com] is another excellent implementation supporting both procedural and functional programming, and very lightweight threads (as an alternative to objects; cf. the GUI system).

    Scheme and CommonLisp are also great languages. As a procedural or OO programmer, you can think of them as Python with a different syntax and a much better compiler. MzScheme [rice.edu] is an excellent Scheme system for learning, and Bigloo [unice.fr] is a powerful Scheme compiler. You can find more information at schemers.org [schemers.org].

    For heavy-duty programming, CommonLisp is still better than Scheme, IMO, but it's significantly more complex. You can find a bunch of implementations at cons.org [cons.org]. I recommend CMU CommonLisp highly. For experimentation, CLISP by Haible is a good small interpreter. There are also a few "scripting" implementations of CommonLisp around.

    Haskell [haskell.org] is absolutely amazing for distilling programs down to 1/10 or 1/100 their size. However, it really requires a very different way of approaching programming. I'm not sure whether to recommend starting programming with it or not, in particular if you come from other languages.

    There are also some special-purpose functional programming languages for high-performance computing. Those languages give performance similar to Fortran or C on numerical problems and can actually be parallelized more easily.

    Of course, whether any of these links help you depends on whether you can get started using a new language with a reference manual, user manual, short tutorial, and implementation. If not, there are lots of textbooks around. The Haskell site in particular also has lots of links for FP-related resources. Also search Fatbrain [fatbrain.com].

    So, in summary: functional programming languages are definitely ready for many applications. If you want to get started, there are lots of resources available. Try to find a book that you like and experiment. MzScheme or OCAML are fairly traditional ways of getting started (you still get a lot of the features you are used to from procedural languages). I suspect that functional programming is going to be the "next big thing" in programming after OOP, and I also think it's a lot more useful than OOP and a lot more well-founded.

  • I'm a C/C++ kinda guy, but I just finished reading "Structure and Interpretation of Computer Programs". In school, I never bothered to learn Lisp, so I decided to do it myself. I can appreciate some of the bigger ideas, but some of the "cool techniques" in the SICP book felt more like cruft to get around the functional language properties. I'd love to read a functional language intro, especially one that wasn't as brain-heavy as SICP.
  • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Sunday July 16, 2000 @09:53AM (#929772)
    The reason why every statement doesn't return a value is that it is very inefficient -- basically it means that any value must be allowed to be undefined. Modern CPUs don't have any support for this (on integers and pointers, at least; floating point has NaN), which means the "undefined" value must be recorded separately and every operation has to check for it. This is OK in an interpreted language like Perl, where you already have a lot of bookkeeping to do, but a killer in a compiled/low-runtime-support language like C/C++.

    Undefined values also make it harder to pick up many potential errors at compile time, which is the hallmark of C++ philosophy.
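
    To make the bookkeeping concrete, here's a toy sketch in SML, where a possibly-undefined value has to be an explicit option and every use site has to check it:

    fun safeDiv (x, y) : int option =
        if y = 0 then NONE else SOME (x div y)

    val shown =
        case safeDiv (10, 2) of
            NONE   => "undefined"   (* the check can't be skipped *)
          | SOME q => Int.toString q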
  • Portability has a lot to do with the usefulness of a language in the current market. If you have well-written C++ code for Windows, you can get some programmers to port it to other platforms. If you write software in a language that only runs on MS operating systems, good luck, because then you'll have to completely rewrite it from scratch to sell your software in any other market.
  • (define (this-is-more-elegant b e)
      (cond
        ((> e 0)
         (* b (this-is-more-elegant b (- e 1))))
        (else
         1)))

    (this-is-more-elegant 2 8)
    => 256

    float than_this(float, int);

    float than_this(float b, int e) {
        int i;
        float sum = 1;

        for (i = 0; i < e; i++)
            sum *= b;
        return sum;
    }

    than_this(2, 8);
    => 256

    (define (exp b e)
      (this-doesnt-use-stack 1 b e))

    (define (this-doesnt-use-stack sum b e)
      (cond
        ((> e 0)
         (this-doesnt-use-stack (* b sum) b (- e 1)))
        (else
         sum)))

    (exp 2 8)
    => 256

    Truthfully, people who think functional languages are somehow inferior to imperative languages are talking out of their ass. Functional languages can be just as powerful and just as fast as imperative languages (if not more powerful and faster!); it's just that most implementations suck.

    Remember BASIC, and unstructured coding? When you upgraded to structured coding you did away with things like 'goto's. When you upgrade to functional programming, you toss away side effects. What are side effects? In a functional language, a function will always return the same result given the same arguments. Anything that does not has a side effect. A good example of a side effect is a global variable, or in fact, assignment of any type. This is where recursion comes into play, as you noticed in the first example I gave above: in order to decrement the variable, I simply called the function again with the variable decremented. There was no assignment, only initialization.

    My third example is the C program rewritten in Scheme. It uses tail recursion to loop. Scheme does not push anything on the stack when tail recursion is used, so there is no possibility of a stack overflow. Scheme can be as high-level or as low-level as you like, just like C; after all, what's the difference between inb(0x3f8); and (inb 0x3f8)? I will even go as far as to venture that Scheme can be optimized to the point where it is competitive with hand-coded ASM, but since most will laugh, I will leave that as speculation (for now).

    As for the original complaint of the post I am replying to... Scheme is a heck-of-a-lot nicer looking than C, and a lot easier to understand once you lose the '{' and ',' craziness. It's grouped just like any old mathematics equation: with parentheses! f(x,y) and (f x y). f(g(x),y) and (f (g x) y). I find the Scheme easier to read than the math notation. Sure it uses prefix notation, but it's a functional language, it only makes sense! (+ 1 1), (plus 1 1), plus(1,1); what's the difference? Instead of f(g(x),h(y)) + i(x,j(x)) you have (+ (f (g x) (h y)) (i x (j x))).

    Remember, everything is grouped by parentheses, so you have (g x) and (h y) and (j x); call them a, b and c. Then you have (f a b) and (i x c); call them d and e, and you're left with (+ d e). Of course if you've never touched algebra... well, go learn it now; wouldn't want to hire a programmer who couldn't do algebra, eh?
    (and I didn't even discuss lambda functions! ack! ;)

    So let's fix up that Haiku:

    (define (language good)
      (write (bug-free-code (language 'Scheme)))
      (read (code 'easily)))

    The ' means it's a symbol, not a variable or function.

    :)
  • by Junks Jerzey ( 54586 ) on Sunday July 16, 2000 @10:00AM (#929784)
    Everyone who's used ML can attest to this fact: once your programs typecheck, they Just Work.

    This is a myth. I agree that it does prevent certain types of errors, but not others. For example, suppose you have a function that draws a rectangle: rect(x,y,width,height). You forget and pass the parameters like this instead: rect(x,width,y,height). The type system won't catch the error, because all the values are integers. These kinds of errors are at least as common. I'll agree that catching some errors is better than none, but the ones that are caught come at the expense of a possibly elaborate type system; a type system whose complexity may not be worth the benefit.
  • This is a myth. I agree that it does prevent certain types of errors, but not others. For example, suppose you have a function that draws a rectangle: rect(x,y,width,height). You forget and pass the parameters like this instead: rect(x,width,y,height). The type system won't catch the error, because all the values are integers.

    Actually, this is a perfect example of why the first stages of developing a piece of software should be to think about the types of data you're going to be handling, and to build low-level code for manipulating these pieces of data in a convenient fashion that maps well to the way you think about them. In the above example, it sounds like you're doing 2-D graphics. That being the case, one of your first steps should have been to define a 'point' data type, which stores an (x, y) pair, and to build constructors and accessors for this type. Then, your rect procedure would take two points (or a point and a vector (which might have the same internal representation, but are conceptually different, and thus given distinct types)) rather than 4 integers, and the particular error you're describing becomes impossible.
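
    A toy sketch of that suggestion (in SML here, but any language with cheap datatypes works the same way):

    type point  = {x : int, y : int}
    type extent = {w : int, h : int}   (* same shape, distinct type *)

    fun rect (origin : point, size : extent) = ()   (* drawing elided *)

    val ok = rect ({x = 10, y = 20}, {w = 30, h = 40})
    (* rect ({x = 10, w = 30}, {y = 20, h = 40}) no longer typechecks *)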

  • There is an annoying elitism among functional programming proponents, implying that all current Perl and Python and C++ programmers are wrong.

    I think that's strong - I prefer Philip Wadler's construction of this idea (from an ACM SIGPLAN Notices) under the title of non-reasons for the lack of general adoption of functional languages:

    "They don't get it" Functional programming is beautiful, a joy to behold. Once someone understands functional programming, he or she will switch to it immediately. The masses that stick with outmoded imperative and object-oriented languages do so out of blind prejudice. They just don't get it.

    He holds this up as a mistaken belief frequently held by researchers. I don't think it's arrogance; I rather think it's the kind of thing demonstrated by excessively evangelical, newly converted Christians who want everyone else to 'see the light'.

    Some FP proponents insist that functional laziness is the key. Others think that laziness is unpredictable and does more harm than good. [etc]

    Well, this is still very much a research field. Issues like mutable state in functional languages, controlling dragging caused by excessive, uncontrolled laziness, etc. are hard - and I think it's a good thing that people are trying different approaches.

    To be fair, most of the debate centres around purely functional languages, which are of significant research interest: you can quite happily write industrial strength, concurrent distributed systems with functional languages - cf. the Foxnet server discussed elsewhere on this thread, and the use of Erlang by Ericsson for the software for some of their ATM switches.

    Many FP language developers would prefer to do research on new type systems, rather than create useful libraries. Many think that language choice is more important than tool choice, as if ML would somehow be better at GUI-driven applications than Delphi.

    Most FP language developers are not software developers, stricto sensu: they are computer science researchers! It is 'what they do', to do research on things like type systems. It seems a little unfair to criticise them for this. I have honestly never seen the 'language choice more important than tool choice' attitude in the functional programming community: people may have strong views (academically) about whether strict or lazy semantics are to be preferred, but they are not so blindly partisan as you paint them.

    In sum, the fact is that functional programming is (in most cases) on the cutting edge of programming language research; as such, there are bound to be academic debates about the merit of one approach or another.

    Cheers, Nick.

  • I maintain that a correctness proof of a program - in the ordinary meaning of the words - is not possible. Consider:

    What behavior is "correct" depends on the job to be done. (For instance: A perfectly correct implementation of "grep" is utterly broken if what you wanted was an implementation of "finger" or "gcc".)

    Assume you had a perfect formal correctness proving tool or methodology. You must specify what it means for the program to be correct, in a formal language accessible to the tool/methodology.

    This is exactly the process of writing a program. Did you write that program correctly? Prove it!

    This is NOT to say that what are called "formal correctness proofs", or "correctness-proving tools", are useless. Quite the contrary - they're extremely valuable. They're just misnamed.

    What these tools do is automate the comparison of two distinct descriptions of a portion (possibly all) of a design's behavior.

    This is very important, because the only effective ways known to find and eliminate the bugs in a design amount to expressing it twice, in distinct forms that use distinct modes of thinking, and comparing the two.

    In the classic "manual" (though often machine assisted) approach to software development, the two expressions are the canonical specification documentation (the "spec" or "bible") and the source code. The spec is optimized for human readability, while the source code is optimized for processing by compilation tools.

    In a large project they will be written by different people. In a small project the difference in language tends to create enough of a different mindset in a single person that the two expressions tend toward non-overlapping bug sets. In a very small project the "spec" may be the program comments. A good programmer comments well, not just to make things clear to others later, or to keep them clear to himself later, but to deliberately create this second mindset, reducing the chance for undiscovered bugs.

    The source code does NOT strictly fall out of the spec. Instead the two co-evolve as the project proceeds. The debugging/QA/verification process detects "bugs", which are defined as differences between the spec (or its non-canonical derivatives) and the executable derived from the source code. The bugs are fixed either in the source code or the spec. A spec bug may be an ambiguity, an internal inconsistency, an omission (including deliberate omissions to allow flexibility to implementors, later filled in with the choice made), an error deriving non-canonical documents (such as comments or user documentation) from the spec, a misfeature, a missing feature, an unnecessary/difficult feature, or an adequate design choice that is later replaced by something considerably better discovered during implementation. A source bug is any program behavior that doesn't match the spec in a way that doesn't expose a spec bug.

    What the so-called formal correctness proof tools can do is automate various aspects of the comparison. Once properly configured they can do, with machine-level reliability and speed, many of the same things that software QA people do. (For instance: Identify "edge" and "corner cases", determine that the behavior is right at and near the edge/corner, and generate inductive proofs that the behavior will be correct throughout the parameter ranges.) And once the tools themselves are debugged, they can do more of it, with less chance of error, than can be done by human effort. This also allows them to perform classes of testing that would be impractical without them, because of the manpower and time costs, because the complexity of the test would lead to errors and thus to missed bugs and bogus bug reports, or because they perform a class of test that is just not a good fit for a human mind.

    Finally, formal tools provide additional specification languages, distinct from the implementation language, leading both to clarity of thought and a distinct mindset in the creation of the second expression of the program's behavior, and thus to less overlap of the bug set in the two expressions.
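    For what it's worth, this "express it twice and compare" discipline is exactly what property-testing tools automate. A hedged sketch in Haskell using the QuickCheck library: an obviously correct insertion sort plays the role of the spec, the library sort plays the role of the implementation, and the property states that the two independent expressions must agree.

    import Test.QuickCheck (quickCheck)
    import Data.List (sort)

    -- The "spec": an obviously correct, hopelessly slow insertion sort.
    specSort :: [Int] -> [Int]
    specSort = foldr ins []
      where ins x []                 = [x]
            ins x (y:ys) | x <= y    = x : y : ys
                         | otherwise = y : ins x ys

    -- The automated comparison of the two distinct expressions:
    main :: IO ()
    main = quickCheck (\xs -> sort xs == specSort xs)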
  • Pike (recently discussed on /. [slashdot.org]) offers many cute features on this:
    - functions are first-class values. They can be (and quite often are) shuffled around
    - it supports closures
    - it has anonymous (lambda) functions
    - tail-recursion optimizations
    - automatic memory management, via both refcounts and garbage collector
    For more stuff, check the previous discussion.

    This does NOT make Pike a functional language by any means. It's just that it allows people to write functionally those tasks where it helps to do so.
  • ... I'd still be praising the person that invented the goto statement :-)

    Seriously: I've been coding from when I was 11 years old, and throughout school the biggest change in my coding style came with switching from BASIC to Turbo Pascal. Besides having procedures and functions, nothing really changed in getting the things to work.
    When we started using scheme in CS, a whole new world opened up: for the first time in years, I put more time in algorithms than I did in code.

    It's too bad that - in spite of the beauty of the code - there really aren't that many applications of Scheme and the likes. Without Scheme, I would have probably missed the point of OO completely.


    Okay... I'll do the stupid things first, then you shy people follow.

  • The online documentation is admittedly a bit sparse, but there's also a book [amazon.com], which, although ambitious, works as an introduction. It uses an older dialect of OCaml, Caml Light (these guys have a sense of humor too), but is still useful for learning OCaml. What we really need is a Learning OCaml, although I'm not sure what the animal would be since the camel is already spoken for.

    Believe it or not, there is an O'Reilly Book [editions-oreilly.fr], but it's only in French right now. There's been a lot of discussion on the OCaml mailing list about an English translation, but you're out of luck for now if you can't read French.

  • by DevTopics ( 150455 ) on Sunday July 16, 2000 @07:50AM (#929811) Homepage
    Well, SQL is a widely used functional language (a 4GL, or fourth-generation language). Most people have a hard time grasping the use of a non-procedural language: just think about how long it took to get object-oriented languages applied widely. And even now, most languages that you find in programs are still 3GLs. Even if you see a program advertised as written in C++, most of the time it is written in C but compiled with a C++ compiler, which hardly counts as object-oriented. We are so used to programming in 3GLs that it will take us a long time to make full use of other concepts. This is even true for SQL: most people tend to program cursors where a shift to 4GL thinking would be a lot better. It will just take time...
  • by tilly ( 7530 ) on Sunday July 16, 2000 @10:06AM (#929813)
    Or else you never learned the language.

    The "funny symbols" define a miniature grammar. Learn that grammar and it gives you guidelines about how to think about hashes etc. (Guidelines such as what you should name them.) Next use strict to stop pointing your gun footwards. Warnings exist for a reason. Turn them on as well. Finally avoid default variables except where they are necessary.

    Now follow basic sane programming guidelines and Perl is perfectly readable.

    It gets a lot better when you start using it like it was meant to be used. For instance the language is a list-oriented language for a reason. There are a lot of constructs that are syntactic sugar. Use them wisely. Note that map() and grep() give you all of the power of a pipeline without the problems of parsing and reparsing text, use them.
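    For comparison, the same pipeline idea shows up in functional languages as plain function composition. A small hedged sketch in Haskell (the task is invented): each stage passes real values to the next, with no parsing and reparsing of text in between.

    -- Count the even numbers in whitespace-separated input.
    countEvens :: String -> Int
    countEvens = length . filter even . map readInt . words
      where readInt = read :: String -> Int

    -- countEvens "1 2 3 4 5"  ==>  2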

    Now learn perldoc, use package namespaces, use Exporter, start taking advantage of the flexibility to make the style suit you...

    Perl gives you rope. Yes. But don't judge the limits of the language by people who commit maintainability suicide...

    Cheers,
    Ben
  • SQL is not even strongly relational; see Darwen and Date's Foundation for Object/Relational Databases: The Third Manifesto.

    While I agree SQL leaves a lot to be desired, Date and Darwen have some fundamental problems of their own. Here are some comments on Date and Darwen composed by Tom Etter and myself during a Relation Arithmetic [geocities.com] research subproject at Hewlett-Packard's E-Speak project:

    * Date and Darwen: All logical differences are big differences ... and all non-logical differences are small differences.

    Comment: This is wrong. All differences are logical differences. All big differences are rational [geocities.com] differences. Rationality requires more than logic - it requires a sense of proportion. One of the chief aims of our new relational model is to bring a sense of proportion into the querying of data.

    Their message here is perhaps better expressed in their discussion of conceptual integrity (p. 8), where they speak of the need to rigidly adhere to "a coherent mental model" at the foundational level, to which we say amen.

    * Date and Darwen: The first logical difference we want to discuss is between model and implementation, which we define thus. A model is an abstract, self-contained logical definition of ... the abstract machine with which the users interact. An implementation of a model is the physical realization on a real computer system of the components of that model.

    Comment: In this quote we witness the great tragedy of the computer industry:

    Ignorance of the complementary roles of man and machine exposed by the computational intractability of relational systems.

    This quote is all the more poignant because Date and Darwen are authorities in relational systems.

    Instead of a hard definition of "the abstract machine" as the "model" with which "the users interact", a partnership of humans and machines is needed in which all parties are rational about their limits and ask the others for assistance. This partnership ranges continuously from the start of the software process through the execution of workflow to the solution of immediate problems. The interaction starts when humans specify their intention with intractable queries. The machine detects intractability and queries the other actors for increasingly specific predicates until reaching tractable problem specifications.

    Actors are rational about their limits when they know when to count on the actions of others and when not to. Counting is fundamental to accountability. It is this notion of counting that we introduce at the foundation of our relational system.

    SQL, which grew out of Codd's relational model, was a "fourth generation programming language" envisioned to dramatically reduce the distinction between users and programmers. Too much cynicism has been attached to this vision. While leaving much to be desired, SQL was a giant leap forward in the computer industry because it was a small step toward a relational paradigm of man/machine interaction.

    Further Comment: We see the need to distinguish three levels here rather than the two levels of model and implementation. Our fundamental level is what we'll call relational structure. We'll turn to this in detail shortly, but suffice to say here that it is essentially self-contained. Next is the level of predication, which encompasses relational algebra and predicate calculus. Unlike the structural level, the level of predication is not self-contained but spans the gap from structure to implementation and connects the two. It is tied to structure through logic and becomes increasingly concerned with the practicalities of language and the needs of the user as it approaches the physical computer. The third level is of course that of the physical computer itself, which is the "realization" of this middle level.

    Where, then, is Date and Darwen's relational model in this scheme? It certainly involves the structural level, but it also includes a good deal of predication. The authors call their model a "self-contained logical definition of the abstract machine with which the users interact." We believe that it is better not to think of this abstract machine as self-contained, since its proper design has a lot to do with who is using it for what. We do share their desire for "conceptual integrity", but we believe this is better directed toward the level of structure.

    * Date and Darwen: The question of what data types are allowed IS COMPLETELY ORTHOGONAL to the relational model (Appendix G, p. 439).

    Comment: This is a very revealing statement, and points straight at what we mean by relational structure. Here is the key definition:

    The shape of a relation is defined as that about a relation which remains unchanged when we replace its values one-for-one by other values.

    A one-for-one substitution of values will be called a similarity substitution, and if R' can be obtained from R by a similarity substitution, we say that R' is similar to R. The shape of a relation can be formally defined as its similarity class. Thus the structural level is about entities called relational shapes, or simply shapes for short.

    Cells in a relation table with the same value will be called congruent (later we'll extend congruence to sets of cells). Another way to define a shape is as a table for which we are given a congruence relation on the cells rather than an assignment of values. Clearly there is only one such table for each similarity class (we are ignoring the row and column orders in the tables - see below).

    The concept of shape comes from Russell and Whitehead's Principia Mathematica (1912), though they used the term relation number instead of shape. They had planned to write a fourth book of Principia devoted to what they called relation arithmetic, whose point of departure was Cantor's arithmetic of ordinals. Russell had a vision of relation-arithmetic as a powerful tool that would enable us to deal with every kind of structure, including the structure of the empirical world:

    "I think relation-arithmetic important, not only as an interesting generalization, but because it supplies a symbolic technique required for dealing with structure. It has seemed to me that those who are not familiar with mathematical logic find great difficulty in understanding what is meant by 'structure', and, owing to this difficulty, are apt to go astray in attempting to understand the empirical world. For this reason, if for no other, I am sorry that the theory of relation-arithmetic has been largely unnoticed." Bertrand Russell [ref.]

    Relation arithmetic didn't get very far, however, and in retrospect it's easy to see why. The problem is that the most important combining operators for relations, such as Cartesian product and join, are not invariant under similarity. That is, if A is similar to A' and B is similar to B', it does not follow that the Cartesian product or join of A and B is similar to the Cartesian product or join of A' and B' [ref.1, section --]. To put it more simply, shapes don't combine into shapes. We'll return to this point.

    * Date and Darwen: RM PROSCRIPTION 3: NO DUPLICATE TUPLES (p. 173)

    How many words are there in the sentence "First come first served"? Four if you count duplicates, three if you don't. Duplicates arise in relation tables when you project onto certain columns, ignoring the rest. In the sentence above we must have two occurrences of "first", since the two reach out into the context in different ways. The same is true of duplicate rows, which reach out into the other columns in different ways.

    It is crucial to count duplicate rows when we need to know if several (projected) parts of a table are independent. To see why this is so, we must carefully distinguish between two meanings of independence. Let P and Q be two projections. Then:

    Logical independence: Every pair of distinct rows in P and Q is a distinct row of PQ, and there are no other distinct rows in PQ.

    Independence in place: Every pair of rows in P and Q is a row of PQ, and there are no other rows in PQ.

    Clearly the first does not imply the second. The duplicate row counts in PQ provide a numerical profile of how P and Q are correlated, and indeed it's possible for P and Q to be almost perfectly correlated despite being logically independent. Conversely, they can be independent for all practical purposes despite being logically dependent. In the real world there are occasions when logical differences are very small differences and non-logical differences very big differences.

    Etter and Bowery: RM PRESCRIPTION 3: YES! DUPLICATE TUPLES. In practice duplicate tuples are represented by a count associated with every distinct row.
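    A hedged sketch of that prescription in Haskell (the table encoding is invented for illustration): a projection keeps a count with every distinct row, rather than silently collapsing duplicates.

    import qualified Data.Map as Map

    type Row = [String]

    -- Project a table onto the given column indices, keeping a count for
    -- every distinct projected row (bag semantics, not set semantics).
    projectWithCounts :: [Int] -> [Row] -> Map.Map Row Int
    projectWithCounts cols table =
        Map.fromListWith (+) [ (map (row !!) cols, 1) | row <- table ]

    -- projectWithCounts [0] [["first","come"], ["first","served"]]
    --   ==>  fromList [(["first"], 2)]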

    * Date and Darwen: RM PROSCRIPTION 1: NO ATTRIBUTE ORDERING (p. 171)

    * Date and Darwen: RM PROSCRIPTION 2: NO TUPLE ORDERING (p. 172)

    Comment: We agree.

    * Date and Darwen: RM PRESCRIPTION 10: RELATIONS. A relation consists of a header and a body, where the header is the set of column names.

    Comment: Names, including column names, belong to the level of predication, and for now we are considering the structural level. However, the role of column names can be filled at the structural level by key rows, or more generally, by sets of rows that constitute keys. We believe it is better to do as much as possible at the structural level, if for no other reason than that computers operate on structure.

  • by Ungrounded Lightning ( 62228 ) on Sunday July 16, 2000 @02:44PM (#929822) Journal
    I have to take a position distinct from both of those above.

    once your programs typecheck, they Just Work

    This is a myth.


    I agree with the second poster on this point. But...

    [the errors] that are caught come at the expense of a possibly elaborate type system; a type system whose complexity may not be worth the benefit.

    Strong type checking in imperative languages (particularly OO languages such as C++, but to a lesser extent non-OO languages such as ANSI C) is often criticized as being too restrictive and too complex. But in my direct experience, complaints that type-checking was too restrictive or onerous were mostly made by software "cowboys", whose code tended to be horribly bug-ridden. (I trust the second poster doesn't fall into that category.)

    What strong type checking does is provide toolset support for catching design and implementation errors characterized by mismatched interfaces. When combined with a good OOP-style type declaration system you can express your intent clearly to the compiler - and to yourself and the other members of the project. The key to making it your friend is to understand it and use it in that way: Take a few moments to express the intended uses of your variables via types.

    Mismatched operands are a symptom of lack of clarity of thought about what is going on at the interface. That lack of clarity leads to much more subtle bugs than just exchanging operands - bugs you can hunt for for days if they're not pointed out, but which jump out at you as soon as a compiler complains.

    A good type system, properly used, doesn't normally get in your way. And in those rare cases where it does some languages (such as C++) give you a mechanism (such as cast to void or pointer-to-void, recast to alternative type) to explicitly override it, while expressing your intent to do so.

    While my experience is primarily with imperative languages, I suspect the same is true of functional languages - providing the type safety doesn't get in the way of code reuse (as it did to a small extent in C++ before templates, though this could be easily worked around with macros).
  • by King Babar ( 19862 ) on Sunday July 16, 2000 @09:18PM (#929828) Homepage
    Yes, but that's bad design. You could have different types for co-ordinates, and for lengths. My experience with Haskell and ML has overwhelmingly led me to believe that strict type systems are a good thing.

    Yes, but while types are a good thing, there's a limit. No one wants to take this to the extreme, and have a type for odd integers, a type for the number of students in a class (as opposed to the number of books in a library), and so on.

    Oh, wow. What you say might sound so sensible, but, I am afraid, is so very wrong. My point is that once you really do embrace typeful programming (in the sense of Luca Cardelli, or beyond...), then all of these kinds of types might make perfect sense.

    So, for starters, why not have a type that represents all and only the Odd Integers? One can think of plenty of mathematical situations where functions are defined only (or alternatively) for odd numbers rather than even numbers. In a type-impoverished system, you might have to litter your program with checks like (in pseudocode) "if ((x % 2) == 0) --we're doomed!". In a typeful environment, when you construct X as an odd integer, and use only operations defined for odd integers, then X *stays* odd, dagnab it. And that can be a huge win. Or let me put it this way: if you were programming on an architecture where valid addresses are only even integers (Motorola 68000 comes to mind?), then specialized types guaranteed to be even or odd could save you from doing something naughty.
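    To make the odd-integer case concrete, here is a hedged Haskell sketch (Odd, mkOdd, and the operations are invented names): the parity check happens exactly once, at construction, and the exported operations are closed over oddness, so the "if ((x % 2) == 0) --we're doomed!" check never has to be littered through the code.

    -- Export Odd abstractly (constructor hidden); mkOdd is the only way in.
    newtype Odd = Odd Integer deriving Show

    mkOdd :: Integer -> Maybe Odd
    mkOdd n | odd n     = Just (Odd n)
            | otherwise = Nothing

    -- odd * odd = odd and odd + 2 = odd: operations closed over oddness.
    mulOdd :: Odd -> Odd -> Odd
    mulOdd (Odd a) (Odd b) = Odd (a * b)

    nextOdd :: Odd -> Odd
    nextOdd (Odd a) = Odd (a + 2)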

    As for distinguishing the types for numbers of students vs. number of books, again, why not make the distinction if it is important for your problem? I mean, if values of those two types occur in one program, then having them around could make it impossible for you to compute something like "number of students read by each book" when you obviously really meant to do something else.

    After a while, being pedantic ceases to pay off. Actually, it ceases to pay off rather quickly, I think.

    I'm not as sure as you are. I've seen too much buggy C code where there could have been a type error (but there wasn't) and too much confusion when everything's a string (so convenient until it all goes wrong) to think of anal retentive type use as merely pedantic.

  • by Kris Warkentin ( 15136 ) on Sunday July 16, 2000 @07:52AM (#929829) Homepage
    I took a course in Scheme and did alright - coding small things to do simple stuff is not that bad. Due to my personal limitations though, I just couldn't see how one could develop big, useful applications with them. I'm not saying that it couldn't be done but that years of procedural programming seems to have hardened my brain. For whatever reason, I just had trouble wrapping my head around it.

    I've heard that functional languages are easier to learn for someone who has never programmed before. I think, however, that for people who have written a lot of procedural code, it's very difficult to get used to. Perhaps that's why: not enough people START with functional languages and, once you know procedural (or OOP), there's very little reason to switch since you can do most everything you need to. I guess you just choose your poison: Turing or Church.....
  • you might want to try ML, since it's the only language I know of which lets you declare your own infix operators. (C++ only lets you overload, not declare new ones)

    I mourn the death of "MAD".
  • Perl can construct programs "on the fly". So can Tcl, and so can (I believe?) Python. Probably not exactly with the same ease, since you must do it in ASCII string form (as in, you can't manipulate the syntax tree directly).
    Um, yeah, practically any language can output code. You don't even have to use a Turing-complete language -- here's a T-SQL quine:
    CREATE PROCEDURE print_me
    AS exec sp_helptext 'print_me'

    As long as you have something like exec(), eval() or even system() you can generate and execute code on the fly.
    --Shoeboy
  • by Anonymous Coward
    Like all languages, functional ones are good for certain kinds of jobs and not so good for others. Performance and low-level hacking are problematic for functional languages. They excel as research tools, though. I hope that advanced typing systems with monads, functors, ... will trickle down to the OO world. BTW, Simon Peyton Jones (Haskell guru) and some colleagues have some interesting research going on about a machine-independent assembly language to replace C (http://www.cminusminus.org/)!
  • Yes, but that's bad design. You could have different types for co-ordinates, and for lengths. My experience with Haskell and ML has overwhelmingly led me to believe that strict type systems are a good thing.

    Yes, but while types are a good thing, there's a limit. No one wants to take this to the extreme, and have a type for odd integers, a type for the number of students in a class (as opposed to the number of books in a library), and so on. After a while, being pedantic ceases to pay off. Actually, it ceases to pay off rather quickly, I think.

    The typing issue is confused by all the other benefits that functional programming languages provide. While there is good reason to believe that ML and Haskell are better suited to many problems than C, it is harder to argue that ML is more productive than Erlang or pure Lisp, for example, based on static vs. dynamic typing.
  • Using Scheme isn't the best solution for a lot of things. I think its biggest hindrance is its obscurity: nobody but serious hackers and CS majors knows anything about it. Anything I write in Scheme, I have to deal with, because no one else can. For my purposes it really doesn't do anything that a lot of other languages can't do just as well. And other people can hack at my C or Python code.

    -- Moondog
  • Why does it use all parenthesis?

    Simple. It makes the syntax dead simple to parse. All Scheme and Lisp code has this general form:

    (function-name arguments...)

    Of course, there are special forms, like cond and macros, etc.. I've written a parser for C, and I'd much rather write a parser for Scheme and variants. Much easier.

    Anyway, delimiting expressions with parentheses makes parsing much simpler, and with an editor that matches parentheses for you, the code is very easy to write.
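    To make "dead simple" concrete, here is a hedged sketch of a complete S-expression reader in Haskell (whitespace handling kept minimal for brevity) - a handful of lines, versus pages for a C parser.

    data SExpr = Atom String | List [SExpr] deriving Show

    -- Parse one expression; return it plus the unconsumed input.
    parseSExpr :: String -> Maybe (SExpr, String)
    parseSExpr s = case dropWhile (== ' ') s of
        '(' : rest -> parseList rest []
        cs         -> case span (`notElem` " ()") cs of
                        ("",   _)    -> Nothing
                        (atom, rest) -> Just (Atom atom, rest)

    -- Collect list elements until the closing parenthesis.
    parseList :: String -> [SExpr] -> Maybe (SExpr, String)
    parseList s acc = case dropWhile (== ' ') s of
        ')' : rest -> Just (List (reverse acc), rest)
        rest       -> do (e, rest') <- parseSExpr rest
                         parseList rest' (e : acc)

    -- parseSExpr "(car (cdr xs))"
    --   ==>  Just (List [Atom "car", List [Atom "cdr", Atom "xs"]], "")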

    Woz
  • ...I've found that most of my debugging needs are solved by:

    Typechecker (~97%)

    I have yet to see a difficult bug that is caught by a typechecker. Sure, typos, incorrect order of arguments, etc. These are the easy bugs, the ones that take little time to deal with in any system. The difficult bugs are ones where the program works, but works incorrectly. These are the time-suckers.

    Fine-grained inspection of the system (objects and execution state) are what make for a good debugging system. gdb is okay -- certainly better than print statements and Deep Thought. Squeak [squeak.org] still has the most pleasant and helpful debugger I've used -- in part because it was never an afterthought, being a language not created by mathematicians with fantasies of Correctness.

    I don't think static typing is very mathematical anyway. Most higher math deals with things in terms of properties, not types. The analog in CS is methods. If something implements mathematical operations then it's a number, what the object actually is isn't terribly important.
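    Interestingly, Haskell's type classes capture much of this "properties, not types" view: anything that implements the numeric operations can be used wherever a number is expected. A hedged sketch with an invented 2-vector type:

    -- V2 counts as a number solely because it implements Num's operations.
    data V2 = V2 Double Double deriving (Eq, Show)

    instance Num V2 where
        V2 a b + V2 c d = V2 (a + c) (b + d)
        V2 a b * V2 c d = V2 (a * c) (b * d)
        V2 a b - V2 c d = V2 (a - c) (b - d)
        abs    (V2 a b) = V2 (abs a) (abs b)
        signum (V2 a b) = V2 (signum a) (signum b)
        fromInteger n   = V2 (fromInteger n) (fromInteger n)

    -- Generic numeric code now accepts V2 unchanged:
    double :: Num a => a -> a
    double x = x + x    -- double (V2 1 2) ==> V2 2.0 4.0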
    --

  • Moderators: the parent is not (+1, Informative), but (-1, Overrated). This lengthy post is all wrong. I even checked the troll forums to make sure this wasn't a troll and found nobody claiming it. And in my experience no troll takes so much time to write a single post.

    in point of fact, 'pure' functional programming (no state, no side effects) can only handle the kinds of problems that can be solved by a pushdown automaton. in theoretical terms, they can only handle problems up to and including context free grammars.

    [tons of BS snipped]

    The lambda calculus, which is the logical foundation of functional programming, is Turing complete. Period. There's no getting around this. If a language has application and abstraction, it is Turing complete, and can compute all computable functions.

    'pure' functional programming can't do that. it involves storing a value, and that's something functional languages don't do.

    Bullshit. "Storing values" is a function from an initial state to a modified state. This is trivial to express functionally as a function on states.
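    A hedged illustration of that point in Haskell (the store encoding is invented): the entire "mutable" store is an ordinary immutable value, and assignment is just a function from one state to the next.

    import qualified Data.Map as Map

    type Store = Map.Map String Int

    -- "Storing a value" as a pure function from state to state.
    assign :: String -> Int -> Store -> Store
    assign = Map.insert

    -- x := 1; y := 2; x := x + 1   -- expressed as function application
    program :: Store -> Store
    program s0 =
        let s1 = assign "x" 1 s0
            s2 = assign "y" 2 s1
        in  assign "x" (Map.findWithDefault 0 "x" s2 + 1) s2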

    to make themselves Turing-complete, functional languages rely on a trick known as 'stream programming', which rests on a sneaky form of variable storage called 'deferred execution'.

    BS again. Streams have nothing to do with the Turing-completeness of functional languages. The lambda calculus is Turing complete; evaluation regime (strict or lazy) makes no difference.

    we've created something static and unchanging that still manages to act like it has state.

    You simply don't grasp the idea that states can be first-class objects you manipulate functionally. Yes, you can simulate state in a purely functional language. That doesn't make the language any less functional, since the semantics of the language itself have no notion of state. Functional programs can explicitly implement the notions of states and state transitions.

  • by kaphka ( 50736 ) <1nv7b001@sneakemail.com> on Sunday July 16, 2000 @09:45PM (#929847)
    This lengthy post is all wrong.
    I concur. My computability theory knowledge is mostly swapped to disk right now, but I know that a pure functional language can easily be Turing complete. There's a reason why the term "recursive function" is synonymous with "Turing-computable function".
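    Right - and general recursion needs nothing beyond application and abstraction. A small sketch in plain Haskell: a fixed-point combinator gives you unbounded recursion (and hence non-termination, the hallmark of Turing completeness) with no state anywhere.

    -- General recursion from application and abstraction alone.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Factorial without ever naming itself recursively:
    factorial :: Integer -> Integer
    factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

    -- And a perfectly functional infinite loop:
    omega :: a
    omega = fix id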
  • Types are in a class system. You have your most generic polymorphic types, and then the set of values can be restricted as much as the programmer needs. I can't explain it exactly, but I think it is this that prevents the need for those weird hacks you described.

    Sounds to me like the sort of type safety C++ provides.

    Types may be basic, enums, aggregates (structs) or classes. Classes can have arbitrary characteristics - including classes that look, act, and are stored like variables of the basic types but with arbitrarily limited ranges. Class types can be arranged in a forest of inheritance trees. Arguments of a type at a particular node in the inheritance tree allow passing of objects of that type or any descended from it. A class can be defined in a way to prevent further subclassing.

    Sound like the same sort of thing to you?
  • And it probably didn't help that Edsger Dijkstra was the Goto Considered Harmful man, since everybody knew those Europeans didn't really know how to program real programs! Just look at Algol, compared to real languages like COBOL or FORTRAN for crissakes!

    I do hope that was intended to be ironic...

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction

  • While functional programming has always been considered more "elegant", a number of issues have prevented it from taking hold as a popular technique, beyond a lack of popular tools, libraries, or operating system interfaces.

    The biggest issue: people don't think the way functional languages want them to. Getting to the point of "elegance" in Lisp or Haskell takes most people so long that it isn't practical to make the effort. Essentially, it's above our heads. C++ has a related problem - the language is so complex that few can use it advantageously.

    People should probably just accept this and leave functional languages in the dustbin of history. Looking at the real growth in languages - Java and Perl - we see two languages that take different paths to making things easier for programmers - Java through a rich set of libraries, and Perl through a more "human" language structure.

  • Interesting reading; I'll give ya that much. Right now I'm going through the Hoare paper on your site. Semi-interesting.

    But it seems to be more along the lines of defining your language in a logical framework (a Prolog-like language), then defining the semantics and theorems of it.

    Suggestion: try looking at Twelf (www.twelf.org). You can define any damned language you want in it, and for some, you can automatically derive theorems from it, and go up & down. It's not too good with the theorem proving between layers, but I doubt that you could make a useful, sound top-down logic anyway.
  • by Black Parrot ( 19622 ) on Sunday July 16, 2000 @08:00AM (#929860)
    Functional languages seem to lend themselves to formal correctness proofs. I don't mean to imply that it is not possible with other languages, only that it is (IMO) somewhat "natural" for functional languages.

    For instance, check out ACL2 [utexas.edu]. This is a LISP-derived system that can both execute code and do semi-automated correctness proofs of same. It works by having you propose correctness theorems about the code, and (cool part!) expressing those theorems in the same language as the code itself.

    <trivia>ACL2 was used to validate parts of AMD's K5 and K6 FP operations after Intel's embarrassing faux pas with the Pentium FP unit. I've heard that it was used even more extensively on the Athlon. (Strictly speaking, what ACL2 validated was a model of these processors, since the processors themselves are not actually written in ACL2's input language.)</trivia>

    --
  • Too true. Of course, the way to Make Money Fast is to spot the next big language before anyone else does, and become an expert in it. Write some free software in it or something, so it can go on your CV. Then when the curve goes exponential and all the job adverts are offering training in the language, you can name your price.

    Hmmm. Anyone got any historical data on Python popularity?

    Paul.

  • by tmoertel ( 38456 ) on Sunday July 16, 2000 @08:01AM (#929869) Homepage Journal

    Given that most of the certified "professional" programmers that I have bumped into during my last few years of consulting weren't quite up to the, er, daunting task of writing sensible code in VB -- or code that just plain worked, for that matter -- I doubt that they would understand, let alone appreciate, wonders like Scheme's call-with-current-continuation.

    The legions of programmers who have entered the market recently only because they see it as a quick and easy way to make money aren't interested in taking on the more-disciplined, almost mathematical mindset that seems to allow for maximum immersion into functional languages. Rather, they'll do whatever M$ says is the best way to earn the big, easy bux.

    And that's why FP languages aren't more mainstream. (Thank goodness that Perl has map! At least I can sneak FP into one corporate semi-approved language.)

  • > A lot of people cannot get used to thinking recursively.

    All the more reason to use them to teach programming, IMO.

    --
  • bagh, back in my day, we didn't need any functional programming languages. LISP and SCHEME were for academia and procedures were for hard working people.

    Everything was procedural and when we wanted to develop an algorithm, we used GOTO's and FOR NEXT's and we liked it. Nowadays, everyone wants to use fancy schmancy functions or bloated Classes and Objects.

    Bagh.
    Data Structures are for Lazy young Silicon Valley kids that don't know good programming from a VAX prompt!

  • The ideas behind that post are brilliant, but it's cookbook Econ 101 and applies to many products, technologies and industries. Think about VHS/Beta, internal combustion engines, whatever.
  • That's quite a claim. References? By what measure? Lines of code? Useless. Number of projects started? Please. Function points? What? Back up this claim or let VB molder in the grave it deserves.

    Number of projects. I've seen this mentioned in several sources--yes, one of which was from Microsoft--but I can believe it. I should clarify that the correct statistic is "60% of all new *desktop* applications are written in Visual Basic." The majority of all software is still written for embedded systems, not desktops.

    Most desktop software consists of so-called enterprise applications: database front ends, form-filling programs, time reporting programs, sales trackers, etc. Visual Basic makes sense here.
  • Oh, wow. What you say might sound so sensible, but, I am afraid, is so very wrong. My point is that once you really do embrace typeful programming (in the sense of Luca Cardelli, or beyond...), then all of these kinds of types might make perfect sense.

    This is the kind of thing that scares people away from FP.

    What you're suggesting is pedantic in the extreme. In OOP, the complexity moves from raw code into the class hierarchy. That is, the code may look simple, but there's an elaborate framework that you need to understand as well. In the type of programming you're proposing, the complexity moves into the type system. Once you've painstakingly developed all the types you need, including algebraic properties and restrictions, then you've done most of the work. The resulting code may look simple, but, again, there's complexity elsewhere.

    In general, it is a mistake to build up fancy mechanisms for simple tasks. In most programming, the typing is very simple. I find it easier to develop a program without regard to type, and then put the types in later. Otherwise I'm doing extra work for a prototype that may, in the end, be incorrect. Imagine if you couldn't just email a note to someone without diagramming each sentence to show that it is, in fact, grammatically correct. To a linguist, yes, maybe that is interesting, but not to everyone else. Programming is not the same as programming language theory. If it were, then no one would ever have been able to write code in C, C++, assembly, or Lisp, as those languages don't force you to specify every step of the way.
  • by Junks Jerzey ( 54586 ) on Sunday July 16, 2000 @03:55PM (#929886)

    > it is harder to argue that ML is more productive
    > than Erlang or pure Lisp, for example, based on
    > static vs. dynamic typing.

    Really? I find static typing to be a huge help in debugging my programs. I wouldn't give it up for anything! It may come at the intellectual overhead of learning the type system, but once you do it really does make sense.


    Really :) Compared to C, both ML and Lisp provide interactive development, garbage collection, higher order functions, and less bookkeeping clutter. With all of that on the table, with both languages far beyond C in those ways, it is difficult to tell how much of an issue the type system plays in productivity. You will find many notable experts who prefer dynamic typing (Peter Norvig and Kent Pitman, for example), and you will find many other notable experts who prefer static typing. So it is more of a personal preference than anything else.
  • by Ungrounded Lightning ( 62228 ) on Sunday July 16, 2000 @03:56PM (#929888) Journal
    Many of these indirect benefits are not actually characteristic of functional languages, but are easily available in imperative languages. Some examples:

    - Concurrency. Writing concurrent programs in a functional language is so much more natural. It's easier to avoid certain kinds of race conditions too, since you don't update variables in a functional language.

    OOP does this well, also. There are several approaches. One method is to use "actors" (objects whose instantiation represents a thread of execution, where message-send represents an actual intertask message rather than a function call).

    A major part of preventing interthread trouble is to use a style where variables are private, manipulated by accessor functions. Then you can build the concurrency control into either the accessor or the inter-object messaging.

    It's difficult to design a language (and many smart people have tried) that's imperative, type safe, and powerful.

    Actually, it's been done. A notable example is C++. A problem, though, is that classes and books on C++ tend to put their focus on the wrong parts of the language, and neglect to teach the use of types as a means of encoding the programmer's intent.

    Features of functional languages like garbage collection and non-updatable values make it easier to define a language with a powerful type system.

    Non-updatable values ("const") are readily available in C++.

    Generalized garbage collection can be built on top of it with some effort. But the language supports four distinct storage regimes for objects (static, dynamic, member, and heap), rather than a garbage-collector-based language's one.

    The politically-correct method for handling heap-allocated objects in C++ is with explicit allocation and freeing, then building (or importing by inheritance) appropriate automation for particular classes' usage patterns as appropriate.

    Using garbage collection as the primary storage regime has several problems:

    - It leads to a style of constructing composite objects as a web of primitive heap-allocated objects, rather than a single object. This vastly increases the storage management overhead.

    - It has unfortunate interactions with finalization (destruction), a very important semantic construct in C++'s style of OOP.

    - Garbage collectors tend to work well only for particular patterns of usage - and are usually optimized for program development. This leads to variable and hard-to-characterize latencies in time-critical production applications. (C++ lets you use GC only on those classes that need it, and to use distinct garbage collection schemes for different objects, according to their needs.)

    (What keeps C++ from being a nearly ideal language for practical OOP is an obscure hole in the standard: The deliberate, explicit failure to specify which overriding of a virtual member function is invoked if one is called during construction/destruction of member variables of object type. It SHOULD be the derived class' version from the first executable statement of the constructor to the last of the destructor, and base class' version otherwise.)

    - Machine independence. Making a type system usually means hiding away the details of the machine, and this usually means that the execution of your program is completely predictable. (Compare to C/C++ "undefined" behavior!)

    This is the result of C++ inheritance of C's ambiguous primitive data types. The standard workaround (also available in C) is to construct a small, machine dependent include file that defines a set of unambiguous types (i.e. int_16, uint_32), then use THOSE instead of the primitives elsewhere in the code.

    Signature Ascription - This allows you to define abstract data types by naming a type and some operations on it (and their types). This is similar to header files in C (much more refined), but thanks to the type system, you can guarantee that the user can ONLY use your abstract data type the way you intended. They cannot cast, subclass, or use any other tricks to get at your datatype.

    For defining abstract interfaces: Abstract base classes. (One or more member functions are not defined until the concrete subclass.)

    To protect the guts from tampering: private member variables and private functions.

    Yes, you can get around the protection. C++ explicitly gives you enough rope to hang yourself. But you have to express your INTENT to violate the protection by a particular set of casts, or you end up buried in warning messages and NOT trashing the variable.
  • Logic programming languages based on Horn clause resolution (i.e. all of them) are restricted to finding solutions to formulae of very low logical complexity. Whilst logic programmers have been ingenious in working within this universe, I think it's no accident that researchers working in applications of computation to logic have overwhelmingly preferred functional to logic languages, the big exception being algebraic specification (unsurprising, since algebraic systems have axioms of low logical complexity). Also, functional languages have some nice correlations with linguistic entities.


    I think the "relations are more general than functions" argument is a red herring, since it is easy to translate between programs coded in a pure (i.e. the minimal heart) functional and pure logical language. Logic languages are nice, but in my opinion they are the best tool for a restricted set of problems, whilst functional languages are useful over a much wider domain.

    • Functional languages are slower, as a rule, than imperative languages. Compilation, rather than interpretation, helps - but few functional languages can approach the speed of a well-written imperative program. Of course, there are many occasions where speed doesn't really matter, and even when it does, performance is often limited by I/O rather than CPU - but it's still a factor.
    • A lot of functional languages still have the element of research tool rather than real language about them - particularly in the area of their libraries and other associated bits. They don't support things like graphics toolkits, CORBA bindings, parser generators, database interfaces, and the like. Similarly, the toolchains are generally less sophisticated - while some very clever debuggers have been written for functional languages, they haven't been turned into real tools in many cases.
    • A lot of people just don't get non-imperative languages. I taught Haskell as a first computer language at a university, and people frankly struggle with it - much more than they seem to with imperative languages.

    I like functional languages - the project I'm working on uses them extensively, and Scheme is great to work in. It's a shame that they're not more widely used.

    And, while we're naming our favourite alternative languages, you could try Mercury [mu.oz.au], a logic programming language designed for real-world programming. It compiles to C, it's got the best type checking of any language I've ever used, it's fast, and its compiler is good. The fact that it's developed at my former university has nothing to do with it :)

  • It supports destructive updates (i.e., analogs of the assignment operator in C or C++) and yet still is purely functional.

    Mercury also does destructive updates. You are quite right, they are *very* important for making functional languages run fast in the real world.

    Thanks for the pointer.

    <em>While my experience is primarily with imperative languages, I suspect the same is true of functional languages - providing the type safety doesn't get in the way of code reuse (as it did to a small extent in C++ before templates, though this could be easily worked around with macros).</em>

    I am definitely not any kind of expert in functional languages, but I have been studying them for a while now. You see, the types in functional languages, at least Haskell, are quite different from what you would experience in C or Pascal. Types are in a class system. You have your most generic polymorphic types, and then the set of values can be restricted as much as the programmer needs. I can't explain it exactly, but I think it is this that prevents the need for those weird hacks you described.
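    A hedged sketch of what that class system looks like in practice (the function names are invented): you start from fully generic type variables and narrow the admissible types with class constraints, rather than committing to one concrete type.

    -- Fully polymorphic: works for any type whatsoever.
    firstOf :: [a] -> Maybe a
    firstOf []      = Nothing
    firstOf (x : _) = Just x

    -- Restricted just enough: any type with an ordering, nothing more.
    largest :: Ord a => [a] -> Maybe a
    largest [] = Nothing
    largest xs = Just (maximum xs)

    -- Restricted further still: numeric types only.
    total :: Num a => [a] -> a
    total = sum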
  • by quintessent ( 197518 ) <my usr name on toofgiB [tod] moc> on Sunday July 16, 2000 @08:02AM (#929899) Journal
    (are(languages(These), nice)
    (Easy(to_write(code(bug_free))))
    (If(can(read(you, them)))))
  • My guess on why functional languages haven't been as popular as languages such as C and Java is that they're typically good at doing one particular type of job, but not so hot generally. Consequently you end up with a bunch [dmoz.org] of different languages to solve different problems and no one language can reach a critical mass of widespread usage.
  • My experience (22 years on the job) showed that languages which are easy to use are considered "childish" or worthless by management.

    I wrote a connectivity (system) application using REXX.
    Two ES/9000 mainframes and numerous AIX workstations, access to DB2 from both mainframes (which were NOT coupled).
    Plus a fool-proof user interface that also worked around bugs in an IBM "user interface".
    I saw persons looking at the code, saying "looks like VB"...

    I know that a US finance agency runs a complete system using REXX.
    I know how much trouble the developer at IBM had getting her implementation of REXX for IBM's AIX machines made "official".
    (We used the free interpreter REGINA until management gave in.)

    If something does look easy, it's considered worthless.
    What I wrote in 1000 assembler lines in 1979 could be written in 50 lines using REXX.
    But those 50 lines don't give you credit.
    Write 1000 lines and management thinks you're really hip, a real cool cat, a hoopy frood.
  • Functional languages are slower, as a rule, than imperative languages. Compilation, rather than interpretation, helps - but few functional languages can approach the speed of a well-written imperative program.

    Imperative languages have limitations on how far their code can be optimized. This means we will eventually see compiled functional languages which produce faster code than ANY imperative language (assuming a time constraint and/or programmer-skill constraint). There is just more analysis that can be done on functional code, so the machine can do more of the work for you. The only real problem is that someone needs to put a lot of work into the compiler.

    They don't support things like graphics toolkits, CORBA bindings, parser generators, database interfaces

    Total bullshit! Haskell has several different packages for ALL of those things, and I'd say most serious functional languages probably have most or all of them. Actually, imperative languages are the ones with lousy parser generators. You have Lex- and Yacc-style parser generators for Haskell, but Lex and Yacc are total crap compared to the monadic recursive descent parsers in Haskell. These parsers have all the power of Yacc, but they are just a Haskell library and an include, so you can add your own stuff. Ever want to intermix context-free grammars and regular expressions freely - i.e., not this stupid "Lex does the regex and Yacc does the CFG" business? It's cake under Haskell. Or you could write millions of other shortcuts to make your CFGs much smaller and simpler. Parsing is one of those things functional languages were made to do, and the moderately good ones kick the shit out of all imperative languages at doing it (see the sketch below).

    Also, it's worth pointing out that functional languages have had stellar results in "ultra high reliability" programming situations (like phone switching networks) - i.e., when you cannot afford for the program to break, you force yourself to write it in a manner where it can be "nearly" proven unbreakable.
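    A hedged sketch of the parser point, using the Parsec combinator library for Haskell (one such library among several; the poster names none): the regex-ish token level and the context-free grammar live in the same language and mix freely.

    import Text.ParserCombinators.Parsec

    -- The "Lex" part: a regex-ish token rule, written inline.
    number :: Parser Int
    number = read <$> many1 digit

    -- The "Yacc" part: the CFG, in the same language, same file.
    expr, term, factor :: Parser Int
    expr   = term   `chainl1` addop
    term   = factor `chainl1` mulop
    factor = number <|> between (char '(') (char ')') expr

    addop, mulop :: Parser (Int -> Int -> Int)
    addop = (+) <$ char '+' <|> (-) <$ char '-'
    mulop = (*) <$ char '*' <|> div <$ char '/'

    -- parse expr "" "1+2*(3+4)"  ==>  Right 15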
  • Great! If you don't need something more natural than assembly code, why don't we just use assembly language?
  • I'll start writing in a functional language just as soon as it's supported inside any OS kernel that has a large enough user base to be worth developing for. Until then, FP languages are useless to me, though I might of course use concepts and constructs that people associate with FP (even though for the most part those things have been known and accepted as good practice in non-FP contexts for decades).
  • What is surveyability?
  • Remember learning to program for X Windows or other graphics systems back when you were a C programmer? It was a major step - instead of writing code where all the subroutines belonged to you, except library stuff you could call to do things for you and return with the answers, you needed to learn to write code that
    • Had functions with arguments in a format that event-meisters could call
    • Covered all the reasons an event-meister would try to call you
    • Used C's appalling syntaxes for asking the event-meister to call your routines if something happened instead of writing your own event loop, because your program was no longer in charge of what got called when.
    Maybe that's as unnatural to do in a functional language as in a procedural language, but you have to give the functional folks as much slack as you've used. My friends who write Smalltalk swear by the stuff. A friend of mine used to write window-app prototypes in "winterp", an Xlisp windowing package that ran on X Windows, and said that the prototypes generally took much less time to write than the product C code, but also did more.

    The real issues are the ability of teachers to teach the stuff, the quality of books to teach or learn from, and the availability of programming environments and tools for not only learning but also production. The latter used to be a hard problem, but now that you can include a CD in a book or put a few tens of MB on a web site, and everybody has access to Windows, it's less difficult, and we return to the previous problem of bootstrapping the learning process. Java pulled it off, but it was in the right place at the right time, with "fast web application development" as the hook when that was needed. Academics aren't always as good at self-promotion as commercial marketing departments.

  • Err, what the program must do can be defined in a declarative language, where you specify assertions and invariants that must be true in every execution of the program (actually, temporal logic statements).

    An automatic formal correctness validation program will then verify these against some code that describes _how_ to achieve those results.

    For instance, check Spin/Promela, and the very simple Mutual Exclusion examples. Declaring what a Mutual Exclusion algorithm must do is simple, and very different from actually writing a program to do it.
  • Definition of read: to be able to tell the meaning of something written quickly. OK, that does not necessarily fit Perl very well ;)

  • Check out this link I found from Kuro5hin.org to the Clean functional language: http://www.cs.kun.nl/~clean/index.html

    Intro to the Clean language: http://www.cs.kun.nl/~clean/About_Clean/tutorial/tutorial.html

    I think their 2D platform games section shows that Clean can be used for practical applications, and that functional programming is not just a research toy. They also seem to have a nice IO library. Taken from the intro to Clean, these functional implementations of common mathematical functions just seem SO elegant:

    Exponentiation:

    power :: Int Int -> Int
    power x 0 = 1
    power x n = x * power x (n-1)

    Factorial:

    fac :: Int -> Int
    fac 0 = 1
    fac n = n * fac (n-1)

    Maximum:

    maximum :: Int Int -> Int
    maximum n m
    | n >= m = n
    | otherwise = m

    From a quick scan of it, Clean looks VERY cool.
  • No it's not. Check your facts before posting.
  • One problem with GCC back-end is that the intermediate language sucks horribly. It's completely inadequate for expressing parallelism, ordering and dependence issues.

    For this reason, all OSS will suck very, very badly on IA64, btw. :-(
  • Some English company is maintaining it, and it will cost you lots of £££ if you want a copy.

    That'd be the imaginatively named Research Software Ltd., in Canterbury. That's basically a trading name for David Turner, the guy that created Miranda (and one of my lecturers at University). I guess he's still making at least a token amount of money from Miranda. There's no other reason to keep it closed, particularly given free alternatives like Orwell and Haskell.

  • Oh, wow. What you say might sound so sensible, but, I am afraid, is so very wrong. My point is that once you really do embrace typeful programming (in the sense of Luca Cardelli, or beyond...), then all of these kinds of types might make perfect sense.

    This is the kind of thing that scares people away from FP.

    What you're suggesting is pedantic in the extreme. In OOP, the complexity moves from raw code into the class hierarchy. That is, the code may look simple, but there's an elaborate framework that you need to understand as well. In the type of programming you're proposing, the complexity moves into the type system.

    Uh, not in the examples *I* gave. Which you ignored, for some reason. None of the examples I gave required any complexity in the typing system. Shoot, I didn't even mention anything that would require type inference, or anything you couldn't do in OO. (Which is, of course, why I gave Cardelli as the reference: he's an OO champion, not a flunky for the flat-earth functionalists. :-))

    Maybe a more charitable reading of what you said is that typeful programming could induce complexity in the type hierarchy (rather than the system itself). And I do have some sympathy for that. But one of the nice things about, e.g., Haskell, is that it performs type inference in addition to type checking, so that in many cases, I can gain a lot of the nicest benefits of strict typing for relatively little cost.
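
    For instance, a minimal sketch of inference at work (nothing beyond the Prelude assumed):

    -- No annotation needed: the compiler infers the most general type,
    --   compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (g x)

    -- ...yet misuse is still rejected at compile time:
    -- compose not length   -- type error: Int is not Bool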

    Once you've painstakingly developed all the types you need, including algebraic properties and restrictions, then you've done most of the work. The resulting code may look simple, but, again, there's complexity elsewhere.

    But, of course, very few people even in functional circles program like that. You really don't have to. But even if I were to concede that programming thoughtfully takes more work, you point out for me that the resulting code may look or be simpler. If for no other reason than the fact that you have cleanly expressed and yet strongly encapsulated the assumptions you are making about the values you are manipulating. Now, if somebody comes along after you and has to modify code you wrote, yes, they will have to absorb some aspects of the design that go beyond the neat and simple functions used to express the answer. But at least they know where to look for this information.
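
    A hypothetical sketch of assumptions being expressed and encapsulated in a type (Email and mkEmail are made-up names):

    newtype Email = Email String

    mkEmail :: String -> Maybe Email
    mkEmail s
      | '@' `elem` s = Just (Email s)   -- crude check, illustration only
      | otherwise    = Nothing

    -- anything taking an Email can now rely on the invariant,
    -- and a later maintainer knows exactly where to look for it
    send :: Email -> IO ()
    send (Email addr) = putStrLn ("sending to " ++ addr)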

    In general, it is a mistake to build up fancy mechanisms for simple tasks. In most programming, the typing is very simple.

    In general, I think people over-estimate the simplicity of programming. Indeed, if programming were so simple, then we would expect to see a far larger number of accomplished programmers than we do see.

    [snip]

    Imagine if you couldn't just email a note to someone without diagramming each sentence to show that it is, in fact, grammatically correct.
    Now this is the ironic part. Categorical grammarians (among others) have explicitly suggested that we don't have to do this to communicate because the constituents of sentences can be understood to have specific (often higher order) types, known to us all, and that the computation of meaning depends strongly on these types. In other words, we can just knock off an email to somebody in simple language precisely because we are so constrained by some kind of type system. Now, interestingly, research done on what this type system might look like by Aarne Ranta suggests it *is* a doozy of a type system. (And is possibly mathematically inconsistent, to boot.)
    Programming is not the same as programming language theory. If it were, then no one would ever have been able to write code in C, C++, assembly, or Lisp, as those languages don't force you to specify every step of the way.

    Wow. I guess I'm really missing something here. Leaving out Lisp for a moment, the usual argument about C, C++ or assembly is just the opposite of what you say it is. The problem with many procedural languages is precisely that you have to specify every step of the way, every check for a bad value that can't be excluded by the type system, every memory allocation and de-allocation, every decision about sequencing and evaluation order and everything else. You've done all that, but you still might have missed the branch or failed condition that now causes your program to dump core.
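
    Compare a small Haskell sketch in which the bad value is excluded by the type rather than guarded against by a check someone might forget (safeDiv is a made-up example):

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- callers *must* handle both cases; forgetting the failure
    -- branch is a compile-time error, not a core dump
    use :: Int -> Int -> String
    use x y = case safeDiv x y of
                Nothing -> "division by zero"
                Just q  -> show q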

    Of course, there are reasons for doing this, the key one being execution efficiency, or what we might call inner loop thinking. The irony of this, of course, being that this kind of code is the kind most likely to have been directly transliterated from some kind of mathematical proof or formula, where typing considerations were in effect. So what the programmer did was some kind of fancy translation from a theorem to a nicely optimized block of low-level instructions that are likely to execute perfectly. Meanwhile, that programmer or some other didn't check for an error returned by the 13th fetch of data from disk, leading to a garbage result.

  • The first reply to your post was perfectly correct about the theoretical limitations of imperative languages (including assembly), but I was actually talking about a more practical limitation: time. I will explain. First, assume you have a non-trivial constraint on programmer skill, preexisting libraries, and time. Now, our programmer can probably write a faster program in C than in assembly, since he can take advantage of things which make writing the program faster, and can spend more time profiling, adding complex features which boost speed (say, by using function pointers), etc. Yes, he *could* do all these things in assembly, but he would need MUCH more time and/or skill. Functional languages provide another layer of abstraction, so there is more work for the machine to do, and the programmer can spend more time worrying about the details which are really important to making things faster. Example: it's easy for a Haskell or ML compiler to make functions which write functions, i.e. safe, compiler-level, structured self-modifying code. I'd really love to see you do that in assembly. Sure, your self-modifying code tricks might be faster, but the functional language compiler's self-modifying code tricks are not going to fuck up, so Joe Average Programmer can use them. Implementing these tricks in assembly is something only Mr. Assembly God can do, and only after a great deal of time and testing.
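
    A minimal Haskell sketch of the kind of thing meant by functions which write functions (the names are illustrative):

    -- a function that builds new functions at run time
    makeAdder :: Int -> (Int -> Int)
    makeAdder n = \x -> x + n

    -- gluing machine-made functions into a pipeline
    pipeline :: [Int] -> [Int]
    pipeline = map (makeAdder 10) . filter even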
  • This means that we will eventually see compiled functional languages which produce faster code than ANY imperative language.

    Yes, I'm aware of the optimization potential of functional languages, but it's just that - potential. Even the best functional or logic programming language environments aren't there yet. We're talking about the now here, rather than the future.

  • by Christopher B. Brown ( 1267 ) <cbbrowne@gmail.com> on Sunday July 16, 2000 @08:11AM (#929966) Homepage
    SQL may be nonprocedural; it is not particularly functional, as:
    • It does not make functions first class objects;
    • It does not eschew side-effects.

    SQL is not even strongly relational; see Darwen and Date's Foundation for Object/Relational Databases: The Third Manifesto. [amazon.com]

    And see any of Joe Celko's [celko.com] books on SQL to see how weakly people tend to use what relational properties SQL has.

    But as for calling SQL a "functional" language, while there may be some abstruse arguments by which some variation on it could be argued to be somewhat functional, it is certainly not recognized as such in the way that Haskell or ML are...

  • Yeah, you're right. Although he takes a very long time getting around to it, the parent post seems to be well confused about what constitutes a functional programming language. The author seems to think they're equivalent to push-down automata (finite state machines with stacks, OK?), but this Just Ain't So.

    The source of the confusion seems to be that he assumes functional programming languages are just fixed trees of recursing function calls. If this were so, he'd be right. They aren't, though.

    Functional programming languages allow functions to be passed around as values, and these have full first-class status - so you have functions that take functions (which can do loops, or if-then-else constructions), and functions that return functions, which can do all kinds of funky stuff, including, importantly, functions that return a function that does the same thing, but with one parameter already set.
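
    In Haskell, for instance, that last trick is just partial application:

    add :: Int -> Int -> Int
    add x y = x + y

    -- add, with its first parameter already set
    increment :: Int -> Int
    increment = add 1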

    Passing functions around like this seems to be what the author is getting at in his blather about "stream programming" and "deferred execution". It's not just a hack, though. It's the very heart of the lambda calculus - it's how, in fact, you derive arithmetic from simple functions alone.
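
    A sketch of that derivation in Haskell, using Church numerals (illustrative, not production code):

    -- a number n is the function that applies f n times
    type Church a = (a -> a) -> a -> a

    zero, one :: Church a
    zero _ x = x
    one  f x = f x

    suc :: Church a -> Church a
    suc n f x = f (n f x)

    plus :: Church a -> Church a -> Church a
    plus m n f x = m f (n f x)

    toInt :: Church Int -> Int
    toInt n = n (+ 1) 0   -- toInt (plus one (suc one)) == 3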

    *However* the author is clearly a clever chap, and has obviously come up with all this stuff himself, so I say let him keep his mod points. Original thought should be encouraged - even when it's wrong.
  • I don't think static typing is very mathematical anyway. Most higher math deals with things in terms of properties, not types.

    I don't know where you're getting this from, but it's really not true. If you've ever seen the rules for the static semantics of a language written down and the proofs of safety that accompany them, I don't see how you can not consider this math. The isomorphisms between logic, set theory, and types are well known, as well... care to elaborate?

    I wasn't saying that static typing can't be expressed mathematically, but that mathematical expression is not well done with static typing.

    I'm thinking of vector spaces and abstract algebra, where you have operations in a space. Something like * (times) can have an analog in a number of different spaces (i.e., different types). The implementation of * is dependent on the types involved, but the concept of * is abstract from the types.
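
    Haskell's type classes capture roughly this: one abstract operator, a separate implementation per space. A hypothetical sketch (Mult and M2 are made-up names):

    class Mult a where
      (.*.) :: a -> a -> a

    instance Mult Int where
      (.*.) = (*)

    -- 2x2 integer matrices: a different space, its own multiplication
    newtype M2 = M2 (Int, Int, Int, Int)

    instance Mult M2 where
      M2 (a,b,c,d) .*. M2 (e,f,g,h) =
        M2 (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)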

    Of course, some languages make "ab"+"cd" = "abcd", which messes things up. Addition and concatenation aren't really the same thing. But that's just bad language design.

    Languages with dynamic typing can do really neat things -- creating a string buffer that simulates a file, a sorted collection that has (mostly) the same interface as an unsorted collection, etc.

  • by Paul Johnson ( 33553 ) on Sunday July 16, 2000 @08:12AM (#929971) Homepage
    I've spent the last several years trying to explain to colleagues why they should start using another obscure-but-good language, Eiffel [elj.com], to no avail. Here is what I have learned. Note that this is not about the pros and cons of particular languages or paradigms; it's about the way the programming language industry actually works.

    The language industry is dominated by network effects. There are major costs with using a minority language, and for an individual project these completely outweigh the benefits, even when the benefits are very large. Hence it is generally far better to stay with a majority language. The costs of a minority language include:

    • Support. Sure, you can get a GPL compiler for most languages, but on a project you don't want to have your coders digging into the code trying to fix a bug; you want them writing code. Support is something you outsource.
    • Performance. Every minority language claims to be faster than C, but often isn't in practice. Whatever the truth, C and C++ are at least known quantities. Maybe the minority language will be faster, maybe slower. If it's faster, well, gee, so what. If it's slower, then you have a major problem.
    • Tool support. These days even small projects start by drawing UML diagrams and then converting these automatically into class templates. CASE tool vendors don't support minority languages. Ditto for testing and documentation tools. Little things like tying your compiler to your configuration control manager might potentially be major headaches. Again, it's more risk that the PM can do without.
    • Nobody ever got fired for buying C/C++/Java. If you are a PM this is a major issue. Every language is going to bring some headaches, but if you have chosen a minority language then these headaches can be turned into an excuse for project failure, and hence for hanging you out to dry.
    • Trained staff in a minority language are going to be rare. This does not necessarily make them more expensive (nobody else wants them), but it does make recruitment much harder and more uncertain. Alternatively you have to train all your existing people in the new language. And for Functional Languages it's not just another syntax, it's a whole new way of thinking. The industry went through this with OO languages, and many PMs have vivid memories of reams of non-OO obfuscated C++ written by a bunch of C hackers who had been sent on a one-week C++ course. Getting your head around a new paradigm can take months, and this is time that the project just does not have.

    So, overall the PMs want to go with popular languages, not for PHM reasons, but for entirely rational local reasons. But rational local decisions turn into globally arbitrary decisions, as the entire herd gallops off in a random direction chosen only because most of the herd thought that most of the herd were headed that way.

    The lesson of this is that if you want to introduce a language, you don't concentrate on making it a good language, you try to persuade the herd of programmers, PMs and tool vendors that your language is the Next Big Thing. The important point here is not how much the language will do for productivity, quality and cost, it is to create the perception that everyone else thinks that this language will be the next big thing.

    There are two ways to do this. One way is to tackle the whole industry at once. For an object lesson in how to do this, see Java. For an object lesson in how not to do it, see Eiffel. Believe me, I know all about this. I have spent a long time giving presentations extolling the technical virtues of Eiffel, only to have my audience say "Yes, but in the Real World....". In the Real World what counts is the network effects. And you know what? My audiences were right. It has taken me a long time to realise this.

    The other more interesting and more promising way to introduce a new language is to identify a niche market and attack that. Once you have taken over your niche you can expand to nearby niches and start to build momentum. Python is doing exactly this in web serving, for example. Web serving is a good niche because lots of people do it, and productivity and quality generally count for more than raw performance. Projects also tend to be small, so experiments are not the Career Limiting Moves they are for large projects. Education can also be a useful niche if you can afford to take the long view, which is how Pascal, Basic and Unix got started.

    Paul.

  • The flipside is that if you're willing to put more time into abstracting and kneading (as I like to call it) the functionality of your project, the time actually spent coding and debugging can be lowered significantly (in my experience, the total time from start to finish is about the same, but I am generally more confident and sure that the project is finished if I spent time thinking about it). This applies to any language, not just functional languages (where code without forethought probably won't even compile, in SML's case).

    Also, I found that when I was writing SML (only in a college course), I really didn't need a debugger. When I was really stuck, sometimes a little string printing function was useful, but beyond that it was much easier (and more revealing) to stand back for a few minutes and look at what was really going on. And of course, SML has the wonderful property that if it compiles without warnings, it will practically never crash (though it may never stop recursing, either :) ).

    In my mind, programs written in functional languages have a natural modularity to them. You don't find a lot of spaghetti code or unneeded work. I still find myself using C++ and Java in the real world, because a 50MB runtime library is rather hefty. I'd also have trouble finding coworkers who could tolerate the language. :)

  • by PigleT ( 28894 ) on Sunday July 16, 2000 @08:19AM (#930011) Homepage
    There is already an established Software Industry built on the principle of going with the majority. This means C-based "because everyone knows it" and mainstream Unix-based "because it's always worked so far".

    The trouble with this is, in the big industrial scene, the quality of the code produced is abhorrent. It is C, batch-produced, written to some Quality Idiot's idea of a "style guide" to be enforced in totally inappropriate ways. Such things as always checking for a symbolic name as the error return value from a function just go straight out the window: "oh no, it's -1 if the host's not found, isn't it?".
    What's even more worrying is that code is not built up with a view to reusability or expansion - things start out small and evolve features until you realise "we should've just used a database for this whole component", but instead of that you get "New! v3! With added triggers!", d'oh.

    I've gone from BASIC through C, C++, Perl and a limited amount of Tcl / Java, into Scheme and other Lisps. I don't find the fact that Lisp as a concept/family is >40 years old a block to it being *good*. It says in the preface to Graham's "ANSI Common Lisp" book that functional languages can embrace OO languages with minimal addition, and it's right.

    But mostly I think it's fair to say that the masses can't cope with the idea of a function being a return-type in its own right (which is probably the defining feature of a pure functional language); they're too used to the "do this, then do that" chronological programmatic way of doing things, rather than saying "make this a list of things, map this function over the list, then this one..." and so on.

    I'm learning Scheme. There are some very funky Scheme environments around - Guile, Kawa and Elk all bear lots of inspection; it's definitely coming to something when you can type 'java pig1' and it executes this:

    (define (factorial x)
      (if (= 0 x)
          1
          (* x (factorial (- x 1)))))

    (map factorial '(1 2 3 4 5))

    Unfortunately, the corporate scene doesn't seem to wish to spend the time on this. That's its sad lookout. I'm off to have some fun and party!

    ~Tim
  • by Nagash ( 6945 ) on Sunday July 16, 2000 @08:19AM (#930028)
    I took a course on Programming Languages, and we studied Scheme as our first language. It was a major hurdle for many people. This is probably because you have to think in functions and recursively, as opposed to the structured/imperative way of assignment.

    The ultimate goal of functional languages is to have everything act as a function of its inputs. Setting variables should not be necessary. However, it never works out that way. It would be hell to write that many functions. The spirit is still there, tho.

    Probably the biggest problem was the fact that a function is a first-class value (i.e., it can be passed as a parameter, returned from a function and assigned to a variable). Writing functions within functions to take care of little recursive problems was a major stumbling block. Instead of single-stepping your way through an algorithm, you thought of a way to write an anonymous function inside another function to take care of something. This function is never defined by name - it is created at run-time. The fact that you could return it was weirding people out as well.

    Another thing that throws people for a loop is the lack of non-local exits. There is no return in Scheme (or Elisp; I don't know about Lisp, but I would imagine it is similar). Instead, you have a very generalized procedure called call-with-current-continuation that does everything return does, and more. It actually allows you to save the state of your program, put it in a variable and use it again later. Thus, you can make generators for infinite data structures. This is hard to grasp, especially after two years of C/C++/Java.
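
    For the curious, the non-local-exit part of that idea can be sketched in Haskell with the Cont monad (callCC from Control.Monad.Cont; findFirst is a made-up example, and this captures far less than full call/cc):

    import Control.Monad.Cont (Cont, callCC, runCont)

    -- bail out of the traversal the moment a match is found
    findFirst :: (a -> Bool) -> [a] -> Maybe a
    findFirst p xs = runCont (callCC search) id
      where
        search exit = do
          mapM_ (\x -> if p x then exit (Just x) else return ()) xs
          return Nothing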

    The fact that everything is a list in Scheme, and that it is not statically typed, can be a bit of a stumbling block.

    Structured/imperative programming is a much more natural way to program - at first. When you get some practice in functional languages, you see how incredibly powerful they can be. (this is not to say C/C++ style languages aren't powerful. They just lack some really handy features functional languages have as primitives)

    I think people avoid them because of the total paradigm shift that is involved. It really is quite a leap. There is no lack of literature on it, it's just not published by IDG Books or SAMS ;) The fact that it is dynamically typed, which makes it a little slower, is also a factor. They also hate Lots of Infernal Stupid Parentheses! =)

    Woz
  • by Scott_Marks ( 179077 ) on Sunday July 16, 2000 @08:44AM (#930039)
    I have used functional languages since the 70s. I put together a more-or-less complete implementation of Backus' FP in Rosetta Smalltalk running on a Z-80 (64K RAM, CP/M OS) around 1980, programmed in Interlisp until '87, bogged down trying to implement Common Lisp (which can be procedural or not, your choice) in Windows 3.1 (got just about done, then Steele brought out version 2 -- ugh!) about 1992. Currently I just write one-liners in Perl. What happened?

    Mostly, there were no implementations. Plus it's sort of like the Micro$oft dominance -- all the good stuff was written in procedural languages (primarily C/C++), so why fight it? But the real question is why did procedural languages win?

    What I don't believe is that it is harder or less natural to think functionally than procedurally. It's just what one is taught.

    Like recursion, for instance. In my small experience teaching, I saw the light go on with regularity -- you just chip away at the problem, deal with some part of it, and then (recursively!) deal with the remaining simpler problem. Hmm, that doesn't seem so hard.

    Most people don't think about transactional database programming as being like FP, but just think about it -- it's just like Backus' Applicative State Transitions, where one computes the proposed new state, validates it so far as possible, and then installs it as the basis for more computation.

    Another thing to think about is the long-standing tradition in math of "recurrence" relations, like the Fibonacci series, or approximations to pi, or whatever. Those are clear examples of things which could be thought of as iterative or recursive, just depending on the color of the lightbulb.
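
    In Haskell, for instance, the Fibonacci recurrence writes down naturally both ways (a small sketch):

    -- the recurrence, stated recursively
    fibRec :: Int -> Integer
    fibRec 0 = 0
    fibRec 1 = 1
    fibRec n = fibRec (n - 1) + fibRec (n - 2)

    -- the same recurrence as an iteration over a state pair
    fibIter :: Int -> Integer
    fibIter n = fst (iterate step (0, 1) !! n)
      where step (a, b) = (b, a + b)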

  • by wik ( 10258 ) on Sunday July 16, 2000 @08:46AM (#930041) Homepage Journal
    As far as user interfaces go, I wouldn't want to write them in SML, either! :)

    I wouldn't call grep or wget simple (see the documentation if that's at all unclear!). I don't believe that those two in particular would be terribly bad, though. Performance would probably be really crummy, but I could imagine that some of the more mundane uses of wget (recursively mirroring a website) would come naturally to a functional language.

    On the other hand, I think the user interface or presentation of a program should be well-separated from its data structures or, more towards the dollar signs, its business logic. Why can't the application tier of an e-commerce site be SML-based? With the appropriate connectors (CORBA-based or otherwise) to databases and either a slew of HTML-generating code or some other connector to JSPs, I could see it happening. Would it catch on? I don't know. Java has positioned itself to work well like this. An event-driven program would probably crawl in SML; there are graphical toolkits for it, but I have never used them. If SML is used for the innards, however, it could be much more manageable.

  • by SlydeRule ( 42852 ) on Monday July 17, 2000 @04:53AM (#930057)
    Excellent points. As an Eiffel "true believer", I would add one more: Nobody really cares about quality.

    Programmers want "cool syntax" and "expressive freedom"; programming languages are for hacking with, not for the considered, careful implementation of software.

    Corporations and organizations want quick and cheap results. As an all too typical example, the company I work for is on a CMM program to improve our software quality. The definition of "software quality" for our CMM program is that the software is "delivered on time and in budget". As Dilbert's PHB says, "Oh, and we need a banner which says 'Quality'."

  • by John Jorsett ( 171560 ) on Sunday July 16, 2000 @08:31AM (#930064)
    Trained staff in a minority language are going to be rare. This does not necessarily make them more expensive (nobody else wants them), but it does make recruitment much harder and more uncertain.

    A corollary to this is that programmers are going to be less willing to learn a language that no other employer is going to want. Having a few years of intensive Eiffel experience (not an insult, just an example) on your resume might just be a recipe for unemployment, whereas Java programmers are practically carjacked by prospective employers these days. Acquaintances of mine have quit jobs just to avoid being put into this position.
  • I think the real reason Java and Perl have been successful is that they were all carefully designed to resemble established, popular languages. Stroustrup's stated goal in designing C++ was compatibility with C. Java is basically a simplified C++ with garbage collection. Perl is based on awk, sed and the Unix shell. All of these languages have marginalized languages that are technically superior in many ways - C++ vs. Ada or Modula, Java vs. Smalltalk, and Perl vs. Python.

    Unfortunately, most of the obstacles to the acceptance of a new language are social rather than technical. Backward compatibility and the support of a powerful company are key. A new language faces the same resistance that a new operating system does. Corporations and programmers have made substantial investments in their current toolsets and skills and are very reluctant to throw that time and money away. Something like Java, that looks and feels very familiar, is palatable, but something like Haskell causes panic.

    Open source programmers are lucky, though, because there are far fewer constraints on the tools and languages they use. We pride ourselves on making design and implementation decisions on a strictly technical basis, and there's no reason we should make our choice of languages any differently. There are a number of high-quality free implementations of functional languages out there.

    I've spent the last few years studying functional programming in my spare time. While I'm not sure that I'll ever use a functional language professionally, there's no question that I'm a much better programmer than I would be otherwise. Libraries like the C++ STL and language extensions like Java's anonymous inner classes make a lot more sense if you've been exposed to closures and generic functions. Studying CLOS makes the limitations of single-dispatch OO systems like C++ and Java clear. But most importantly, functional languages help you see the larger picture - to focus on the architecture of a system without getting lost in implementation details. My experience with functional programming has taught me more about software design than the shelves of OO design pattern books and UML bibles I've waded through.

    Any programmer who really loves the craft of programming owes it to themselves to take a walk on the wild side with a functional language or two. I'd recommend Ocaml [inria.fr], a mature, full-featured system that comes with a blazingly fast byte-code and native-code compiler, a debugger and profiler, a first-class yacc-like parser generator, a full-featured standard library, and a growing collection of contributed code [www.npc.de]. If you really want your mind blown, read SICP [mitpress.com].

  • by Master of Kode Fu ( 63421 ) on Sunday July 16, 2000 @08:55AM (#930069) Homepage
    I think imperative programming languages tend to be more popular than functional programming languages for the same reason that reverse [whisqu.se] polish [cam.ac.uk] notation [ucl.ac.uk] calculators are less popular [palmtoppaper.com] than those using standard notation. A standard notation calculator should fill a good number of common needs, but when the going gets hairy, there's nothing like an RPN calculator to do the job quickly.

    The same applies to programming languages. For many programming tasks, the imperative model will serve you well, but there are times -- especially when repetitive, recursive or just plain mathematically complex tasks are involved -- that a good functional language is exactly what you need.

    P.S. While probably not the best way to compare languages, you might want to check out this web page that compares how you'd get various programming languages to output the complete lyrics to the "99 bottles of beer" song [ionet.net]. (At last, an almost on-topic posting about beer!)

  • by Baldrson ( 78598 ) on Sunday July 16, 2000 @10:20AM (#930087) Homepage Journal
    Cutting to the chase:

    An appropriately conceived relational calculus is more powerful than a similarly conceived functional calculus because functions are special cases of relations. [geocities.com]

    When I was manager of interactive architectures at the precursor to Prodigy [geocities.com] I spent about a year pursuing functional programming languages as a possible public standard for the network programming language. By network programming language, I mean a language used to make programming distributed applications as transparent as possible with dynamic redistribution of functions based on load leveling and security requirements.

    I chose functional programming because the dataflow graphs provided a natural network map, the nodes of which could be redistributed on the physical network without altering any of the logical analysis that went into the writing of the program. The inspiration for this work was my prior experience with the Plato network where I had pushed the creation of a mass market version of that product. (Worthy of digression is the fact that middle management killed the release of that product and may have, thereby, killed Seymour Cray's first company, Control Data Corporation along with the Midwest's chances to be the locus of the network revolution -- 20 years earlier than it finally happened.) I realized that a widely distributed mass market Plato network needed parallel distributed authoring tools for novice programmers. Combined with the Turing Award Lecture [chungnam.ac.kr] by John Backus of BNF and FORTRAN fame [pitt.edu] I was inspired to pursue functional programming when I left Plato to join with AT&T and Knight Ridder in their joint venture mass market information service experiment.

    While authorized to pursue this vision by AT&T and Knight Ridder, I initiated working groups involving computer telecommunications departments from Bell Labs, Atari, Apple, Xerox PARC, MIT, Software Arts and Knight-Ridder News to explore a staged evolution from a tokenized FORTH-based programmable graphics communications protocol that would fit in the earliest Videotex terminals being produced by Western Electric (which became PostScript), through distributed Smalltalk based on a FORTH VM, and on to either functional programming with data abstraction or possibly a more radical revision of Codd's work in relational programming. During this time of intense activity, I was fortunate to actually meet Alonzo Church and Haskell Curry at the 1982 ACM conference on functional programming at Carnegie Mellon shortly before Haskell's death, and at least get them to sign my conference proceedings and personally thank them for their contributions.

    The closest I came to finding a working foundation for distributed functional programming (with object semantics) was a synthesis between David P. Reed's distributed file system transaction protocols and Arvind and Gostelow's U-Interpreter for dataflow computations (see the special "Dataflow" issue of IEEE "Computer"; I believe it was December 1980). It turns out that Reed, Arvind and Gostelow had arrived, from two distinct directions, at virtual machines to describe their programming systems that were isomorphic to one another. Reed's distributed transaction file system was based on the object oriented CLU programming language developed for OO research at MIT, and Arvind and Gostelow had come at theirs from the work on dataflow computers arising from the excitement inspired by Backus's previously mentioned Turing Award Lecture. Reed's system was particularly important for functional programming enthusiasts because he was directly addressing the concept of network state, transaction mechanisms and the practicalities of network timeouts, faults and other real-world difficulties.

    Unfortunately, although Reed would go on to become chief scientist at Lotus Corporation, where some colleagues of mine from the Plato project were developing a distributed programming system called Lotus Notes, Reed never actually pursued his conception of network state within the context of functional programming, nor even within the context of Lotus Notes! Perhaps this was my fault for not attempting to beat Ray Ozzie over the head with Reed's thesis, but Ray was pretty cagey about what he was up to at Iris Associates back in 1984. By the time I found out Reed was Ray's chief scientist, I assumed he and Reed were working on something related to Reed's thesis. Imagine my surprise to discover Notes was not only a distributed file system of sorts, but that Reed's primary theoretical expertise was never actually discussed as a foundation for Notes!

    But it gets better: the most ironic twist is that Reed and Arvind were both at MIT's Laboratory for Computer Science when I discovered their respective works. When I went to visit them at MIT's LCS, I walked up the stairs from Arvind's office to Reed's office to discover that they had no idea that their respective VMs were nearly identical despite being based on entirely different approaches -- and that neither of them was particularly interested in talking to the other about a synthesis between their works!

    Academics...

    In any case without a good foundation for handling network state in distributed functional programming, I was left facing the same sort of problems faced by John McCarthy when Marvin Minsky et al took off and started to kludge in all kinds of arbitrary state handling "formalisms" into McCarthy's mathematically pure implementation of Church's lambda calculus: LISP. I saw where that road led...

    While a degeneration of Reed's approach was actually tried on the Intel 432 project under the iMAX operating system's distributed OO file system, to the best of my knowledge, the only other attempt to implement his system was a distributed archive object base that I prototyped a few years back at Filoli Information Systems (formerly Memex -- the company that bought out Xanadu Operating Company and attempted to resurrect hypertext after Autodesk dropped support when John Walker was displaced as CEO from that company and ultimately from the entire country).

    However, I've never really been happy with the functional approach because functions are a degeneration of relations. That's why I've always been more interested in advancing the state of relational programming than that of functional programming. The problem is, functional thinking is embedded in our mechanistic views of time and causality -- sort of the way up and down are embedded in our physical structures due to having evolved on the surface of a planet. If we're going to deal with distributed persistence and transaction problems, we may as well handle the more general case -- especially since relational programming is at the root of the relational database industry, and it appears a relational formulation of time, based on a revision of Russell and Whitehead's Relation Arithmetic, may end up dominating the future of physics.

