Use of Math Languages and Packages in Research? 454

CEHT asks: "As a research programmer at the university, I have encountered numerous times when I need to choose which language(s) or package(s) to use for different projects. Tradeoffs and performance issues have to be considered: results from one package may be more compatible with the data from other researchers, another package may find the solution faster and use fewer resources, and so forth. Maple, Matlab, Magma, and Mathematica are among the most well-known packages. Libraries such as IMSL are also popular. Of course, there are smaller (and mostly free) packages that tend to target specific types of problems, such as LiDIA, Singular, and LAPACK. The question is, how useful are these [and other] math packages? Do researchers use only one or two packages for most of their projects? Or do people like to mix things a little by pulling the strengths of different packages together to solve a math problem? If not, do researchers write C/C++ programs and use GMP or Matpack to solve math problems?"
This discussion has been archived. No new comments can be posted.

  • Octave (Score:4, Insightful)

    by tanpiover2 ( 249666 ) on Wednesday February 26, 2003 @03:35PM (#5388776)
    Whenever I need to do anything like that, I use Octave [octave.org].
    • Re:Octave (Score:5, Insightful)

      by s20451 ( 410424 ) on Wednesday February 26, 2003 @03:41PM (#5388827) Journal
      Octave is a great idea, but also a gigantic pain in the ass. I don't think I have ever successfully compiled it without serious tweaking on any system I have ever owned. The Octave team needs to spend a little time improving their configure scripts and makefiles before that package can be considered a serious alternative to Matlab.
      • Re:Octave (Score:2, Informative)

        by klokwise ( 610755 )
        Hmm, having used Octave recently, I have to disagree with you. I grabbed the latest release off the site and only had to update my version of readline to get it to compile. Also, I really only see Octave being used in an educational or small-time commercial sense. It's great to learn on and gives you a taste, but it is nowhere near as efficient as Matlab.
      • Re:Octave (Score:3, Informative)

        by SquadBoy ( 167263 )
        I'm just in one of those moods today.

        apt-get install octave, or if you really like to have it built on your box, apt-build install octave. :)
      • Re:Octave (Score:5, Informative)

        by Anonymous Coward on Wednesday February 26, 2003 @04:17PM (#5389221)
      • Re:Octave (Score:4, Insightful)

        by Karn ( 172441 ) on Wednesday February 26, 2003 @04:25PM (#5389301)
        Actually, I think you have it backwards.

        First, Redhat (I'm sure other distros as well) INCLUDE OCTAVE. That's right, you don't NEED to compile it because it comes with your distro. You can't get much easier to install than selecting the package at OS install time.

        Second, Octave doesn't require you to jump through 8 hoops to run it on your system. No FlexLM, no license files, no keys, no giving Mathworks your hostID when you change computers, etc.

        Finally, most people don't compile from source. And those who do are usually experienced enough to figure out how to get things compiled.
        • Thanks, but for some reason, RedHat includes the latest unstable release of Octave, rather than a stable release. When started, this version warns against using it for general purpose use and points to the stable release, which is not reassuring. I've found the same to be true for the version of Octave in the FreeBSD ports collection. I also have yet to find a reliable source of Octave binaries or rpm's.
      • by Somnus ( 46089 ) on Wednesday February 26, 2003 @04:38PM (#5389396)
        I've just built it successfully on my Gentoo laptop (everyone seems to rave about it, so I thought I'd try it out). Ostensibly, whatever voodoo the ebuild script does works. Here it is:

        http://www.gentoo.org/dyn/pkgs/app-sci/octave.xml [gentoo.org]

      • Re:Octave (Score:3, Informative)

        by blair1q ( 305137 )
        No pain at all.

        Download the binary distro of Octave for Windows [sourceforge.net].

        I just did, now I'm going to go play with it.
    • Re:Octave (Score:3, Informative)

      by robbyjo ( 315601 )

      Although Octave is great to mimic Matlab 5.2, some neat features in Matlab 6.0 are not there yet.

      Another drawback is... just like Matlab... It's Slow, with a capital S. I won't recommend it for serious number crunching like encryption or gene data manipulation unless you have really good CPU power and godly patience.

      • Re:Octave (Score:3, Funny)

        by GlobalEcho ( 26240 )
        Although Octave is great to mimic Matlab 5.2, some neat features in Matlab 6.0 are not there yet.

        Three-dimensional arrays come to mind. Grrr.
    • Re:Octave (Score:5, Interesting)

      by decoutt ( 538383 ) on Wednesday February 26, 2003 @03:55PM (#5388994)

      Octave is a nice MATLAB clone, developed by chemical engineers in the beginning but now used extensively in virtually any area where math is useful.

      Many packages have their open source counterparts: Octave [octave.org] for MATLAB, R-system [r-project.org] for SPLUS (a statistics system), and so forth. But IMHO you raise another issue: you can use each of these packages to do whatever calculations you want, since all of them can be extended on the C/Fortran end, i.e., they can call programs written in those languages. Custom code is readily integrated. And above all, there's the GNU Scientific Library [redhat.com]. If you don't like or don't trust the numerical solvers integrated in MATLAB, you can investigate the source in the GSL.

      And yes, you can use all of these together. So, what is the question again?

    • Wanna hear something cool?

      I just got Octave & Gnuplot running on my Sharp Zaurus. I can do my DSP type calculations, anywhere!

      Someone is currently porting gtktiemu, at which point I'll have a TI-89 emulator, which will let me handle just about any engineering math I need to do with one pocket-sized device.

      Now if my fold-up keyboard would just show up.....
  • MathCad? (Score:3, Insightful)

    by Joe Jordan ( 453607 ) on Wednesday February 26, 2003 @03:39PM (#5388807) Journal
    Now, I'm only in college, but while we did dabble a bit in Maple and a few others that you mentioned, the only Math software package I've used extensively is MathCad. I've found it to be the most user friendly of the bunch, and they have a new version out: MathCad.com [mathcad.com]
    • Re:MathCad? (Score:4, Insightful)

      by Big Mark ( 575945 ) on Wednesday February 26, 2003 @03:48PM (#5388911)
      MathCad is far too limited in power and scope to be of any real use. Maple is my package of choice; admittedly it's hardly the most intuitive or user-friendly software around, but hey, neither's Linux. Give it a few weeks of tinkering and you'll learn to love it.

      I want to learn how to use Matlab more effectively as it's (apparently) the most effective for physical modelling, but we don't get taught it over here (Mathcad, Maple and Mathematica are all these scuzzers will teach us); anyone know a good intro to it on the web?

      -Mark
    • Re:MathCad? (Score:2, Informative)

      by DuckDodgers ( 541817 )
      I was forced to learn Maple. Maple isn't particularly user friendly, but it's pretty powerful.

      We did, in no particular order: differential equations, groups, rings, RSA encryption, McLaurent series, and matrix manipulation.

      But I can't compare it to any of the other programs because it's the only one I used.
      • Re:MathCad? (Score:3, Informative)

        by saforrest ( 184929 )
        We did, in no particular order: differential equations, groups, rings, RSA encryption, McLaurent series, and matrix manipulation.

        Er, you mean either "Maclaurin" series or "Laurent" series.

        A Maclaurin series is the Taylor expansion of a function about 0.

        A Laurent series is like a Taylor series, but with the range of exponents going from -infinity to infinity instead of 0 to infinity.

        (Maple is capable of doing both of these.)
        • Re:MathCad? (Score:3, Informative)

          by BitterOak ( 537666 )
          A Maclaurin series is the Taylor expansion of a function about 0. A Laurent series is like a Taylor series, but with the range of exponents going from -infinity to infinity instead of 0 to infinity.

          Right, so a McLaurent series is like a Taylor series, expanded about x=0, with exponents ranging from -infinity..+infinity.

    • Re:MathCad? (Score:2, Insightful)

      by CEHT ( 164909 )
      MathCad is probably good for undergrad / high school math problems, but I doubt it will do stuff like nullspace calculation on a 200x200 matrix.
    • Re:MathCad? (Score:5, Informative)

      by franimal ( 157291 ) on Wednesday February 26, 2003 @04:17PM (#5389224) Homepage

      User friendly? Are you talking about the program that I use on a daily basis? Surely not. MathCAD is without a doubt the prettiest of all the options but it is among the worst in user interface.

      For those of you who are not familiar with MathCAD, it works like this:

      Anything and everything that you want to input into MathCAD is in its own little box, be it a text box or an equation box.

      The horrid part is trying to organize all these boxes on the page. Putting everything in a box means that it operates completely contrary to what most people are used to from MS Word. Say you enter some equations and then decide you want to add a few more in the middle. You can't just hit the up arrow and start typing, with maybe an Enter. Instead, you'll often have to select the later equations and drag them down to make room for the new ones. Then, if you have a lot of equations, you likely didn't move all of them down. So you have to select the equations that now overlap and select 'Separate Regions' from a menu. This gets to be very tedious.

      Furthermore, is it too much to expect MathCAD to figure out that I don't want to have half my equation on page one and the rest on page two? Why should I have to go and select 'Repaginate' from a menu before I print?

      Entering equations is no joy either. I'm constantly frustrated when I try to do something as simple as add another term to an equation, like changing x^2 - 3 to x^2 + x - 3. I find myself starting over, and at times typing 1 + 1 - 1 and then replacing the ones. I mean, come on, I've seen many math-typing solutions that are far better -- MathType and LyX, for example.

      Sure you might have a nice looking document but was it really worth the pain? Furthermore, I find MathCAD to be seriously lacking in function compared to Maple et al.

      Of course, Maple et al. all have their problems with user interface. Why should I have to end with a semi-colon? And you have to realize that it's never going to look the way you want it to. So you have to suck it up and do the math without worrying about the beauty of the output.

      Not to sell MathCAD short, there are some things that it does do well:

      • Units, the best unit management system I've had the joy to use. Very nice.

      • The output is beautiful.

      • Simple math that doesn't require big complicated equations and lots of loops.

      Personally, I can do the easy math by hand. For more advanced stuff, check out SciPy.org [scipy.org]. They provide a Python interface to established numerical algorithms in C and Fortran, but it's much quicker and 'funner' to use. Unfortunately they are only at alpha right now. But you can't beat the price, and for the most part I've found the optimization sections to be quite stable. Combine it with PyChart and you've got a good science package for free.

      Otherwise, the only package that I've actually heard people rave about is Matlab.

  • useful (Score:2, Informative)

    by mrtroy ( 640746 )
    Maple and Matlab I have found to be useful,

    but only if you are doing a highly intensive amount of math;

    otherwise it takes longer to figure out the GUI and its usefulness than to figure out the problem on your own.

    Matlab rules for matrix math.
  • Heh ... (Score:5, Funny)

    by B3ryllium ( 571199 ) on Wednesday February 26, 2003 @03:41PM (#5388818) Homepage
    I use `expr`.
  • Fortran (Score:5, Interesting)

    by luzrek ( 570886 ) on Wednesday February 26, 2003 @03:41PM (#5388822) Journal
    In Experimental Nuclear Physics (ENP) there is a healthy mix of Fortran, C, and C++ (and some others). There is a healthy skepticism of "black box" programs and libraries, so programs like Mathematica and Matlab are pretty much not used. Also, most of the problems are pretty specific (and time-consuming to run), so everyone seems to run specialized code (example: Radware is very popular in nuclear spectroscopy). Of course it helps that most ENPs are pretty competent with computers and electronics (among other things).
    • I'm sorry, but the correct spelling of that is FORTRAN. :-)
  • by coult ( 200316 ) on Wednesday February 26, 2003 @03:42PM (#5388834)
    I'm a research mathematician, and I use lots of tools:

    Matlab for prototyping numerical algorithms and for visualizing data.

    Mathematica for doing messy algebra/calculus/differential equations.

    My own C/C++ code, with a LAPACK backend, for doing large-scale computations (Matlab and Mathematica are too slow for big computations).
    So the answer is e) all of the above!

  • Is there a good general math package that provides "virtual calculator" type functionality? Something that provides features akin to a TI-8x (minus the graphing, of course) would be ideal.
  • Perl Data Language (Score:4, Interesting)

    by sleepingsquirrel ( 587025 ) <Greg@Buchholz.sleepingsquirrel@org> on Wednesday February 26, 2003 @03:44PM (#5388855) Homepage Journal
    Let's not forget about PDL [perl.org], the Perl Data Language. Think of Matlab combined with the goodness (i.e. CPAN packages) of perl.
    • by liquidsin ( 398151 ) on Wednesday February 26, 2003 @04:10PM (#5389149) Homepage
      What a wonderful idea! Perl is so terribly simple to read that it *needs* some highly complex math thrown in to keep us from getting bored.

    • I've advocated here before, so was going to refrain from posting -- but since sleepingsquirrel posted... :-)

      PDL is great for numerical work and data analysis. I use it to reduce image sequence data and to simulate the "small-scale" dynamo on the surface of the Sun (the domain of a typical simulation is about 30,000 km across).

      The array slicing and indexing operators are the most powerful I've come across. There are several graphics output packages, including a Tcl-based interface, PGPLOT, PLPLOT, and OpenGL. I use mostly the PGPLOT interface because of its extreme device independence, but the others have advantages too. As sleepingsquirrel pointed out, you get all of the goodness of perl/CPAN too -- PDL is just a set of modules that you use in perl scripts, so you can readily use all the database-horking, XML-parsing, Morse-code-spewing CPAN modules that you've come to know and love.

      PDL doesn't do analytical math parsing at all. It wasn't clear from the original question whether CEHT is looking for an analytical (symbolic) solver or a numerical package.

      It's a little bit of a pain to get PDL all installed right (you need to get several packages from several places), but hopefully the next release will mitigate that by including a "complete" package with most of the external libraries as well as the actual PDL module set.

  • [none] (Score:2, Insightful)

    by WetCat ( 558132 )
    "If not, do researchers write C/C++ programs and use GMP or Matpack to solve math problems?"
    No! They use FORTRAN!
    Surely it's still a much better language for numeric stuff.
  • I'm an artist, and as such I'm not expected to know a lot of math. So I obviously have nothing to suggest to this guy.

    But I would like to share my own method for solving some problems, mainly geometry problems. I work in 3D, Lightwave to be specific, and I've helped engineering solve some mathematical problems with it. For example, there was a question about how to build a sphere with each face being in the shape of a pentagon. They needed to know what the angles of some of the vertices were. While the engineers were busy pushing numbers around on paper, I built the model in Lightwave and used its tools to get the right measurements. Actually got it done before they got their equations done. That was kind of cool.

    As I said, that doesn't really help that guy. I just thought solving a math problem like that using a 3D app was kind of interesting. New? No. Just interesting. Math is not my favorite subject but at least I've got tools today that prevent that from being a huge disadvantage to me.
  • by $$$$$exyGal ( 638164 ) on Wednesday February 26, 2003 @03:44PM (#5388862) Homepage Journal
    A general question. With some programming languages... floating point is by its nature inexact. It is probably best if you imagined that after every floating point operation, a little demon came in and added or subtracted a tiny number to fuzz the low order bits of your result. (I quoted that from here [mindprod.com]).

    Do any of the listed tools/languages take care of this problem for me? I understand the nature of the problem, but it is still very frustrating. What do the "pure" math programming languages do with this issue?

    --sex [slashdot.org]

    • I'm not sure of the exact implementation, but they must use custom data types to store numbers in, as a) they can store oodles of digits in a single number (e.g. factorial 2000 is well bigger than any int this side of the moon!) and b) they will do as many decimal places as you ask of them.

      Put it this way - floating point accuracy is NOT an issue with these programs.

      Trying to figure out how to get them to do the differentials, though, can be a bitch...

      -Mark
    • Floating point numbers are by their nature inexact. It is impossible to make them exact, though you can make them arbitrarily high precision.

      When people need exact answers they do exact math using a combination of symbolic computations and arbitrary sized integers and rationals.

      i.e., the answer you might get from doing an operation like sqrt(2) would be "sqrt(2)", and doing an operation like area_of_circle(r=1) would give "Pi".

      An operation like 2^128/3^45 would give some ratio of enormous integers, expressed as x/y, where x and y are the integers 2^128 and 3^45.

      does that help?
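
      A quick sketch of the difference in Python (using the standard fractions module; any exact-rational system behaves the same way):

          from fractions import Fraction

          # Binary floats cannot represent 0.1 exactly, so a little error sneaks in:
          print(0.1 + 0.2 == 0.3)        # False -- the "little demon" at work
          print(0.1 + 0.2)               # 0.30000000000000004

          # Exact rational arithmetic carries no rounding error at all:
          print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
          print(Fraction(2**128, 3**45)) # a ratio of enormous integers, kept exact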
    • Yes, floating point calcs are inexact. That's why numerical mathematicians have for ages (well, since 1960-something) been doing error analysis of programs. Basically, you show that your algorithm will not stray too far under small errors. Some algorithms that look entirely plausible do stray far, so those don't get used.

      Packages that work exactly (Maple, Mathematica) run at only a fraction of the speed of numerical packages, so big simulations (how high should this airplane engine be mounted?) are simply not done that way.

      V.
    • That's true of all programming languages, not just some. Floating point accuracy is an inherent limitation of using a computer; you can partly work your way around it, but it never really goes away.

      One way of getting around it is making your own method for storing a floating point number rather than using a built-in type with a fixed number of bits (i.e., going beyond 32 bits). You can then keep adding "bits" to your representation of a floating point number until you reach the desired accuracy.

      I can make a list of booleans to represent a floating point number and have the list be longer or shorter depending on how precise I want the number to be. Then I use my own addition, multiplication, etc. algorithms on this number to get my result. This makes the process slower, since you are basically rewriting the functionality of a chip to accommodate a higher number of bits than it was designed for, but it's possible.

      As for mathematical tools dealing with the issue for you, I think you can specify the precision you would like, and it adjusts the answer accordingly. (At least I believe it is the case with Mathematica.)
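
      For what it's worth, Python ships this idea in its standard decimal module, so you don't have to hand-roll the list-of-bits representation; a sketch:

          from decimal import Decimal, getcontext

          getcontext().prec = 50            # ask for 50 significant digits

          print(Decimal(1) / Decimal(3))    # fifty 3s after the decimal point

          getcontext().prec = 100           # crank it up and recompute
          print(Decimal(2).sqrt())          # 100 correct digits of sqrt(2)

      As noted above, you pay for the extra digits in speed, since the arithmetic runs in software rather than on the FPU.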

    • As others have mentioned, Mathematica and Maple-- like Maxima-- use symbolic manipulation to give exact answers. For example, Maxima tells me that the integral of x^2+2*sin(x) from -5 to 4 is 2 COS(5) - 2 COS(4) + 63. It can't reduce that further without introducing inexactness from the irrational functions. (If it could, it would; the same integral over -5 to 5 is reported as 250/3.)

      Some math packages and programming languages-- such as Common Lisp-- have bignums (infinitely long, perfect precision integers) and rationals, which are also infinitely long and perfect precision. So the value of (/ 1 3) is not 0.3333, it's 1/3.
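
      The same exact-arithmetic behavior can be reproduced in SymPy (a sketch, assuming the sympy package; Maxima's own syntax differs but the idea is identical):

          from sympy import symbols, integrate, sin, Rational

          x = symbols('x')

          # Definite integrals come back in exact symbolic form, no floats involved:
          print(integrate(x**2 + 2*sin(x), (x, -5, 4)))   # -2*cos(4) + 2*cos(5) + 63
          print(integrate(x**2 + 2*sin(x), (x, -5, 5)))   # 250/3

          # Exact rationals instead of 0.3333...:
          print(Rational(1, 3))                           # 1/3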

    • by sckienle ( 588934 ) on Wednesday February 26, 2003 @04:21PM (#5389260)

      It's been a while since my Senior Independent project, but it was on products like these. I did a lot of the symbolic maths on the packages to describe and document what they were doing. As part of that, I had to look into how they handled floating point errors. I don't remember the package I was specifically working with; and my copy of the SIP is at home.

      With that as an introduction: for the "pure computational" packages, the problem you point out is real. Floating point errors, when ignored, will slowly move further and further into the significant digits of the FP calculations. For a package to be even reasonable, it must be able to describe, in mathematical expressions and textual dialog, how it will manage FP errors to keep them in the least significant digits of the number. If you review a package, look and ask specifically how the package does that.

      Some may ask why this is important; don't modern languages handle all this according to the FP specs? Well, basically the specs are not good enough for large computation tasks. When you start multiplying several matrices together, you end up doing so many FP operations that, without carefully written and mathematically backed code, the errors will practically zoom "to the right." This is compounded by the fact that not all chips comply with the specs in exactly the same way; most of these packages have a lot of conditional code to handle each chipset's specific peculiarities. Something else to look for: if a package claims one size fits all, and doesn't talk about OS- and HW-specific compiles, take extra care checking the FP issues out. They may be taking a worst-possible-processing approach, which will work but at the expense of speed, or they may be taking a more "mean" processing approach, which may end up with different results on different OS and HW combinations.

      Now, after all that, the reality is that most of these packages, at least if they have been around a while, have the mathematical grounding and programming "backgrounds" to handle FP operations pretty darn well. After all, this was a fairly well known and documented issue back in 1983 when I wrote my SIP.

      The "symbolic" packages seem to side step this by first taking the equations and modifying them in "symbol" form before performing their calculations. Thus, the differential of X^2 is changed to 2x by the program before any FP operations happen. But this does not mean that FP operations do not occur. If your equations still deal with matrices, then a lot of FP operations will have to be done to come up with a numeric answer, no matter what.

  • by RhettR ( 632157 ) on Wednesday February 26, 2003 @03:44PM (#5388866)
    I have used a few other packages, command-line utilities, which I find useful. Recently I've been using one for my honors research project (I'm an undergrad): GAP [st-and.ac.uk]. Another I've used and like: PARI-GP [parigp-home.de]. GAP tends to deal with group-theoretical functions, and GP tends to deal more with number theory, but neither should be ignored.
  • by JWyner ( 653364 )
    I find that each package is well suited for a particular use...

    MATLAB is great for off-the-cuff research. I can open it up, and program image processing routines in 30 minutes or less. This would take hours in C/C++. Additionally, I can take the M-file and dump it from my computer onto a workstation running MATLAB and get some decent speed and batch processing done.

    C/C++, however, gives you so much more control and execution speed, that often you either use the MATLAB --> C compiler, or end up writing a final routine in C directly. I believe for image processing, as an example, you can get over a 100x speed increase just by using the MATLAB --> C compiler.

    Just my $0.02.
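
    For comparison, here is the sort of off-the-cuff prototyping described above, sketched in Numerical Python (NumPy syntax; the filter and image are made up for illustration):

        import numpy as np

        def mean_filter3(img):
            # 3x3 box blur: the loops run over the nine offsets, not the
            # pixels, so the per-pixel arithmetic stays in compiled code.
            h, w = img.shape
            acc = np.zeros((h - 2, w - 2))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
            out = img.astype(float)
            out[1:-1, 1:-1] = acc / 9.0   # borders left untouched
            return out

        blurred = mean_filter3(np.random.rand(256, 256))

    A handful of lines like this is the "30 minutes or less" experience; the equivalent C involves memory management, I/O, and boundary bookkeeping before you ever get to the algorithm.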
  • Python and Numeric (Score:2, Interesting)

    by kognate ( 322256 )
    I've used Python and Numeric, and coworkers of mine have used them in real physics-type stuff, and it is great (I think Fermilab uses Numeric a great deal).

    I'm surprised you also haven't mentioned R. It's a stats package (GPL'd) modeled after S: http://www.r-project.org. It is very powerful, with a great community behind it. It's an amazingly powerful tool for analysis.

  • For my research in mechanical engineering (more specifically regarding tolerances), I use Matlab since it's what I'm most comfortable with. Maple is also used at my Uni, but I don't have much experience with it (other than its symbolic kernel, since it's available with Matlab), so I don't end up using it.

    It mostly depends on what you're doing. Depending on your area of research, you may find that one of those is more popular because it solves those types of problems better. If speed is an issue for you, you can easily port your algorithm to a compiled language if you prototyped on an interpreter, even interfacing the two in some cases.

  • Other trade offs (Score:5, Interesting)

    by briancnorton ( 586947 ) on Wednesday February 26, 2003 @03:49PM (#5388916) Homepage
    The trade-offs are a lot more than you mentioned. If you are doing MATHEMATICAL research, Mathematica AFAIK has the most extensive capabilities for expansion and programmability. If you are doing something like environmental modelling or complex systems analysis, then something like Matlab may be more important.

    I am not able to articulate this well, but the type of research you are doing is MUCH more important of a consideration than computation speed or resource consumption. If you need supercomputer time, then you had better ask the admin what you need to use. I know a bunch of people that do environmental modelling, and I have never seen or heard of anybody writing their own C++ to do it. Researchers GENERALLY have better things to do than re-invent wheels.

  • MatLab, Mathematica (Score:5, Informative)

    by Pemdas ( 33265 ) on Wednesday February 26, 2003 @03:49PM (#5388921) Journal
    I'm a graduate student studying robotics. YMMV.

    For most computer vision code, Matlab is a must for prototyping. It's useful in other areas, and, if you know how to use it, reasonably fast. If you're doing particularly involved matrix manipulations, it takes a lot of work to come up with C/C++ code that will work faster than well-written Matlab code.

    Personally, I also use Mathematica for doing real math work. If I need to derive something that's particularly complex, then Mathematica's notebook style is really nice to work with, and it makes possible extremely clear and concise mathematical arguments while limiting stupid human errors when doing drudgery like taking derivatives and the like.

    I hear Maple and MathCad are both good, too, but I've never used them.

  • It's been my experience that most folks tend to focus on one package for their work. Commonly it is Matlab or Mathematica, it seems, and each package has its strengths and weaknesses, with Mathematica being the better choice for symbolic math while Matlab is used more for building applications to perform specific tasks. Mathematica is a favorite of mine and works well in solving problems related to applied mathematics.

    For a couple of reviews of Mathematica, see Applelust Scientia [applelust.com]

  • FORTRAN (Score:3, Informative)

    by Entropy_ah ( 19070 ) on Wednesday February 26, 2003 @03:50PM (#5388926) Homepage Journal
    I'm working on a thesis with a math professor here at school. We're working on a mathematical fiber model which requires a whole lot of computation and a whole lot of data. My advisor does all the computations by writing FORTRAN programs and running them on an SGI Octane. Yeah, the language is really old and ugly, but it's still useful for mathematics, and it's what a lot of mathematicians use in academia.
  • My experience (Ph.D. in applied mathematics, and employment at a mathematical consulting firm) is that researchers only use their favorite package and will rarely use anything else, despite the fact that their favorite may not be appropriate for the job at hand. To that end, I use Mathematica for nearly all my prototyping, except for brief excursions to Matlab, which is much better at image analysis. But both of these have speed issues, and when it came down to serious business I would often roll my own C, C++ or FORTRAN code for the problem at hand.
  • I like it because I've used it. But it does have very powerful DSP tools, controls simulation (Simulink!!!), the ability to code anything you need (or buy it), interfaces to external devices through the serial port (for example), and it allows you to develop algorithms for embedded systems.

    Never had to use it for much besides 36-hour suicide class project marathons, but it was reliable and easy to work with.
  • Comment removed based on user account deletion
  • If not, do researchers write C/C++ programs and use GMP or Matpack to solve math problems?"

    Writing a robust, efficient, and accurate numerical analysis library is not something you do in a weekend. There's not much to improve upon with packages such as LAPACK and their kin: They've been proven to be accurate and reliable over years of use. There's really nothing to reinvent.

    I've personally used LAPACK for digital terrain matrix (3D) processing of satellite stereo images and the mapping of spherical coordinates to and from various "flat-plane" projections generated by the IKONOS [spaceimaging.com] sats. There were simply no other viable alternatives to LAPACK in terms of speed and accuracy, and we certainly weren't arrogant enough to think we could write a better numerical analysis program.

  • I personally like GiNaC [ginac.de] for stuff like that. Basically the authors were in a similar position: they wanted the power of C++ with the symbolic solving of Maple. It doesn't have every feature in the world, but it works for what I do. They've used it for research, although I haven't directly.
  • Mathematics past (Score:3, Interesting)

    by Shadow Wrought ( 586631 ) <shadow.wrought@g ... minus herbivore> on Wednesday February 26, 2003 @04:01PM (#5389064) Homepage Journal
    Just out of curiosity, does anyone know what mathematicians, engineers, and physicists did about these complex problems before the programs mentioned here existed? What about before slide rules?
    • by doctorwes ( 128881 )
      Yes, they did calculations by hand. Mathematicians were more patient in those days. For example, in 1863 Kulik published a table listing the least prime factor of every number less than 100,330,200. It filled 4212 pages and took him twenty years to complete.
    • I looked up "computer" once in an 1828 Noah Webster dictionary reprint. The definition was "one who computes." I think mathematicians employed assistants much as engineers employ technicians, to do the time-consuming grunt work.

      Johannes Kepler had this problem (trying to empirically verify his theories, I suppose) and tried to build a fairly complex mechanical calculator, but failed to finish it before he died. This was long before Babbage.
    • Fortran and Tektronix 4010 terminals. Apparently 4010s were among the first devices which could display graphics. However, they were around well before there was enough RAM in a computer to hold the display in memory. They worked by using a high-intensity electron beam to write on the inside of a cathode-ray tube, a low-energy continuous emission of electrons to maintain the writing, and a high-energy pulse to erase what had been drawn. The number crunching was done by a Fortran program and the 4010 was used to display it. For those of you on *nix boxes, the xterm (and friends) terminals still contain 4010 emulations. I think it is accessed by "control + right mouse button", but it could be shift or alt.
    • Back in the old days, we were lucky if we had three significant digits. Most times we only had one or two. And sometimes we didn't have any significant digits at all.

      Have you ever tried to build a pyramid without any significant digits? You kids have it so easy these days.

  • I have gone several ways on this question. Mathematica for symbolic solving, coding my own in C for mathematical modeling. But I did once use S-plus [insightful.com] which is a fairly nifty matrix algebra system that might be useful to you. An open source variation is R [r-project.org]
  • by pz ( 113803 ) on Wednesday February 26, 2003 @04:03PM (#5389084) Journal
    Having traversed from a predominantly engineering realm (computer science) to a predominantly scientific realm (neurobiology), my observations have been that the tools are selected mostly on habit or previous knowledge rather than fitness for use.

    The most commonly-used analytical platform is probably Excel (or some similar tool like Statistica), but the more serious researchers, who are also the more mathematically-aware, nearly all use Matlab in my experience.

    When efficiency is an issue, nearly everyone I've worked with turns either to IDL (a Matlab competitor that has more arcane syntax, but much higher processing speed) or writes a C/C++ program by taking algorithms from "Numerical Recipes in C".

    Recently, I've also seen a rising use of Visual Basic, especially to do experimental control (although some Matlab hooks do exist for such), and, of course, LabView. Some diehards use LabView for data analysis as well, but their results are suspect just because the tool is so poorly fitted to the task.
    And, of course, many data collection hardware manufacturers (CED, National Instruments, TDT, etc.) supply scripting languages to control their hardware and perform rudimentary and sometimes not-so-rudimentary calculations.

    The best researchers select the most appropriate tool for the job, but, again in my experience, it seems the selection is normally based on previous experience and inertia. Those who know a particular tool well (e.g., Excel, Matlab, SPSS, Mathematica) tend to keep using that tool, even if it is not well-suited. This means you get aberrations like Matlab programs that control real-time experiments and LabView programs that do higher-order mathematics.

    Why?

    Because the largest fraction of a scientist's time should be spent on data collection, not experimental implementation, and the amount of time (for nearly all fields except those with astronomical amounts of data) spent executing code is dwarfed by the time developing it. Clearly this breaks down for certain applications, but most of the science currently being done (read: molecular biology, and no, not bioinformatics) is not algorithm-bound.

    Since data analysis is such a huge, broad field, I expect to see radically different answers from other posters!
    • Clearly this breaks down for certain applications, but most of the science currently being done (read: molecular biology, and no, not bioinformatics) is not algorithm-bound.

      Most of the bioinformatics being done that I'm aware of is not algorithm-bound either.

      People do tend to find a language and stick to it, though. Usually Perl. You get the occasional Python diehard as well, but my experience has been that while I'd far rather use Python for a large project, I'd rather use Perl for anything with significant amounts of text processing. There are times when weird kludges and shortcuts are actually a good thing. I know someone who programs in Lisp whenever possible. C is usually the last resort of people who think it'll be faster than Perl. Sometimes this is the case. Sometimes they simply can't program worth shit.

      The real problem is that many bioinformaticists have no concept of software engineering. This applies on many levels. First, they can't write reusable, maintainable code. Second, they have no concept of algorithms or recursion. Third, they never get to the point where they can write software reflexively. The best code, in my experience, is the stuff that's pounded out in under an hour, but which has been thought about for days beforehand. I think everyone wanting to do bioinformatics should be forced to take an intermediate CS class before they're allowed to do research, rather than sitting down with an O'Reilly book and starting to write code. They'll waste less of their time and everyone else's this way.

      Frankly, however, two-thirds of the time of any bioinformaticist is spent interpreting and reformatting the crap data that biologists give us.
  • Maxima! (Score:4, Informative)

    by Piquan ( 49943 ) on Wednesday February 26, 2003 @04:06PM (#5389109)
    I use Maxima for my work. It's a continuation of Macsyma, the computer math program that was the inspiration for Mathematica. Macsyma was tied up in copyrights for a while, but now it's public domain. So Maxima updated it for modern computer environments, added a GUI (with web browser) and ties to modern programs like GnuPlot, and now there's a good, open-source symbolic math utility / programming language.
  • by Somnus ( 46089 ) on Wednesday February 26, 2003 @04:08PM (#5389125)
    I've used a number of different packages, with varying results. Here are my thoughts:
    • Mathematica -- It's good at everything: symbolic computation, statistical analysis (esp. if you deal in intricate error propagation, where symbolic computation is handy), visualization (w/ some tweaking), and even number-crunching. Has a fantastic built-in library. However, it is a blackbox solution (and I have encountered errors in the past), is awfully slow (can be sped up by accessing the kernel directly through C) and closed source.
    • IDL -- Great for crunching through large amounts of data for the end-user because it has optimized implicit array math. It has an extensive built-in library and is good at producing visualizations. Drawbacks are: black box (though it uses well-known algorithms, out of Numerical Recipes for example), closed-source, runs best on Windoze, and has an arcane syntax which is some bastard child of Pascal, Fortran and Perl -- but not too bad when you get used to it!
    • Maple -- Has all of Mathematica's weaknesses but cannot match its built-in capabilities (plotting, extensive symbolic library, statistics, numerical analysis).
    • Matlab -- Only suitable for numerical computation, and is neither as easy to use nor as replete as IDL.

    Are there any viable open-source alternatives to either Mathematica or IDL?
    • Two notes:

      (1) Mathematica is not so hot for processing data sets. Since it works so symbolically, you can try some transformation and end up waiting forever for an endless symbolic expression to appear on the screen, all because of a typo. This can be mitigated using $PrePrint=Short[#,7]&, but that has its own problems.

      (2) Anything using Numerical Recipes is immediately suspect (not to mention the license issues). There used to be a document at JPL about this. For a critique, see this compilation [colorado.edu].
    • I've used IDL a lot and found that it's great for getting the wrong answer in a hurry. The designers didn't spend a lot of time thinking about special cases, so (for example) the built-in interpolation includes unavoidable off-by-one-pixel errors. By default, all FOR loops break after 32767 iterations. Structures try (and fail) to be like Perl hashes. If you write two double-precision variables to a file and then read them in again (using the default I/O format), you get an egregiously wrong answer. The symbol table combines the worst of global and local lookup: array names collide with global functions, and there's no way to isolate modules (so big projects inevitably produce collisions).

      It's possible to get work done with IDL (zillions of scientists use it), but it's a tragic waste of brainspace to keep all the extra exceptions and pitfalls in mind. Writing robust code in IDL is like kicking a whale carcass across a mined Afghani battlefield.

      Oh, and the license to use it costs about as much as your workstation. I'll take PDL, NumPy, or Matlab any day over IDL.
  • Which package... (Score:3, Informative)

    by digitalhermit ( 113459 ) on Wednesday February 26, 2003 @04:08PM (#5389128) Homepage
    Asking this question is no different from asking, "which programming language should I use?" without stating the purpose. Bash is great for scripting a daily ftp get, but inappropriate for drawing graphics.

    I can use Mathematica for almost all of my dabbling. Sometimes I play with MuPAD, R, GnuPLOT, Octave or Mathematica to show a particular problem. Since these are also free (beer or speech, depending on package) I can be reasonably sure that everyone can get a hold of it.

    For example, Octave is suitable for matrix manipulation. It does everything that I need it to do and can replace Mathematica for me. It's also fast enough (the longest calculation has taken just over a minute but it was a huge manipulation of some graphic data).

    I've dabbled with some of the libraries but only for fun.

    I guess what it comes down to is how comfortable you are with the package. By the time I try to write something in C using a dedicated library, I can most likely do the same thing in Mathematica in a tenth of the time. Even if the execution speed were 100 times slower, the "real" time may not amount to much.
  • I used lots of numerical methods related libraries from netlib.org back in the day. For symbolic manipulation (rarely important to me as I solved most stuff by hand anyway) I relied on Maple.
  • by foog ( 6321 ) <phygelus@yahoo.com> on Wednesday February 26, 2003 @04:10PM (#5389145)

    You're talking about two different classes of software: "numerical linear algebra packages" and "computer algebra systems". Maple and Mathematica are the latter, Matlab is the former. I don't know about Magma.

    Hardcore numerical programmers use LINPACK/LAPACK with platform-optimized BLAS (this latter is often commercial, or at least proprietary to the platform vendor) directly from Fortran. They usually use modern commercial Fortran 90 or Fortran 95 compilers, too.

    On numerical linear algebra stuff where you aren't going to recruit and pay a Fortran programmer with a PhD in applied mathematics, most sane people use Matlab or GNU Octave or one of the many other Matlab clones. A lot of people like Numerical Python; if I had a big new project to do, I'd seriously consider it.

    Yes, crazy "researchers" who don't want to learn Fortran and think Matlab is too slow or too expensive will write numerical code in C++. Some of them do fine work, too.

    Excel and other spreadsheets are fine for small bits of numerical analysis, too. Don't turn up your nose at 'em, you can email your boss your whole analysis and he doesn't have to learn Matlab to do anything with it. Excel is also slowly replacing Qbasic as the computing lingua franca of the Amateur Radio/hobbyist-electronics community.

    The class of people who just doodle out the singular integral equations for the airfoil design they're brainstorming seem to like Mathematica a lot. I wish I were more like that. Maxima is seeing a renaissance now that its licensing and distribution issues are cleared up (it's GPL now). I should check it out. There's also GNU (Emacs) Calc, which I use regularly as an RPN desktop calculator. It is actually much more powerful than that and will do all kinds of HP-calculator-style graphing and computer algebra with a liberal sprinkling of Mathematica-style syntax, but I don't use those features much, because they're wicked slow.
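
    For the "sane person" path above, the same LAPACK routines are reachable without writing any Fortran; a sketch in Numerical Python (NumPy dispatches these calls to LAPACK under the hood):

        import numpy as np

        # Solve A x = b; this goes through LAPACK's gesv driver.
        A = np.array([[3.0, 1.0],
                      [1.0, 2.0]])
        b = np.array([9.0, 8.0])
        print(np.linalg.solve(A, b))   # [2. 3.]

        # Singular value decomposition, another LAPACK workhorse:
        U, s, Vt = np.linalg.svd(A)
        print(s)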

  • Re: (Score:2, Insightful)

    Comment removed based on user account deletion
  • SciLab [inria.fr],

    the healthy open source alternative. (tm)

    Might not have all the features but looks pretty decent.
  • by NoData ( 9132 ) <<moc.oohay> <ta> <_ataDoN_>> on Wednesday February 26, 2003 @04:20PM (#5389245)
    I use MATLAB every day for my neural network simulations. MATLAB is incredibly powerful, incredibly flexible. It is also incredibly expensive. And the decision to port it to OS X was about the best decision The Mathworks has made recently.

    MATLAB offers student versions for about $99 a pop, which is dirt cheap considering its $1000 price tag for the retail version. Many universities of course have dramatic discounts, but then you have to be affiliated with a university. Even the student version requires you to attest that you're using it for course work or student-level research and not commercial gain.

    MATLAB has a number of drawbacks. Price is the largest. To enforce its license, MATLAB requires you to run the onerous and clumsy FlexLM license manager. FlexLM is brought to you by GLOBEtrotter....a division of that bastion of consumer rights, Macrovision. That should speak volumes. The license manager makes doing a lot of simple things stupidly difficult, especially if you're (like me) mobile and have to authenticate with a central server running the license manager. I can get into details if people have questions.

    On top of that, MATLAB requires a yearly "maintenance" fee. It's more or less software as a service. Apparently, if you let the maintenance contract lapse, you can still use MATLAB, but you get no more support and cannot apply any new updates. That may be, but the particular license my university employs will cause my copy to simply stop working after April 1 if I don't renew. (April 1 being the beginning of the Mathworks license year. I don't think they see the irony in choosing that date).

    The maintenance contract does not apply, AFAIK, to the student version.

    On top of THAT, the student version or the $1000 base retail installation just gets you the MATLAB core. Which, granted, is extremely powerful. But the Mathworks also has a couple dozen or so Toolboxes, each with a range of specialized functions and tools (e.g., Signal Processing, Image Processing, MATLAB-to-C Compiler, Symbolic Math, etc.). Each of these comes for an additional price, and its own maintenance fees. IIRC, these are like $500-$700 more each.

    Did I mention all these prices are for licenses on a per-seat basis? Any institution or company thinking about MATLAB is going to shell out serious bucks for the privilege.

    On the other hand...MATLAB is a serious, extensible, highly flexible platform for technical and mathematical computing. I find that I can prototype programs for solving scientific problems in MATLAB far faster than I can in any other language. And its visualization features are truly impressive...even if the Handle Graphics system it uses is SO DAMN KLUDGY to program. You can customize visualizations just about however you can imagine...ALTHOUGH, some simple customizations are going to be UNNECESSARILY tedious to program.

    Another drawback to programming in MATLAB is speed. MATLAB ("Matrix Laboratory") is exceptionally optimized for handling calculations on very large matrices. However, because it's interpreted, if you have any loops, it's going to be very slow going. There are often many tricks to "vectorize" operations you'd normally do iteratively in other languages, but often the only solution is the ol' for-next or while loop. These are slow. Very, very slow. Yes, there's a compiler, but in my experience the compiler isn't that great at optimizing code... and, did I mention it costs extra?

    Anyway, MATLAB is amazing in its breadth and depth of power. I haven't even touched on its capabilities for engineers, like the SimuLink system design simulator, and hardware interface toolboxes. I can't imagine a problem needing to use a "mix" of math packages (as the original poster asked) if you're using MATLAB. But the purchase and ownership costs are very steep.
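
    The loop-versus-vector gap described above is easy to see in any interpreted array language; a sketch in Numerical Python (timings illustrative, not measured):

        import numpy as np
        import time

        x = np.random.rand(1_000_000)

        # Interpreted loop: one trip through the interpreter per element.
        t0 = time.perf_counter()
        total = 0.0
        for v in x:
            total += v * v
        t_loop = time.perf_counter() - t0

        # Vectorized: one call, and the loop runs in compiled code.
        t0 = time.perf_counter()
        total_vec = np.dot(x, x)
        t_vec = time.perf_counter() - t0

        print(t_loop / t_vec)   # typically two or more orders of magnitude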
  • I have a related issue, in this regard. Some of the problems I am working on require arbitrary precision floating point numbers. E.g., one number might be 3.2334, but it needs to be multiplied by 3.4568902349830983945873908730987578439345, and I need all the resultant digits.

    The problem is that the output of one calculation is fed into the input stage of another, and that output becomes the input of the first calculation, in a circular style, so that small rounding changes may have a large effect on the final outcome.

    Now, at some points the precision may be truncated (where the effect will be unnoticeable in the equations), but at certain points I need the exact number.

    I have heard that with Lisp you can have numbers as large as you like, but I don't know how hard it is to perform complex numerical tasks in Lisp. Also, speed is an issue (I want it to be as fast as possible).

    Any suggestions as to how to accomplish this?
    • The Constructive reals package [hp.com] by Hans Boehm should work for you. It "sucks bits" from subexpressions as necessary to obtain the output precision that you ask for. Beware of questions that take forever -- with unlimited precision, certain (in)equality questions can never be answered.

      I wrote some code [theworld.com] that uses this package to test the quality of the Java Math.sin method, if you would like a starting example.
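
      If Lisp doesn't appeal, Python's standard decimal module handles the original example directly; a sketch (the feedback step here is hypothetical, standing in for the circular calculation described above):

          from decimal import Decimal, getcontext

          getcontext().prec = 60    # keep 60 significant digits throughout

          a = Decimal("3.2334")
          b = Decimal("3.4568902349830983945873908730987578439345")
          x = a * b                 # all resultant digits kept
          print(x)

          # Output feeds back into the next input; intermediates are rounded
          # only at the context precision (60 digits), and quantize() is
          # where you deliberately truncate.
          for _ in range(10):
              x = (x * b).quantize(Decimal("1e-40"))
          print(x)

      For truly exact feedback loops, fractions.Fraction avoids rounding entirely, at the cost of ever-growing numerators and denominators.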

  • Python (Score:2, Informative)

    by Anonymous Coward

    In the past I've used Matlab, C/C++, and a junkyard of Perl scripts to get things done.

    Nowadays I use exclusively Python, with underlying C and C++ components when performance is at a premium. C is easy to call from Python thanks to Swig.

    Python is simply unparalleled in its simplicity and elegance, and I find that I can accomplish most of the things that Matlab is good for from a Python interactive shell using Numeric and the other various scientific Python libraries.
  • by edhill ( 12418 )
    I'm a post-doctoral researcher at an engineering college and I use Linux for all of my data acquisition and analysis. The following environments are used:

    LabVIEW, Perl, shell scripts, and/or C for data acquisition

    C++, Matlab, and/or shell scripts for data analysis

    and you can get some of my codes from Sourceforge:

    http://sourceforge.net/projects/qaxa
    http://sourceforge.net/projects/ssnooper

    and others are available by sending me an email.

    Ed
    http://cesep.mines.edu/people/hill.htm
  • I have been using Matlab, Mathematica, Maple and Mathcad for different purposes at university and currently on the job. I feel Matlab is by far the best of the bunch simply because it allows for a more powerful simulation experience using Simulink. Here is what I feel about learning these packages vs. coding...

    1. Matlab - hard on newbies, but very powerful and elegant in the hands of intermediate and advanced users. Graphical simulations are possible through Simulink. Matrix computations are very fast!! and on average it takes the least amount of time to solve problems compared to Mathematica, Mathcad and Maple.

    2. Mathcad - the first mathematical software I used; very easy to learn, almost instantaneous learning!!! But that was when Mathcad was still in DOS mode. I saw the Windows version out now and it seems cumbersome to get around, but then I have not been using it regularly.

    3. Mathematica and Maple - broadly similar performance; Mathematica has a better interface and, with a large amount of tutorials and an extensive help system, is easier to learn than Maple.

    4. Scilab - a Matlab clone which is GNU (I think!!). Used it a couple of years back when it was not as good as Matlab graphically ... maybe it's better now ...

    Now, coding - yes, it has to be done from time to time, but I think due to these packages it has been relegated to the sidelines, except when extensive run times are involved and a significant performance gain can be derived from days of coding. I believe it may be easier given the tons of free libraries available, but it still takes longer to code in C/C++ than in a 4GL (fourth-generation language) like Matlab and Mathematica.

    Coding still can't beat the quick prototyping mode of these packages, i.e., you can do a lot of manipulations in the time you'd take to write and debug the code. It basically boils down to whether the result or the way of getting the result matters most!!!

    And then don't forget: sometimes those big-screen scientific calculators are faster for quick results than your fancy software and code :) obviously they may not stand the test of your research!!!
  • I usually write binaries (since I do genetic algorithms / pattern recognition experiments) and then play with my data in Matlab (do some PCA to get the dimensions down, draw it up nicely, etc.).

    I guess the issue is that the major suites have SO many tools that, once you are used to them, mesh well with your way of thinking/coding/problem solving. In that way you usually find one tool and stick with it.
  • I was taught Matlab in my computational physics graduate class, which biases me toward Matlab in my own research. I also own Mathematica, but have not taken the time to master its language and command structure. Mathematica was an award at a conference where I presented a paper, but I purchased Matlab for myself.

    There are two primary advantages which I see in Matlab. The first advantage to me is its abilities with matrices and arrays; it can do things in a couple of lines of code which can take some roundabout programming and subroutines in other more conventional languages.

    The second is Matlab's graphical abilities. Display of data is very important, both in the final product (thesis, paper) and in the research process itself. After a brief introduction to graphing in Matlab, it becomes a trivial task to choose and use various display options for your data.

    In physics, it seems that we stick with what works until something better is found. That applies to our theories and to our tools. It is not uncommon for us to use Fortran, Pascal, or even various types of Basic to perform simple calculations and experiments.

    Much of what one uses may be determined partially by chance--what software package was available at your institution, what professor did you study under, did your undergraduate degree require a programming course? The work involved in switching from one major package to another, for instance from Matlab to Mathematica, simply seems like too much effort for very little sure return.

    Jim Deane
  • ...Haskell. See here [unicaen.fr] for some advocacy.
  • MathForge (Score:2, Informative)

    by Anonymous Coward
    MathForge [mathforge.net] is a project designed to utilize "web services" to provide interfaces to networked and local math tools. The idea is that the MathForge architecture "discovers" math services depending on whatever task needs to be done.

    The base of the project is a Java environment on which programmers can build tools as needed.

    It is GPL'ed software.
  • ...for the job at hand.

    I'm a graduate student in Mathematics studying (convex) optimization problems so I see a healthy mix of pure and applied math. When I'm doing pure math the best tool for the job is a strongly symbolic math package like Maple (which I use extensively). Maple is also really good for quick visualization and helps gain insight and intuition into problems. Other offerings in this arena include Mathcad and Mathematica (however Mathcad actually uses a smaller version of Maple's symbolic engine).

    Similarly, if the task is more numeric, Matlab is the choice (actually, we use Octave, which is a GPL'd and free numeric package that has Matlab syntax; most code written for one runs in the other). I'd say Matlab/Octave are most useful for prototyping numeric algorithms and solving medium-sized numeric problems.
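
    As a small illustration (a sketch with invented data, not code from a real project), a power-iteration prototype for the dominant eigenpair runs unchanged in either system:

        A = rand(100); A = A + A';      % made-up symmetric test matrix
        v = randn(size(A,1), 1);        % random starting vector
        for k = 1:50                    % fixed iteration count, for simplicity
          w = A * v;
          v = w / norm(w);
        end
        lambda = v' * A * v;            % Rayleigh-quotient estimate of the eigenvalue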

    Finally, when a tool is needed that performs well at one specific task (or the problem size gets really large), you can't beat writing your own tools from scratch in the compiled language of your choice. At this point, there are a variety of libraries that one may find useful (for arbitrary precision arithmetic, expression parsing, symbolic manipulation, etc).

    So I guess the answer isn't white or black, but rather varying shades of grey (as is always the case).
  • Scilab (Score:5, Informative)

    by tie_guy_matt ( 176397 ) on Wednesday February 26, 2003 @04:33PM (#5389368)
    I don't think anyone has mentioned Scilab. It is a good GPL alternative (along with Octave) to the expensive (expensive if you are a college student) Matlab. It has been a while since I played with them a lot, but I found that Matlab had the best graphing functions.

    Anyway, the best package for you depends in part on what you are using it for. Matlab, Scilab, and Octave are great for doing linear algebra things -- manipulating matrices and arrays, etc. Some people complain about how slow Matlab is. I find Matlab is pretty fast as long as you use it for what it was designed for. You should use the built-in functions as much as possible and use as few loops as possible. If you find yourself using a lot of loops, try writing a MEX function in C or Fortran.
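
    For example, compare an explicit loop against the equivalent built-in (exact timings vary by machine and version, but the vectorized form is usually far faster):

        x = randn(1, 1e6);
        tic; s = 0; for i = 1:length(x), s = s + x(i)^2; end; toc   % explicit loop: slow
        tic; s = sum(x.^2); toc                                     % built-in, vectorized: fast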

    Maple and Mathematica are great for calculus, differential equations, etc. If you are doing a lot of matrix multiplies in Maple, you should be using Matlab.

    Mathcad is user-friendly, but it is SLOW. Even old people who have been doing insane integrals in their heads since the '50s and refuse to even look at a computer can see a Mathcad printout and tell exactly what the program is doing.

    Hope this helps. Personally I like to use Octave and Scilab since they are GPL. Scilab is prettier IMHO, but Octave is closer to Matlab (which I am already used to).
  • PARI (Score:2, Insightful)

    by jasuus ( 643881 )
    If you have lots of money, use Mathematica. If you are poor, use PARI.
  • Instead of C/C++ (Score:2, Interesting)

    by bauernakke ( 545050 )
    A little-known but very efficient (faster than Pascal, with garbage collection) and easy-to-program language is ML (Meta Language). It seems perfect for math computation.
  • I'm an experimental particle physicist, and most of the mathematics packages I use are home-brew libraries (mostly C++ with bits of older Fortran). For me, a multi-dimensional function minimizer is just about the most complex tool used, and it is used constantly -- tens to thousands of times per day. Unfortunately, it's not a trivial problem, and different minimizers will often produce different results. The only one I know of which is semi-free and basically functional is MINUIT, part of CERNLIB (in Fortran). I've recently been searching hard for an open-source minimizer to replace this, written in C/C++. I would also like to see one in Java, for a few reasons (I'd rather have an applet than a CGI script).

    The GNU Scientific Library has a very crude minimizer that's too simplistic for my needs (I want things like the curvature at the minimum, which can be inverted to give a coordinate covariance matrix). I most often use the minimizer to fit various functional forms to observed statistical distributions.
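
    To give a flavor of the kind of fit I mean, here's a made-up sketch using fminsearch, the Nelder-Mead simplex minimizer that ships with Matlab/Octave (it's crude in the same way: no Hessian, hence no covariance matrix at the minimum):

        xb = -4:0.25:4;                                       % bin centers (invented data)
        n  = 1000*exp(-xb.^2/2) + 20*randn(size(xb));         % fake histogram counts
        model = @(p, x) p(1)*exp(-(x - p(2)).^2/(2*p(3)^2));  % p = [amplitude, mean, sigma]
        ssq   = @(p) sum((n - model(p, xb)).^2);              % least-squares objective
        phat  = fminsearch(ssq, [900, 0, 1]);                 % fitted parameters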

    I am surprised at the lack of an up-to-date open-source minimizer, because so many university researchers use these kinds of tools, and they are in an environment where commercial solutions are painfully expensive and a source of division for any multi-university collaboration. A lot of physicists write good code prolifically, but far too few support or contribute to open-source projects!

    I was reading through Numerical Recipes recently, and was also taken aback by their licensing policies. The algorithms in the book are simple solutions which have been previously published by others in journals and such, and the code is just a direct adaptation (a translation, really) of each algorithm. Yet somehow their code, or any translation of it, is under copyright? I think it's foul play like this that is the reason there are so many high-quality commercial mathematics packages, and so few open-source ones.
  • Phew! (Score:5, Insightful)

    by jefu ( 53450 ) on Wednesday February 26, 2003 @05:24PM (#5389820) Homepage Journal
    This is not a small question.

    First a global kind of classification.....

    Octave, Matlab, and the like are mostly vector/array-oriented languages and are useful for problems suited to that style - you can experiment easily, then recode in C, Fortran, etc. if needed. APL and J are also in this group and should not be ignored - though they're used a bit less frequently.
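
    To illustrate the style with a made-up fragment (Matlab/Octave syntax): the full matrix of pairwise distances between a set of points falls out with no loops at all.

        x = rand(50, 1);                                 % invented 1-D point set
        D = abs(repmat(x, 1, 50) - repmat(x', 50, 1));   % all pairwise distances at once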

    Macsyma/Mathematica/Maple/Maxima/Derive are symbolic math languages; they can solve interestingly sized problems and give symbolic answers (that is, things like sqrt(pi/2)) as well as numeric approximations. This can be a very useful tool to have - depending on what I'm doing, I use such things a couple of times a week (nice to check results done by hand, or to handle all the crufty parts of a solution). Most will emit Fortran code, which can also be useful.

    VTK and OpenDX/Khoros(?) are visualization tools - most of the other packages have some visualization tools built in, but VTK and OpenDX both offer quite a bit more power.

    Now the incredibly non-specific recommendation

    My suggestion is to pick one of each of these and learn it - do enough in it so you know the language/system well. Otherwise you'll be struggling with the language as well as with the problem - and finding bugs will be close to impossible.

    If I put on my "computer science professor" hat (probably a wise thing if I'm to keep the top of my head from turning bright red with sunburn), I usually try to recommend that all CS students learn a smattering of these things as well. When you need one of these tools, knowing it's there and how to use it can save large and wonderful quantities of time.

    And now some more specific comments

    On the whole my choices would be as follows - note the caveats - some of them are pretty cave-rnous (sic). I don't have piles of money to spend, so I tend to prefer the open-source programs just on that basis.

    For array/matrix manipulation I much prefer APL or one of its derivatives (check out aplus on SourceForge). Languages in the APL family are also fun to program in once you learn how. However, the terseness of the syntax (and, with APL itself, the odd character set) tends to make these a bit forbidding, so a more popular choice would be Octave (open source) or Matlab. I've had good luck with Octave - it seems to handle most Matlab programs well enough. If you've got piles of money, go for Matlab.

    For symbolic math, Maxima (on SourceForge) is good. Its commercial cousin Macsyma has usually ranked as about the best symbolic math package for accuracy and power, and seems less expensive than the others. Actually writing programs in either of these requires learning quite a bit about the innards of the system, though. My second choice for symbolic math would be Mathematica - its programming language is well integrated with the system as a whole, and for general goodness and niceness of interface it can't be beat. (The other commercial products are building on the best parts of the Mathematica interface - I've not checked recently, but they're getting much better fast.) The visualization capabilities of Mathematica are also very good. Maple is probably the most popular, so using it will probably make it easier to find someone to help you, but on the whole I've just never found Maple as easy to program as Mathematica, and I tend to want to program almost everything.

    For visualization, both VTK and OpenDX are very nice systems. VTK is aimed more at a programming interface; OpenDX has a LabVIEW-ish kind of visual programming environment. I like both and have both at hand. Both of these systems are big enough that you'll want to make sure you understand them before you tackle a project with them.

    They don't scale well, but spreadsheets can be very convenient for small models. Careful though: it's easy to have errors, even in middling-sized models, that can be very hard to find.

    Odd Zen Endz

    As has been noted, there are other systems, some smaller, some more specifically focused on a single domain. Those tend to be harder to match to a problem - unless the problem is right in the center of the domain in question.

    There used to be a program called AXIOM which had a lot of nice features, but it seems to have gone to that Big Bit Bucket in the sky - however, its base language "Aldor" is now available at aldor.org. I have a copy, but haven't looked deeply at it.

    SourceForge is also hosting a new project, "lush" - a Lisp system that integrates some of these features. To the extent that I've used it I'm impressed, and I will probably spend some time working with it more deeply in the hope that it will prove another valuable tool.

  • by Goonie ( 8651 ) <robert.merkel@b[ ... g ['ena' in gap]> on Wednesday February 26, 2003 @05:53PM (#5390081) Homepage
    GNU Maxima is a free symbolic algebra package, roughly equivalent to Mathematica. It's not nearly as tidy, but I've found it handy on occasion.

    If you want to do some statistics, there's also R, a stats analysis package. It's very powerful, but it's designed for experts rather than non-statisticians who occasionally want to crunch some numbers.

  • by sstory ( 538486 ) on Wednesday February 26, 2003 @06:40PM (#5390554) Homepage
    A note about keeping important bits of paper: I have MathCAD Pro 2000 and an upgrade to MathCAD 2001, both of which set me back nicely (though I usually need Mathematica), and when I switched computers in December it was all useless, because I can't find the serial number for my MathCAD Pro 2000 disk.
  • recommendations (Score:3, Informative)

    by g4dget ( 579145 ) on Wednesday February 26, 2003 @11:19PM (#5392699)
    If you are looking for general purpose tools for numerical computing, Numerical Python and Scientific Python are excellent choices, possibly with VTK for visualization (search on Google; the projects are on Sourceforge). Octave is a Matlab clone, although, like Matlab itself, rather limited. The R system (www.r-project.org) is probably the best statistical system around and produces great plots. And for interactive symbolic math, Maxima is still very good (and you can use "texmacs" as a nicer interface to it).

    Don't underestimate the power of C++: with type checking and overloading, C++ may actually be more convenient for many numerical applications than even something like Numerical Python or Octave/Matlab.

    Beyond that, yes, obviously, all those libraries are in use by someone if they are maintained. If you have the need to use one of them, you will know. If not, don't worry about it.
