Use of Math Languages and Packages in Research? 454
CEHT asks: "As a research programmer at the university, I have encountered numerous times when I need to choose which language(s) or package(s) to use for different projects. Tradeoffs and performance issues have to be considered: results from one package may be more compatible with the data from other researchers, another package may find the solution faster and use fewer resources, and so forth. Maple,
Matlab, Magma, and Mathematica
are among the most well-known packages. Libraries such as IMSL are also popular. Of course, there are smaller (and mostly free) packages that tend to target specific types of problems, such as LiDIA, Singular, and LAPACK.
The question is, how useful are these [and other] math packages? Do researchers use only one or two packages for most of their projects? Or do people like to mix things a little by pulling the strength of different packages together to solve a math problem? If not, do researchers write C/C++ programs and use GMP or Matpack to solve math problems?"
Octave (Score:4, Insightful)
Re:Octave (Score:5, Insightful)
Re:Octave (Score:2, Informative)
Re:Octave (Score:3, Informative)
apt-get install octave, or if you really want to build it on your box, apt-build install octave.
Re:Octave (Score:5, Informative)
Re:Octave (Score:4, Insightful)
First, Redhat (I'm sure other distros as well) INCLUDE OCTAVE. That's right, you don't NEED to compile it because it comes with your distro. You can't get much easier to install than selecting the package at OS install time.
Second, Octave doesn't require a user to jump through 8 hoops to run it on your system. No FlexLM, no licence files, no keys, no giving Mathworks your hostID when you change computers, etc.
Finally, most people don't compile from source. And those who do are usually experienced enough to figure out how to get things compiled.
Re:Octave (Score:2)
This might be useful ... (Score:4, Informative)
http://www.gentoo.org/dyn/pkgs/app-sci/octave.xml [gentoo.org]
Re:Octave (Score:3, Informative)
Download the binary distro of Octave for Windows [sourceforge.net].
I just did, now I'm going to go play with it.
Re:Octave (Score:3, Informative)
Although Octave is great at mimicking Matlab 5.2, some neat features of Matlab 6.0 are not there yet.
Another drawback is... just like Matlab... it's Slow, with a capital S. I wouldn't recommend it for serious number crunching like encryption or gene data manipulation unless you have really good CPU power and godly patience.
Re:Octave (Score:3, Funny)
Three-dimensional arrays come to mind. Grrr.
Re:Octave (Score:5, Interesting)
Octave is a nice MATLAB clone, developed by chemical engineers in the beginning, but now used extensively in virtually any area where math is useful.
Many packages have their open source counterparts: Octave [octave.org] for MATLAB, R-system [r-project.org] for SPLUS (statistics algebra system), and so forth. But IMHO you raise another issue: you can use each of these packages to do whatever calculations you want, since all of them are extended in the C/Fortran end, i.e. they can use programs written in these languages. Custom code is readily integrated. And above all, the GNU Scientific Library [redhat.com]. If you don't like or you don't trust the numerical solvers integrated in MATLAB, you can investigate the source in the GSL.
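A quick illustration of that extend-in-C idea (a minimal sketch, assuming a Unix-like system where the C math library can be located at runtime; Python's ctypes stands in here for the C glue layer these packages expose):

```python
import ctypes
import ctypes.util

# Load the system C math library (assumes a Unix-like OS where find_library works)
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # calls straight into the compiled C routine
```

The same pattern is how Octave, R, and friends bolt trusted C/Fortran numerical cores under an interpreted front end.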
And yes, you can use all of these together. So, what is the question again?
Re:Octave (Score:4, Informative)
Re:Octave (useless bragging) (Score:3, Interesting)
I just got Octave & Gnuplot running on my Sharp Zaurus. I can do my DSP type calculations, anywhere!
Someone is currently porting gtktiemu, at which point I'll have a TI-89 emulator, which will let me handle just about any engineering math type stuff I need to do with one pocket-sized device.
Now if my fold-up keyboard would just show up.....
Depends (Score:5, Informative)
CHALKBOARD is great for addition and the other basic operations, but if you want to do symbolic algebra, Maple or MathCad are your best bets.
If you want to do some sort of signal processing and/or crazy matrix applications, then Matlab is probably the answer.
If you want to do something with statistics, Matlab or Minitab are the way to go.
Re:Depends (Score:3, Interesting)
MathCad? (Score:3, Insightful)
Re:MathCad? (Score:4, Insightful)
I want to learn how to use Matlab more effectively as it's (apparently) the most effective for physical modelling, but we don't get taught it over here (Mathcad, Maple and Mathematica are all these scuzzers will teach us); anyone know a good intro to it on the web?
-Mark
Re:MathCad? (Score:2, Informative)
We did, in no particular order: differential equations, groups, rings, RSA encryption, McLaurent series, and matrix manipulation.
But I can't compare it to any of the other programs because it's the only one I used.
Re:MathCad? (Score:3, Informative)
Er, you mean either "Maclaurin" series or "Laurent" series.
A Maclaurin series is the Taylor expansion of a function about 0.
A Laurent series is like a Taylor series, but with the range of exponents going from -infinity to infinity instead of 0 to infinity.
(Maple is capable of doing both of these.)
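A Maclaurin expansion is also easy to play with outside any CAS; a small Python sketch (illustrative helper name of my own) summing the series for exp(x) about 0:

```python
import math

def maclaurin_exp(x, terms=20):
    # Maclaurin series of exp: sum over n >= 0 of x**n / n!
    return sum(x**n / math.factorial(n) for n in range(terms))

# 20 terms already match math.exp to double precision near x = 1
print(maclaurin_exp(1.0), math.exp(1.0))
```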
Re:MathCad? (Score:3, Informative)
Right, so a McLaurent series is like a Taylor series, expanded about x=0, with exponents ranging from -infinity..+infinity.
Re:MathCad? (Score:2, Insightful)
Re:MathCad? (Score:5, Informative)
User friendly? Are you talking about the program that I use on a daily basis? Surely not. MathCAD is without a doubt the prettiest of all the options but it is among the worst in user interface.
For those of you who are not familiar with MathCAD, it works like this:
Anything and everything that you want to input into MathCAD lives in its own little box, be it a text box or an equation box.
The horrid part is trying to organize all these boxes on the page. Putting everything in a box means that it operates completely contrary to what most people are used to with MS Word. Say you enter some equations and then decide you want to add a few more in the middle. You can't just hit the up arrow and start typing, with maybe an Enter or two. Instead, you'll often have to select the later equations and drag them down to make room for the new ones. Then, if you have a lot of equations, you likely didn't move all of them down. So you have to select the equations that now overlap and select 'Separate Regions' from a menu. This gets to be very tedious.
Furthermore, is it too much to expect MathCAD to figure out that I don't want half of my equation on page one and the rest on page two? Why should I have to go and select 'Reimpaginate' from a menu before I print?
Entering equations is no joy either. I'm constantly frustrated when I try to do something as simple as add another term to an equation, like changing x^2 - 3 to x^2 + x - 3. I find myself starting over, and at times typing 1 + 1 - 1 and then replacing the ones. I mean, come on, I've seen many math typing solutions that are far better, in MathType and LyX for example.
Sure you might have a nice looking document but was it really worth the pain? Furthermore, I find MathCAD to be seriously lacking in function compared to Maple et al.
Of course, Maple et al. all have their problems with user interface. Why should I have to end with a semi-colon? And you have to realize that it's never going to look the way you want it to. So you have to suck it up and do the math without worrying about the beauty of the output.
Not to sell MathCAD short, there are some things that it does do well:
Units, the best unit management system I've had the joy to use. Very nice.
The output is beautiful.
Simple math that doesn't require big complicated equations and lots of loops.
Personally, I can do the easy math by hand. For more advanced stuff check out SciPy.org [scipy.org]. They provide a Python interface to established numerical algorithms in C and Fortran. But it's much quicker and 'funner' to use. Unfortunately they are only at alpha right now. But you can't beat the price, and for the most part I've found the optimization sections to be quite stable. Combine it with PyChart and you've got a good science package for free.
Otherwise, the only package that I've actually heard people rave about is Matlab.
useful (Score:2, Informative)
but that is only if you are doing a highly intensive amount of math
otherwise it takes longer to figure out the GUI and its usefulness than to figure out the problem on your own
matlab rules for matrix math
Heh ... (Score:5, Funny)
Fortran (Score:5, Interesting)
Re:Fortran (Score:2)
Black Box (Score:3, Interesting)
The other major factor is that nuclear physics is perpetually underfunded and buying commercial software is usually not necessary (since we would have to make sure it worked properly anyway).
BTW we do use "building block" type programs and libraries for our interfaces. A good example is SpecTCL at the National Superconducting Cyclotron Laboratory. I have used GTK/GDK in my applications; others have used Qt. However, the number-crunching and data-crunching parts are nearly all custom. The data processing is simply too complex and too specialized to trust to prepackaged software. The number-crunching applications are too time consuming to use a generalized program; everything has to be optimized.
age-old answer: it depends (Score:5, Interesting)
matlab for design prototypes of numerical algorithms and for visualizing data.
mathematica for doing messy algebra/calculus/differential equations.
my own c/c++ code, with a lapack backend, for doing large-scale computations (matlab and mathematica are too slow for big computations).
So, the answer is e) all of the above!
Good general packages? (Score:2)
Re:Good general packages? (Score:2)
Perl Data Language (Score:4, Interesting)
Re:Perl Data Language (Score:5, Funny)
Re:Perl Data Language (Score:2)
PDL is great for numerical work and data analysis. I use it to reduce image sequence data and to simulate the "small-scale" dynamo on the surface of the Sun (the domain of a typical simulation is about 30,000 km across).
The array slicing and indexing operators are the most powerful I've come across. There are several graphics output packages, including a Tcl-based interface, PGPLOT, PLPLOT, and OpenGL. I use mostly the PGPLOT interface because of its extreme device independence, but the others have advantages too. As sleepingsquirrel pointed out, you get all of the goodness of perl/CPAN too -- PDL is just a set of modules that you use in perl scripts, so you can readily use all the database-horking, XML-parsing, Morse-code-spewing CPAN modules that you've come to know and love.
PDL doesn't do analytical math parsing at all. It wasn't clear from the original question whether CEHT is looking for an analytical resolver or a numerical package.
It's a little bit of a pain to get PDL all installed right (you need to get several packages from several places), but hopefully the next release will mitigate that by including a "complete" package with most of the external libraries as well as the actual PDL module set.
[none] (Score:2, Insightful)
No! They use FORTRAN!
Surely it's still much better language for numeric stuff
Nothing to add here.. (Score:2)
But I would like to share my own method for solving some problems, mainly geometry problems. I work in 3D, Lightwave to be specific, and I've helped engineering solve some mathematical problems with it. For example, there was a question about how to build a sphere with each face being in the shape of a pentagon. They needed to know what the angles of some of the vertices were. While the engineers were busy pushing numbers around on paper, I built the model in Lightwave and used its tools to get the right measurements. Actually got it done before they got their equations done. That was kind of cool.
As I said, that doesn't really help that guy. I just thought solving a math problem like that using a 3D app was kind of interesting. New? No. Just interesting. Math is not my favorite subject but at least I've got tools today that prevent that from being a huge disadvantage to me.
Inexact floating point calculations... (Score:4, Interesting)
Do any of the listed tools/languages take care of this problem for me? I understand the nature of the problem, but it is still very frustrating. What do the "pure" math programming languages do with this issue?
--sex [slashdot.org]
Re:Inexact floating point calculations... (Score:2)
Put it this way - floating point accuracy is NOT an issue with these programs.
Trying to figure out how to get them to do the differentials, though, can be a bitch...
-Mark
Re:Inexact floating point calculations... (Score:2, Informative)
When people need exact answers they do exact math using a combination of symbolic computations and arbitrary sized integers and rationals.
ie. the answer you might get from doing an operation like sqrt(2) would be "sqrt(2)", doing an operation like area_of_circle(r=1) would be "Pi"
an operation like 2^128/3^45 would be some ratio of enormous integers expressed as x/y where x and y are the integers 2^128 and 3^45
does that help?
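The arbitrary-size integer/rational half of that answer can be demonstrated with Python's standard library (a minimal sketch; the symbolic half, keeping sqrt(2) as "sqrt(2)", needs a real CAS):

```python
from fractions import Fraction

# 2**128 / 3**45 stays an exact ratio of enormous integers, never a float
r = Fraction(2**128, 3**45)
assert r.numerator == 2**128 and r.denominator == 3**45  # gcd is 1, nothing cancels

# Exact rationals round-trip where binary floats cannot
third = Fraction(1, 3)
assert third + third + third == 1
```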
Re:Inexact floating point calculations... (Score:2, Informative)
Packages that do exact arithmetic (Maple, Mathematica) run at only a fraction of the speed of numerical packages, so big simulations (how high should this airplane engine be mounted?) are simply not done that way.
V.
Re:Inexact floating point calculations... (Score:2, Interesting)
That's true of all programming languages, not just some. Floating point accuracy is an inherent limitation of using a computer; you can partly work your way around it, but it never really goes away.
One way of getting around it is making your own representation of a floating point number rather than using a built-in type with a fixed number of bits (like using more than 32 bits). You can then keep adding "bits" to your representation until you reach the desired accuracy.
For example, I can make a list of booleans to represent a floating point number and make the list longer or shorter depending on how precise I want the number to be. Then I use my own addition, multiplication, etc. algorithms on this number to get my result. This makes the process slower, since you are basically rewriting the functionality of a chip to accommodate more bits than it was designed for, but it's possible.
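In practice you rarely need to hand-roll that bit list; arbitrary-precision arithmetic is a solved problem. For instance, Python's decimal module lets you dial in the working precision:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits

one_third = Decimal(1) / Decimal(3)
print(one_third)  # 0.333... carried out to 50 digits
```

As the parent notes, the software emulation is much slower than hardware floats, but the precision is whatever you ask for.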
As for mathematical tools dealing with the issue for you, I think you can specify the precision you would like, and it adjusts the answer accordingly. (At least I believe it is the case with Mathematica.)
Re:Inexact floating point calculations... (Score:3, Informative)
Some math packages and programming languages-- such as Common Lisp-- have bignums (infinitely long, perfect precision integers) and rationals, which are also infinitely long and perfect precision. So the value of (/ 1 3) is not 0.3333, it's 1/3.
Re:Inexact floating point calculations... (Score:4, Insightful)
It's been a while since my Senior Independent project, but it was on products like these. I did a lot of the symbolic maths on the packages to describe and document what they were doing. As part of that, I had to look into how they handled floating point errors. I don't remember the package I was specifically working with; and my copy of the SIP is at home.
With that as an introduction: for the "pure computational" packages, the problem you point out is real. Floating point errors, when ignored, will slowly creep further and further into the significant digits of the FP calculations. For a package to be even reasonable, it must be able to describe, in mathematical expressions and textual dialog, how it will manage FP errors to keep them in the least significant digits of the number. If you review a package, look and ask specifically how the package does that.
Some may ask why this is important; don't modern languages handle all this according to the FP specs? Well, basically the specs are not good enough for large computation tasks. When you start multiplying several matrices together, you end up doing so many FP operations that, without carefully written and mathematically backed code, the errors will practically zoom "to the right." This is compounded by the fact that not all chips comply with the specs in exactly the same way; most of these packages have a lot of conditional code to handle each chipset's specific particularities. Something else to look for: if a package claims one size fits all and doesn't talk about OS- and HW-specific builds, take extra care checking the FP issues out. It may be taking a worst-possible-case processing approach, which will work but at the expense of speed, or it may be taking a more "mean" processing approach, which may end up with different results on different OS and HW combinations.
Now, after all that, the reality is that most of these packages, at least if they have been around a while, have the mathematical grounding and programming "backgrounds" to handle FP operations pretty darn well. After all, this was a fairly well known and documented issue back in 1983 when I wrote my SIP.
The "symbolic" packages seem to side step this by first taking the equations and modifying them in "symbol" form before performing their calculations. Thus, the differential of X^2 is changed to 2x by the program before any FP operations happen. But this does not mean that FP operations do not occur. If your equations still deal with matrices, then a lot of FP operations will have to be done to come up with a numeric answer, no matter what.
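The error accumulation described above is easy to see in a few lines (a toy sketch; Python floats are IEEE doubles, so this mirrors what any FP-based package fights against):

```python
import math

vals = [0.1] * 1_000_000  # 0.1 is not exactly representable in binary FP

naive = 0.0
for v in vals:
    naive += v            # rounding error compounds on every single add

compensated = math.fsum(vals)  # error-compensated summation

print(naive)        # drifts measurably away from 100000.0
print(compensated)  # correctly rounded: exactly 100000.0 here
```

This is exactly the "carefully written and mathematically backed code" distinction: same inputs, same chip, but the compensated algorithm keeps the error out of the significant digits.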
a couple other packages (Score:3, Informative)
For our research... (Score:2, Insightful)
MATLAB is great for off-the-cuff research. I can open it up, and program image processing routines in 30 minutes or less. This would take hours in C/C++. Additionally, I can take the M-file and dump it from my computer onto a workstation running MATLAB and get some decent speed and batch processing done.
C/C++, however, gives you so much more control and execution speed, that often you either use the MATLAB --> C compiler, or end up writing a final routine in C directly. I believe for image processing, as an example, you can get over a 100x speed increase just by using the MATLAB --> C compiler.
Just my $0.02.
Python and Numeric (Score:2, Interesting)
I'm surprised you also haven't mentioned R. It's a stats
package (gpl'd) modled after S. http://www.r-project.org
and it is very powerful with a great community behind it. It's an amazingly powerful tool for analysis.
Re:Python and Numeric (Score:3, Informative)
What I use (probably different from your needs) (Score:2, Informative)
For my research in mechanical engineering (more specifically regarding tolerances), I use Matlab since it's what I'm most comfortable with. Maple is also used at my Uni, but I don't have much experience with it (other than it's symbolic kernel since it's available with Matlab) so I don't end up using it.
It mostly depends on what you're doing. Depending on your area of research, you may find that one of those is more popular because it solves these types of problem better. If speed is an issue for you, you can easily port your algorithm to a compiled language if you prototyped on an interpreter, even interfacing the two in some cases.
Other trade offs (Score:5, Interesting)
I am not able to articulate this well, but the type of research you are doing is MUCH more important of a consideration than computation speed or resource consumption. If you need supercomputer time, then you had better ask the admin what you need to use. I know a bunch of people that do environmental modelling, and I have never seen or heard of anybody writing their own C++ to do it. Researchers GENERALLY have better things to do than re-invent wheels.
MatLab, Mathematica (Score:5, Informative)
For most computer vision code, Matlab is a must for prototyping. It's useful in other areas, and, if you know how to use it, reasonably fast. If you're doing particularly involved matrix manipulations, it takes a lot of work to come up with C/C++ code that will run faster than well-written Matlab code.
Personally, I also use Mathematica for doing real math work. If I need to derive something that's particularly complex, then Mathematica's notebook style is really nice to work with, and it makes possible extremely clear and concise mathematical arguments while limiting stupid human errors when doing drudgery like taking derivatives and the like.
I hear Maple and MathCad are both good, too, but I've never used them.
Matlab is a jit compiler now (Score:3, Insightful)
Yes, indeed, the latest version (from less than half a year ago) was the first to include a just-in-time transparent compiler by default. Inner loops are so much faster than the old interpreted versions it's not funny.
However, a Matlab clone called MIDEVA had the same thing three years ago. Mathworks bought them out and incorporated their tech.
Re:MatLab, Mathematica (Score:2)
The new version of Matlab is a just-in-time compiler, so this is no longer entirely true.
That is true, but what was a big problem a few years ago is now medium-sized, as Moore's law marches on.
Fav packages (Score:2)
For a couple of reviews of Mathematica, see Applelust Scientia [applelust.com]
FORTRAN (Score:3, Informative)
Multiple tools? Never! (Score:2, Informative)
MATLAB. (Score:2)
Never had to use it for much besides 36-hour suicide class project marathons, but it was reliable and easy to work with.
Re: (Score:2)
Not just a weekend hack (Score:2)
Writing a robust, efficient, and accurate numerical analysis library is not something you do in a weekend. There's not much to improve upon with packages such as LAPACK and their kin: They've been proven to be accurate and reliable over years of use. There's really nothing to reinvent.
I've personally used LAPACK for digital terrain matrix (3D) processing of satellite stereo images and the mapping of spherical coordinates to and from various "flat-plane" projections generated by the IKONOS [spaceimaging.com] sats. There were simply no other viable alternatives to LAPACK in terms of speed and accuracy, and we certainly weren't arrogant enough to think we could write a better numerical analysis program.
GiNaC (Score:2)
Mathematics past (Score:3, Interesting)
Re:Mathematics past (Score:3, Insightful)
Re:Mathematics past (Score:2)
Johannes Kepler had this problem (trying to empirically verify his theories, I suppose) and tried to build a fairly complex mechanical calculator, but failed to finish it before he died. This was long before Babbage.
Re:Mathematics past (Score:2)
Back in the old days... (Score:3, Funny)
Have you ever tried to build a pyramid without any significant digits? You kids have it so easy these days.
S-plus / R? (Score:2)
Matlab, C, VB, local scripting (Score:5, Insightful)
The most commonly-used analytical platform is probably Excel (or some similar tool like Statistica), but the more serious researchers, who are also the more mathematically-aware, nearly all use Matlab in my experience.
When efficiency is an issue, nearly everyone I've worked with turns either to IDL (a Matlab competitor that has more arcane syntax, but much higher processing speed) or writes a C/C++ program by taking algorithms from "Numerical Recipes in C".
Recently, I've also seen a rising use of Visual Basic, especially to do experimental control (although some Matlab hooks do exist for such), and, of course, LabView. Some diehards use LabView for data analysis as well, but their results are suspect just because the tool is so poorly fitted to the task.
And, of course, many data collection hardware manufacturers (CED, National Instruments, TDT, etc.) supply scripting languages to control their hardware and perform rudimentary and sometimes not-so-rudimentary calculations.
The best researchers select the most appropriate tool for the job, but, again in my experience, it seems the selection is normally based on previous experience and inertia. Those who know a particular tool well (e.g., Excel, Matlab, SPSS, Mathematica) tend to keep using that tool, even if it is not well-suited. This means you get aberrations like Matlab programs that control real-time experiments and LabView programs that do higher-order mathematics.
Why?
Because the largest fraction of a scientist's time should be spent on data collection, not experimental implementation, and the amount of time (for nearly all fields except those with astronomical amounts of data) spent executing code is dwarfed by the time spent developing it. Clearly this breaks down for certain applications, but most of the science currently being done (read: molecular biology, and no, not bioinformatics) is not algorithm-bound.
Since data analysis is such a huge, broad field, I expect to see radically different answers from other posters!
Re:Matlab, C, VB, local scripting (Score:3, Interesting)
Most of the bioinformatics being done that I'm aware of is not algorithm-bound either.
People do tend to find a language and stick to it, though. Usually Perl. You get the occasional Python diehard as well, but my experience has been that while I'd far rather use Python for a large project, I'd rather use Perl for anything with significant amounts of text processing. There are times when weird kludges and shortcuts are actually a good thing. I know someone who programs in Lisp whenever possible. C is usually the last resort of people who think it'll be faster than Perl. Sometimes this is the case. Sometimes they simply can't program worth shit.
The real problem is that many bioinformaticists have no concept of software engineering. This applies on many levels. First, they can't write reusable, maintainable code. Second, they have no concept of algorithms or recursion. Third, they never get to the point where they can write software reflexively. The best code, in my experience, is the stuff that's pounded out in under an hour, but which has been thought about for days beforehand. I think everyone wanting to do bioinformatics should be forced to take an intermediate CS class before they're allowed to do research, rather than sitting down with an O'Reilly book and starting to write code. They'll waste less of their time and everyone else's this way.
Frankly, however, two-thirds of the time of any bioinformaticist is spent interpreting and reformatting the crap data that biologists give us.
Maxima! (Score:4, Informative)
Re:Absolutely, Maxima is very, very useful (Score:4, Informative)
A problem I've struggled with ... (Score:3, Insightful)
Are there any viable open-source solutions to either Mathematica or IDL?
Re:A problem I've struggled with ... (Score:2)
(1) Mathematica is not so hot for processing data sets. Since it works so symbolically, you can try some transformation and end up waiting forever for an endless symbolic expression to appear on the screen, all because of a typo. This can be mitigated using $PrePrint=Short[#,7]&, but that has its own problems.
(2) Anything using Numerical Recipes is immediately suspect (not to mention the license issues). There used to be a document at JPL about this. For a critique, see this compilation [colorado.edu].
Re:A problem I've struggled with ... (Score:3, Informative)
It's possible to get work done with IDL (zillions of scientists use it), but it's a tragic waste of brainspace to keep all the extra exceptions and pitfalls in mind. Writing robust code in IDL is like kicking a whale carcass across a mined Afghani battlefield.
Oh, and the license to use it costs about as much as your workstation. I'll take PDL, NumPy, or Matlab any day over IDL.
Which package... (Score:3, Informative)
I can use Mathematica for almost all of my dabbling. Sometimes I play with MuPAD, R, GnuPLOT, Octave or Mathematica to show a particular problem. Since these are also free (beer or speech, depending on package) I can be reasonably sure that everyone can get a hold of it.
For example, Octave is suitable for matrix manipulation. It does everything that I need it to do and can replace Mathematica for me. It's also fast enough (the longest calculation has taken just over a minute but it was a huge manipulation of some graphic data).
I've dabbled with some of the libraries but only for fun.
I guess what it comes down to is how comfortable you are with the package. By the time I try to write something in C using a dedicated library, I can most likely do the same thing in Mathematica in a tenth of the time. Even if the execution speed were 100 times slower, the difference in "real" time may not amount to much.
back in the day... (Score:2)
depends, or, if you have to ask slashdot... (Score:5, Interesting)
You're talking about two different classes of software: "numerical linear algebra packages" and "computer algebra systems". Maple and Mathematica are the latter, Matlab is the former. I don't know about Magma.
Hardcore numerical programmers use LINPACK/LAPACK with platform-optimized BLAS (this latter is often commercial, or at least proprietary to the platform vendor) directly from Fortran. They usually use modern commercial Fortran 90 or Fortran 95 compilers, too.
On numerical linear algebra stuff where you aren't going to recruit and pay a Fortran programmer with a PhD in applied mathematics, most sane people use Matlab or GNU Octave or one of the many other Matlab clones. A lot of people like Numerical Python; if I had a big new project to do, I'd seriously consider it.
Yes, crazy "researchers" who don't want to learn Fortran and think Matlab is too slow or too expensive will write numerical code in C++. Some of them do fine work, too.
Excel and other spreadsheets are fine for small bits of numerical analysis, too. Don't turn up your nose at 'em, you can email your boss your whole analysis and he doesn't have to learn Matlab to do anything with it. Excel is also slowly replacing Qbasic as the computing lingua franca of the Amateur Radio/hobbyist-electronics community.
The class of people who just doodle out the singular integral equations for the airfoil design they're brainstorming seem to like Mathematica a lot. I wish I were more like that. Maxima is seeing a renaissance now that its licensing and distribution issues are cleared up (it's GPL now). I should check it out. There's also GNU (Emacs) Calc, which I use regularly as an RPN desktop calculator. It is actually much more powerful than that and will do all kinds of HP-calculator-style graphing and computer algebra with a liberal sprinkling of Mathematica-style syntax, but I don't use those features much, because they're wicked slow.
Re: (Score:2, Insightful)
Remember (Score:2)
the healthy open source alternative. (tm)
Might not have all the features but looks pretty decent.
MATLAB: Costly but extemely effective (Score:5, Informative)
MATLAB offers student versions for about $99 a pop, which is dirt cheap considering the $1000 price tag for the retail version. Many universities of course have dramatic discounts, but then, you have to be affiliated with a university. Even the student version requires you to attest that you're using it for course work or student-level research and not commercial gain.
MATLAB has a number of drawbacks. Price is the largest. To enforce its license, MATLAB requires you to run the onerous and clumsy FlexLM license manager. FlexLM is brought to you by GLOBEtrotter....a division of that bastion of consumer rights, Macrovision. That should speak volumes. The license manager makes doing a lot of simple things stupidly difficult, especially if you're (like me) mobile and have to authenticate with a central server running the license manager. I can get into details if people have questions.
On top of that, MATLAB requires a yearly "maintenance" fee. It's more or less software as a service. Apparently, if you let the maintenance contract lapse, you can still use MATLAB, but you get no more support and cannot apply any new updates. That may be, but the particular license my university employs will cause my copy to simply stop working after April 1 if I don't renew. (April 1 being the beginning of the Mathworks license year. I don't think they see the irony in choosing that date).
The maintenance contract does not apply, AFAIK, to the student version.
On top of THAT, the student version or the $1000 base retail installation just gets you the MATLAB core. Which, granted, is extremely powerful. But the Mathworks also has a couple dozen or so Toolboxes, each with a range of specialized functions and tools (e.g. Signal Processing, Image Processing, MATLAB-to-C Compiler, Symbolic Math, etc.). Each of these comes at an additional price, with its own maintenance fees. IIRC, these are like $500-$700 more each.
Did I mention all these prices are for licenses on a per-seat basis? Any institution or company thinking about MATLAB is going to shell out serious bucks for the privilege.
On the other hand...MATLAB is a serious, extensible, highly flexible platform for technical and mathematical computing. I find that I can prototype programs for solving scientific problems in MATLAB far faster than I can in any other language. And its visualization features are truly impressive...even if the Handle Graphics system it uses is SO DAMN KLUDGY to program. You can customize visualizations just about however you can imagine...ALTHOUGH, some simple customizations are going to be UNNECESSARILY tedious to program.
Another drawback to programming in MATLAB is speed. MATLAB ("Matrix Laboratory") is exceptionally optimized for handling calculations of very large matrices. However, because it's interpreted, if you have any loops, it's going to be very slow going. There are often many tricks to "vectorize" operations you'd normally do iteratively in other languages, but often the only solution is the ol' for-next or while loop. These are slow. Very very slow. Yes, there's a compiler, but in my experience the compiler isn't that great at optimizing code...and, did I mention it costs extra?
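The "vectorize your loops" advice generalizes beyond MATLAB. As a loose analogy in plain Python (no libraries assumed; pure-Python built-ins aren't true vectorized array ops, but the idiom of pushing the per-element loop out of interpreted code is the same):

```python
# Sum of squares two ways. The explicit loop pays interpreter
# overhead on every iteration; the second form pushes the work
# into a single built-in call, which is the same trade-off the
# parent post describes for MATLAB for-loops vs. vectorized code.

def sum_squares_loop(xs):
    total = 0.0
    for x in xs:          # per-iteration interpreted overhead
        total += x * x
    return total

def sum_squares_builtin(xs):
    # One call to sum(); the loop body runs inside the runtime.
    return sum(x * x for x in xs)

data = list(range(1, 101))
assert sum_squares_loop(data) == sum_squares_builtin(data) == 338350
```

In MATLAB the payoff is far larger, since `sum(x.^2)` dispatches to optimized compiled matrix code rather than the interpreter.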
Anyway, MATLAB is amazing in its breadth and depth of power. I haven't even touched on its capabilities for engineers, like the SimuLink system design simulator, and hardware interface toolboxes. I can't imagine a problem needing to use a "mix" of math packages (as the original poster asked) if you're using MATLAB. But the purchase and ownership costs are very steep.
Arbitrary Precision Floating Point? (Score:2, Interesting)
The problem is that the output of one calculation is fed into the input stage of another, that output being the input of the first calculation, in a circular style, so that small rounding changes may have a large effect on the final outcome.
Now, at some points, the precision may be truncated (where the effect will be unnoticeable to the equations), but at certain points I need the exact number.
I have heard that with Lisp you can have numbers as large as you like, but I don't know how hard it is to perform complex numerical tasks in Lisp. Also, speed is an issue (I want it to be as fast as possible).
Any suggestions as to how to accomplish this?
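For what it's worth, you don't need Lisp for this; Python's standard library has both exact rationals and decimals with user-settable precision. A minimal sketch (no performance claims, just the semantics):

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# Exact rational arithmetic: no rounding at all, so feeding one
# stage's output back into the next stage loses nothing.
x = Fraction(1, 3)
for _ in range(10):
    x = x * 3 / 3           # a round-trip that would drift in floats
assert x == Fraction(1, 3)  # still exact after the feedback loop

# Decimal lets you pick the working precision, so you can truncate
# only at the points where you decide the error is tolerable.
getcontext().prec = 50      # 50 significant digits
d = Decimal(1) / Decimal(7)
print(d)                    # 0.142857... carried to 50 digits
```

Fractions are exact but their numerators/denominators grow without bound, so for long-running feedback calculations the usual compromise is high-precision Decimal with deliberate truncation points, exactly as described above.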
Re:Arbitrary Precision Floating Point? (Score:2, Informative)
I wrote some code [theworld.com] that uses this package to test the quality of the Java Math.sin method, if you would like a starting example.
Python (Score:2, Informative)
In the past I've used Matlab, C/C++, and a junkyard of Perl scripts to get things done.
Nowadays I use exclusively Python, with underlying C and C++ components when performance is at a premium. C is easy to call from Python thanks to Swig.
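SWIG isn't the only route, either. As a minimal sketch (assuming a Unix system where the C math library can be located), the standard ctypes module calls into C with no generated wrapper code at all:

```python
import ctypes
import ctypes.util

# Locate and load the C math library; the name varies by platform,
# hence find_library rather than a hard-coded filename.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# ctypes assumes int arguments and returns unless told otherwise,
# so declare the C prototype explicitly: double sqrt(double).
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

assert libm.sqrt(9.0) == 3.0
```

ctypes trades SWIG's generated wrappers for hand-written prototype declarations; for wrapping a handful of performance-critical C functions that's often the simpler option.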
Python is simply unparalleled in its simplicity and elegance, and I find that I can accomplish most of the things that Matlab is good for from a Python interactive shell using Numeric and the other various scientific Python libraries.
here's one data point... (Score:2, Interesting)
LabVIEW, PERL, shell scripts, and/or C for data acquisition
C++, MatLAB, and/or shell scripts for data analysis
and you can get some of my codes from Sourceforge:
http://sourceforge.net/projects/qaxa
http://sourceforge.net/projects/ssnooper
and others are available by sending me an email.
Ed
http://cesep.mines.edu/people/hill.htm
learning softwares vs. coding .. (Score:2, Insightful)
1. matlab - hard on newbies but very powerful and elegant in the hands of intermediate and advanced users. Graphical simulations possible through simulink. Matrix computations are very fast!! and on average it takes the least amount of time to solve the problems compared to mathematica, mathcad and maple.
2. mathcad - the first mathematical software I used; very easy to learn, almost instantaneous!!! But that was when mathcad was still in DOS mode. I saw the Windows version out now and it seems cumbersome to get around, but then I have not been using it regularly.
3. mathematica and maple - almost similar performance, mathematica has better interface and with a large amount of tutorials and extensive help system is easier to learn than maple.
4. scilab - a matlab clone which is GNU (I think!!); used it a couple of years back when it was not as good as matlab graphically
Now, coding - yes it has to be done from time to time, but I think due to these softwares it has been relegated to the sidelines, except when extensive run times are involved and a significant performance gain can be derived from days of coding. I believe it may be easier given the tons of free libraries available, but it still takes longer to code in C/C++ than in a 4GL (fourth-generation language) like matlab and mathematica.
Coding still can't beat the quick prototyping mode of these softwares, i.e. you can do a lot of manipulations in the time you take to write and debug the code. It basically boils down to whether the result or the way of getting the result matters most!!!
and then don't forget, sometimes those big-screen scientific calculators are faster for getting quick results than your fancy softwares and codes
Matlab and C/C++ (Score:2)
I guess the issue is that the major suites have SO many tools that, once you are used to them, mesh well with your way of thinking/coding/problem solving. In that way you usually find one tool and stick with it.
Matlab primary; old languages also used (Score:2, Interesting)
There are two primary advantages which I see in Matlab. The first advantage to me is its abilities with matrices and arrays; it can do things in a couple of lines of code which can take some roundabout programming and subroutines in other more conventional languages.
The second is Matlab's graphical abilities. Display of data is very important, both in the final product (thesis, paper) and in the research process itself. After a brief introduction to graphing in Matlab, it becomes a trivial task to choose and use various display options for your data.
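To make the first point concrete, here's a hedged sketch (plain Python, no libraries) of what a single `C = A*B` in Matlab expands to in a conventional language, the kind of "roundabout programming" described above:

```python
def matmul(A, B):
    """Naive matrix product: the explicit loops that Matlab's
    one-line expression C = A*B hides from the user."""
    n, k, m = len(A), len(B), len(B[0])
    assert all(len(row) == k for row in A), "inner dimensions must agree"
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
    return C

A = [[1.0, 2.0],
     [3.0, 4.0]]
I = [[1.0, 0.0],
     [0.0, 1.0]]
assert matmul(A, I) == A  # multiplying by the identity is a no-op
```

And Matlab's version isn't just shorter; its built-in product dispatches to optimized compiled linear-algebra routines rather than interpreted loops.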
In physics, it seems that we stick with what works until something better is found. That applies to our theories and to our tools. It is not uncommon for us to use Fortran, Pascal, or even various types of Basic to perform simple calculations and experiments.
Much of what one uses may be determined partially by chance--what software package was available at your institution, what professor did you study under, did your undergraduate degree require a programming course? The work involved in switching from one major package to another, for instance from Matlab to Mathematica, simply seems like too much effort for very little sure return.
Jim Deane
I'm surprised more mathematicians don't use... (Score:2)
MathForge (Score:2, Informative)
The base of the project is a Java environment on which programmers can build tools as needed.
It is GPL'ed software.
Use the right tool... (Score:2, Insightful)
I'm a graduate student in Mathematics studying (convex) optimization problems so I see a healthy mix of pure and applied math. When I'm doing pure math the best tool for the job is a strongly symbolic math package like Maple (which I use extensively). Maple is also really good for quick visualization and helps gain insight and intuition into problems. Other offerings in this arena include Mathcad and Mathematica (however Mathcad actually uses a smaller version of Maple's symbolic engine).
Similarly, if the task is more numeric, Matlab is the choice (actually, we use Octave, which is a GPL'd and free numeric package that has Matlab syntax; most code written for one runs in the other). I'd say Matlab/Octave are most useful for prototyping numeric algorithms, and solving medium sized numeric problems.
Finally, when a tool is needed that performs well at one specific task (or the problem size gets really large), you can't beat writing your own tools from scratch in the compiled language of your choice. At this point, there are a variety of libraries that one may find useful (for arbitrary precision arithmetic, expression parsing, symbolic manipulation, etc).
So I guess the answer isn't white or black, but rather varying shades of grey (as is always the case).
Scilab (Score:5, Informative)
Anyway the best package for you in part depends on what you are using it for. Matlab, scilab and octave are great for doing linear algebra things -- manipulating matrices and arrays etc. Some people complain about how slow matlab is. I find matlab is pretty fast as long as you use it for what it was designed for. You should use their built in functions as much as possible and use as few loops as possible. If you find yourself using a lot of loops try writing a mex function in C or FORTRAN.
Maple and Mathematica are great for calculus, differential equations, etc. If you are doing a lot of matrix multiplies in Maple, you should be using matlab.
Mathcad is user friendly but it is SLOW. Even old people who have been doing insane integrals in their heads since the 50's and refuse to even look at a computer can see a Mathcad print out and tell exactly what the program is doing.
Hope this helps. Personally I like to use Octave and Scilab since they are GPL. Scilab is prettier IMHO but Octave is closer to Matlab (which I am already used to.)
PARI (Score:2, Insightful)
Instead of C/C++ (Score:2, Interesting)
Too few open-source solutions and minimizers (Score:2, Informative)
The GNU scientific library has a very crude minimizer that's too simplistic for my needs (I want things like the curvature at minimum which can be inverted to give a coordinate covariance matrix). I most often use the minimizer to fit various functional forms to observed statistical distributions.
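For the 1-D case, the curvature-at-minimum idea can be sketched in a few lines of plain Python (a toy, not a replacement for a real minimizer; the finite-difference step sizes are assumptions):

```python
def minimize_newton(f, x, h=1e-5, tol=1e-10, max_iter=100):
    """Toy 1-D Newton minimizer using central finite differences.
    Returns (x_min, curvature); in higher dimensions the inverse of
    the curvature (Hessian) matrix at the minimum is the coordinate
    covariance estimate described in the parent post."""
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2 * h)          # first derivative
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # curvature
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2      # curvature at minimum
    return x, d2

# Sanity check on a parabola: minimum at x = 3, curvature 2*a = 4.
xmin, curv = minimize_newton(lambda x: 2 * (x - 3) ** 2 + 1, x=0.0)
assert abs(xmin - 3) < 1e-6 and abs(curv - 4) < 1e-2
```

A production fitter needs line searches, safeguards against non-positive curvature, and a full Hessian in n dimensions, which is exactly the gap in open-source coverage being complained about.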
I am surprised at the lack of an up-to-date open-source minimizer, because so many university researchers use these kinds of tools, and are in an environment where commercial solutions are painfully expensive and a schism for any multi-university collaboration. A lot of physicists write good code prolifically, but far too few support/contribute to open-source projects!
I was reading through Numerical Recipes recently, and was also taken aback by their licensing policies. The algorithms in the book are simple solutions which have been previously published by others in journals and such. And the code is just a direct adaptation (translation really) of the algorithm. Yet somehow their code, or any translations of it, are under copyright? I think it's foul play like this that is the reason there are so many high-quality commercial mathematics packages, and so few open-source ones.
Phew! (Score:5, Insightful)
First a global kind of classification.....
octave/matlab... are mostly vector/array oriented languages and are useful for doing work in problems that are suited for such - you can experiment easily, then recode in C, Fortran... if needed. APL and J are also in this group and should not be ignored - though they're used a bit less frequently.
Macsyma/Mathematica/Maple/Maxima/Derive are symbolic math languages and can solve interestingly sized problems and get symbolic answers (that is, things like sqrt(pi/2)) as well as numeric approximations. This can be a very useful tool to have - depending on what I'm doing I use such things a couple times a week (nice to check results done by hand, or to handle all the crufty parts of a solution). Most will emit Fortran code, which can also be useful.
vtk, opendx/khoros(?) are visualization tools - most of the other packages have some visualization tools packaged in them, but vtk and opendx both offer quite a bit more power.
Now the incredibly non-specific recommendation
My suggestion is to pick one of each of these and learn it - do enough in it so you know the language/system well. Otherwise you'll be struggling with the language as well as with the problem - and finding bugs will be close to impossible.
If I put on my "computer science professor" hat (probably a wise thing if I'm to keep the top of my head from turning bright red with sunburn), I usually try to recommend that all CS students learn a smattering of these things as well. When you need one of these tools, knowing it's there and how to use it can save large and wonderful quantities of time.
And now some more specific comments
On the whole my choices would be as follows - note the caveats - some of them are pretty cave-rnous (sic). I don't have piles of money to spend, so tend to prefer the open source programs just on that basis.
For array/matrix manipulation I much prefer APL or one of its derivatives (check out aplus on sourceforge). Languages in the APL family are also fun to program once you learn how. However the terseness of the syntax (and with APL itself the odd character set) tends to make these a bit forbidding, so a more popular choice would be octave (open source) or matlab. I've had good luck with octave - it seems to handle most matlab programs well enough. If you've got piles of money, go for matlab.
For symbolic math, maxima (sourceforge) is good. Its commercial cousin Macsyma has usually ranked as about the best symbolic math package for accuracy and power and seems less expensive than the others. Actually writing programs in either of these requires learning quite a bit about the innards of the system though. My second choice for symbolic math would be Mathematica - its programming language is well integrated with the system as a whole, and for general goodness and niceness of the interface it can't be beat. (The other commercial products are building on the best parts of the Mathematica interface - I've not checked recently, but they're getting much better fast.) The visualization capabilities of Mathematica are also very good. Maple is probably the most popular, so using it will probably make it easier to find someone to help you, but on the whole I've just never found Maple as easy to program as Mathematica, and I tend to want to program almost everything.
For visualization both vtk and opendx are very nice systems. vtk is more aimed at a programming interface, opendx has a labviewish kind of programming environment. I like both and have both at hand. Both these systems are big enough that you'll want to make sure you understand them before you tackle a project with them.
They don't scale well, but spreadsheets can be very convenient for small models. Careful though, it's easy to have errors even in middling-sized models that can be very hard to find.
Odd Zen Endz
As has been noted there are other systems, some smaller, some more specifically focussed on a single domain. Those tend to be harder to match to a problem - unless the problem is right in the center of the domain in question.
There used to be a program AXIOM which had a lot of nice features, but it seems to have gone to that Big Bit Bucket in the sky - but its base language "Aldor" is now available at aldor.org. I have a copy, but haven't looked deeply at it.
Sourceforge is also hosting a new project "lush" - which is a lisp system that has some integration of some of these features. To the extent that I've used it I'm impressed and will probably spend some time working deeper with it in the hopes that it will prove another valuable tool.
A couple of suggestions (Score:3, Informative)
If you want to do some statistics, there's also R, a stats analysis package. It's very powerful, but it's designed for experts rather than non-statisticians who occasionally want to crunch some numbers.
Be Careful Though (Score:4, Funny)
recommendations (Score:3, Informative)
Don't underestimate the power of C++: with type checking and overloading, C++ may actually be more convenient to use for many numerical applications than even something like Numerical Python or Octave/Matlab.
Beyond that, yes, obviously, all those libraries are in use by someone if they are maintained. If you have the need to use one of them, you will know. If not, don't worry about it.
Re:Maple sucks.... (Score:2, Insightful)
Re:NAG (Score:2)
If they're going to wander like that, they should check return values, or at least the sum of components, for NaN.
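The NaN check is cheap to do defensively; a minimal Python sketch of the idea (the optimizer itself is out of scope here, this is just the guard):

```python
import math

def objective_is_finite(value):
    """Guard an objective-function evaluation: NaN and inf propagate
    silently through arithmetic, so check explicitly rather than
    letting an optimizer wander on a garbage value."""
    return math.isfinite(value)

bad = float("nan")
assert bad != bad                  # NaN is the only value unequal to itself
assert not objective_is_finite(bad)
assert objective_is_finite(1.5)

# Checking the sum of components works too: any NaN poisons the sum.
assert math.isnan(sum([1.0, bad, 2.0]))
```

In C or Fortran codes of that era the equivalent is the `x != x` test (or `isnan()` where available), applied to each return value before accepting a step.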
Another case in point -- just last week we fed the same 5 dimensional nonlinear problem to both the MS Excel optimizer (MS calls it the "Solver" and it has historically been famously bad bad bad), and to every one of the NAG optimizers, even slow ones like simplex. Result: Excel beat them all. I was shocked.
In NAG's favor, we have another optimization problem we ask it to do in about 400 dimensions. Granted, that objective function is closer to quadratic, but even so I was impressed with NAG's performance -- always finding the unique minimum, even when it had to run for days.
Oh, and BTW, if anyone ever tells you computers are "fast enough" now, they're not.
Maxima can actually be *more* powerful than Maple (Score:2, Informative)
Maxima has some bugs, some annoyances, but at least, you can report them on sourceforge and they do get fixed. If not, someone will suggest a work-around.
Re:Maxima (Score:3, Informative)
We are trying to get the old site to direct people to the new site. Since the old site is not under our direct control, it isn't as easy as one would hope. (I am the Maxima project leader.)