Honestly, a thorough performance comparison with other languages hasn't been completed. I've largely been basing performance estimates on the speed at which my own web applications execute (most pages are in the hundreds of transactions per second range) compared with what I've seen elsewhere.
I have started a more thorough performance test by converting the programs on the python vs java page (http://www.twistedmatrix.com/users/glyph/rant/python-vs-java.html) and the great language shootout page (http://www.bagley.org/~doug/shootout/). A full suite of performance tests will be posted on the projectmoto site in the next couple of days. In the meantime, if anyone wants to run a performance test from the shootout page in moto now, follow these steps:
1) grab http://projectmoto.org/perftests.tar.gz . These are conversions of 11 of the tests into moto.
2) fix the value of N in the tests for the other languages downloaded from the shootout page so the iteration counts match (the shootout tests have the number of iterations passed in)
3) compile and time the moto file with the following csh script:

#!/bin/csh -f
# Translate the moto page into C
set MOTONAME = "$1.moto"
set CNAME = "$1.c"
moto -c $MOTONAME > $CNAME
# Compile the generated C against the moto runtime libraries
gcc -I/usr/local/moto/include -I/usr/local/moto/mx/codex/util -I/usr/local/moto/mx/moto -D__MAIN__=main -DSHARED_MALLOC -O3 -o $1 $CNAME /usr/local/moto/mx/moto/libmx_moto.a /usr/local/moto/mx/codex/util/libmx_codex_util.a /usr/local/moto/lib/libcodex.a
# Run the binary, timing it and discarding its output
time ./$1 > out
Anyway, the performance picture for compiled moto pages currently looks like this:
Calls to functions and methods written in moto run roughly 100 times faster in compiled moto pages than calls to functions or methods in perl, python, or ruby, and about 20 times faster than method calls in Java. Part of this may be due to gcc inlining small methods, but this is one area where any compiled language is going to smoke an interpreted one. You can see this in the methcall, sieve, and fibo tests from the shootout page. Also, if gcc wants to optimize compiled moto code, more power to it. It's one of the reasons generating C is a good idea: C is virtually guaranteed to have more optimized compilers on more systems than any other language compiler or interpreter.
Array accesses in moto are 10 times faster than in python or perl, about 40 times faster than in ruby, and 2 times faster than in Java. The implementation of arrays in moto is still unoptimized, and functions are called behind the scenes on array access, leading me to believe I could get another 5-10x speed boost here once that gets optimized. The biggest reason arrays are so much faster in moto than in perl, python, or ruby is that these languages don't even try to offer typed, bounded arrays, so array access is a much more heavyweight operation. The counterpoint is that arrays in those languages are much more powerful. Moto, like Java, offers classes like Vector and Stack for those sorts of dynamic operations. Moto may eventually have language-level support for more dynamic kinds of arrays, but they will be differentiated by type from the simple C/Java-style arrays moto supports now. These results can be tested with the ary3 test from the shootout page.
Outputting "hello world" 100,000 times runs about twice as fast in compiled moto as in perl, and 4-5 times as fast as it does in ruby. This is at least partially due to output buffering, which moto does by default. It runs about 50 times faster than in java, but that's because System.out.println sucks.
Regular expression matching and substitution in moto today is 10 or more times slower than in Perl or languages that use the PCRE package. The implementation is unoptimized and incomplete. I hope to one day use the libtre package for regexes, giving moto the completeness and speed of these other languages in this regard. The implementation of regexes in moto is currently a modified version of Ville Laurikari's TNFA algorithms. This does mean I will never have a regex that takes an exponential amount of time to match. It also means I will not support back-references in regexes ... but I never liked them anyway.
Inserting integers into and reading them back from a hash (IntHashtable in moto) is 30% faster than in Java, 50% faster than in python or ruby, and 3 times faster than in perl. The difference with Java is likely because my Hashtable accesses aren't synchronized; this test should be redone with a HashMap.
Inserting objects into a Vector (adding dynamically onto an array in Perl, Python, and Ruby) is about the same speed as in Java, and 10-50% faster than in perl, python, and ruby. Behind the scenes, perl, python, and ruby are effectively calling the same sort of highly optimized C methods as moto and Java. I figured native method invocation in moto would take roughly the same amount of time as it does in the interpreted languages, but that wasn't exactly what was going on. Turns out the memory manager in moto isn't nearly as fast as it should be, and allocation of all the objects to put into the vector is what eats up the time. The objinst test from the shootout page (as well as any old profile of a compiled moto app) demonstrates this: object allocation in moto, java, perl, python, and ruby takes roughly the same amount of time, and the memory manager in moto is to blame. It is completely home grown. It acts on a memory-mapped file and uses splay trees for the free list. This is necessary in order to persist objects between page views on the 1.3.x versions of apache. Turns out this implementation is nowhere near as fast as system malloc. So much for long-bearded algorithms. The memory manager will be swapped out in the future for apache 2 support, where it's likely thread-safe versions of the system malloc will be used. This should be a great big speed boost to all parts of moto.
Matrix multiplication is 10 times faster than in perl, python, or ruby ... but who cares.
-Dave
BTW: Don't try these tests in the moto interpreter ... no one ever claimed that was fast :)