This may read like I'm a Julia fan-boy
... I guess I am.
I found out about Julia indirectly, through the
Machine Learning course on Coursera. The course actually used Octave, and the advice given there was "trust me, for machine learning, this syntax is better." Indeed, the basis for understanding many machine learning algorithms is vector and matrix operations. The innovation of Matlab, shared by Octave (essentially a GNU, open-source implementation of Matlab) and by Julia, is making vector-valued variables first class (e.g. M*X, or M^-1 where M is a matrix and X is a vector), which makes things succinct and clear -- btw, M^-1 denotes the
inverse of M: an O(n^3) algorithm in 4 characters?
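A minimal sketch of how this reads in Julia (the matrix values here are just made up for illustration):

```julia
# Vectors and matrices are first-class values in Julia.
M = [2.0 0.0; 1.0 3.0]   # a 2x2 matrix literal
x = [1.0, 2.0]           # a column vector

y = M * x                # matrix-vector product: [2.0, 7.0]
Minv = M^-1              # the inverse, same as inv(M)

# Solving M*z = y recovers x (up to floating-point error):
z = Minv * y
```

In practice `M \ y` is the idiomatic way to solve the linear system, since it avoids forming the inverse explicitly, but the point stands: the math-textbook notation is the code.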
Now yes, Python has NumPy, which comes close syntactically, but there are other comparisons where it is not quite so easy, and Julia has an advantage here: it's so new that devs are still tolerant of syntax changes -- for instance, the behavior of {} was changed between Julia 0.3 and 0.4. So if there's something new on the horizon that needs a re-org, Julia is better able to handle it.
The other thing, of course, that the Julia, Python, and R communities are all attempting is to figure out the best way to extract the optimizations available from LLVM, and owing to its close ties to LLVM and its ability to adapt as LLVM changes, Julia also has an advantage. As I've posted before, expect Julia to scale almost linearly on the Xeon Phi (Knights Landing and beyond) for HPC linear-algebra-oriented applications -- expect this by Julia 0.5.
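Those close ties to LLVM are visible right from the REPL: Julia can dump the LLVM IR it generates for any function specialized on concrete argument types. A small illustration (the function name here is just a made-up example):

```julia
# Julia JIT-compiles each method through LLVM; @code_llvm prints the
# optimized LLVM IR generated for a call with these argument types.
using InteractiveUtils   # provides @code_llvm (exported automatically in the REPL)

axpy(a, x, y) = a * x + y

@code_llvm axpy(2.0, 3.0, 1.0)   # dumps the LLVM IR for the Float64 specialization

# The compiled result is ordinary native code; calling it is just a call:
axpy(2.0, 3.0, 1.0)
```

Being able to inspect (and, for package authors, influence) the IR at this level is exactly the kind of leverage that makes aggressive LLVM-backed optimization plausible.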