Yeah, at some point, if you need to put code on something like an embedded processor or DSP, you just have to get down and dirty with C code and probably some assembly. But there's still some room in the middle for less pain with good performance.
Dynamic typing doesn't add any overhead when you can determine which specific method you need when generating code — which, in a dynamic language with a JIT, is very late, meaning that you can most of the time. Julia uses tons of small method definitions that call other small methods and so on, even for basic things like adding two integers, but the compiler is smart enough to compile addition into a single machine instruction. The notion that dynamic languages are slow because of their dynamism is very outdated in light of modern compiler techniques.
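A minimal sketch of what this looks like in practice (the `add` wrapper here is a hypothetical stand-in for the layers of small method definitions in Julia's Base):

```julia
# A tiny wrapper method, like the many small definitions Base is built from.
add(x, y) = x + y

# At this call site the argument types are concrete (Int64, Int64), so the
# JIT selects the specific method at code-generation time and inlines it;
# no dynamic dispatch survives to runtime.
add(1, 2)  # => 3

# You can confirm this yourself: @code_native add(1, 2) shows the whole
# call boiling down to essentially a single integer add instruction.
```

The same wrapper also works on floats, matrices, etc.; each concrete argument-type combination gets its own specialized, fully devirtualized compilation.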
The C/C++ benchmarks are intentionally written in C; the only reason it's a C++ file instead of a C file is so that we can use C++'s std::complex template in the Mandelbrot benchmark. Otherwise the whole thing would just be done in C. The clock_now function is only used to time other code, so its performance is irrelevant.
I didn't write the title of the interview article. It's definitely inaccurate, since C is already the C of scientific computing.
You can't do this just yet, but we're working on it. Should be possible in the near future. At some point further into the future, you'll be able to compile Julia code into a standalone executable.
As I mentioned in the interview, we're working on a static compiler, at which point you would even be able to use compiled Julia code in embedded systems. So you get a nice, productive, interactive development environment, then you invoke the static compiler and presto! you have a compiled binary.
There are a few points:
1. Julia is entirely dynamic, so there's no need to think about compile time versus run time, simplifying the mental model (but the performance is like that of compiled languages). It's as easy as Python or Matlab in that respect, but much faster.
2. There are just a few powerful language features (e.g. ubiquitous, fast multiple dispatch, supported with an expressive type system), rather than a lot of features that interact in complicated ways.
3. Good for general programming tasks: working with strings, calling external programs, and other things that are generally pretty awkward in R and Matlab (one of the reasons why NumPy is gaining popularity).
In general, the motivation (expressed in a previous Julia blog post) is to have something that's easy to use and learn, but fast and powerful (you *can* go deep if you want to), and designed expressly for numerical work, which means, among other things, that it has to be able to store large arrays of numeric values in-line and call libraries like LAPACK on them.
It is under active development.
Karpinski does, at the moment, have a beard. – Karpinski
It's difficult to fathom how the authors interpret the data on page 14 as *not* supporting the hypothesis that there is a male/female variance ratio of about 1.1. Figure 1A shows a bell-like curve that is clearly centered around 1.1. In Figure 1B, almost all of the points are below the 1:1 line, whereas if you plot a 1.1:1 line, it's a perfect fit for the data. In Figure 1C, the x value where the regression line intersects a zero gender gap (i.e. no evidence of cultural bias) is at a variance ratio of about 1.1. All of the evidence the authors present points to an underlying variance ratio near 1.1, yet somehow they conclude the opposite.
For a fairly mature project like Wikipedia (everybody knows about them, they have more PageRank than god, so ignorance is unlikely to be the reason behind most non-contributors), focusing on anomalies in your contributor statistics is a good way of identifying potential issues that might be standing in the way of your growth.