The code has been released under an MIT license here: https://github.com/bbcmicrobit...
A story-so-far write up by one of the developers is here: http://ntoll.org/article/story...
> It must be a small challenge involving a relatively simple task.
I think an interesting point is that, for problems larger than small, well-defined tasks (i.e. any real-world project), the speed advantages or disadvantages of the language start to get swamped by the choice of algorithms.
I spent some time working through the small programming challenges at projecteuler.net. It is worth noting that the solutions users submit, across many languages, vary in execution time over several orders of magnitude, and my casual inspection suggested that the thing that correlated with the fastest execution was not choice of language, but choice of algorithm.
The programmers submitting solutions are among those who voluntarily spend their own time on such puzzles - so while they aren't the best programmers out there, they probably aren't the worst either.
My conclusion is that if you can afford to have a good programmer spend all day optimising a small bit of code, then yes, C is going to be fastest. But as soon as the problem gets larger, or development time shrinks to more normal proportions, then most half-decent programmers' choice of non-obviously sub-optimal algorithms is going to swamp that. A high-level language that supports discovery and implementation of the right algorithm, by giving the programmer less low-level detail and fewer lines of code to worry about, is going to claw back some ground.
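To make the point concrete, here's a toy sketch in Python (using the well-known "sum the multiples of 3 or 5 below n" puzzle as a stand-in; the function names are mine, not from any submitted solution). The two functions give identical answers, but one is O(n) and the other O(1) - a gap no choice of language can close:

```python
# Two algorithms for the same Project Euler-style task:
# sum all multiples of 3 or 5 below n.

def brute_force(n):
    # O(n): test every integer below n.
    return sum(i for i in range(n) if i % 3 == 0 or i % 5 == 0)

def closed_form(n):
    # O(1): inclusion-exclusion over arithmetic series.
    def multiples_sum(k):
        m = (n - 1) // k  # count of positive multiples of k below n
        return k * m * (m + 1) // 2
    return multiples_sum(3) + multiples_sum(5) - multiples_sum(15)

print(brute_force(1000), closed_form(1000))  # both print 233168
```

Rewriting `brute_force` in C speeds it up by a constant factor; switching to `closed_form` makes n essentially irrelevant.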
This matches my own experience of Python. People expect its performance to be terrible, and when we measure it in micro-benchmarks, it is terrible. Yet in real-world projects it holds up fine. So what's going on?
APL hackers do it in the quad.