Comment when search returns models (Score 2)
For example, Boost is really sweet when you need to slam together a pile of code and have it working out of the gate with minimal fuss, but if performance is an issue, you can't use it.
Wow, that's just bizarre. I don't know where you get your misinformation, but it's an elite grade of batshit.
The whole point of Boost is that it maintains a certain amount of abstraction without boxing you into a performance corner. Were it not for those conflicting goals, the devilishness of its internal machinery could not be justified.
Template metaprogramming essentially involves expressions converting themselves into a symbolic representation that does not resolve into a concrete expression (the transformation is purely functional, operating at a quasi-syntactic level) until some final result is demanded, at which point the highest-performance code path can be selected based on the actual parameters (more specifically, often exploiting which parameters vary and which are constant or nearly constant).
The problem with Boost is similar to what Knuth said about the problem with literate programming.
Literate programming demands a high proficiency with two different skills: formal reasoning and verbal expression. This shrinks the available pool of adherents and adopters. And worse, there's a terrible opportunity cost, because the people out there who have extremely high proficiency in both of these skills are in extremely high demand to take on central roles in large projects where they don't spend their hours bent over literate code.
The kind of environment where Boost can be best exploited for both its abstraction and its performance is going to be wonk-filled boiler-rooms at high-frequency trading companies where the cash, the talent, the commitment, and the project duration mesh together. Importantly, the project specification in these environments is often in continuous, long-term evolution as your firm chases whatever edge it thinks it might have in a chaotic, rapidly-shifting market environment. The month you spend poring over low-level optimization gets deployed for a whole week. The month you spend automating your Boost framework to achieve nearly the same performance becomes a permanent code asset (and a competitive asset whenever you find yourself needing once again to run that old play).
Boost is in that category where if you have to ask, you can't cut the mustard. The natural Boost programmers already know who they are. Few of these people toil in the public eye. That's not where this elite, double-barrel skillset tends to land.
The Wolfram Language is impossible to assess based on this video. If your application depends on Wolfram "knowledge," how do you know it will continue to meet rigorous specifications the day after tomorrow?
Is there a public regression suite on the contained knowledge against which to assess whether your program is erected on firm or porous soil?
What guarantee does one have that its cleverness or performance characteristics will stay consistent when it matters most?
I suspect the killer application for WooL is prototyping the semantic web. The semantic web has been dragging its feet. Google and Facebook don't wish to become disintermediated. They have one foot on both sides of this fence and their hands cupped over their testicles. Doesn't make for rapid progress.
The Achilles heel of search is that search returns results rather than models. Google is trying to split the difference by having search return interactions. It's an excellent paving stone on the road to a lucrative future purveying OOXML.
If ten minutes of coding within the Wolfram Language embarrasses Google search, we have a winner here of WuLing mammoth proportions.