I have tons of programming books, engineering books, algorithms books, data manuals, etc., all of which I read for enjoyment and to better myself as an engineer. I have read some fiction, but it's been a long while since I spent much time reading fiction. And then there are puzzle books, which don't really fall into either category.
When I want fictional entertainment, I'll turn on the TV or go watch a movie.
*chuckle* It's a power so great, it can only be used for good or evil!
Actually, it's useful for compile-time optimization of performance-critical code. For example, I have a CRC calculation template class that generates its lookup tables at compile time based on the specified polynomial, shift direction and field size. Need a new CRC in a different whacked-out field? One line of code and it's there.
Previously, in C (or C++ w/out template metaprogramming), I'd need to either write a separate program to generate the lookup table for me offline and manually graft that into the code, or generate the lookup table at startup.
The consuming code is fairly clear, so in principle it makes other code easier to write. Of course, if there were something wrong with my implementation, debugging it would potentially be more challenging.
Public Service Announcement comes on the television. Pages of code scroll by on a computer screen in the background. A strung out looking programmer stares into space, bleary eyed, obviously stressed, pulling his hair out. A voiceover begins...
Using recursion as a simple looping construct in an imperative programming language isn't normal. But, in C++ template metaprogramming, it is.
Template metaprogramming, not even once.
;-)
(Actually, I do a fair bit of template metaprogramming in C++. It can be handy for a certain class of problem.)
We actually use Perl quite heavily where I work, and its use is only growing. We've built rather significant pieces of our infrastructure around it, including a rather impressive internal project that uses Perl as a metaprogramming language. You'll get yelled at if you deviate from the standard Perl-based development flows we've put in place.
So, "isn't used all that much anymore" may be more anecdotal than not? I guess it really depends on the shop whether Perl use is increasing or decreasing.
There's that typo and the fact that the < got eaten.
Personally, I don't see the point in a pissing match between Perl 5 and C++14. I use both. Perl's great for rapid prototyping and programs that need a certain flexibility. C++14 is great for rapid execution.
The Perl code is fairly idiomatic, and a Perl programmer would type it without thinking. It'll likely compile into an optimized sort, since this type of sort is common in Perl.
The C++14 version is also idiomatic to C++14 (although I think non-member begin/end would be preferred there), and has the advantage that it'll compile an optimized sort for whatever type you're sorting.
In C++14, I can use std::vector, std::map, std::unordered_map, std::regex, std::shared_ptr, std::unique_ptr, gobs of standard algorithms, range-based for() and lambdas. These give me very similar containers and tools to what I have access to in Perl 5. That makes the conceptual leap between the two shorter. I quit worrying about syntax ages ago. Know the syntax for the language you're programming, and spend your energy on the semantics of the program and problem you're trying to solve. This is work, not a beauty contest.
I've been waiting patiently for a usable Perl 6. I did install a version of Rakudo Star a couple years ago to compete in a Perl 6 coding contest (over Christmas, it so happened). It was fun picking up the language, and I was able to implement some interesting stuff quickly, including an A* search. But, it was definitely not ready for prime time. I ran into several rough edges, and execution time was uninspiring. I've gotten accustomed to how fast Perl 5 is.
Now I'm hoping for a nice Perl 6 Christmas present this year, although I won't get my hopes up too high.
Did you read the part in the article where they're actually doing the matching based on the ASTs (abstract syntax trees), and so are able to identify authors even after the code goes through an obfuscator? Relevant quotes:
Their real innovation, though, was in developing what they call “abstract syntax trees” which are similar to parse trees for sentences, and are derived from language-specific syntax and keywords. These trees capture a syntactic feature set which, the authors wrote, “was created to capture properties of coding style that are completely independent from writing style.” The upshot is that even if variable names, comments or spacing are changed, say in an effort to obfuscate, but the functionality is unaltered, the syntactic feature set won’t change.
Accuracy rates weren’t statistically different when using an off-the-shelf C++ code obfuscator. Since these tools generally work by refactoring names and removing spaces and comments, the syntactic feature set wasn’t changed, so author identification was still possible at similar rates.
Regarding the first quote: The author of the article probably didn't realize that ASTs aren't a new thing; it's just this application of ASTs that's new. ASTs are as old as the hills. I learned about them from the Dragon Book, and by the time that was written they were old hat.
E = MC ** 2 +- 3db