
Submission + - Book review of the Handbook of Neuroevolution Through Erlang

An anonymous reader writes: The subject of Computational Intelligence (CI) primarily deals with neural networks, fuzzy logic, evolutionary computation, or some combination of the three. Though there are a number of books discussing CI and its goals, there are none concentrating on the actual subject of neuroevolution, and particularly on Topology and Weight Evolving Artificial Neural Networks (TWEANNs). TWEANNs merge genetic algorithms with neural networks (and fuzzy logic, if you add that feature to the neurons). They represent the state of the art in CI, letting you use evolution to evolve both the parameters and the topology of neural-network-based intelligent systems: artificial brains. The Handbook of Neuroevolution Through Erlang is the first book to concentrate on this subject, and it not only covers it but actually demonstrates the construction of a state-of-the-art system within the domain. The book already has a review on Amazon by Josh Bongard, a pioneer and very well known researcher in evolutionary robotics, who gave it 5 stars and noted that the system built within the volume is state of the art, and that Erlang, the language promoted within the book for this type of research, is certainly the right tool for the job, which takes us to the next part.

The book is not just about neuroevolution; it's also about building such systems in the Erlang programming language, which is itself just emerging into the mainstream. You know how, when you think about futuristic CI, you imagine it evolving, rewriting itself, and being distributed across multiple machines, servers, or even the Internet as a whole? Well, Erlang was made for creating distributed, robust, fault-tolerant systems that allow code hot-swapping (rewriting the source code, compiling it, and then using it without going offline). Not to mention, the basic element in Erlang is the process, a concurrent processing element analogous to a neuron in the brain, capable of communicating with other processes through message passing, again analogous to a neuron. Though Erlang has been around for over 20 years, it has only recently begun entering the mainstream, as can be seen from the number of new books on the subject. But it is this book, and the neuroevolutionary system developed within it, called DXNN, that is the first to take advantage of the perfect conceptual match between Erlang and the subject domain of computational intelligence. The foreword to the book is written by Joe Armstrong, the father of Erlang.
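The process-per-neuron analogy can be made concrete with a few lines of Erlang. The following is my own minimal sketch, not the book's code; the module and message names are illustrative. Each "neuron" is just a process that waits for a signal and forwards it to every process connected to it:

```erlang
-module(neuron_sketch).
-export([start/1]).

%% Spawn a process that relays incoming signals to its output processes,
%% the way a neuron passes its output along to downstream neurons.
start(OutputPids) ->
    spawn(fun() -> loop(OutputPids) end).

loop(OutputPids) ->
    receive
        {signal, Value} ->
            %% forward the signal to every connected process
            [Pid ! {signal, Value} || Pid <- OutputPids],
            loop(OutputPids);
        stop ->
            ok
    end.
```

Chaining such processes together already gives you a crude feed-forward topology, with message passing standing in for synaptic transmission.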

The book is very clear and presents the material exceptionally well. The first part gives the background on evolution, neural networks, and neuroevolution. It also lists numerous applications of such systems and explains why you might want to learn the subject, whether your long-term goal is to apply neuroevolutionary systems to some project or to develop something more ambitious. But it also goes further: it notes that the grand goal is, of course, the construction, or in this case the evolution, of a truly intelligent system. We already know that evolution and neural networks can produce intelligent agents; we are the proof of that, after all. Thus the neuroevolutionary approach, if cleverly enough applied, could give us the ability to evolve truly remarkable solutions and intelligence. The first part promises that we will build state-of-the-art, evolving neural network systems, applied in robotics and artificial life, and by the end of the book the promise is kept. The ease with which such a system is built is also rather remarkable, thanks in part to the programming language, Erlang, being so conceptually well aligned with the subject domain.

The second part starts off by guiding the reader in the development of an artificial neuron, explaining all the background in a very approachable manner. Once the reader has developed the neuron and tested it (all source code is provided in the book and on GitHub), the next step is the creation of a simple neural network. At this point there is no training/learning algorithm yet, just the construction of a concurrent neural network system.
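To give a flavor of what such a neuron looks like, here is a hedged sketch of my own (the module name, message shapes, and choice of tanh as the activation function are my assumptions, not the book's exact API): a process holding a fixed weight vector and bias, which computes a weighted sum of its inputs and replies with the activated output.

```erlang
-module(simple_neuron).
-export([start/2]).

%% Spawn a neuron process with a fixed weight list and bias.
start(Weights, Bias) ->
    spawn(fun() -> loop(Weights, Bias) end).

loop(Weights, Bias) ->
    receive
        {sense, From, Inputs} when length(Inputs) =:= length(Weights) ->
            %% dot product of weights and inputs, plus bias
            Dot = lists:sum([W * X || {W, X} <- lists:zip(Weights, Inputs)]),
            %% apply the activation function and reply to the sender
            From ! {output, math:tanh(Dot + Bias)},
            loop(Weights, Bias)
    end.
```

Because the neuron is a process, wiring several of them into a network is just a matter of handing each one the pids of its targets.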

Once the basic neuron and neural network system have been presented and elaborated upon, the author discusses the various learning and training algorithms. The simplest, yet still a very powerful one, is stochastic optimization, particularly stochastic hill climbing with random restarts. This algorithm is then added and used to optimize the NN to solve the standard, basic XOR problem.
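The core of stochastic hill climbing is small enough to sketch here. This is my own illustrative version, not the book's implementation: perturb the weight vector with Gaussian noise, keep the candidate only if a user-supplied fitness function (to be maximized) improves, and repeat. The perturbation spread of 0.3 is an arbitrary assumption.

```erlang
-module(hill_climb).
-export([optimize/3]).

%% Stochastic hill climbing over a flat list of weights.
%% EvalFun(Weights) returns a fitness score to be maximized.
optimize(Weights, _EvalFun, 0) ->
    Weights;
optimize(Weights, EvalFun, AttemptsLeft) ->
    %% perturb every weight with Gaussian noise
    Candidate = [W + rand:normal() * 0.3 || W <- Weights],
    Next = case EvalFun(Candidate) > EvalFun(Weights) of
        true  -> Candidate;   % keep the improvement
        false -> Weights      % otherwise discard the perturbation
    end,
    optimize(Next, EvalFun, AttemptsLeft - 1).
```

Random restarts are then just repeated calls to `optimize/3` from fresh random weight vectors, keeping the best result overall.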

The author then guides the reader in constructing an infomorph type of agent, which is a NN with sensors and actuators. Here is where things get even more interesting. At this point you begin to realize that you're building some kind of advanced computational intelligence agent, almost something out of science fiction, only you're actually building it, and it's working! This is also where I realized just how easy it was to build such a system in Erlang; I think I would not even be able to approach it in something like C or Java, or some other language.
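The sense-think-act cycle of such an agent can be outlined in a few lines. This is purely my own sketch of the idea, not the book's code: the sensor function, actuator function, and the assumption that the NN process answers a `{sense, Self, Percept}` message with `{output, Signal}` are all illustrative choices.

```erlang
-module(agent_sketch).
-export([start/3]).

%% An agent loop: sense the environment, ask the NN process for an
%% output, act on it, and repeat.
start(SensorFun, NNPid, ActuatorFun) ->
    spawn(fun() -> loop(SensorFun, NNPid, ActuatorFun) end).

loop(SensorFun, NNPid, ActuatorFun) ->
    Percept = SensorFun(),
    NNPid ! {sense, self(), Percept},
    receive
        {output, Signal} -> ActuatorFun(Signal)
    end,
    loop(SensorFun, NNPid, ActuatorFun).
```

Swapping in different sensor and actuator functions is what lets the same NN core drive anything from a simulated robot to a currency trader.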

The construction of an evolutionary, concurrent, robust, neural-network-based computational intelligence system, capable of evolving new sensors, actuators, and topologies, is covered in detail. The system can also very easily be extended, and even made to rewrite itself; though that is not covered in this book, it is made clear just how easily such a feature could be added, and I assume it will most likely be covered in another volume. Then a chapter on debugging is presented, demonstrating that when using evolutionary computation some bugs can go unnoticed, because the system actually evolves around them and can remain functional despite them.

Finally, once the whole system is built and the double pole balancing and T-Maze navigation benchmarks have been added and run, the indirect encoding approach is covered. Substrate encoding, popularized by HyperNEAT, is extended and added to the system developed in the book. Numerous types of neural plasticity algorithms are also added, such as general Hebbian learning and Oja's rule. Once substrate encoding is in place, plasticity is added to it as well. Basically, by the time you get through 80% of the book, you have a system capable of evolving learning neural-network-based agents, with enormous potential. In Chapter 18 a 2D simulated environment is developed, loosely based on Flatland (the book). The system developed so far is then used to evolve intelligent agents within this 2D world that run around, gather food, and hunt; it's rather amazing. In the next chapter the same system is used to evolve agents capable of trading currency, using real financial data.
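The plasticity rules mentioned here are simple weight-update formulas. As a sketch (my own, not the book's code; `Eta` is the learning rate, `Y` the neuron's output, and the list shapes are assumptions), plain Hebbian learning strengthens a weight when input and output co-activate, while Oja's rule adds a decay term that keeps the weight vector bounded:

```erlang
-module(plasticity).
-export([hebbian/4, oja/4]).

%% Plain Hebbian update: dW = Eta * Y * X.
hebbian(Weights, Inputs, Y, Eta) ->
    [W + Eta * Y * X || {W, X} <- lists:zip(Weights, Inputs)].

%% Oja's rule: dW = Eta * Y * (X - Y * W), a normalized Hebbian variant.
oja(Weights, Inputs, Y, Eta) ->
    [W + Eta * Y * (X - Y * W) || {W, X} <- lists:zip(Weights, Inputs)].
```

Applying such an update after every evaluation is what turns a fixed evolved network into one that keeps learning during its lifetime.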

By the end of the book I had built something powerful and had an intuitive understanding of the evolutionary approach. At this point I can safely say that I can probably turn any problem into one for which I can develop an evolutionary-algorithm-based solution. The book provides a deep yet understandable background. It's clear, written with passion, and it takes you from the start to a state-of-the-art finish. It's unlike any other textbook I've read. The system developed by the end is called DXNN v2, the next version of the author's original DXNN system, on which he has given numerous conference talks that can be found online. Benchmarks on standard problems show it to be at the top, and it is the only such system written in Erlang, thus also leveraging the power of that programming language.

The Handbook of Neuroevolution Through Erlang by Gene Sher is highly recommended. It's powerful and easy to read, yet deep, and the proof of the system is not just words: you get to build it, you have the source code on GitHub and in the book, and you get to test it and actually apply it. The promise the author set out at the start was certainly kept, and the book is endorsed by legends like Josh Bongard and Joe Armstrong. It's a bit on the expensive side ($150-$190, depending on whether you buy from Amazon or directly from Springer), but well worth it. Do not get the Kindle version though; it seems it was created by Amazon directly from XML, and the conversion did not retain the indentation in the code. Go directly for the original PDF from Springer, which is excellent, or get the hardcover version from Amazon or Barnes & Noble.
