Some time ago I got an email asking for the code from one of my old papers. Unfortunately, that code was lost, but it reminded me that I still had other code that could actually be made available: some Genetic Programming (GP) libraries I wrote around seven years ago (which in turn built on even older code of mine).
Fast forward a few weeks, and these two GP libraries are now available on my github account:
- mini-gp: a minimalistic library that supports only simple tree-based GP
- core-gp: a larger library that supports GP, STGP, GE, and GAs.
The code is released as is and has not even been re-tested. Basically, it's there so it doesn't get lost on some old external hard drive. Still, it might be useful to someone with an interest in GP and Common Lisp. The code is not particularly nice in some respects and even contains some over-engineered parts, but it was fun to write and, back then, very useful to me.
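The libraries themselves are in Common Lisp, but as a language-neutral illustration of what simple tree-based GP (the kind mini-gp implements) looks like, here is a minimal sketch in Python on a toy symbolic-regression problem. All names, parameters, and operators below are made up for this sketch and do not reflect the libraries' actual APIs:

```python
import operator
import random

# Function set (binary operators) and terminal set for the toy problem.
FUNCTIONS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree as nested tuples ((fn, sym), left, right)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    fn = random.choice(FUNCTIONS)
    return (fn, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively interpret a tree for a given input x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    (fn, _), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree, target=lambda x: x * x + x):
    """Sum of absolute errors on a few sample points (lower is better)."""
    return sum(abs(evaluate(tree, x) - target(x)) for x in range(-5, 6))

def mutate(tree, depth=2):
    """Subtree mutation: replace a randomly chosen subtree with a fresh one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    fn, left, right = tree
    if random.random() < 0.5:
        return (fn, mutate(left, depth), right)
    return (fn, left, mutate(right, depth))

def evolve(pop_size=50, generations=30):
    """A bare-bones generational loop with truncation selection."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)

best = evolve()
```

This is the whole idea in miniature: programs are trees, variation operators rewrite subtrees, and selection keeps the trees that best fit the data. Real libraries add crossover, depth limits, typed nodes (STGP), and so on.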
PS: I still have another library, for Ant Colony Optimization, that I would like to put on Github too, but it still requires a little clean-up. Hopefully that will happen in the near future.
In the last few weeks, several papers applying evolutionary techniques in the context of deep neural networks have been published. For someone with a background in evolutionary computing and an interest in everything bio-inspired, this is great news! Recently we've seen: Evolving Deep Neural Networks, Genetic CNN, Large-Scale Evolution of Image Classifiers, and PathNet: Evolution Channels Gradient Descent in Super Neural Networks.
These recent papers are not the first on the topic (and won't be the last), since many different applications of evolutionary techniques to neural networks, including deep ones, have been published in the past. However, it "feels" that the field is finally catching up with the very fast developments in neural networks, especially now that organizations like DeepMind and the Google Brain Team are investing in the topic.
The research and development of evolutionary techniques for deep nets is, in my opinion, very important. These methods have achieved many "human-competitive results" and thus have the potential to produce innovative solutions while, at the same time, reducing human intervention in the process of designing and optimizing a deep model. By looking at and analyzing the evolved solutions, they can also produce new insights that can later be used to develop new methods. Some people criticize these approaches as merely "playing with LEGO", forgetting that humans are already playing that game; others criticize the amount of resources required. The latter is a valid point, which simply means that more understanding and development is required.
Since there are no free lunches, you need to understand when it makes sense to apply this type of method, as well as how to design it. Unfortunately, it's very common to see direct applications of concepts that are already outdated. There is little value in using the standard genetic algorithm from John Holland or Koza-style genetic programming: even though they are easy to apply and understand, they are outdated! Another example is not knowing how to analyze an evolutionary algorithm through the properties of its representation. Mistakes like these most likely lead to an inefficient approach.
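For concreteness, the "standard genetic algorithm" referred to above is the textbook template: a fixed-length bitstring representation, fitness-proportionate (roulette-wheel) selection, one-point crossover, and independent bit-flip mutation. Here is a minimal Python sketch of that recipe on the classic OneMax toy problem (all names and parameter values are illustrative):

```python
import random

def onemax(bits):
    """Toy fitness: number of ones in the bitstring (higher is better)."""
    return sum(bits)

def roulette(pop, fits):
    """Fitness-proportionate (roulette-wheel) selection."""
    pick = random.uniform(0, sum(fits))
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= pick:
            return ind
    return pop[-1]

def crossover(a, b):
    """One-point crossover between two parents."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(bits, rate=0.01):
    """Independent bit-flip mutation at a fixed per-bit rate."""
    return [1 - b if random.random() < rate else b for b in bits]

def simple_ga(n_bits=30, pop_size=40, generations=60):
    """The canonical generational GA loop, with no elitism or refinements."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [onemax(ind) for ind in pop]
        pop = [mutate(crossover(roulette(pop, fits), roulette(pop, fits)))
               for _ in range(pop_size)]
    return max(pop, key=onemax)

best = simple_ga()
```

The template is easy to write and understand, which is exactly why it keeps being applied directly; the point above is that applying it unchanged, without thinking about the representation and operators that fit the problem, is unlikely to be effective.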
Recently, the inspiration for most of the new advances in deep learning has come from the math/game-theory side, and not so much from the biological side, e.g., neuroscience. I would expect more to come from biological inspiration (one attempt can be found here, for example: Towards an integration of deep learning and neuroscience), since ultimately the human brain is the main example of the kind of AI we want to build. However, the brain is not a final product, and it was not designed in a single step: it is the product of a long evolutionary process (which still goes on)! This means we need to study and understand both the brain and evolution more deeply, so that we can effectively use them for their artificial variants. Deep Neuroevolution should be a path to pursue.