Recently I made available some GP code I had in Common Lisp (see previous post). Today I also put on my GitHub account the library I wrote for Ant Colony Optimization (cl-aco), a few others like parsers for MKP and QAP, and some very basic CFFI bindings for libLBFGS. As before, these are published mostly so they don't get lost, since I'm no longer using or developing them. But if someone finds them useful in any way, that's great!
In the last few weeks, a few papers applying evolutionary techniques in the context of deep neural networks have been published. For someone with a background in evolutionary computing and an interest in everything bio-inspired, this is great news! Recently we've seen: Evolving Deep Neural Networks, Genetic CNN, Large-Scale Evolution of Image Classifiers, and PathNet: Evolution Channels Gradient Descent in Super Neural Networks.
These recent papers are not the first on the topic (and won't be the last), since many applications of evolutionary techniques to neural networks, including deep ones, have been published in the past. However, it "feels" like the field is finally catching up and paying attention to the very fast developments in neural networks, especially now that organizations like DeepMind and the Google Brain Team are investing in the topic.
The research and development of evolutionary techniques for deep nets is, in my opinion, very important. These methods have achieved many "human-competitive results" and thus have the potential to produce innovative solutions while, at the same time, reducing human intervention in the design and optimization of a deep model. By looking at and analyzing the solutions they propose, they can also yield new insights that can later be used to develop new methods. Some people may criticize these approaches as simply playing with LEGOs, "forgetting" that humans are already playing that game, or criticize them for the amount of resources they require. The latter is a valid point, which just means that more understanding and development are needed.
Since there are no free lunches, you need to understand when it makes sense to apply this type of method, as well as how to design it. Unfortunately, it's very common to see direct applications of concepts that are already outdated. There's no value in using the standard genetic algorithm from John Holland or Koza-style genetic programming: even though they are easy to apply and understand, they are outdated! Another example is not knowing how to analyze an evolutionary algorithm through the properties of its representation. Most likely, the result will be an inefficient approach.
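To make the point concrete, here is a minimal sketch of the "standard" textbook genetic algorithm I'm referring to (bitstring representation, tournament selection, one-point crossover, bit-flip mutation), applied to the toy one-max problem. All names and parameter values here are illustrative choices of mine, not taken from any particular library or paper:

```python
import random

random.seed(0)

def one_max(bits):
    # Fitness: number of ones in the bitstring (the classic toy problem).
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60,
           p_crossover=0.9, p_mutation=0.05):
    # Textbook generational GA: tournament selection (size 2),
    # one-point crossover, per-bit flip mutation, full replacement.
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Pick each parent as the better of two random individuals.
            p1 = max(random.sample(pop, 2), key=one_max)
            p2 = max(random.sample(pop, 2), key=one_max)
            c1, c2 = p1[:], p2[:]
            if random.random() < p_crossover:
                cut = random.randint(1, n_bits - 1)
                c1 = p1[:cut] + p2[cut:]
                c2 = p2[:cut] + p1[cut:]
            for child in (c1, c2):
                for i in range(n_bits):
                    if random.random() < p_mutation:
                        child[i] ^= 1  # flip the bit
                new_pop.append(child)
        pop = new_pop[:pop_size]
    return max(pop, key=one_max)

best = evolve()
print(one_max(best))
```

This is exactly the kind of direct, representation-agnostic recipe that is easy to write down but that, for anything beyond toy problems, ignores decades of work on representations and operators.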
Recently, most of the new advances in deep learning have drawn their inspiration from the math/game-theory side, and not so much from the biological side, e.g., neuroscience. I would expect more to come from this source of inspiration (an attempt can be found here, for example: Towards an integration of deep learning and neuroscience), since ultimately the human brain is the main example of the kind of AI we want to build. However, the brain is not a final product, and it was not designed in a single step: it's the product of a long evolutionary process (which is still going on)! This means we need to study and understand both the brain and evolution better, so that we can effectively use them in their artificial variants. Deep Neuroevolution should be a path to pursue.