
Inspirations from Biology: Genetic Algorithms

Genetic algorithms (GAs) belong to the larger class of evolutionary algorithms and are based on the concepts of natural selection first introduced by Charles Darwin in 'On the Origin of Species' (1859). It should come as no surprise that a biological concept so ubiquitous would eventually be adapted to solve some challenging computational problems. The application of genetic algorithms to computer science was first proposed by Alan Turing, in the famous paper outlining what we now call the Turing Test (A. M. Turing, 'Computing Machinery and Intelligence', Mind, Volume LIX, Issue 236, 1 October 1950, pages 433–460). Since then a great deal of work has gone into refining his proposal; the area has become a subfield of AI in its own right, and it is the topic we will be exploring further here.

Before we begin our journey into genetic algorithms, let us issue the following warning: genetic programming (GP; see Koza, Genetic Programming, MIT Press, 1992) is not to be confused with genetic algorithms. And since GP also shares its abbreviation with another beloved tool among machine learning researchers, Gaussian processes, woe be unto the person who confuses the three different concepts.

GAs remain an active research field (although far less active than in years past), with ongoing work to determine which problems GAs excel at and why. Despite the simplicity of GAs, it is still not fully understood why they perform so well on so many tasks; the current best explanation is the Building Block Hypothesis (BBH), which we have chosen to omit at this point. If you wish to read more about GAs and the BBH, Edinburgh University has some great slides on the subject.

Why Use GAs

Genetic algorithms have been applied across a diverse set of problems, from code breaking, through FOREX trading, to improving engineering designs, as in the case of NASA's 'evolved' antenna. Generally, GAs seem to excel when we have a complex search space that cannot be easily modelled, or a search space that is too large for complete search. It is very challenging to find an example of acceptable quality for demonstration purposes, hence why we use a trivial and simplistic example in the following paragraphs.

GAs have also been used successfully to boost the performance of artificial neural networks (ANNs). One use is optimising the weights of the connections between neurons, which is normally achieved with gradient descent methods (e.g. backpropagation). GAs can also address a limitation of ANNs: their static topology, which generally comes from a human researcher who decides on the architecture. Extensive work has been done on combining GAs with ANNs, resulting in neuroevolution methods that mimic the evolutionary processes of the brain more closely. In short, a GA can evolve the topology of an ANN as well as its initial weight values. This is an incredibly interesting field; for more details on neuroevolution methods please refer to this O'Reilly post.
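As a rough illustration of the weight-optimisation idea, a GA can evolve the weight vector of a single linear neuron toward a known target function, using selection plus Gaussian mutation in place of gradient descent. This is a deliberately tiny sketch: every name, the target weights and all hyperparameters below are assumptions made for the example, not part of any standard method or library.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Target weights the GA should rediscover (an assumption of this sketch).
TARGET = [2.0, -1.0]
SAMPLES = [(x, y) for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)]

def predict(weights, inputs):
    # One linear "neuron": a plain weighted sum, no activation.
    return sum(w * x for w, x in zip(weights, inputs))

def loss(weights):
    # Mean squared error against the target function; lower is better.
    return sum(
        (predict(weights, s) - predict(TARGET, s)) ** 2 for s in SAMPLES
    ) / len(SAMPLES)

def mutate(weights, sigma=0.1):
    # Gaussian perturbation plays the role of mutation.
    return [w + random.gauss(0.0, sigma) for w in weights]

# Evolve: evaluate, keep the best ten (elitism), breed mutated copies.
population = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(40)]
for _ in range(100):
    population.sort(key=loss)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(30)]

best = min(population, key=loss)
```

After a hundred generations the best weight vector sits close to the target, despite no gradient information ever being used.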

GAs offer an alternative to formal mathematical methods that are too difficult to apply to problems which are complex, highly dimensional and poorly understood. Incidentally, stock markets exhibit exactly these properties, making GAs potentially useful for modelling price changes, optimising portfolios, selecting financial indicators or evolving trading strategies.

Whilst they are called genetic algorithms, to some degree GAs are better described as genetic heuristics. Unlike complete search algorithms they are not guaranteed to find the optimal solution, but this is the trade we make: GAs can run far quicker than complete search (search that is guaranteed to find the optimal solution) for very large search spaces, and often return a result that can be considered 'good enough'.

What is a GA made of?

Given a problem, a GA will search the space of possible candidate solutions using ideas based on natural selection and genetics. This is achieved by initialising a random population of candidates and 'evolving' it using a fixed set of genetically inspired operations. Each evolution step involves evaluating the current population and selecting the top-performing candidates; those candidates are then used to generate a new population, and the process is repeated.
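The loop just described can be sketched in a few lines of Python. This is a minimal, illustrative implementation: the 'one-max' fitness function (count the 1-bits in a bit string), the population size, the mutation rate and every function name are assumptions chosen for the sketch, not part of any standard library.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

GENOME_LENGTH = 20
POPULATION_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # One-max: the number of 1-bits; higher is better.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randint(1, GENOME_LENGTH - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve():
    # Initialise a random population of candidates.
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Evaluate the population and select the top half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: POPULATION_SIZE // 2]
        # Breed a fresh population from the selected parents.
        population = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POPULATION_SIZE)
        ]
    return max(population, key=fitness)

best = evolve()
```

Each pass through the loop is one evolution step from the paragraph above: evaluate, select, then generate a new population from the survivors.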