
Inspirations from Biology: Genetic Algorithms

Genetic algorithms (GAs) belong to a larger class of evolutionary algorithms and are based on the concepts of natural selection first introduced by Charles Darwin in On the Origin of Species (1859). It should come as no surprise that a biological concept so ubiquitous would eventually be adapted to solve some challenging problems. The application of genetic algorithms to computer science was first proposed by Alan Turing, in the famous paper outlining what we now call the Turing Test (A. M. Turing, "Computing Machinery and Intelligence", Mind, Volume LIX, Issue 236, 1 October 1950, pages 433–460). Since then a great deal of work has been done on refining his proposal; it has become a subfield of AI in its own right and is the topic we will be exploring further here.

Before we begin our journey into genetic algorithms, let us issue the following warning: genetic programming (GP; see Koza, J. R., Genetic Programming, MIT Press, 1992) is not to be confused with genetic algorithms. And since the former also shares its abbreviation with another beloved tool among machine learning researchers, Gaussian processes, woe be unto the person who confuses the three concepts.

GAs are still an active research field (although far quieter than in their heyday), with work ongoing to determine which problems GAs excel at and why. Despite the simplicity of GAs, it is currently not well understood why they seem to excel at so many tasks; the current best explanation is the Building Block Hypothesis (BBH), which we have chosen to omit at this point. If you wish to read more about GAs and the BBH, Edinburgh University has some great slides on the topic.


Why Use GAs

Genetic algorithms have been applied across a diverse set of problems, from code breaking, through FOREX trading, to improving engineering designs, as in the case of NASA's 'evolved' antenna. Generally, GAs seem to excel when we have a complex search space that cannot be easily modelled, or a search space that is too large for complete search. It is very challenging to find an example of acceptable quality for demonstration purposes, which is why we use a deliberately trivial and simplistic example in the following paragraphs.

GAs have also been used successfully to boost the performance of artificial neural networks (ANNs). They are often applied to the optimisation of the weights on the connections between neurons, which is normally achieved by gradient descent type methods (e.g. backpropagation). GAs can also address a limitation of ANNs: their static topology, which generally comes from a human researcher deciding on the architecture. Extensive work has been done on combining GAs with ANNs, resulting in neuroevolution methods that mimic the evolutionary processes of the brain more closely. In short, this means a GA can evolve the topology as well as the initial weights of an ANN. This is an incredibly interesting field; for more details on neuroevolution methods please refer to this O'Reilly post.

GAs offer an alternative to more formal mathematical methods that are too difficult to apply to problems that are complex, highly dimensional and poorly understood. Stock markets exhibit exactly these characteristics, making GAs potentially useful in attempts to model price changes, optimise portfolios, select financial indicators or evolve trading strategies.

Whilst they are called genetic algorithms, to some degree GAs are better described as genetic heuristics. Unlike complete search algorithms they are not guaranteed to find the optimal solution, but this is the trade we make: GAs can run far quicker than complete search (search that is guaranteed to find the optimal solution) over very large search spaces, and often return a result that can be considered 'good enough'.


What is a GA made of?

Given a problem, a GA searches a space of candidate solutions using ideas based on natural selection and genetics. This is achieved by initialising a random population of candidates and 'evolving' it using a fixed set of genetically inspired operations. Each evolution step involves evaluating the current population and selecting the top performing candidates. Those candidates are then used to generate a new population and the process is repeated.

The hope is that, as we choose the top scoring solutions at every step, the population will evolve towards the actual solution to the problem. The final solution may not be exact, but even then it is useful: it can identify features of high importance and offer valuable insight into the nature of the problem without spending a prohibitively long time on it.

Typical genetic operations that are used to evolve the population are:

- selection - realised through the fitness function, a user defined function that we are trying to minimise/maximise; it is used to rank the individuals in our population so that the best performing ones can be chosen for later breeding

- crossover - selected candidates are used for reproduction, resulting in a new population of children that inherit certain (dominant) characteristics from their parents

- mutation - introduces additional genetic diversity into the population by randomly changing some of the candidates' parameters

There is an implicit fourth stage in which the weakest genes are pruned from the pool. These steps are repeated for a certain number of generations or until a satisfactory solution is obtained. Further, these steps are only 'typical' because they are the ones most often used: there are many GAs with additional steps (such as migration) to alleviate particular problems. The sketch below shows how the typical steps fit together.
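To make the loop concrete, here is a minimal sketch in Python of how these stages might be combined. The functions fitness, crossover and mutate are placeholders for the user-defined operators described above, and we assume the fitness function is being minimised:

```python
import random

def evolve(population, fitness, crossover, mutate, generations=100):
    """Minimal generational GA loop: evaluate, select, breed, mutate."""
    for _ in range(generations):
        # Evaluate and rank the population (lower fitness is better here).
        ranked = sorted(population, key=fitness)
        # Selection: the best performing half become parents.
        parents = ranked[:len(ranked) // 2]
        # Crossover: breed random pairs of parents to refill the population.
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(len(population) - len(parents))]
        # Mutation: perturb the children to maintain genetic diversity.
        population = parents + [mutate(child) for child in children]
    return min(population, key=fitness)
```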

To show how GAs can be realised in practice we will use a recurring example which is intentionally simplistic. Consider a 4-digit string with each digit between 0 and 9. Let's assume that the optimal solution is achieved when all the digits are equal to one (i.e. 1111). Now we will set up a problem and build an algorithm that uses the previously defined genetic operations to solve it.

GAs generally begin by initialising a random population of potential solutions to the problem we are trying to solve, defining a fitness function we are trying to maximise/minimise, and then repeatedly applying the genetic operators.

For our first example, consider an initial population of eight entries:


We could have our fitness function (f(x)) be the sum of the absolute differences between each digit and our optimal value. As an example, f(1235) would give us |1 - 1| + |2 - 1| + |3 - 1| + |5 - 1|, that is 0 + 1 + 2 + 4 = 7. Obviously in this case an output of zero would be optimal.
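As a sketch, this fitness function could be written in Python as follows, representing each candidate as a 4-character string:

```python
def fitness(candidate):
    # Sum of absolute differences between each digit and the optimal digit 1;
    # the perfect candidate '1111' scores 0, so lower is better.
    return sum(abs(int(digit) - 1) for digit in candidate)

fitness("1235")  # |1-1| + |2-1| + |3-1| + |5-1| = 7
```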

For a crossover function (c(x, y)) we could add the parents digit by digit and take each sum modulo 10. This does mean that our fitness function is not perfectly suited to our crossover function, which can happen in real systems and is a problem affiliated with GAs. For example, the crossover c(8943, 3278) would give our optimal result of 1111, yet both parents have high fitness values and would be considered "weak contenders". Often we may not know that our fitness function is bad, and this can skew the algorithm's effectiveness.
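A sketch of this crossover on string candidates:

```python
def crossover(x, y):
    # Digit-wise sum of the two parents, taken modulo 10.
    return "".join(str((int(a) + int(b)) % 10) for a, b in zip(x, y))

crossover("8943", "3278")  # returns the optimal '1111'
```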

Our mutation function (m(x)) adds a random value to some randomly chosen digits.
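One possible implementation (the mutation rate of 0.25 is an arbitrary choice for illustration, and we wrap modulo 10 to keep each digit in range):

```python
import random

def mutate(candidate, rate=0.25):
    # With probability `rate`, shift each digit by a random offset (mod 10).
    return "".join(
        str((int(d) + random.randint(1, 9)) % 10) if random.random() < rate else d
        for d in candidate
    )
```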



To begin with we could simply apply the crossover function to each value and its neighbour. At this stage we do not yet aggressively disregard our worst performing half.
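Since the concrete population values are not essential, the following sketch runs one such generation on a hypothetical random population of eight candidates (the values it generates will differ from those above), reusing the fitness, crossover and mutate functions defined earlier:

```python
import random

random.seed(42)  # fixed seed so this illustration is reproducible
population = ["".join(random.choice("0123456789") for _ in range(4))
              for _ in range(8)]

# Cross each candidate with its neighbour (wrapping around at the end),
# then mutate the resulting children.
children = [crossover(population[i], population[(i + 1) % len(population)])
            for i in range(len(population))]
next_generation = [mutate(child) for child in children]
print(sorted(next_generation, key=fitness))
```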

Our final result has a fitness of 15, which is far worse (higher) than the best value we began with (which was four). This example demonstrates how GAs can work on a simple problem and highlights the necessity of good fitness functions. Perhaps a fitness function better matched to the modulo arithmetic of our crossover would have helped, or perhaps we could have performed better by immediately disregarding the bottom half of our pool.

Or perhaps a better fitness function would be the percentage of correct digits, in which case the optimal value would be 100.
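Sketched in Python, with higher now being better:

```python
def fitness_pct(candidate):
    # Percentage of digits that already equal the optimal digit 1.
    return 100 * sum(digit == "1" for digit in candidate) / len(candidate)

fitness_pct("1211")  # 75.0
```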

Given the same starting values our fitness function now gives us the following results:


And this time we could begin by eliminating all candidates whose fitness is zero, since they contribute nothing. This will give us:


For our crossover function we could try taking two digits from one parent and the other two from the other. Since we now have an odd-numbered population we can let one value "move on" unaffected. Our mutation function will remain the same as above.
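A sketch of this crossover (the two parent values are made up for illustration):

```python
def crossover_split(x, y):
    # First two digits from one parent, last two from the other.
    return x[:2] + y[2:]

crossover_split("1195", "0511")  # returns '1111'
```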


This is a far superior fitness function and gives us the result we wanted.

Whilst in this example we used a 4-digit string with digits between 0 and 9, solution candidates are more frequently represented as strings of 0s and 1s. These can encode various other problem instances, such as if-then rules. Crossover then means that two such vectors, say 11000 and 10101, may mate and produce a new vector, 11101. The crossover rule could be defined by selecting, as was the case in our example, the first two bits of the first parent and the remaining bits of the second. Mutation means flipping bits. The fitness function is some real-valued function on the set of all such vectors.
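For binary strings those two operators might be sketched like this:

```python
import random

def crossover_bits(x, y, point=2):
    # Single-point crossover: first `point` bits from x, the rest from y.
    return x[:point] + y[point:]

def mutate_bits(x, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return "".join(str(1 - int(b)) if random.random() < rate else b for b in x)

crossover_bits("11000", "10101")  # returns '11101'
```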

Genetic algorithms may seem like a simple thing: you have your population, your genetically inspired operators and your fitness function, and if you just iterate often enough the process will "magically" produce an individual with high fitness. In fact the behaviour of GAs is quite complicated (even for simple problems), the theoretical foundations are fairly sophisticated, and their exploration for particular algorithm instances is still a subject of research.


Financial Applications

In the particular case of financial markets, GAs may be used to optimise a set of indicators or parameters used by traders when coming up with technical trading rules. For example, a trading rule may involve parameters such as moving averages, volume, volatility and look-back periods, and GAs could be used to optimise those parameters based on historic data to inform the strategy and maximise profits, as well as to try to correlate different indicators with given securities.
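As a sketch of how such a rule could be encoded as a GA candidate, the parameter names and ranges below are purely illustrative assumptions, not taken from any particular strategy:

```python
import random

# Hypothetical search ranges for a technical trading rule's parameters.
PARAMETER_RANGES = {
    "short_ma_window": (5, 50),    # short moving-average look-back (days)
    "long_ma_window": (50, 200),   # long moving-average look-back (days)
    "rsi_period": (7, 28),         # RSI look-back period (days)
    "volume_threshold": (1, 100),  # minimum traded volume (arbitrary units)
}

def random_candidate():
    # Draw one random parameter set; a population of these is then evolved.
    return {name: random.randint(lo, hi)
            for name, (lo, hi) in PARAMETER_RANGES.items()}
```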

The following two examples show how GAs can be used to solve simple problems. Their sole purpose is to showcase the potential of applying evolutionary algorithms to financial problems. The problems are conceptually simple and will help the reader gain intuition about the inner workings of these algorithms and get a fast start on applying them to relevant problems.

The first algorithm attempts to find the best momentum-based strategy by generating a random population of strategies, testing them over a fixed period of time and evolving them according to the general rules of natural selection, performing the three genetic operators outlined before. (Python implementation)

In the second example, by TheRTrader, the algorithm tries to maximise the Sharpe ratio for a particular traded equity by optimising four parameters based on long term and short term moving averages and the Relative Strength Index (RSI). (R implementation)
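As a rough illustration of what the fitness function in such a setup measures, here is a minimal Sharpe ratio sketch (assuming daily returns and a zero risk-free rate; the backtesting routine itself is left as a placeholder):

```python
import math

def sharpe_ratio(returns, periods_per_year=252):
    # Annualised Sharpe ratio of a series of periodic returns,
    # with the risk-free rate taken as zero for simplicity.
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / len(returns)
    std = math.sqrt(variance)
    return math.sqrt(periods_per_year) * mean / std if std > 0 else 0.0

# The GA's fitness function would backtest a candidate parameter set and
# score its daily return series, e.g. fitness(c) = sharpe_ratio(backtest(c)),
# where `backtest` stands in for the user's own backtesting routine.
sharpe_ratio([0.001, -0.002, 0.003, 0.0005])  # example daily returns
```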


Parallel Genetic Algorithms

Genetic algorithms are far less well known than traditional neural networks (especially 'deep learning'), and whilst ANNs have their uses we still believe there is a place for GAs in the computational finance landscape. One practical concern is that fitness functions can be computationally expensive, which makes running genetic algorithms slow; by running evaluations concurrently we can reduce the overall runtime (even if the total CPU time increases).


There are a few ways that a GA could be parallelised, the most obvious of which is 'divide and conquer': given an original population of, say, 10 candidates, we split them evenly across our units of concurrency (threads or processes). This is a particularly nice way of implementing parallelisation because it allows us to operate on very large initial populations and to use more complex fitness functions. The downside is the same as it is for evolution in nature: each gene pool is smaller, and weaker genes in one pool can survive to the end because that pool started with weaker candidates, meaning that when we perform our final crossover we could be left with a weaker than optimal solution.
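A minimal sketch of this island-style approach, assuming the evolve, fitness, crossover and mutate functions from the worked example above (a final crossover between island champions could be added as well):

```python
from multiprocessing import Pool

def evolve_island(population):
    # Evolve one sub-population ("island") independently to get its champion.
    return evolve(population, fitness, crossover, mutate, generations=50)

def parallel_ga(population, n_islands=4):
    # Divide: split the population evenly across the islands.
    islands = [population[i::n_islands] for i in range(n_islands)]
    # Conquer: evolve every island concurrently in its own process.
    with Pool(n_islands) as pool:
        champions = pool.map(evolve_island, islands)
    # Recombine: the island champions compete for the final answer.
    return min(champions, key=fitness)
```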


Drawbacks

A large drawback of GAs is that they are not guaranteed to find the global maximum (or minimum). This means that we might not be left with the optimal solution to our problem, which someone else may be able to find with their GA.

In addition to this, it can be challenging to decide on a good fitness function, and having a weak one (as shown in our example) can lead to poor candidates overpowering the population, preventing us from moving from a local maximum/minimum to the global one.

And finally, ANNs excel at many problems in the financial landscape and their use should not be overlooked just because we believe GAs deserve some love too.


-The Team
