A genetic algorithm is an adaptive stochastic algorithm for search and optimization.
Genetic algorithms were first used by Holland (1975).
The basic idea is to mimic a simple picture of natural selection in order to find a good solution. The first step is to mutate, or randomly vary, a given collection of sample programs. The second step is a selection step, typically carried out by measuring each candidate against a fitness function. The process is repeated until a suitable solution is found.
There are many different types of genetic algorithms. How mutation is performed depends on how the sample programs are represented, and on whether the programmer includes crossover techniques. The test for fitness is also up to the programmer.
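To make these choices concrete, here is a minimal sketch of one such loop. It is not Holland's algorithm; the bit-string representation, the toy "count the ones" (OneMax) fitness function, tournament selection, single-point crossover, and the particular mutation and crossover rates are all assumptions made for this example.

```python
import random

CHROMOSOME_LENGTH = 32
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01
CROSSOVER_RATE = 0.7

def fitness(chromosome):
    # Toy fitness function chosen for illustration: the number of 1 bits.
    return sum(chromosome)

def tournament_select(population, k=3):
    # Selection step: keep the fittest of k randomly chosen individuals.
    return max(random.sample(population, k), key=fitness)

def crossover(parent_a, parent_b):
    # Single-point crossover: splice the two parents at a random cut point.
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, CHROMOSOME_LENGTH - 1)
        return parent_a[:point] + parent_b[point:]
    return parent_a[:]

def mutate(chromosome):
    # Mutation step: flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in chromosome]

def run():
    # Start from a random population and repeat mutation and selection.
    population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        population = [mutate(crossover(tournament_select(population),
                                       tournament_select(population)))
                      for _ in range(POPULATION_SIZE)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = run()
    print("best fitness:", fitness(best))
```

Swapping in a different representation, fitness function, or selection scheme changes only the corresponding function, which is why so many variants of the basic algorithm exist.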
Like gradient-flow optimization, the process can get stuck in a local maximum of the fitness function. One advantage of a genetic algorithm is that it does not require the fitness function to be very smooth, since a random search is done instead of following the path of least resistance. To be successful, however, there needs to be some reasonably well-behaved relationship between the modifiable parameters and the fitness. In general, one runs into computational irreducibility.
Holland represented an electronic organism as a binary string ("chromosome"), and then used genetic and evolutionary principles of fitness-proportionate selection for reproduction (including random crossover and mutation) to search enormous solution spaces efficiently. So-called genetic programming languages apply the same principles, using an expression tree instead of a bit string as the "chromosome."
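As one possible rendering of fitness-proportionate ("roulette-wheel") selection on bit-string chromosomes, the sketch below draws an individual with probability proportional to its fitness. The function name, the assumption of non-negative fitness values, and the fallback when all fitnesses are zero are this example's own choices, not Holland's.

```python
import random

def roulette_wheel_select(population, fitness):
    # Fitness-proportionate selection: each chromosome is chosen with
    # probability proportional to its (assumed non-negative) fitness.
    weights = [fitness(individual) for individual in population]
    total = sum(weights)
    if total == 0:
        # Degenerate case: all fitnesses are zero, so fall back to a uniform choice.
        return random.choice(population)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, weight in zip(population, weights):
        running += weight
        if running >= pick:
            return individual
    return population[-1]
```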