A GA is a kind of hill-climbing search; more specifically, it is very similar to a randomized beam search. As with all hill-climbing algorithms, there is the problem of local maxima: there is no absolute assurance that a genetic algorithm will find the global optimum. Getting trapped happens frequently, even when the population contains many individuals. Local maxima in a genetic algorithm are those individuals that get stuck at a pretty good, but not optimal, fitness value. [7]
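To make this concrete, here is a minimal sketch of a GA on a deliberately deceptive fitness landscape. Everything below (the bitstring encoding, the two-peak fitness function, the parameter values) is an illustrative assumption, not something from the sources above. A run will often settle near the wide "pretty good" peak instead of the narrow global one:

import random

GENOME_LEN = 16

def fitness(genome):
    # Two peaks: a wide local maximum near x = 20 and a
    # narrow global maximum near x = 50 (made-up shapes).
    x = int("".join(map(str, genome)), 2) / (2**GENOME_LEN - 1) * 60
    local_peak = 8.0 - abs(x - 20) * 0.8    # pretty good, but not optimal
    global_peak = 10.0 - abs(x - 50) * 2.0  # optimal, but hard to find
    return max(local_peak, global_peak, 0.0)

def evolve(pop_size=30, generations=100, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # Pick the fittest of 3 random individuals.
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        for _ in range(pop_size):
            a, b = tournament(), tournament()
            cut = random.randrange(1, GENOME_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with small probability (mutation).
            child = [bit ^ (random.random() < p_mut) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # often ~8 (local peak) rather than 10 (global peak)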
When a genetic algorithm is applied to large-scale, complex real-world problems, premature convergence is one of the most frequently encountered difficulties. In that situation the search becomes trapped in a suboptimal state, and most of the operators can no longer produce offspring that surpass their parents. It has in fact been proven that the canonical genetic algorithm, without elitism, does not converge to the global optimum. [8]
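One common way to spot premature convergence in practice is to monitor population diversity alongside the best fitness. The sketch below, which reuses the bitstring genomes from the example above, measures diversity as mean pairwise Hamming distance; the function names, the window size, and the threshold are illustrative assumptions, not standard values:

from itertools import combinations

def mean_hamming(pop):
    # Average number of differing bits between all pairs of genomes.
    pairs = list(combinations(pop, 2))
    return sum(sum(a != b for a, b in zip(g1, g2))
               for g1, g2 in pairs) / len(pairs)

def prematurely_converged(pop, best_history, window=10, div_threshold=0.5):
    # Flag the run when the best fitness has not improved over the
    # last `window` generations AND genetic diversity has collapsed.
    stagnant = (len(best_history) >= window
                and max(best_history[-window:]) <= best_history[-window])
    return stagnant and mean_hamming(pop) < div_threshold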
At a local maximum, any small mutation yields worse fitness. Fortunately, crossover can help individuals get out of a local maximum. Mutation is also a random process, so a sudden large mutation can occasionally move individuals out of this situation. (Strictly speaking, the trapped individuals themselves never get out; it is their offspring that escape the local maximum.) Overall, GAs have fewer problems with local maxima than back-propagation neural networks. [9]
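Building on the diversity check above, one illustrative countermeasure (sometimes called hypermutation) is to raise the mutation rate temporarily once the population looks converged, so that offspring get a chance to jump out of the basin. The two rates below are assumptions for the sketch, not recommended values:

def adaptive_mutation_rate(pop, best_history, base=0.01, boosted=0.2):
    if prematurely_converged(pop, best_history):
        return boosted  # large mutations: offspring may leave the local maximum
    return base         # normal rate while the search is still making progress

Replacing part of the population with fresh random individuals ("random immigrants") is a common alternative with a similar effect.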
Sources:
[8] Rudolph, G. (1994). "Convergence Analysis of Canonical Genetic Algorithms", IEEE Transactions on Neural Networks, Vol. 5(1), pp. 96-101.