Friday, March 29, 2013

Ant Colony Algorithm Lecture and Videos

Ant Colony Lecture and Examples Video

Ant Colony Algorithms are inspired by the behaviour of natural ant colonies: they solve their problems by multi-agent cooperation, using indirect communication through modifications made to the environment.

Natural, or real, ants release a certain amount of pheromone while walking, and each ant prefers (probabilistically) to follow a direction that is rich in pheromone. This simple behaviour explains why ants are able to adjust to changes in the environment, such as a new obstacle interrupting the currently shortest path. Let me illustrate:

What happens when an ant colony following a shortest path between a food source and the nest (see below) gets interrupted by an obstacle appearing somewhere along the path? When the ants reach the obstacle, they will randomly choose some way around it (right, left, over or under).

If we assume that the only way around the obstacle is either right or left, we can safely assume that approximately half of the ants will go right and the other half left, as illustrated below.


The ants that happen to pick the shorter path will obviously build a strong trail of pheromone a lot faster than the ones choosing the longer path (see below). This will cause more and more ants to choose the shorter path, until eventually all ants have found the shortest path.
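The positive-feedback loop described above can be sketched in a few lines of Python. This is only a toy simulation of the two-branch situation; the branch lengths, deposit amounts and evaporation rate are illustrative assumptions, not part of any formal algorithm:

```python
import random

SHORT, LONG = 0, 1            # the two branches around the obstacle
length = [1.0, 2.0]           # relative branch lengths (assumed)
pheromone = [1.0, 1.0]        # both branches start equally attractive
evaporation = 0.05            # fraction of pheromone lost per time step
n_ants, n_steps = 20, 200

random.seed(42)
for _ in range(n_steps):
    for _ in range(n_ants):
        # Probabilistic choice: prefer the branch richer in pheromone.
        total = pheromone[SHORT] + pheromone[LONG]
        branch = SHORT if random.random() < pheromone[SHORT] / total else LONG
        # A shorter branch means more round trips per unit time,
        # modelled here as a larger deposit per choice.
        pheromone[branch] += 1.0 / length[branch]
    # Evaporation keeps old trails from dominating forever.
    pheromone = [p * (1 - evaporation) for p in pheromone]

print(pheromone[SHORT] > pheromone[LONG])
```

Because the short branch is reinforced faster, the probabilistic choice feeds back on itself and almost all pheromone ends up on the shorter path.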

Ant Colony Algorithms attempt to apply similar techniques to solve real-life problems. The main idea is to use repeated simulations of artificial ants (mobile agents inspired by real ant behaviour) to generate new solutions to the problem at hand.

The ants use information collected during past simulations to direct their search, and this information is made available and modified through the environment. Many different artificial ant algorithms have been implemented, and no universal definition of an artificial ant fits them all.

In the next section we will take a look at an example where an artificial ant colony solves the travelling salesman problem.

Friday, March 22, 2013

HEA Profile Dimensions

HEA Profile Dimensions Table


CLICK FOR THE NPI PROFILE DIMENSIONS TABLE!
CLICK FOR THE NPU PROFILE DIMENSIONS TABLE!



NPI Profile Dimensions

NPI profiles are durable, load-bearing steel sections used mainly in the structural steel and machinery manufacturing sectors.






CLICK FOR THE NPU PROFILE DIMENSIONS TABLE!
CLICK FOR THE HEA PROFILE DIMENSIONS TABLE!




DISADVANTAGES AND LIMITATIONS OF GENETIC ALGORITHM



  1. Difficulty in identifying a suitable fitness function
  2. Difficulty in defining a representation for the problem
  3. The problem of choosing the various parameters, such as population size, mutation rate, crossover rate, the selection method and its strength.
  4. Premature convergence can occur. [4]
  5. Cannot easily incorporate problem-specific information
  6. Not good at identifying local optima
  7. No effective termination criterion.
  8. Not effective for smooth unimodal functions
  9. Needs to be coupled with a local search technique.
  10. Requires a large number of response (fitness) function evaluations
  11. Configuration is not straightforward
  12. Cannot use gradients. [5]
  13. Not all problems can be framed in the mathematical form that genetic algorithms demand
  14. Developing a genetic algorithm and interpreting its results requires an expert with both the programming and statistical/mathematical skills demanded.
  15. Most genetic algorithms rely on random number generators that produce different results each time the model runs. Although there is likely to be a high degree of consistency among runs, they may vary. [6]
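To make item 3 concrete, here is a minimal GA sketch on the toy OneMax problem (maximize the number of 1-bits in a bit string). Every parameter value below is an illustrative assumption; choosing such values well for a real problem is exactly the difficulty the list describes:

```python
import random

# Parameters of the kind item 3 refers to (all values are illustrative):
POP_SIZE = 30        # population size
MUT_RATE = 0.02      # per-bit mutation rate
CX_RATE = 0.8        # crossover rate
TOURNAMENT = 3       # selection strength (tournament size)
N_BITS = 32

random.seed(1)
fitness = lambda ind: sum(ind)   # fitness function: count of 1-bits

def select(pop):
    # Tournament selection: fittest of TOURNAMENT random individuals.
    return max(random.sample(pop, TOURNAMENT), key=fitness)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for gen in range(60):
    nxt = []
    while len(nxt) < POP_SIZE:
        a, b = select(pop), select(pop)
        if random.random() < CX_RATE:              # one-point crossover
            cut = random.randrange(1, N_BITS)
            a = a[:cut] + b[cut:]
        a = [bit ^ (random.random() < MUT_RATE) for bit in a]  # mutation
        nxt.append(a)
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))
```

Changing any one of the knobs at the top (say, tournament size or mutation rate) can noticeably change how fast, and whether, the population reaches the all-ones string.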





ADVANTAGES OF GENETIC ALGORITHM


The advantages of genetic algorithms include:


1.   Inherent parallelism and reliability
2.   Many ways to speed up and improve a GA application as knowledge about the problem domain is gained
3.   The solution space searched is wider
4.   Performs well when the fitness landscape is complex
5.   Makes it easier to discover the global optimum
6.   Can handle problems with multi-objective functions
7.   Only uses function evaluations.
8.   Easily modified for different problems.
9.   Handles noisy functions well.
10.  Handles large, poorly understood search spaces easily
11.  Good for multi-modal problems; returns a suite of solutions.
12.  Very robust to difficulties in the evaluation of the objective function. [1]
13.  Requires no knowledge or gradient information about the response surface
14.  Discontinuities present on the response surface have little effect on overall optimization performance
15.  Performs very well for large-scale optimization problems
16.  Can be employed for a wide variety of optimization problems [2]
17.  A genetic algorithm is very easy to understand and practically does not demand knowledge of mathematics. [3]

PREMATURE CONVERGENCE IN GENETIC ALGORITHM


GA is a kind of hill-climbing search; more specifically, it is very similar to a randomized beam search. As with all hill-climbing algorithms, there is a problem of local maxima: there is no absolute assurance that a genetic algorithm will find the global optimum. This happens quite often when the population contains many individuals. Local maxima in a genetic problem are those individuals that get stuck with a pretty good, but not optimal, fitness measure. [7]



When applying a genetic algorithm to large-scale, complex real-world problems, premature convergence is one of the most frequently encountered difficulties. In that situation, the search becomes trapped in a suboptimal state, and most of the operators can no longer produce offspring that surpass their parents. It has been proven that the canonical genetic algorithm (without retaining the best individual) does not converge to the global optimum. [8]

Near a local maximum, any small mutation gives worse fitness. Fortunately, crossover can help individuals get out of a local maximum, and because mutation is a random process, a sudden large mutation may also carry them out of it. (In fact, these individuals never get out; it is their offspring that escape the local maxima.) Overall, GAs have fewer problems with local maxima than back-propagation neural networks. [9]
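The effect described above can be illustrated with a toy experiment; the fitness function and all parameters here are hypothetical. A population bred near a local peak stays trapped when only selection and a blending crossover act, while random mutation lets offspring escape to the global peak:

```python
import random

def fitness(x):
    # Two peaks: a broad local peak at x = 2 (height 5) and a
    # narrow global peak at x = 8 (height 10).
    return max(5 - abs(x - 2), 10 - 5 * abs(x - 8), 0)

def evolve(mut_rate, seed=3, gens=80, size=20):
    rng = random.Random(seed)
    pop = [rng.uniform(0, 4) for _ in range(size)]   # start near the local peak
    for _ in range(gens):
        nxt = []
        for _ in range(size):
            a = max(rng.sample(pop, 3), key=fitness)  # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            child = (a + b) / 2                       # blending "crossover"
            if rng.random() < mut_rate:               # mutation: random jump
                child = rng.uniform(0, 10)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

stuck = evolve(mut_rate=0.0)   # selection + crossover only: trapped near x = 2
free = evolve(mut_rate=0.2)    # mutation lets offspring reach the global peak
print(fitness(stuck), fitness(free))
```

Without mutation, blending can only produce children inside the interval the parents already span, so the population contracts onto the local peak; with mutation, a single lucky jump near x = 8 is enough for selection to pull the population to the global peak.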

Sources:

[7][9] C. R. Dyer, CS 540 Lecture Notes, University of Wisconsin-Madison.
[8] G. Rudolph, "Convergence Analysis of Canonical Genetic Algorithms," IEEE Transactions on Neural Networks, vol. 5, no. 1, 1994, pp. 96-101.