IEEM 5119 Genetic Algorithms and Other Nature-Inspired Metaheuristic Algorithms
(Transcript, 52 slides)
1
IEEM 5119 Genetic Algorithms and Other Nature-Inspired Metaheuristic Algorithms
Maw-Sheng Chern
http://chern.ie.nthu.edu.tw/gen/gen.htm
2
This course is intended to introduce stochastic and other extended local search methods (genetic algorithms, simulated annealing, tabu search, ant algorithms, particle swarm optimization, the bee algorithm, etc.) and their applications to difficult-to-solve optimization problems in industrial engineering and manufacturing systems design. Genetic algorithms are based on concepts from population genetics and evolutionary theory. The algorithms are constructed to optimize the fitness of a population of elements through crossover (recombination) and mutation (perturbation) operations on their genes.
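To make the crossover, mutation and selection operations concrete, here is a minimal generational genetic algorithm on bit strings. This is an illustrative sketch, not code from the course: the OneMax fitness (number of 1-bits), tournament selection, and all parameter values are assumptions chosen for the example.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Sketch of a generational GA on bit strings (illustrative parameters)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                       # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)        # keep the best ever seen
    return best

# OneMax: fitness is the number of 1-bits, so the optimum is the all-ones string.
best = genetic_algorithm(sum)
print(sum(best))
```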
3
Simulated annealing is based on an analogy with cooling a material in a heat bath, a process known as annealing. A solid material is heated in a heat bath until it melts, then cooled down slowly until it crystallizes into a solid (low-energy) state. The atoms in the material have high energies at high temperatures and more freedom to arrange themselves. As the temperature is reduced, the atom energies decrease. The structural properties of the solid depend on the rate of cooling. From the point of view of search methods for optimization problems, simulated annealing is a stochastic local search method. It always accepts a better-cost local solution, and it may also accept a worse-cost local solution with a probability that is gradually decreased in the course of the algorithm's execution.
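The acceptance rule described above can be sketched in a few lines. This is an illustrative sketch, not the course's code: the quadratic objective, the ±1 integer neighborhood, and the geometric cooling schedule are assumptions chosen for the example.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.95, steps=500, seed=1):
    """Accept improving moves always; accept worsening moves with
    probability exp(-delta/T), where the temperature T is gradually lowered."""
    rng = random.Random(seed)
    x, t, best = x0, t0, x0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = f(y) - f(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y                       # move accepted (possibly uphill)
        if f(x) < f(best):
            best = x
        t *= cooling                    # geometric cooling schedule
    return best

# Minimize f(x) = x^2 over the integers, starting far from the optimum at 0.
best = simulated_annealing(lambda x: x * x, 40, lambda x, r: x + r.choice([-1, 1]))
print(best)
```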
4
The ant algorithm is based on the observation of real ants' behavior. Ants can coordinate their activities via stigmergy, a way of indirect communication through the modification of the environment. The main idea of the ant algorithm is to use self-organizing principles of artificial agents which collaborate to solve problems.
Tabu search is a deterministic iterative improvement local search method with the possibility of accepting a worse-cost local solution in order to escape from a local optimum. The set of legal local solutions is restricted by a tabu list, which is designed to prevent the search from going back to recently visited solutions. The solutions in the tabu list are not accepted in the next iteration.
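A minimal sketch of the tabu-list mechanism (illustrative, not the course's code): the search always moves to the best non-tabu neighbor, even when that move is worse, and the 1-D landscape below is a made-up example in which a local optimum blocks a greedy search from the global one.

```python
from collections import deque

def tabu_search(f, x0, neighbors, tabu_size=5, iters=50):
    """Move to the best non-tabu neighbor at every step, even if it is worse,
    so the search can climb out of a local optimum."""
    x, best = x0, x0
    tabu = deque([x0], maxlen=tabu_size)   # recently visited states are forbidden
    for _ in range(iters):
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:
            break
        x = min(candidates, key=f)          # best admissible move (possibly uphill)
        tabu.append(x)
        if f(x) < f(best):
            best = x
    return best

# A made-up 1-D landscape: local optimum at x=2 (value 1), global optimum at x=8 (value 0).
values = [5, 3, 1, 4, 6, 4, 2, 1, 0, 3]
f = lambda x: values[x]
nbrs = lambda x: [y for y in (x - 1, x + 1) if 0 <= y < len(values)]
print(tabu_search(f, 0, nbrs))   # prints 8
```

Without the tabu list, the search would stop at x = 2; forbidding recently visited states forces it over the hill to x = 8.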
5
Particle Swarm Optimization (PSO) is a population-based stochastic optimization method proposed by James Kennedy and R. C. Eberhart in 1995. It is motivated by the social behavior of organisms such as bird flocking and fish schooling. In the PSO algorithm, the potential solutions, called particles, are flown through the problem hyperspace. The change of position of a particle is called its velocity, and the particles change their positions over time. During flight, a particle's velocity is stochastically accelerated toward its previous best position and toward a neighborhood best solution. PSO has been successfully applied to various optimization problems, artificial neural network training, fuzzy system control, and others.
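The velocity update described above can be sketched as follows. This is an illustrative sketch, not the course's code: the sphere objective and the parameter values (inertia w, acceleration coefficients c1, c2) are assumptions chosen for the example.

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Each particle's velocity is stochastically accelerated toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]            # personal best positions
    gbest = min(pbest, key=f)             # global (swarm) best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)   # minimum 0 at the origin
best = pso(sphere)
print(sphere(best))
```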
6
The Bees Algorithm is a population-based search algorithm, first developed in 2005 by Pham et al. [1] and Karaboga [2] independently. The algorithm mimics the food foraging behaviour of swarms of honey bees. In its basic version, the algorithm performs a kind of neighbourhood search combined with random search and can be used for optimization problems.
7
Textbook References
  • Gen, Mitsuo and Cheng, Runwei, Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, 1997.
  • Gen, Mitsuo and Cheng, Runwei, Genetic Algorithms and Engineering Optimization, John Wiley & Sons, New York, 2000.
  • Gen, Mitsuo, Cheng, Runwei and Lin, Lin, Network Models and Optimization: Multiobjective Genetic Algorithm Approach, Springer-Verlag, London, 2008.

3. Lecture Slides
4. World-Wide-Web
5. Course homepage
http://chern.ie.nthu.edu.tw/gen/gen.htm
8
References: Genetic Algorithms
Michalewicz, Zbigniew, Genetic Algorithms + Data Structures = Evolution Programs, Third Revised and Extended Edition, Springer, New York, 1999.
Goldberg, David E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Publishing Company, Inc., New York, 1989.
Ladd, Scott Robert, Genetic Algorithms in C++, M&T Books, New York, 1996.
Bagchi, Tapan P., Multiobjective Scheduling by Genetic Algorithms, Kluwer Academic Publishers, Boston.
Gen, Mitsuo, Cheng, Runwei and Lin, Lin, Network Models and Optimization: Multiobjective Genetic Algorithm Approach, Springer-Verlag, London, 2008.
9
Bauer, J., Genetic Algorithms and Investment Strategies, John Wiley & Sons, New York, 1994.
Mitchell, M., An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, 1996.
Pham, Duc Truong and Karaboga, Dervis, Intelligent Optimisation Techniques: Genetic Algorithms, Tabu Search, Simulated Annealing and Neural Networks, Springer, New York, 1998.
Karr, Charles L. and Freeman, L. Michael (eds.), Industrial Applications of Genetic Algorithms, CRC Press, New York, 1998.
Man, K. F., Tang, K. S. and Kwong, S., Genetic Algorithms, Springer, New York, 1999. (Refer to a genetic game in this book.)
Cantú-Paz, Erick, Efficient and Accurate Parallel Genetic Algorithms (Genetic Algorithms and Evolutionary Computation, Volume 1).
10
Haupt, Randy L. and Haupt, Sue Ellen, Practical Genetic Algorithms.
Lawton, George, A Practical Guide to Genetic Algorithms in C++ (book and disk).
Osyczka, Andrzej, Evolutionary Algorithms for Single and Multicriteria Design Optimization, Physica-Verlag, Heidelberg, 2002.
Bäck, Thomas, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford University Press, Oxford, 1996.
Reeves, Colin R. and Rowe, Jonathan E., Genetic Algorithms: Principles and Perspectives, A Guide to GA Theory, Kluwer Academic Publishers, Boston, 2003.
Corne, David, Dorigo, Marco and Glover, Fred (eds.), New Ideas in Optimization, The McGraw-Hill Companies, NY, 1999.
11
Ant Algorithms
Dorigo, Marco and Stützle, Thomas, Ant Colony Optimization, MIT Press, Cambridge, MA, 2004.
Bonabeau, Eric, Dorigo, Marco and Theraulaz, Guy, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, NY, 1999.
Corne, David, Dorigo, Marco and Glover, Fred (eds.), New Ideas in Optimization, The McGraw-Hill Companies, NY, 1999.
Kennedy, James, Eberhart, Russell C. and Shi, Yuhui, Swarm Intelligence, Morgan Kaufmann Publishers, San Francisco, 2001.

Simulated Annealing
Aarts, Emile and Korst, Jan, Simulated Annealing and Boltzmann Machines, John Wiley & Sons, Inc., NY, 1989.
12
Otten, R. H. J. M. and van Ginneken, L. P. P. P., The Annealing Algorithm, Kluwer Academic Publishers, Boston, MA, 1989.

Tabu Search
Glover, F. and Laguna, M., Tabu Search, Kluwer Academic Publishers, Boston, MA, 1997.

General Stochastic Search Methods
Aarts, Emile H. L. and Lenstra, Jan Karel, Local Search in Combinatorial Optimization, John Wiley and Sons, NY, 1997.
Hoos, Holger H. and Stützle, Thomas, Stochastic Local Search: Foundations and Applications, Morgan Kaufmann Publishers, NY, 2005.
13
Spall, J. C., Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, John Wiley and Sons, Inc., NY, 2003.
Tarasenko, George S., Stochastic Optimization in the Soviet Union: Random Search Algorithms, Delphic Associates, Inc., VA, 1985.
Reeves, Colin R. (ed.), Modern Heuristic Techniques for Combinatorial Problems, John Wiley & Sons, Inc., NY, 1993.
Sait, Sadiq M. and Youssef, Habib, Iterative Computer Algorithms with Applications in Engineering: Solving Combinatorial Optimization Problems, IEEE Computer Society, LA, 1999.
Yang, Xin-She, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2008.
Chiong, Raymond (ed.), Nature-Inspired Algorithms for Optimisation, Springer, Berlin/Heidelberg, 2009.

Randomized Algorithms
Motwani, Rajeev and Raghavan, Prabhakar, Randomized Algorithms, Cambridge University Press, NY, 1995.
14-17
(No Transcript)
18
1. Computational Complexity
We do not expect NP-hard (and NP-complete) problems to be solved in a polynomial number of steps.
Ambati et al. (1991): an evolutionary algorithm that achieves heuristic solutions about 25% worse than the expected optimal solution on random Traveling Salesman Problems in O(N log N) time.
Fogel (1993): an evolutionary algorithm that achieves heuristic solutions about 10% worse than the expected optimal solution on random Traveling Salesman Problems in O(N²) time.
19
The length of the minimal tour is 21,134 km. [Aarts 1989]
20
21
The sizes of the Traveling Salesman Problem:
100,000 = 10^5: people in a stadium.
5,500,000,000 = 5.5 × 10^9: people on earth.
1,000,000,000,000,000,000,000 = 10^21: liters of water on the earth.
10^10 years ≈ 3 × 10^17 seconds: the age of the universe.
22
The number of possible solutions (a 179-digit number).
The shortest roundtrip through 120 German cities: the length of this tour is 6,942 km.
23
(Figure: number of digits)
24
2. Search Spaces (Solution Space)
Algorithm = Problem Solving
How to do an efficient search in the state space?
25
e.g., the Karush-Kuhn-Tucker condition.
1. Obtain a good initial state.
2. Goal identification (identify the optimality condition). For some problems, we are not able to identify the goal state.
3. Control the search process.
26

State Space Representation
(a state: a partial solution or a complete solution)
(the goal: the desired solution)
How to search in the state space?
Simplex Method, . . . , Tabu Search, Simulated Annealing, . . .
27
Evolution Process
(Diagram: Initial Population → 1st Generation → 2nd Generation → . . . → n-th Generation)
Genetic Algorithm, Ant Algorithm, Particle Swarm Optimization, Bee Algorithm, Firefly Algorithm, . . .
28
Goal identification:
1. The goal state can be identified (e.g., linear programs, convex programs).
2. The goal state cannot be identified (e.g., the traveling salesman problem, quadratic assignment problem, knapsack problem, scheduling problems, . . .).

Example 1: Simplex algorithm for linear programs.
Goal identification: the primal feasible and dual feasible condition.
Control of the search process: moving to an improved adjacent basic solution.

Example 2: Steepest descent algorithm for convex programs.
Goal identification: the Karush-Kuhn-Tucker condition.
Control of the search process: moving in the steepest descent direction (minimization problem).
29
Example 3: Genetic algorithm for the Traveling Salesman Problem.
Goal identification: the optimality condition cannot be identified efficiently; heuristic rules are used to identify approximate solutions.
Control of the search process: crossover, mutation and selection operations.

Example 4: Ant algorithm for the Traveling Salesman Problem.
Goal identification: the optimality condition cannot be identified efficiently; heuristic rules are used to identify approximate solutions.
Control of the search process: pheromone and state transition probability.
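The pheromone-based transition rule of Example 4 can be sketched as follows. This is an illustrative sketch, not the course's code: the four-city instance, the parameters alpha, beta, rho, and the 1/L pheromone deposit rule are assumptions chosen for the example.

```python
import random

def ant_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=3):
    """Ants build tours city by city; the probability of choosing edge (i, j)
    is proportional to pheromone[i][j]**alpha * (1/dist[i][j])**beta."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
    tour_length = lambda t: sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [0]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                tour.append(rng.choices(choices, weights)[0])
            tours.append(tour)
        for i in range(n):                        # pheromone evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for t in tours:                           # deposit: shorter tours leave more
            L = tour_length(t)
            if L < best_len:
                best_tour, best_len = t, L
            for i in range(n):
                a, b = t[i], t[(i + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best_tour, best_len

# Four cities on a unit square: the optimal tour follows the perimeter (length 4).
D = [[0, 1, 1.414, 1], [1, 0, 1, 1.414], [1.414, 1, 0, 1], [1, 1.414, 1, 0]]
tour, length = ant_tsp(D)
print(tour, length)
```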
30
A state space of a Traveling Salesman Problem with n = 4.
31
3. Search Methods
32
Exact search methods
Advantage: they produce an optimal solution and are able to detect that a given problem has no feasible solution.
Disadvantage: they are time-consuming; for example, they are infeasible for real-time problems.
Traversal Search, Backtracking Algorithm, Branch and Bound Algorithm, Dynamic Programming Method, etc.

Local search methods
Advantage: they are time-efficient, and it is easy to write a program.
Disadvantage: they may not produce an optimal solution and are not able to detect that a given problem has no feasible solution.
Traditional local search does not provide a mechanism for the search to escape from a local optimum. The goal of local search is to find a solution which is as close as possible to the optimum.
33
neighborhood search, simulated annealing
Population-based search: it makes use of the information of a set of solutions, e.g., genetic algorithms, ant algorithms, . . .
34
Refinement: the best solution comes from a process of repeatedly refining and inventing alternative solutions.
35
Constructive Search Methods: generate a complete solution by iteratively extending partial solutions.
A → J → D → E → F → K → H → I → G → C → B → A
(A J D E F K H I G C B) is a complete solution.
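A classic constructive method is the nearest-neighbor heuristic for the TSP, which extends a partial tour one city at a time. This is an illustrative sketch with a made-up distance matrix, not code from the course.

```python
def nearest_neighbor_tour(dist, start=0):
    """Constructive search: repeatedly extend the partial tour by visiting
    the nearest city not yet in the tour."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        i = tour[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[i][j])   # greedy extension step
        tour.append(nxt)
        visited.add(nxt)
    return tour

# A made-up symmetric distance matrix on four cities.
D = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(nearest_neighbor_tour(D))   # prints [0, 1, 3, 2]
```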
36
Perturbation Search Methods: given a complete solution, we can easily change it into a new complete solution by modifying one or more solution components.
For example, in the TSP a complete solution (A B C D) is changed into a new solution (A D C B) by interchanging the positions of B and D (e.g., neighborhood search methods, mutation operations).
(The Liberty Times, July 27, 2005)
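The B/D interchange above is one move in a swap neighborhood. A sketch of the full neighborhood generator (illustrative, not the course's code):

```python
def swap_neighbors(tour):
    """All tours obtained from `tour` by interchanging the positions of two
    cities (the perturbation move used in the slide's example)."""
    n = len(tour)
    result = []
    for i in range(n):
        for j in range(i + 1, n):
            t = list(tour)
            t[i], t[j] = t[j], t[i]     # swap the two cities
            result.append(tuple(t))
    return result

# Interchanging B and D turns (A, B, C, D) into (A, D, C, B), as on the slide.
print(("A", "D", "C", "B") in swap_neighbors(("A", "B", "C", "D")))   # prints True
```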
37
Given a set of complete solutions, we can easily change them into new complete solutions by modifying one or more solution components among the solutions (e.g., the crossover and mutation operations in genetic algorithms).
38
Deterministic Algorithms: in each search step, the algorithm progresses toward the complete solution by making deterministic decisions, e.g., the Simplex method, Quasi-Newton algorithms, tabu search and many other conventional algorithms.
A deterministic algorithm will produce the same solution for a given problem instance. Even for the same instance, a stochastic algorithm usually produces distinct solutions at each run.
(Figure: Ackley's function)
39
4. Stochastic Algorithms
They make a random decision at each search step, e.g., Monte Carlo algorithms, simulated annealing, genetic algorithms, ant algorithms, etc. There are two cases: (1) the available information (the objective function to be optimized) may be erroneous or corrupted by random noise; (2) even with perfect information, we may introduce a random element to guide us when searching for the optimum solution.
40
[Hoos 2005]
41
Why stochastic algorithms?
1. They are efficient for practical use.
2. They are simple to implement. For many applications, a stochastic algorithm is the simplest algorithm available, or the fastest, or both.
3. They are very general and can be implemented for a wide class of optimization problems. For example, no differentiable function of real-valued parameters is required, and the problem need not be expressible in any particular constraint language.
4. They can run in parallel. The quality of solutions may be improved over time.
Trade-off: computing time vs. solution quality.
42
Configuration Graph
Transition probabilities for a deterministic algorithm.
Minimize f(x) subject to x ∈ S.
(Diagram: configuration graph, e.g. f(x_k) = 2.)
A deterministic algorithm will produce the same solution for a given problem instance.
43
Transition probabilities for a stochastic algorithm.
Remark: transition probabilities may depend on the number of iterations.
Even for the same instance, a stochastic algorithm usually produces distinct solutions at each run.
44
T = the number of iterations.
45
There are two ways to avoid getting trapped in a local optimum.
  • Accommodate nongreedy search moves: it is allowed to move to a neighborhood state with a worse function value (tabu search, simulated annealing, . . .).
  • Increase the number of edges in the configuration graph, i.e., enlarge the neighborhood of each state. However, the denser the configuration graph is, the more inefficient each search step will be.
46
5. Stochastic Quicksort Algorithm
Quicksort: pick a number as the pivot element.
21, 15, 36, 28, 32, 18, 84, 57, 72, 50
(Diagram: the elements are partitioned around the pivot 36.)
21, 15, 18, 28, 32, 36, 84, 57, 72, 50
47
Recursing on the right sublist: 50, 57, 72, 84.
48
Sorted result: 15, 18, 21, 28, 32, 36, 50, 57, 72, 84.
49
Deterministic Quicksort Algorithm: the pivot element at each step is selected using deterministic rules.
Stochastic Quicksort Algorithm (a Las Vegas algorithm): the pivot element at each step is selected randomly.
Theorem 5.1 [Motwani 1995]: The expected number of comparisons in an execution of the stochastic quicksort algorithm is O(n log n), where n elements are to be sorted.
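The randomized pivot choice of Theorem 5.1 can be sketched directly. This is an illustrative sketch, not Motwani's code; the three-way partition into less/equal/greater lists is a simplification that sorts out-of-place.

```python
import random

def randomized_quicksort(a, rng=None):
    """Las Vegas quicksort: the pivot is chosen uniformly at random, so the
    expected number of comparisons is O(n log n) for every input order."""
    if rng is None:
        rng = random.Random(4)
    if len(a) <= 1:
        return list(a)
    pivot = rng.choice(a)                        # random pivot (the Las Vegas step)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less, rng) + equal + randomized_quicksort(greater, rng)

print(randomized_quicksort([21, 15, 36, 28, 32, 18, 84, 57, 72, 50]))
# prints [15, 18, 21, 28, 32, 36, 50, 57, 72, 84]
```

Whatever pivots the random choices produce, the output is always correct; only the running time is random, which is what makes this a Las Vegas algorithm.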
50
6. A Stochastic Algorithm for the Min-Cut Problem
For a multigraph G, a cut is a set of edges whose removal results in G being broken into two or more components. A min-cut is a cut with minimum cardinality.
Multigraph G
{a, e, g}, {f, g}, {a, b, d}, {c, d}, {a, b, e, g} are cuts of G.
{f, g}, {c, d} are min-cuts of G.
51
Contracting an edge
(Figure: the effect of contracting edge e.)
Important observation: an edge contraction does not reduce the min-cut size of G.

A stochastic min-cut algorithm (a Monte Carlo algorithm):
1. Pick an edge uniformly at random and merge its two vertices. With each contraction, the number of vertices of G decreases by one.
2. Continue the contraction process until only two vertices remain. Then the set of edges between these two vertices is a cut of G.
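The two-step contraction process can be sketched as follows. This is an illustrative sketch, not the textbook's code: the edge-list representation, the component-relabeling scheme, and the small example multigraph are assumptions chosen for the example.

```python
import random

def karger_min_cut(edges, n_vertices, rng):
    """One run of the contraction algorithm on a multigraph given as an edge list."""
    comp = list(range(n_vertices))        # comp[v] = label of v's current super-vertex
    find = lambda v: comp[v]
    edges = list(edges)
    remaining = n_vertices
    while remaining > 2:
        u, v = rng.choice(edges)          # pick an edge uniformly at random
        cu, cv = find(u), find(v)
        comp = [cu if c == cv else c for c in comp]             # merge cv into cu
        edges = [e for e in edges if find(e[0]) != find(e[1])]  # drop self-loops
        remaining -= 1
    # The edges joining the two remaining super-vertices form a cut of G.
    return edges

# A small example multigraph on 4 vertices; its min-cut {(1, 2), (0, 3)} has size 2.
E = [(0, 1), (0, 1), (1, 2), (2, 3), (2, 3), (0, 3)]
rng = random.Random(5)
# Repeat independent runs to boost the success probability (cf. Theorem 6.2 below).
best = min((karger_min_cut(E, 4, rng) for _ in range(30)), key=len)
print(len(best))
```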

52
(Figure: contraction steps: select e, select a, select g; {c, d} is a cut.)
Theorem 6.1 [Motwani 1995]: The probability of discovering a particular min-cut is larger than 2/n², where n is the number of nodes.
Theorem 6.2 [Motwani 1995]: If we run the algorithm n²/2 times, making independent random choices each time, then the probability that a min-cut is not found in any of the n²/2 attempts is at most (1 − 2/n²)^(n²/2) < 1/e ≈ 1/2.7183.