1
CSCI 4310
  • Lecture 10: Local Search Algorithms
  • Adapted from Russell and Norvig & Coppin

2
Reading
  • Section 5.8 in Artificial Intelligence
    Illuminated by Ben Coppin
  • ISBN 0-7637-3230-3
  • Chapter 4 in Artificial Intelligence: A Modern
    Approach by Russell and Norvig
  • ISBN 0-13-790395-2
  • Chapters 4 and 25 in Winston

3
Techniques as Metaheuristics
  • Definition adapted from Wikipedia
  • Methods for solving problems by combining
    heuristics, ideally efficiently
  • Generally applied to problems for which there is
    no satisfactory problem-specific algorithm
  • Not a panacea

4
What is local search?
  • Local search algorithms operate using a single
    current state
  • Search involves moving to neighbors of the
    current state
  • We don't care how we got to the current state
  • i.e., no one cares how you arrived at the 8-queens
    solution
  • Don't need the intermediate steps
  • We do need a path for TSP
  • But not the discarded longer paths

5
Purpose
  • Local search strategies
  • Can often find good solutions in large or
    infinite search spaces
  • Work well on optimization problems
  • An objective function rates each state
  • Like genetic algorithms
  • Nature has provided reproductive fitness as an
    objective function
  • No goal test or path cost, unlike the directed
    search we saw earlier

6
Local Search
  • State space landscape
  • State is current location on the curve
  • If height of state is cost, finding the global
    minimum is the goal

7
Local Search
  • Complete local search algorithm
  • Always finds a goal if one exists
  • Optimal local search algorithm
  • Always finds a global min / max

(Images: Google image search results for "Global Maxima")
8
Hill climbing
  • Greedy local search
  • If minimizing, this is gradient descent
  • The algorithm never moves to a worse state
  • Suffers from the same mountain-climbing problems
    we have discussed
  • "Sometimes worse must you get in order to find the
    better" - Yoda
  • We can do better (a minimal sketch follows below)
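
A minimal Python sketch of greedy hill climbing, assuming illustrative
neighbours() and value() helpers supplied by the caller (maximizing; for
gradient-descent-style minimization, negate the objective):

    def hill_climb(state, neighbours, value):
        """Greedy local search: move to the best neighbour until
        no neighbour improves on the current state."""
        while True:
            candidates = neighbours(state)
            if not candidates:
                return state
            best = max(candidates, key=value)
            if value(best) <= value(state):
                return state              # local maximum reached
            state = best

    # Example: maximize f(x) = -(x - 3)**2 by stepping +/- 1
    f = lambda x: -(x - 3) ** 2
    print(hill_climb(0, lambda x: [x - 1, x + 1], f))   # climbs to 3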

9
Stochastic Hill Climbing
  • Generate successors randomly until one is better
    than the current state
  • Good choice when each state has a very large
    number of successors
  • Still, this is an incomplete algorithm
  • We may get stuck in a local maximum (sketch below)
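
A sketch of stochastic hill climbing under the same illustrative
interface, with a random_neighbour() helper (an assumption) and a cap on
failed samples so the loop terminates:

    def stochastic_hill_climb(state, random_neighbour, value, max_fails=10000):
        """Sample successors at random and accept the first improvement;
        stop after too many consecutive non-improving samples."""
        fails = 0
        while fails < max_fails:
            candidate = random_neighbour(state)
            if value(candidate) > value(state):
                state, fails = candidate, 0   # uphill move found
            else:
                fails += 1                    # keep sampling
        return state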

10
Random Restart Hill Climbing
  • Generate start states randomly
  • Then proceed with hill climbing
  • Will eventually generate a goal state as the
    initial state
  • We should have a complete algorithm by dumb luck
    (eventually)
  • Hard problems typically have a large number of
    local maxima
  • This may be a decent definition of "difficult" as
    it relates to a search strategy (restart wrapper
    sketched below)
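
A minimal random-restart wrapper around the hill_climb sketch above;
random_state() is an assumed helper that draws a fresh start state:

    def random_restart_hill_climb(random_state, neighbours, value, restarts=100):
        """Run hill climbing from many random starts and keep the best result."""
        best = None
        for _ in range(restarts):
            result = hill_climb(random_state(), neighbours, value)
            if best is None or value(result) > value(best):
                best = result
        return best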

11
Simulated Annealing
  • Problems so far
  • Never making downhill moves is guaranteed to be
    incomplete
  • A purely random walk (choosing a successor state
    randomly) is complete, but boring and inefficient
  • Let's combine the two and see what happens

12
Simulated Annealing
  • Uses the concept of temperature which decreases
    as we proceed
  • Instead of picking the best next move, we choose
    randomly
  • If the move improves the state, we accept it
  • If not, we accept with a probability that
    decreases exponentially with the badness of the
    move
  • At high temperature, we are more likely to accept
    random bad moves
  • As the system cools, bad moves become less likely
    (acceptance rule sketched below)
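
One common way to realize this rule is a Metropolis-style criterion,
accepting a worsening move of size delta with probability exp(-delta / T);
a sketch, assuming we are minimizing an energy E:

    import math, random

    def accept(e_current, e_new, temperature):
        """Always take improvements; take worsening moves with
        probability exp(-delta / T), which shrinks as T cools."""
        if e_new < e_current:
            return True
        if temperature <= 0:
            return False
        delta = e_new - e_current            # badness of the move
        return random.random() < math.exp(-delta / temperature)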

13
Simulated Annealing
  • Temperature eventually goes to 0.
  • At temperature 0, this reduces to the greedy algorithm

14
Simulated Annealing
    s ← s0; e ← E(s)                              // Initial state, energy
    sb ← s; eb ← e                                // Initial "best" solution
    k ← 0                                         // Energy evaluation count
    while k < kmax and e > emax                   // While time left and not good enough
        sn ← neighbour(s)                         // Pick some neighbour
        en ← E(sn)                                // Compute its energy
        if en < eb then                           // Is this a new best?
            sb ← sn; eb ← en                      //   Yes, save it
        if P(e, en, temp(k/kmax)) > random() then // Should we move to it?
            s ← sn; e ← en                        //   Yes, change state
        k ← k + 1                                 // One more evaluation done
    return sb                                     // Best state found
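
A compact, runnable Python version of the same loop (a sketch with
illustrative names; E, neighbour, and s0 are supplied by the caller, and
the linear cooling schedule is an assumption):

    import math, random

    def simulated_annealing(s0, E, neighbour, k_max=10000, t0=1.0):
        """Minimize E starting from s0 with a linearly decaying temperature."""
        s, e = s0, E(s0)
        s_best, e_best = s, e
        for k in range(k_max):
            t = t0 * (1 - k / k_max)              # temperature falls toward 0
            s_new = neighbour(s)
            e_new = E(s_new)
            if e_new < e_best:                    # new best found
                s_best, e_best = s_new, e_new
            if e_new < e or (t > 0 and random.random() < math.exp((e - e_new) / t)):
                s, e = s_new, e_new               # accept the move
        return s_best

    # Example: minimize E(x) = (x - 3)**2 with Gaussian steps
    print(simulated_annealing(0.0, lambda x: (x - 3) ** 2,
                              lambda x: x + random.gauss(0, 1)))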

15
Simulated Annealing
(Images: results of fast cooling vs. slow cooling; pictures from Wikipedia)
  • Similar colors attract at short distances and
    repel at slightly larger distances; each move
    swaps two pixels

16
Tabu Search
  • Keep a list of k states previously visited to
    avoid repeating paths (sketch below)
  • Combine this technique with other heuristics
  • Avoid local optima by rewarding exploration of
    new paths, even if they appear relatively poor
  • "A bad strategic choice can yield more
    information than a good random choice."
  • The home of Tabu search
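
A minimal sketch of the tabu-list idea, reusing the illustrative
neighbours()/value() interface from above: remember the last k states
visited and never move back to one of them, even if it looks best:

    from collections import deque

    def tabu_search(state, neighbours, value, k=50, iterations=1000):
        """Move to the best non-tabu neighbour at every step, which may be
        worse than the current state; keep the best state seen overall."""
        tabu = deque([state], maxlen=k)
        best = state
        for _ in range(iterations):
            candidates = [s for s in neighbours(state) if s not in tabu]
            if not candidates:
                break
            state = max(candidates, key=value)
            tabu.append(state)
            if value(state) > value(best):
                best = state
        return best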

17
Ant Colony Optimization
  • Send artificial ants along graph edges
  • Drop pheromone as you travel
  • The next generation of ants is attracted to the
    pheromone
  • Applied to the Traveling Salesman Problem (sketch
    below)
  • Read this paper
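
A heavily simplified ant-colony sketch for TSP (an illustration, not the
algorithm from the linked paper): each ant builds a tour preferring short
edges with lots of pheromone, then pheromone evaporates and is reinforced
along each tour in proportion to 1/length. Here dist is assumed to be a
symmetric matrix of positive distances, and the parameter values are
arbitrary assumptions:

    import random

    def ant_colony_tsp(dist, n_ants=20, n_iters=100, rho=0.5, beta=2.0):
        """Return the best tour (city order) and its length found by the ants."""
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]       # pheromone on each edge
        best_tour, best_len = None, float("inf")
        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):
                tour = [random.randrange(n)]
                while len(tour) < n:              # build the tour city by city
                    i = tour[-1]
                    choices = [j for j in range(n) if j not in tour]
                    weights = [tau[i][j] * (1.0 / dist[i][j]) ** beta for j in choices]
                    tour.append(random.choices(choices, weights)[0])
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                tours.append((tour, length))
                if length < best_len:
                    best_tour, best_len = tour, length
            tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
            for tour, length in tours:            # deposit along each tour
                for k in range(n):
                    a, b = tour[k], tour[(k + 1) % n]
                    tau[a][b] += 1.0 / length
                    tau[b][a] += 1.0 / length
        return best_tour, best_len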

18
Local Beam Search
  • Hill climbing and variants have one current state
  • Beam search keeps k states
  • For each of the k states, generate all potential
    next states
  • If any next state is a goal, terminate
  • Otherwise, select the k best successors overall
  • Each search thread shares information
  • So, not just k parallel searches (sketch below)
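
A sketch of local beam search with the shared pool made explicit; the
neighbours(), value(), and is_goal() helpers are illustrative assumptions:

    def local_beam_search(starts, neighbours, value, is_goal, max_steps=1000):
        """Keep the k best states from the pooled successors of all k states,
        so promising regions can claim every slot (unlike k independent climbs)."""
        beam = list(starts)                       # the k current states
        k = len(beam)
        for _ in range(max_steps):
            successors = [s for state in beam for s in neighbours(state)]
            for s in successors:
                if is_goal(s):
                    return s
            if not successors:
                break
            beam = sorted(successors, key=value, reverse=True)[:k]
        return max(beam, key=value)               # best found so far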

19
Local Beam Search 2
  • Quickly move resources to where the most progress
    is being made
  • But it suffers from a lack of diversity and can
    quickly devolve into parallel hill climbing
  • So, we can apply our same techniques
  • Randomize: choose the k successors at random
  • At the point in any algorithm where we are
    getting stuck, just randomize to re-introduce
    diversity
  • What does this sound like in the genetic realm?

20
Stochastic Beam Search
  • Choose a pool of next states at random
  • Select k of them, with probability increasing as
    a function of the value of the state
  • Successors (offspring) of a state (organism)
    populate the next generation according to a value
    (fitness)
  • Sound familiar? (selection step sketched below)
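
A sketch of the stochastic selection step, sampling k successors with
probability proportional to their value (values assumed non-negative;
sampling is with replacement for simplicity):

    import random

    def stochastic_beam_step(beam, neighbours, value, k):
        """Pool all successors, then draw k of them weighted by value."""
        pool = [s for state in beam for s in neighbours(state)]
        weights = [value(s) for s in pool]
        return random.choices(pool, weights, k=k)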

21
Genetic Algorithms
  • Variant of stochastic beam search
  • Rather than modifying a single state, two parent
    states are combined to form a successor state
  • This state is embodied in the phenotype (crossover
    sketched below)
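
A minimal single-point crossover sketch, assuming states encoded as
lists (e.g., one queen column per row in 8-queens):

    import random

    def crossover(parent_a, parent_b):
        """Splice a prefix of one parent onto a suffix of the other."""
        point = random.randint(1, len(parent_a) - 1)
        return parent_a[:point] + parent_b[point:]

    # Example: combine two 8-queens states
    print(crossover([0, 4, 7, 5, 2, 6, 1, 3], [2, 5, 7, 0, 3, 6, 4, 1]))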