Heuristic Search - PowerPoint PPT Presentation

Provided by: LiseG

Transcript and Presenter's Notes
1
Heuristic Search
  • Russell and Norvig Chapter 4
  • CMSC 421 Fall 2005

2
Modified Search Algorithm
  • INSERT(initial-node, FRINGE)
  • Repeat:
  • If FRINGE is empty then return failure
  • n ← REMOVE(FRINGE)
  • s ← STATE(n)
  • If GOAL?(s) then return path or goal state
  • For every state s′ in SUCCESSORS(s)
  • Create a node n′
  • INSERT(n′, FRINGE)
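The loop above can be made concrete in a short Python sketch. The FIFO fringe used here gives breadth-first behavior, and the toy `graph` successor function is an illustration, not from the slides:

```python
from collections import deque

def generic_search(initial_state, goal, successors):
    """Fringe-based search skeleton from the slide; a FIFO fringe
    (breadth-first) is assumed. Each node is (state, path-to-state)."""
    fringe = deque([(initial_state, [initial_state])])  # INSERT(initial-node, FRINGE)
    visited = {initial_state}                           # avoid repeated states
    while fringe:                                       # empty FRINGE -> failure
        state, path = fringe.popleft()                  # n <- REMOVE(FRINGE)
        if state == goal:                               # GOAL?(s)
            return path
        for s2 in successors(state):                    # for every successor s'
            if s2 not in visited:
                visited.add(s2)
                fringe.append((s2, path + [s2]))        # INSERT(n', FRINGE)
    return None                                         # failure

# Toy graph expressed as a successor function
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
found = generic_search('A', 'D', lambda s: graph[s])    # ['A', 'B', 'D']
```

Swapping the `deque` for a stack or a priority queue turns this same skeleton into depth-first or best-first search.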

3
Search Algorithms
  • Blind search: BFS, DFS, uniform cost
  • no notion of the right direction
  • can only recognize the goal once it's reached
  • Heuristic search: we have a rough idea of how good various states are, and use this knowledge to guide our search

4
Best-first search
  • Idea: use an evaluation function f(n) for each node
  • Estimate of desirability
  • Expand the most desirable unexpanded node
  • Implementation: fringe is a queue sorted by decreasing order of desirability
  • Greedy search
  • A* search

5
Heuristic
  • Webster's Revised Unabridged Dictionary (1913) (web1913)
  • Heuristic \Heu*ris"tic\, a. [Greek: to discover.] Serving to discover or find out.
  • The Free On-line Dictionary of Computing (15Feb98)
  • heuristic 1. <programming> A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. <algorithm> approximation algorithm.
  • From WordNet (r) 1.6
  • heuristic adj 1: (computer science) relating to or using a heuristic rule 2: of or relating to a general formulation that serves to guide investigation [ant: algorithmic] n: a commonsense rule (or set of rules) intended to increase the probability of solving some problem [syn: heuristic rule, heuristic program]

6
Informed Search
  • Add domain-specific information to select the
    best path along which to continue searching
  • Define a heuristic function, h(n), that estimates
    the goodness of a node n.
  • Specifically, h(n) = estimated cost (or distance) of a minimal cost path from n to a goal state.
  • The heuristic function is an estimate, based on
    domain-specific information that is computable
    from the current state description, of how close
    we are to a goal

7
Greedy Search
  • f(N) = h(N) → greedy best-first
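Greedy best-first can be sketched with the fringe ordered by h alone. The 1-D state space in the example is hypothetical, chosen only to show the priority-queue mechanics:

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Greedy best-first search: the fringe is ordered by h(N) alone,
    so the node that *looks* closest to the goal is expanded next."""
    fringe = [(h(start), start, [start])]
    visited = {start}
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        for s2 in successors(state):
            if s2 not in visited:
                visited.add(s2)
                heapq.heappush(fringe, (h(s2), s2, path + [s2]))
    return None

# 1-D example: states are integers, the goal is 7, h is distance to the goal
path = greedy_best_first(0, 7, lambda s: [s - 1, s + 1], lambda s: abs(7 - s))
```

Note that nothing here accounts for the cost already paid to reach a node, which is exactly the weakness the following slides expose.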

8
Robot Navigation
9
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to the goal
10
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to the goal
(figure: grid world annotated with h values along the greedy path)
What happened???
(figure: second frame of the greedy search, h values shown on the grid)
11
Greedy Search
  • f(N) = h(N) → greedy best-first
  • Is it complete?
  • If we eliminate endless loops, yes
  • Is it optimal?

12
More informed search
  • We kept looking at nodes closer and closer to the goal, but were accumulating costs as we got further from the initial state
  • Our goal is not to minimize the distance from the current head of our path to the goal; we want to minimize the overall length of the path to the goal!
  • Let g(N) be the cost of the best path found so far between the initial node and N
  • f(N) = g(N) + h(N)

13
Robot Navigation
f(N) = g(N) + h(N), with h(N) = Manhattan distance to goal
(figure: the resulting path; costs 70 and 81 shown)
14
Can we Prove Anything?
  • If the state space is finite and we avoid
    repeated states, the search is complete, but in
    general is not optimal
  • If the state space is finite and we do not avoid
    repeated states, the search is in general not
    complete
  • If the state space is infinite, the search is in
    general not complete

15
Admissible heuristic
  • Let h*(N) be the true cost of the optimal path from N to a goal node
  • Heuristic h(N) is admissible if 0 ≤ h(N) ≤ h*(N)
  • An admissible heuristic is always optimistic
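Admissibility can be checked empirically on a small example: compute the true costs h*(N) by breadth-first search backward from the goal, then verify h(N) ≤ h*(N) everywhere. The 5×5 grid and wall below are illustrative, not from the slides:

```python
from collections import deque

def true_costs(goal, passable, width, height):
    """Exact cost-to-go h*(N) for every reachable cell, via BFS from the
    goal over unit-cost horizontal/vertical steps."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) in passable and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

# 5x5 grid with a vertical wall; Manhattan distance never exceeds h*
width = height = 5
walls = {(2, 1), (2, 2), (2, 3)}
passable = {(x, y) for x in range(width) for y in range(height)} - walls
goal = (4, 2)
hstar = true_costs(goal, passable, width, height)
manhattan = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
admissible = all(manhattan(n) <= hstar[n] for n in hstar)   # True
```

Cell (1, 2) illustrates the optimism: Manhattan distance says 3, but the wall forces a true cost of 7.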

16
A* Search
  • Evaluation function f(N) = g(N) + h(N), where
  • g(N) is the cost of the best path found so far to N
  • h(N) is an admissible heuristic
  • Then, best-first search with this evaluation function is called A* search
  • Important AI algorithm developed by Hart, Nilsson, and Raphael in 1968. Originally used in the Shakey robot.
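A* can be sketched in Python with a binary heap for the fringe. The weighted graph and heuristic table below are made up for illustration (the heuristic shown happens to be admissible and consistent):

```python
import heapq

def astar(start, goal, successors, h):
    """A* search: fringe ordered by f(N) = g(N) + h(N). With an admissible
    h, the first expansion of the goal yields an optimal path.
    `successors(s)` yields (next_state, step_cost) pairs."""
    fringe = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if g > best_g.get(state, float('inf')):
            continue  # stale entry: this state was already reached more cheaply
        for s2, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(s2, float('inf')):
                best_g[s2] = g2
                heapq.heappush(fringe, (g2 + h(s2), g2, s2, path + [s2]))
    return None

# Illustrative weighted graph; h estimates the remaining cost to 'D'
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
cost, path = astar('A', 'D', lambda s: graph[s], lambda s: h[s])
```

Here A* returns the cost-3 path A, B, C, D rather than the tempting direct edge B to D.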

17
Completeness & Optimality of A*
  • Claim 1: If there is a path from the initial node to a goal node, A* (with no removal of repeated states) terminates by finding the best path, hence is
  • complete
  • optimal
  • Requirements:
  • 0 < ε ≤ c(N,N′)
  • Each node has a finite number of successors

18
Completeness of A
  • Theorem: If there is a finite path from the initial state to a goal node, A* will find it.

19
Proof of Completeness
  • Let g* be the cost of a best path to a goal node
  • No path in the search tree can get longer than g*/ε before the goal node is expanded

20
Optimality of A
  • Theorem: If h(n) is admissible, then A* is optimal.

21
Proof of Optimality
Let G1 be a suboptimal goal node in the fringe, and let N be a fringe node on an optimal path to the optimal goal:
f(G1) = g(G1)   (since h(G1) = 0)
f(N) = g(N) + h(N) ≤ g(N) + h*(N)
f(G1) > g(N) + h*(N) ≥ g(N) + h(N) = f(N)
So N is expanded before G1, and A* returns an optimal solution.
22
Example of Evaluation Function
f(N) = (sum of distances of each tile to its goal) + 3 × (sum of score functions for each tile), where the score function for a non-central tile is 2 if it is not followed by the correct tile in clockwise order and 0 otherwise
f(N) = (2 + 3 + 0 + 1 + 3 + 0 + 3 + 1) + 3 × (2 + 2 + 2 + 2 + 2 + 0 + 2) = 13 + 36 = 49
(figure: goal board and board N)
23
Heuristic Function
  • Function h(N) that estimates the cost of the
    cheapest path from node N to goal node.
  • Example: 8-puzzle

h(N) = number of misplaced tiles = 6
(figure: goal board and board N)
24
Heuristic Function
  • Function h(N) that estimates the cost of the cheapest path from node N to goal node.
  • Example: 8-puzzle

h(N) = sum of the distances of every tile to its goal position = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13
(figure: goal board and board N)
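Both 8-puzzle heuristics take only a few lines to compute. The example board below is hypothetical, not the slide's N; boards are row-major 3×3 tuples with 0 as the blank:

```python
def misplaced(board, goal):
    """h1: number of non-blank tiles not in their goal position."""
    return sum(1 for t, g in zip(board, goal) if t != 0 and t != g)

def manhattan_sum(board, goal):
    """h2: sum over tiles of the Manhattan distance from the tile's
    current cell to its goal cell (the blank does not count)."""
    goal_pos = {t: (i // 3, i % 3) for i, t in enumerate(goal)}
    total = 0
    for i, t in enumerate(board):
        if t != 0:
            r, c = i // 3, i % 3
            gr, gc = goal_pos[t]
            total += abs(r - gr) + abs(c - gc)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
board = (1, 2, 3, 4, 0, 6, 7, 5, 8)   # example board, not the slide's N
h1 = misplaced(board, goal)           # 2 misplaced tiles (5 and 8)
h2 = manhattan_sum(board, goal)       # each is one move away, so 2
```

Since every misplaced tile is at least one move from home, h1 ≤ h2 on any board, matching the dominance discussion later in the deck.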
25
8-Puzzle
f(N) = h(N) = number of misplaced tiles
26
8-Puzzle
f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
27
8-Puzzle
f(N) = h(N) = Σ (distances of tiles to goal)
28
8-Puzzle
(figure: board N and goal board)
  • h1(N) = number of misplaced tiles = 6 is admissible
  • h2(N) = sum of distances of each tile to goal = 13 is admissible
  • h3(N) = (sum of distances of each tile to goal) + 3 × (sum of score functions for each tile) = 49 is not admissible

29
8-Puzzle
f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
30
Robot navigation
f(N) = g(N) + h(N), with h(N) = straight-line distance from N to goal
Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2
31
About Repeated States
  • (the inequalities below refer to nodes N, N1, N2 in a figure not reproduced here)
  • g(N1) < g(N)
  • h(N) < h(N1)
  • f(N) < f(N1)
  • g(N2) < g(N)

32
Consistent Heuristic
  • The admissible heuristic h is consistent (or satisfies the monotone restriction) if for every node N and every successor N′ of N: h(N) ≤ c(N,N′) + h(N′) (triangle inequality)
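Consistency is easy to check edge by edge. In the sketch below, the graph and both heuristic tables are illustrative; `h_bad` is admissible but violates the triangle inequality on the edge A to B:

```python
def is_consistent(h, edges, goals):
    """Check h(N) <= c(N, N') + h(N') on every directed edge, and that
    h is zero at every goal. `edges` is a list of (N, N2, cost)."""
    triangle_ok = all(h[n] <= c + h[n2] for n, n2, c in edges)
    return triangle_ok and all(h[g] == 0 for g in goals)

# Hypothetical weighted graph with goal 'D' (true costs-to-go: A=3, B=2, C=1)
edges = [('A', 'B', 1), ('B', 'C', 1), ('C', 'D', 1), ('A', 'C', 3)]
h_good = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
h_bad = {'A': 3, 'B': 1, 'C': 1, 'D': 0}  # admissible, but 3 > 1 + h_bad['B']
```

`h_bad` never overestimates the true cost, yet h drops by 2 across a cost-1 edge, so f can decrease along a path.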

33
8-Puzzle
(figure: board N and goal board)
34
Robot navigation
h(N) = straight-line distance to the goal is consistent
35
Claims
  • If h is consistent, then the function f along any path is non-decreasing: f(N) = g(N) + h(N), f(N′) = g(N) + c(N,N′) + h(N′)

36
Claims
  • If h is consistent, then the function f along any path is non-decreasing: f(N) = g(N) + h(N), f(N′) = g(N) + c(N,N′) + h(N′), and h(N) ≤ c(N,N′) + h(N′) implies f(N) ≤ f(N′)

37
Claims
  • If h is consistent, then the function f along any path is non-decreasing: f(N) = g(N) + h(N), f(N′) = g(N) + c(N,N′) + h(N′), and h(N) ≤ c(N,N′) + h(N′) implies f(N) ≤ f(N′)
  • If h is consistent, then whenever A* expands a node it has already found an optimal path to the state associated with this node

38
Avoiding Repeated States in A*
  • If the heuristic h is consistent, then
  • Let CLOSED be the list of states associated with
    expanded nodes
  • When a new node N is generated
  • If its state is in CLOSED, then discard N
  • If it has the same state as another node in the
    fringe, then discard the node with the largest f

39
Complexity of Consistent A*
  • Let s be the size of the state space
  • Let r be the maximal number of states that can be
    attained in one step from any state
  • Assume that the time needed to test if a state is
    in CLOSED is O(1)
  • The time complexity of A* is O(s r log s)

40
Heuristic Accuracy
  • h(N) = 0 for all nodes is admissible and consistent. Hence, breadth-first and uniform-cost are particular A*!!!
  • Let h1 and h2 be two admissible and consistent heuristics such that for all nodes N: h1(N) ≤ h2(N).
  • Then, every node expanded by A* using h2 is also expanded by A* using h1.
  • h2 is more informed than h1

41
Iterative Deepening A* (IDA*)
  • Use f(N) = g(N) + h(N) with admissible and consistent h
  • Each iteration is depth-first with cutoff on the
    value of f of expanded nodes
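The cutoff scheme above can be sketched as follows; the weighted graph and heuristic are illustrative, reused from the earlier A* example:

```python
def ida_star(start, goal, successors, h):
    """IDA*: repeated depth-first searches with an increasing cutoff on
    f = g + h. Each iteration's cutoff is the smallest f value that
    exceeded the previous cutoff. `successors(s)` yields
    (next_state, step_cost) pairs."""
    def dfs(state, g, cutoff, path):
        f = g + h(state)
        if f > cutoff:
            return f, None               # prune; report f for the next cutoff
        if state == goal:
            return f, path
        next_cutoff = float('inf')
        for s2, cost in successors(state):
            if s2 not in path:           # avoid cycles along the current path
                t, found = dfs(s2, g + cost, cutoff, path + [s2])
                if found is not None:
                    return t, found
                next_cutoff = min(next_cutoff, t)
        return next_cutoff, None

    cutoff = h(start)
    while cutoff != float('inf'):
        cutoff, found = dfs(start, 0, cutoff, [start])
        if found is not None:
            return found
    return None

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
path = ida_star('A', 'D', lambda s: graph[s], lambda s: h[s])
```

The memory footprint is only the current depth-first path, which is the point of IDA* over A*.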

42–46
8-Puzzle
f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
Cutoff = 4
(five animation frames showing the depth-first expansion under this cutoff)
47–53
8-Puzzle
f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
Cutoff = 5
(seven animation frames showing the next, deeper iteration)
54
About Heuristics
  • Heuristics are intended to orient the search along promising paths
  • The time spent computing heuristics must be recovered by a better search
  • After all, a heuristic function could consist of solving the problem; then it would perfectly guide the search
  • Deciding which node to expand is sometimes called meta-reasoning
  • Heuristics may not always look like numbers and may involve a large amount of knowledge

55
What's the Issue?
  • Search is an iterative local procedure
  • Good heuristics should provide some global
    look-ahead (at low computational cost)

56
Another approach
  • for optimization problems
  • rather than constructing an optimal solution from
    scratch, start with a suboptimal solution and
    iteratively improve it
  • Local Search Algorithms
  • Hill-climbing or Gradient descent
  • Potential Fields
  • Simulated Annealing
  • Genetic Algorithms, others

57
Hill-climbing search
  • If there exists a successor s of the current state n such that
  • h(s) < h(n), and
  • h(s) ≤ h(t) for all successors t of n,
  • then move from n to s; otherwise, halt at n.
  • Looks one step ahead to determine if any successor is better than the current state; if there is one, move to the best successor.
  • Similar to greedy search in that it uses h, but does not allow backtracking or jumping to an alternative path, since it doesn't remember where it has been.
  • Not complete, since the search will terminate at "local minima," "plateaus," and "ridges."
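The rule above can be sketched directly. The 1-D state space below, with a local minimum part-way along, is a made-up illustration:

```python
def hill_climb(start, successors, h):
    """Move to the best successor while it strictly improves h;
    halt when no successor is better (possibly at a local minimum)."""
    current = start
    while True:
        succs = successors(current)
        if not succs:
            return current
        best = min(succs, key=h)        # best-looking one-step move
        if h(best) < h(current):
            current = best
        else:
            return current              # no strictly better successor: halt

# States are indices into this table of h values; the goal is h = 0 at
# state 7, but there is a local minimum at state 2 (h = 3).
hvals = [5, 4, 3, 4, 3, 2, 1, 0]
successors = lambda s: [x for x in (s - 1, s + 1) if 0 <= x < len(hvals)]
stuck = hill_climb(0, successors, lambda s: hvals[s])   # halts at state 2
solved = hill_climb(4, successors, lambda s: hvals[s])  # reaches state 7
```

Starting from state 0 the climb halts at the local minimum; starting from state 4 it rolls straight down to the goal — the incompleteness the slide describes.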

58
Hill climbing on a surface of states
  • Height Defined by Evaluation Function

59
Hill climbing
  • Steepest descent (= greedy best-first with no search) → may get stuck in a local minimum

60
Robot Navigation
Local-minimum problem
f(N) = h(N) = straight-line distance to the goal
61
Examples of problems with HC
  • applet

62
Drawbacks of hill climbing
  • Problems:
  • Local maxima: peaks that aren't the highest point in the space
  • Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk)
  • Ridges: flat like a plateau, but with dropoffs to the sides; steps to the North, East, South and West may go down, but a step to the NW may go up.
  • Remedies:
  • Introduce randomness
  • Random restart
  • Some problem spaces are great for hill climbing and others are terrible.

63
What's the Issue?
  • Search is an iterative local procedure
  • Good heuristics should provide some global
    look-ahead (at low computational cost)

64
Hill climbing example
(figure: hill-climbing trace from start to goal, with node scores shown)
f(n) = -(number of tiles out of place)
65
Example of a local maximum
(figure: start and goal boards with node scores -4, -3, 0; the search is stuck at a local maximum)
66
Potential Fields
  • Idea: modify the heuristic function
  • Goal is a gravity well, drawing the robot toward it
  • Obstacles have repelling fields, pushing the robot away from them
  • This causes the robot to slide around obstacles
  • The potential field is defined as the sum of the attractor field (which gets stronger as you get closer to the goal) and the individual obstacle repelling fields (often a fixed radius that increases exponentially closer to the obstacle)
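A minimal potential-field sketch: a quadratic attractive well at the goal plus a bounded-radius repulsive term per obstacle. The gains and exact field shapes below are illustrative choices, not taken from the slides:

```python
import math

def potential(p, goal, obstacles, k_att=1.0, k_rep=100.0, radius=2.0):
    """Total potential at point p: attractive term grows with squared
    distance to the goal; each obstacle closer than `radius` adds a
    repulsive term that blows up as the obstacle is approached."""
    u = 0.5 * k_att * math.dist(p, goal) ** 2        # gravity well at the goal
    for ob in obstacles:
        d = math.dist(p, ob)
        if d < radius:                               # repulsion only inside a fixed radius
            u += 0.5 * k_rep * (1.0 / max(d, 1e-9) - 1.0 / radius) ** 2
    return u

goal = (10.0, 0.0)
obstacles = [(5.0, 0.0)]
far = potential((0.0, 0.0), goal, obstacles)      # far from the goal, clear of the obstacle
near = potential((9.0, 0.0), goal, obstacles)     # close to the goal: low potential
blocked = potential((5.5, 0.0), goal, obstacles)  # beside the obstacle: high potential
```

A robot following the negative gradient of this field descends toward the goal while being deflected around the obstacle.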

67
Does it always work?
  • No.
  • But it often works very well in practice
  • Advantage 1: can search a very large search space without maintaining a fringe of possibilities
  • Scales well to high dimensions, where no other methods work
  • Example: motion planning
  • Advantage 2: it is a local method, so it can be done online

68
Example: RoboSoccer
  • All robots have the same field: attracted to the ball
  • Repulsive potential to other players
  • Kicking field: attractive potential to the ball, plus a local repulsive potential if close to the ball but not facing the direction of the opponent's goal. The result is a tangent field; the player goes around the ball.
  • Single team kicking field: repulsive field to avoid hitting other players, plus player position fields (parabolic if outside your area of the field, 0 inside). The player nearest to the ball has the largest attractive coefficient, which avoids all players crowding the ball.
  • Two teams: identical potential fields.

69
Simulated annealing
  • Simulated annealing (SA) exploits an analogy
    between the way in which a metal cools and
    freezes into a minimum-energy crystalline
    structure (the annealing process) and the search
    for a minimum or maximum in a more general
    system.
  • SA can avoid becoming trapped at local minima.
  • SA uses a random search that accepts changes that
    increase objective function f, as well as some
    that decrease it.
  • SA uses a control parameter T, which by analogy
    with the original application is known as the
    system temperature.
  • T starts out high and gradually decreases toward
    0.

70
Simulated annealing (cont.)
  • A bad move from A to B is accepted with probability e^((f(B) − f(A))/T)
  • The higher the temperature, the more likely it is that a bad move can be made.
  • As T tends to zero, this probability tends to zero, and SA becomes more like hill climbing
  • If T is lowered slowly enough, SA is complete and admissible.
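The acceptance rule can be sketched as follows (maximizing f; the geometric cooling schedule, constants, and toy objective below are illustrative choices, not from the slides):

```python
import math
import random

def simulated_annealing(start, successors, f, t0=10.0, cooling=0.95, steps=500):
    """Maximize f: improving moves are always accepted; a bad move from
    A to B is accepted with probability exp((f(B) - f(A)) / T)."""
    random.seed(0)                       # fixed seed so runs are repeatable
    current = best = start
    t = t0
    for _ in range(steps):
        candidate = random.choice(successors(current))
        delta = f(candidate) - f(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate          # accept (possibly bad) move
        if f(current) > f(best):
            best = current               # remember the best state seen
        t = max(t * cooling, 1e-9)       # cool the temperature toward 0
    return best

# Toy objective with a local maximum at state 2 and the global maximum at 8
fvals = [0, 1, 2, 1, 0, 1, 2, 3, 4, 0]
succ = lambda s: [x for x in (s - 1, s + 1) if 0 <= x < len(fvals)]
best = simulated_annealing(0, succ, lambda s: fvals[s])
```

Early on, with T high, downhill moves are accepted almost freely, which lets the walk escape the local maximum that traps plain hill climbing.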

71
The simulated annealing algorithm
72
Simulated Annealing
  • applet
  • Successful applications: circuit routing, traveling salesperson (TSP)

73
Summary Local Search Algorithms
  • Steepest descent (= greedy best-first with no search) → may get stuck in a local minimum
  • Better heuristics: potential fields
  • Simulated annealing
  • Genetic algorithms

74
When to Use Search Techniques?
  • The search space is small, and
  • there are no other available techniques, or
  • it is not worth the effort to develop a more efficient technique
  • The search space is large, and
  • there are no other available techniques, and
  • there exist good heuristics

75
Summary
  • Heuristic function
  • Best-first search
  • Admissible heuristic and A*
  • A* is complete and optimal
  • Consistent heuristic and repeated states
  • Heuristic accuracy
  • IDA*