1
Informed Search
  • Russell and Norvig Ch. 4.1 - 4.3
  • CMSC 421 Fall 2006

2
Outline
  • Informed search: uses problem-specific knowledge
  • Which search strategies?
  • Best-first search and its variants
  • Heuristic functions and how to invent them
  • Local search and optimization
  • Hill climbing, simulated annealing, local beam
    search, ...

3
Previously: Graph search algorithm
  • function GRAPH-SEARCH(problem, fringe) return a
    solution or failure
  • closed ← an empty set
  • fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]),
    fringe)
  • loop do
  • if EMPTY?(fringe) then return failure
  • node ← REMOVE-FIRST(fringe)
  • if GOAL-TEST[problem] applied to STATE[node]
    succeeds then return SOLUTION(node)
  • if STATE[node] is not in closed then
  • add STATE[node] to closed
  • fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
  • A strategy is defined by picking the order of
    node expansion
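
For concreteness, here is a minimal executable rendering of this
pseudocode in Python. The problem interface (initial_state,
goal_test, successors) is a hypothetical sketch for illustration,
not something defined on the slides:

    from collections import deque

    def graph_search(problem, use_stack=False):
        # Generic graph search; the fringe discipline defines the
        # strategy: popping from the left gives BFS, from the right DFS.
        closed = set()
        fringe = deque([(problem.initial_state, [problem.initial_state])])
        while fringe:
            state, path = fringe.pop() if use_stack else fringe.popleft()
            if problem.goal_test(state):
                return path                      # SOLUTION(node)
            if state not in closed:
                closed.add(state)
                for succ in problem.successors(state):
                    fringe.append((succ, path + [succ]))
        return None                              # failure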

4
Blind Search
  • "...the ant knew that a certain arrangement had to
    be made, but it could not figure out how to make
    it. It was like a man with a tea-cup in one hand
    and a sandwich in the other, who wants to light a
    cigarette with a match. But, where the man would
    invent the idea of putting down the cup and
    sandwich before picking up the cigarette and the
    match, this ant would have put down the sandwich
    and picked up the match, then it would have been
    down with the match and up with the cigarette,
    then down with the cigarette and up with the
    sandwich, then down with the cup and up with the
    cigarette, until finally it had put down the
    sandwich and picked up the match. It was
    inclined to rely on a series of accidents to
    achieve its object. It was patient and did not
    think. Wart watched the arrangements with a
    surprise which turned into vexation and then into
    dislike. He felt like asking why it did not
    think things out in advance."
    T.H. White, The Once and Future King

5
Search Algorithms
  • Blind search: BFS, DFS, uniform cost
  • no concept of the right direction
  • can only recognize the goal once it's achieved
  • Heuristic search: we have a rough idea of how good
    various states are, and use this knowledge to
    guide our search

6
Best-first search
  • General approach of informed search
  • Best-first search: a node is selected for expansion
    based on an evaluation function f(n)
  • Idea: the evaluation function measures distance to
    the goal; choose the node that appears best
  • Implementation:
  • fringe is a queue sorted in decreasing order of
    desirability
  • Special cases: greedy best-first search, A* search
    (a Python sketch follows below)
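
A priority-queue fringe turns the generic graph search above into
best-first search. The sketch below assumes the same hypothetical
problem interface, extended so successors(state) yields
(successor, step_cost) pairs; the evaluation function f is passed
in, which is what makes greedy search and A* special cases:

    import heapq
    import itertools

    def best_first_search(problem, f):
        # f(state, g) scores a node given its path cost g; lowest f first.
        tie = itertools.count()   # tie-breaker so states are never compared
        start = problem.initial_state
        fringe = [(f(start, 0), next(tie), start, 0, [start])]
        closed = set()
        while fringe:
            _, _, state, g, path = heapq.heappop(fringe)
            if problem.goal_test(state):
                return path
            if state not in closed:
                closed.add(state)
                for succ, cost in problem.successors(state):
                    g2 = g + cost
                    heapq.heappush(fringe, (f(succ, g2), next(tie),
                                            succ, g2, path + [succ]))
        return None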

7
Heuristic
  • Webster's Revised Unabridged Dictionary (1913)
  • Heuristic \Heu*ris"tic\, a. [Greek: to discover.]
    Serving to discover or find out.
  • The Free On-line Dictionary of Computing (15Feb98)
  • heuristic 1. <programming> A rule of thumb,
    simplification or educated guess that reduces or
    limits the search for solutions in domains that
    are difficult and poorly understood. Unlike
    algorithms, heuristics do not guarantee feasible
    solutions and are often used with no theoretical
    guarantee. 2. <algorithm> approximation algorithm.
  • From WordNet (r) 1.6
  • heuristic adj 1: (computer science) relating to
    or using a heuristic rule 2: of or relating to a
    general formulation that serves to guide
    investigation [ant: algorithmic] n: a
    commonsense rule (or set of rules) intended to
    increase the probability of solving some problem
    [syn: heuristic rule, heuristic program]

8
Informed Search
  • Add domain-specific information to select the
    best path along which to continue searching
  • Define a heuristic function, h(n), that estimates
    the goodness of a node n.
  • Specifically, h(n) = estimated cost (or distance)
    of the minimal-cost path from n to a goal state.
  • The heuristic function is an estimate, based on
    domain-specific information that is computable
    from the current state description, of how close
    we are to a goal (a small example follows below)
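
As a concrete (illustrative) example, a grid-navigation heuristic
computable from the state description alone:

    def manhattan_h(state, goal):
        # Manhattan distance: admissible when the robot moves one cell
        # at a time in the four compass directions, each step costing 1.
        (x, y), (gx, gy) = state, goal
        return abs(x - gx) + abs(y - gy)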

9
Greedy Best-First Search
  • f(N) = h(N)  ⇒  greedy best-first

10
Robot Navigation
11
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to
the goal
12
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to
the goal
[Figure: grid of cell h-values traced by greedy
best-first search. What happened???]
13
Greedy Search
  • f(N) = h(N)  ⇒  greedy best-first
  • Is it complete?
  • Is it optimal?
  • Time complexity?
  • Space complexity?

14
More informed search
  • We kept looking at nodes closer and closer to the
    goal, but were accumulating costs as we got
    further from the initial state
  • Our goal is not to minimize the distance from the
    current head of our path to the goal; we want to
    minimize the overall length of the path to the
    goal!
  • Let g(N) be the cost of the best path found so
    far between the initial node and N
  • f(N) = g(N) + h(N)  ⇒  A*

15
A* search
  • Best-known form of best-first search.
  • Idea: avoid expanding paths that are already
    expensive.
  • Evaluation function f(n) = g(n) + h(n)
  • g(n): the cost (so far) to reach the node.
  • h(n): estimated cost to get from the node to the
    goal.
  • f(n): estimated total cost of the path through n
    to the goal.
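
In terms of the best_first_search sketch given earlier (same
hypothetical interface), the two special cases differ only in f:

    # Greedy best-first search: f(n) = h(n)
    path = best_first_search(problem, f=lambda s, g: manhattan_h(s, goal))

    # A* search: f(n) = g(n) + h(n)
    path = best_first_search(problem, f=lambda s, g: g + manhattan_h(s, goal))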

16
Robot Navigation
f(N) = g(N) + h(N), with h(N) = Manhattan distance
to the goal
17
Can we Prove Anything?
  • If the state space is finite and we avoid
    repeated states, the search is complete, but in
    general is not optimal
  • If the state space is finite and we do not avoid
    repeated states, the search is in general not
    complete
  • If the state space is infinite, the search is in
    general not complete

18
Admissible heuristic
  • Let h*(N) be the true cost of the optimal path
    from N to a goal node
  • Heuristic h(N) is admissible if
    0 ≤ h(N) ≤ h*(N)
  • An admissible heuristic is always optimistic

19
Optimality of A* (standard proof)
  • Suppose a suboptimal goal G2 is in the queue.
  • Let n be an unexpanded node on the shortest path
    to an optimal goal G.
  • f(G2) = g(G2), since h(G2) = 0
  •       > g(G), since G2 is suboptimal
  •       ≥ f(n), since h is admissible
  • Since f(G2) > f(n), A* will never select G2 for
    expansion

20
BUT: graph search
  • Discards new paths to a repeated state.
  • The previous proof breaks down
  • Solution:
  • Add extra bookkeeping, i.e., remove the more
    expensive of the two paths.
  • Ensure that the optimal path to any repeated
    state is always followed first.
  • Extra requirement on h(n): consistency
    (monotonicity)

21
Consistency
  • A heuristic is consistent if
    h(n) ≤ c(n, a, n') + h(n')
    for every node n and successor n' reached by
    action a
  • If h is consistent, we have
    f(n') = g(n') + h(n')
          = g(n) + c(n, a, n') + h(n')
          ≥ g(n) + h(n) = f(n)
  • i.e. f(n) is nondecreasing along any path.

22
Optimality of A* (more useful)
  • A* expands nodes in order of increasing f value
  • Contours can be drawn in the state space
  • Uniform-cost search adds circles
  • f-contours are gradually added:
  • 1) nodes with f(n) < C*
  • 2) some nodes on the goal contour (f(n) = C*)
  • Contour i contains all nodes with f = f_i, where
    f_i < f_{i+1}

23
A* search, evaluation
  • Completeness: YES
  • Since bands of increasing f are added
  • Unless there are infinitely many nodes with
    f < f(G)

24
A* search, evaluation
  • Completeness: YES
  • Time complexity:
  • Number of nodes expanded is still exponential in
    the length of the solution.

25
A* search, evaluation
  • Completeness: YES
  • Time complexity: (exponential with path length)
  • Space complexity:
  • It keeps all generated nodes in memory
  • Hence space, not time, is the major problem

26
A* search, evaluation
  • Completeness: YES
  • Time complexity: (exponential with path length)
  • Space complexity: (all nodes are stored)
  • Optimality: YES
  • Cannot expand f_{i+1} until f_i is finished.
  • A* expands all nodes with f(n) < C*
  • A* expands some nodes with f(n) = C*
  • A* expands no nodes with f(n) > C*
  • Also optimally efficient (not including ties)

27
Memory-bounded heuristic search
  • Some solutions to A*'s space problems (maintain
    completeness and optimality):
  • Iterative-deepening A* (IDA*)
  • Here the cutoff information is the f-cost (g + h)
    instead of the depth
  • Recursive best-first search (RBFS)
  • Recursive algorithm that attempts to mimic
    standard best-first search with linear space.
  • (simple) Memory-bounded A* ((S)MA*)
  • Drop the worst leaf node when memory is full

28
Recursive best-first search
  • function RECURSIVE-BEST-FIRST-SEARCH(problem)
    return a solution or failure
  • return RBFS(problem, MAKE-NODE(INITIAL-STATE[problem]), ∞)
  • function RBFS(problem, node, f_limit) return a
    solution or failure and a new f-cost limit
  • if GOAL-TEST[problem](STATE[node]) then return
    node
  • successors ← EXPAND(node, problem)
  • if successors is empty then return failure, ∞
  • for each s in successors do
  • f[s] ← max(g(s) + h(s), f[node])
  • repeat
  • best ← the lowest-f-value node in successors
  • if f[best] > f_limit then return failure, f[best]
  • alternative ← the second-lowest f-value among
    successors
  • result, f[best] ← RBFS(problem, best,
    min(f_limit, alternative))
  • if result ≠ failure then return result
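
A Python sketch of the pseudocode above. As before, the problem
interface (goal_test, successors yielding (successor, step_cost)
pairs) is an assumed illustration; the recursive helper returns a
path (or None) together with the revised f-cost limit:

    import math

    def rbfs_search(problem, h):
        s0 = problem.initial_state
        path, _ = rbfs(problem, s0, 0, h(s0), math.inf, h)
        return path

    def rbfs(problem, state, g, f, f_limit, h):
        if problem.goal_test(state):
            return [state], f
        succs = []
        for succ, cost in problem.successors(state):
            g2 = g + cost
            # a child inherits its parent's f-value if that is larger
            succs.append([max(g2 + h(succ), f), g2, succ])
        if not succs:
            return None, math.inf
        while True:
            succs.sort(key=lambda entry: entry[0])
            best_f, best_g, best = succs[0]
            if best_f > f_limit:
                return None, best_f       # failure, best alternative f
            alt = succs[1][0] if len(succs) > 1 else math.inf
            result, succs[0][0] = rbfs(problem, best, best_g, best_f,
                                       min(f_limit, alt), h)
            if result is not None:
                return [state] + result, best_f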

29
Recursive best-first search
  • Keeps track of the f-value of the best
    alternative path available.
  • If the current f-value exceeds this alternative
    f-value, then backtrack to the alternative path.
  • Upon backtracking, change the f-value of the
    abandoned node to the best f-value of its
    children.
  • Re-expansion of this forgotten subtree is thus
    still possible.

30
Recursive best-first search, ex.
  • The path down to Rimnicu Vilcea is already
    expanded
  • Above each node, the f-limit for every recursive
    call is shown on top.
  • Below each node: f(n)
  • The path is followed until Pitesti, which has an
    f-value worse than the f-limit.

31
Recursive best-first search, ex.
  • Unwind the recursion and store the best f-value
    for the current best leaf, Pitesti:
  • result, f[best] ← RBFS(problem, best,
    min(f_limit, alternative))
  • best is now Fagaras. Call RBFS for the new best
  • best value is now 450

32
Recursive best-first search, ex.
  • Unwind the recursion and store the best f-value
    for the current best leaf, Fagaras:
  • result, f[best] ← RBFS(problem, best,
    min(f_limit, alternative))
  • best is now Rimnicu Vilcea (again). Call RBFS for
    the new best
  • The subtree is expanded again.
  • The best alternative subtree is now through
    Timisoara.
  • The solution is found, since 447 > 417.

33
RBFS evaluation
  • RBFS is a bit more efficient than IDA*
  • Still excessive node generation (mind changes)
  • Like A*, optimal if h(n) is admissible
  • Space complexity is O(bd).
  • IDA* retains only one single number (the current
    f-cost limit)
  • Time complexity is difficult to characterize
  • Depends on the accuracy of h(n) and how often the
    best path changes.
  • IDA* and RBFS suffer from using too little memory.

34
(simplified) memory-bounded A* (SMA*)
  • Use all available memory,
  • i.e., expand best leaves until available memory
    is full
  • When full, SMA* drops the worst leaf node (highest
    f-value)
  • Like RBFS: back up the forgotten node's value to
    its parent
  • What if all leaves have the same f-value?
  • The same node could be selected for expansion and
    deletion.
  • SMA* solves this by expanding the newest best leaf
    and deleting the oldest worst leaf.
  • SMA* is complete if a solution is reachable, and
    optimal if an optimal solution is reachable.

35
Heuristic functions
  • E.g., for the 8-puzzle:
  • Avg. solution cost is about 22 steps (branching
    factor ≈ 3)
  • Exhaustive search to depth 22: ≈ 3.1 x 10^10
    states.
  • A good heuristic function can reduce the search
    process.

36
Heuristic Function
  • Function h(N) that estimates the cost of the
    cheapest path from node N to the goal node.
  • Example: 8-puzzle

h1(N) = number of misplaced tiles = 6
[Figure: example state N and the goal configuration]
37
Heuristic Function
  • Function h(N) that estimates the cost of the
    cheapest path from node N to the goal node.
  • Example: 8-puzzle

h2(N) = sum of the distances of every tile to its
goal position = 2+3+0+1+3+0+3+1 = 13
[Figure: example state N and the goal configuration]
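
Both heuristics are a few lines of Python. Here states are assumed
to be length-9 tuples read row by row, with 0 for the blank (a
representation chosen only for illustration):

    def h1(state, goal):
        # number of misplaced tiles (the blank is not counted)
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    def h2(state, goal):
        # sum of Manhattan distances of each tile from its goal square
        dist = 0
        for idx, tile in enumerate(state):
            if tile == 0:
                continue
            gidx = goal.index(tile)
            dist += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
        return dist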
38
8-Puzzle
f(N) = h1(N) = number of misplaced tiles
39
8-Puzzle
f(N) = g(N) + h(N), with h1(N) = number of
misplaced tiles
40
8-Puzzle
f(N) = h2(N) = Σ distances of tiles to goal
41
8-Puzzle
EXERCISE: f(N) = g(N) + h2(N), with h2(N) = Σ
distances of tiles to goal
42
Heuristic quality
  • Effective branching factor b*
  • is the branching factor that a uniform tree of
    depth d would need in order to contain N+1 nodes:
    N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d
  • The measure is fairly constant for sufficiently
    hard problems.
  • It can thus provide a good guide to a heuristic's
    overall usefulness.
  • A good value of b* is 1.
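  • Worked illustration (a standard textbook example,
    not from these slides): if A* finds a depth d = 5
    solution after expanding N = 52 nodes, then b*
    solves 53 = 1 + b* + (b*)^2 + ... + (b*)^5,
    giving b* ≈ 1.92.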

43
Heuristic quality and dominance
  • 1200 random problems with solution lengths from 2
    to 24.
  • If h2(n) ≥ h1(n) for all n (both admissible),
  • then h2 dominates h1 and is better for search

44
Inventing admissible heuristics
  • Admissible heuristics can be derived from the
    exact solution cost of a relaxed version of the
    problem
  • Relaxed 8-puzzle for h1: a tile can move
    anywhere
  • As a result, h1(n) gives the length of the
    shortest solution of the relaxed problem
  • Relaxed 8-puzzle for h2: a tile can move to any
    adjacent square
  • As a result, h2(n) gives the length of the
    shortest solution of the relaxed problem
  • The optimal solution cost of a relaxed problem is
    no greater than the optimal solution cost of the
    real problem.

45
Another approach: Local Search
  • Previously: systematic exploration of the search
    space.
  • The path to the goal is the solution to the
    problem
  • YET, for some problems the path is irrelevant.
  • E.g., 8-queens
  • Different algorithms can then be used: local search
  • Hill-climbing or gradient descent
  • Simulated annealing
  • Genetic algorithms, others
  • Also applicable to optimization problems
  • where systematic search doesn't work,
  • but where we can start with a suboptimal solution
    and improve it

46
Local search and optimization
  • Local search: use a single current state and move
    to neighboring states.
  • Advantages:
  • Uses very little memory
  • Often finds reasonable solutions in large or
    infinite state spaces.
  • Also useful for pure optimization problems:
  • find the best state according to some objective
    function.

47
Local search and optimization
48
Hill-climbing search
  • is a loop that continuously moves in the
    direction of increasing value
  • It terminates when a peak is reached.
  • Hill climbing does not look ahead of the
    immediate neighbors of the current state.
  • Hill-climbing chooses randomly among the set of
    best successors, if there is more than one.
  • Hill-climbing a.k.a. greedy local search
  • Some problem spaces are great for hill climbing
    and others are terrible.

49
Hill-climbing search
  • function HILL-CLIMBING(problem) return a state
    that is a local maximum
  • input: problem, a problem
  • local variables: current, a node
  •                  neighbor, a node
  • current ← MAKE-NODE(INITIAL-STATE[problem])
  • loop do
  • neighbor ← a highest-valued successor of
    current
  • if VALUE[neighbor] ≤ VALUE[current] then
    return STATE[current]
  • current ← neighbor
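
The same algorithm in runnable form, with ties among the best
successors broken randomly; the problem interface (initial_state,
value, neighbors) is again a hypothetical sketch:

    import random

    def hill_climbing(problem):
        current = problem.initial_state
        while True:
            neighbors = problem.neighbors(current)
            if not neighbors:
                return current
            best_value = max(problem.value(n) for n in neighbors)
            if best_value <= problem.value(current):
                return current            # local maximum (or plateau)
            # choose randomly among the equally-best successors
            best = [n for n in neighbors if problem.value(n) == best_value]
            current = random.choice(best)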

50
Robot Navigation
Local-minimum problem
f(N) = h(N) = straight-line distance to the goal
51
Hill climbing example
[Figure: 8-puzzle hill-climbing trace from the start
state to the goal, with f(n) = -(number of tiles out
of place) shown for each state]
52
Example of a local maximum
[Figure: an 8-puzzle local maximum: no single move
improves f, yet the state is not the goal]
53
Examples of problems with hill climbing
  • applet

54
Drawbacks of hill climbing
  • Problems:
  • Local maxima: peaks that aren't the highest point
    in the space
  • Plateaus: the space has a broad flat region that
    gives the search algorithm no direction (random
    walk)
  • Ridges: flat like a plateau, but with drop-offs
    to the sides; steps to the North, East, South and
    West may go down, but a combination of two steps
    (e.g. N, W) may go up.
  • Remedy:
  • Introduce randomness

55
Hill-climbing variations
  • Stochastic hill-climbing
  • Random selection among the uphill moves.
  • The selection probability can vary with the
    steepness of the uphill move.
  • First-choice hill-climbing
  • Stochastic hill climbing that generates successors
    randomly until a better one is found.
  • Random-restart hill-climbing
  • Tries to avoid getting stuck in local maxima.
  • "If at first you don't succeed, try, try again"

56
Simulated annealing
  • Escape local maxima by allowing bad moves,
  • Idea: but gradually decrease their size and
    frequency.
  • Origin: metallurgical annealing
  • Bouncing-ball analogy:
  • Shaking hard (= high temperature).
  • Shaking less (= lower the temperature).
  • If T decreases slowly enough, the best state is
    reached.
  • Applied to VLSI layout, airline scheduling, etc.

57
Simulated annealing
  • function SIMULATED-ANNEALING(problem, schedule)
    return a solution state
  • input: problem, a problem
  •        schedule, a mapping from time to temperature
  • local variables: current, a node
  •                  next, a node
  •                  T, a temperature controlling the
                     probability of downward steps
  • current ← MAKE-NODE(INITIAL-STATE[problem])
  • for t ← 1 to ∞ do
  • T ← schedule[t]
  • if T = 0 then return current
  • next ← a randomly selected successor of current
  • ΔE ← VALUE[next] - VALUE[current]
  • if ΔE > 0 then current ← next
  • else current ← next only with probability e^(ΔE/T)
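
A direct Python transcription of the pseudocode above, under the
same assumed interface (initial_state, value, random_successor),
with schedule as a callable from time step to temperature:

    import math
    import random

    def simulated_annealing(problem, schedule):
        current = problem.initial_state
        t = 1
        while True:
            T = schedule(t)
            if T <= 0:
                return current
            nxt = problem.random_successor(current)
            delta_e = problem.value(nxt) - problem.value(current)
            # always accept improvements; accept a bad move
            # with probability e^(delta_e / T)
            if delta_e > 0 or random.random() < math.exp(delta_e / T):
                current = nxt
            t += 1

A geometric cooling schedule such as
schedule = lambda t: 100 * 0.95 ** t if t < 1000 else 0
is one common (illustrative) choice.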

58
Simulated Annealing
  • applet
  • Successful applications: circuit routing,
    traveling salesperson (TSP)

59
Local beam search
  • Keep track of k states instead of one
  • Initially: k random states
  • Next: determine all successors of the k states
  • If any successor is a goal ⇒ finished
  • Else select the k best from the successors and
    repeat.
  • Major difference with random-restart search:
  • Information is shared among the k search threads.
  • Can suffer from lack of diversity.
  • Stochastic variant: choose k successors at random,
    with probability proportional to state success
    (a sketch in Python follows below).
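
A minimal sketch of the basic (non-stochastic) variant, assuming a
problem interface with random_state, goal_test, value, and
successors (all hypothetical names):

    import heapq

    def local_beam_search(problem, k):
        states = [problem.random_state() for _ in range(k)]
        while True:
            pool = []
            for s in states:
                for succ in problem.successors(s):
                    if problem.goal_test(succ):
                        return succ
                    pool.append(succ)
            if not pool:
                return max(states, key=problem.value)
            # keep the k best successors across ALL current states --
            # this is where the k search threads share information
            states = heapq.nlargest(k, pool, key=problem.value)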

60
When to Use Search Techniques?
  • The search space is small, and
  • there are no other available techniques, or
  • it is not worth the effort to develop a more
    efficient technique
  • The search space is large, and
  • there are no other available techniques, and
  • there exist good heuristics

61
Summary Informed Search
  • Heuristics
  • Best-first Search Algorithms
  • Greedy Search
  • A*
  • Admissible heuristics
  • Constructing Heuristic functions
  • Local Search Algorithms