Notes 2: Problem Solving using Search
1
Notes 2: Problem Solving using Search
  • ICS 171, Winter 2001

2
Summary
  • Problem solving as search
  • Search consists of
  • state space
  • operators
  • start state
  • goal states
  • A Search Tree is an efficient way to represent a
    search
  • There are a variety of specific search
    techniques, including
  • Depth-First Search
  • Breadth-First Search
  • Others which use heuristic knowledge (in future
    lectures)

3
What do these problems have in common?
  • Find the layout of chips on a circuit board which minimizes the total length of interconnecting wires
  • Schedule which airplanes and crew fly to which
    cities for American, United, British Airways,
    etc
  • Write a program which can play chess against a
    human
  • Build a system which can find human faces in an
    arbitrary digital image
  • Program a tablet-driven portable computer to
    recognize your handwriting
  • Decrypt data which has been encrypted but you do
    not have the key
  • Answer
  • they can all be formulated as search problems

4
Setting Up a State Space Model
  • A state-space model is a model for the search problem
  • usually a set of discrete states X
  • e.g., in driving, the states in the model could
    be towns/cities
  • Start State - a state from X where the search
    starts
  • Goal State(s)
  • a goal is defined as a target state
  • For now all goal states have utility 1, and all
    non-goals have utility 0
  • there may be many states which satisfy the goal
  • e.g., drive to a town with an airport
  • or just one state which satisfies the goal
  • e.g., drive to Las Vegas
  • Operators
  • operators are mappings from X to X
  • e.g. moves from one city to another that are
    legal (connected with a road)

5
Summary: Defining Search Problems
  • A statement of a Search problem has 4 components
  • 1. A set of states
  • 2. A set of operators which allow one to get
    from one state to another
  • 3. A start state S
  • 4. A set of possible goal states, or ways to test
    for goal states
  • Search solution consists of
  • a sequence of operators which transform S into a unique goal state G (this is the sequence of actions that we would take to maximize the success function)
  • Representing real problems in a search framework
  • may be many ways to represent states and
    operators
  • key idea represent only the relevant aspects of
    the problem

6
Example 1: Formulation of the Map Problem
  • Set of States
  • individual cities
  • e.g., Irvine, SF, Las Vegas, Reno, Boise,
    Phoenix, Denver
  • Operators
  • freeway routes from one city to another
  • e.g., Irvine to SF via 5, SF to Seattle, etc
  • Start State
  • current city where we are, Irvine
  • Goal States
  • set of cities we would like to be in
  • e.g., cities which are cooler than Irvine
  • Solution
  • a sequence of operators which gets us to a specific goal city
  • e.g., Irvine to SF via 5, SF to Reno via 80, etc
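As a concrete illustration, this formulation can be written down directly in code. The sketch below is my own Python rendering, not code from the lecture; the road connections and the goal set are assumptions made for the example and are not meant to match real freeways.

# Toy state-space model for the map problem.
# States are city names; operators are the legal road connections.
ROADS = {                       # assumed connections, for illustration only
    "Irvine":    ["SF", "Las Vegas", "Phoenix"],
    "SF":        ["Irvine", "Reno", "Seattle"],
    "Las Vegas": ["Irvine", "Reno", "Denver"],
    "Reno":      ["SF", "Las Vegas", "Boise"],
    "Boise":     ["Reno", "Denver"],
    "Phoenix":   ["Irvine", "Denver"],
    "Denver":    ["Las Vegas", "Boise", "Phoenix"],
    "Seattle":   ["SF"],
}

START = "Irvine"
GOALS = {"SF", "Seattle"}       # e.g., "cities which are cooler than Irvine" (assumed)

def successors(city):
    """Operator: all cities reachable from city by one road."""
    return ROADS[city]

def is_goal(city):
    return city in GOALS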

7
State Space Graph: Map Navigation (not to be confused with a Search Tree!)
S = start, G = goal, other nodes = intermediate states, links = legal transitions
8
Abstraction
  • Definition of Abstraction
  • Navigation Example how do we define states and
    operators?
  • First step is to abstract the big picture
  • i.e., solve a map problem
  • nodes = cities, links = freeways/roads (a high-level description)
  • this description is an abstraction of the real
    problem
  • Can later worry about details like freeway
    onramps, refueling, etc
  • Abstraction is critical for automated problem
    solving
  • must create an approximate, simplified model of the world for the computer to deal with; the real world is too detailed to model exactly
  • good abstractions retain all important details

Abstraction: the process of removing irrelevant detail to create an abstract (high-level) representation that ignores irrelevant details
9
Qualities of a Good Representation
  • Given a good representation/abstraction, solving
    a problem can be easy!
  • Conversely, a poor representation makes a problem
    harder
  • Qualities which make a Representation useful
  • important objects and relations are emphasized
  • irrelevant details are suppressed
  • natural constraints are made explicit and clear
  • completeness
  • transparency
  • concise
  • efficient

10
Example 2: Puzzle-Solving as Search
  • You have a 3-gallon jug and a 4-gallon jug
  • You have a faucet with an unlimited amount of water
  • You need to get exactly 2 gallons in the 4-gallon jug
  • State representation: (x, y)
  • x = contents of the 4-gallon jug
  • y = contents of the 3-gallon jug
  • Start state: (0, 0)
  • Goal state(s): G = {(2, 0), (2, 1), (2, 2)}
  • Operators
  • Fill 3-gallon: (0,0) → (0,3); fill 4-gallon: (0,0) → (4,0)
  • Fill 3-gallon from 4-gallon: (4,0) → (1,3); fill 4-gallon from 3-gallon: (0,3) → (3,0) or (1,3) → (4,0) or (2,3) → (4,1)
  • Empty 3-gallon into 4-gallon, empty 4-gallon into 3-gallon
  • Dump 3-gallon down drain: (0,3) → (0,0); dump 4-gallon down drain: (4,0) → (0,0)
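These operators translate directly into a successor function. The sketch below is one possible Python encoding of my own (the function names are assumptions), using the (x, y) convention above where x is the 4-gallon jug and y is the 3-gallon jug.

def jug_successors(state):
    """Return all states reachable from (x, y) by one operator.
    x = contents of the 4-gallon jug, y = contents of the 3-gallon jug."""
    x, y = state
    results = set()
    results.add((4, y))                  # fill the 4-gallon jug from the faucet
    results.add((x, 3))                  # fill the 3-gallon jug from the faucet
    results.add((0, y))                  # dump the 4-gallon jug down the drain
    results.add((x, 0))                  # dump the 3-gallon jug down the drain
    pour = min(x, 3 - y)                 # pour the 4-gallon jug into the 3-gallon jug
    results.add((x - pour, y + pour))
    pour = min(y, 4 - x)                 # pour the 3-gallon jug into the 4-gallon jug
    results.add((x + pour, y - pour))
    results.discard(state)               # drop operators that change nothing
    return results

start = (0, 0)
goals = {(2, 0), (2, 1), (2, 2)}
print(jug_successors((4, 0)))            # includes (1, 3): fill the 3-gallon from the 4-gallon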

11
Example 3: The 8-Puzzle Problem
[Figure: 8-puzzle boards showing a scrambled Start State, an intermediate configuration, and the Goal State with tiles 1-8 in order.]
12
8-puzzle as a Search Problem
  • States
  • ?
  • Operators
  • ?
  • Start State
  • ?
  • Goal State

13
Search Tree For Searching a State Space
  • 1. State Space Graph
  • nodes are states
  • links are operators mapping states to states
  • 2. Search Tree (not a tree data structure
    strictly speaking)
  • S, the starting point for the search, is always
    the root node
  • The search algorithm searches by expanding leaf
    nodes
  • Internal nodes are states the algorithm has
    already explored
  • Leaves are potential goal nodes; the algorithm stops expanding once it finds (attempts to expand) the first goal node G
  • Key Concept
  • Search trees are a data structure to represent
    how the search algorithm explores the state
    space, i.e., they dynamically evolve as the
    search proceeds

14
Example 1: a Search Tree for Map Navigation
[Figure: a partial search tree for the map navigation problem, rooted at S, with intermediate nodes such as A, B, C, D, and E.]
Note: this is the search tree at some particular point in the search.
15
Searching Using a Search Tree
  • Given a state space, start state S, and set of
    goal states G, the search algorithm must
    determine how to get from S to an element of G
  • General Search Procedure using a Search Tree
  • Let S be the root node of the search tree
  • Expand S i.e., determine the children of S
  • expanded nodes (internal) are closed
  • unexpanded nodes (leaves) are open
  • Add the children of S to the tree
  • Visit the open nodes of the tree in some order
  • test if each node is a goal node
  • if not, expand it and add its children to the open queue
  • Different search algorithms differ in the order
    in which nodes are expanded
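The general procedure above can be captured in a few lines of Python. This is a sketch of my own (not code from the lecture): the fringe is kept as a queue of paths, and the only choice left open is where newly generated children are inserted.

from collections import deque

def tree_search(start, successors, is_goal, depth_first=False):
    """General search-tree procedure: expand open (leaf) nodes until a goal is found."""
    fringe = deque([[start]])              # open nodes, each stored as a path from the root
    while fringe:
        path = fringe.popleft()            # visit the next open node
        state = path[-1]
        if is_goal(state):
            return path                    # a complete path from start to a goal
        children = [path + [s] for s in successors(state)
                    if s not in path]      # drop children that would create a loop
        if depth_first:
            fringe.extendleft(reversed(children))   # put children at the front of the queue
        else:
            fringe.extend(children)                 # put children at the back of the queue
    return None                            # no goal node was found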

16
Main Definitions
  • State Space - a graph showing states (nodes) and
    operators (edges)
  • Search tree - a tree showing the list of explored
    (closed) and leaf (open) nodes
  • Fringe (open nodes) - the nodes waiting to be expanded, organized as a (priority) queue; search algorithms differ primarily in the way they organize this queue
  • Node expansion - applying all possible operators to the node (the state corresponding to it) and adding the children to the fringe
  • Solution - is a path (sequence of states) from
    start state to a goal
  • Uninformed (blind) search is performed in state spaces where operators have no costs; informed search is performed in search spaces where operators have costs, and there it makes sense to talk about the optimality of a search algorithm
  • Optimal algorithm - finds the lowest cost
    solution (i.e. path from start state to goal with
    lowest cost)
  • Complete algorithm - finds a solution if one
    exists

17
Search Tree Notation
  • Branching Factor, b
  • b is the number of children of a node
  • Depth of a node, d
  • number of branches from root to a node
  • Partial Paths
  • paths which do not end in a goal
  • Complete Paths
  • paths which end in a goal
  • Open Nodes (Fringe)
  • nodes which are not expanded (i.e., the leaves of
    the tree)
  • Closed Nodes
  • nodes which have already been expanded (internal
    nodes)

[Figure: a small search tree with root S, branching factor b = 2, and a goal node G; depth levels d = 0, 1, 2 are marked.]
18
Why Search can be difficult
  • At the start of the search, the search algorithm
    does not know
  • the size of the tree
  • the shape of the tree
  • the depth of the goal states
  • How big can a search tree be?
  • say there is a constant branching factor b
  • and one goal exists at depth d
  • a search tree which includes a goal can have b^d different branches in the tree (worst case)
  • Examples
  • b = 2, d = 10: b^d = 2^10 = 1,024
  • b = 10, d = 10: b^d = 10^10 = 10,000,000,000

19
What is a Search Algorithm ?
  • A search algorithm is an algorithm which
    specifies precisely how the state space is
    searched to find a goal state
  • Search algorithms differ in the order in which
    nodes are explored in the state space
  • since it is intractable to look at all nodes, the
    order in which we search is critically important
  • different search algorithms will generate
    different search trees
  • For now we will assume
  • we are looking for one goal state
  • all goal states are equally good, i.e., all have the same utility of 1

20
How can we compare Search Algorithms?
  • Completeness
  • is it guaranteed to find a goal state if one
    exists?
  • Time Complexity
  • if a goal state exists, how long does it take
    (worst-case) to find it?
  • Space Complexity
  • if a goal state exists, how much memory
    (worst-case) is needed to perform the search?
  • Optimality
  • if goal states have different qualities, will the
    search algorithm find the state with the highest
    quality?
  • (we will return to optimal search later, when goals can have different qualities/utilities; for now assume they are all the same)

21
Types of Search Algorithms
  • Blind/Uninformed Search (Chapter 3)
  • do not use any specific problem domain
    information
  • e.g., searching for a route on a map without
    using any information about direction
  • yes, it seems dumb but the power is in the
    generality
  • examples breadth-first, depth-first, etc
  • we will look at several of these classic
    algorithms
  • Heuristic/Informed Search (Chapter 4)
  • use domain specific heuristics (rules of thumb,
    hints)
  • e.g. since Seattle is north of LA, explore
    northerly routes first
  • This is the AI approach to search
  • i.e., add domain-specific knowledge
  • Later we will see that this can transform
    intractable search problems into ones we can
    solve in real-time

22
Depth First Search (DFS)
[Figure: depth-first search tree for the map problem, rooted at S; the search follows one branch deep into the tree until it reaches the goal G.]
Here, to avoid repeated states, assume we don't expand any child node which appears already in the path from the root S to the parent. (Other strategies are also possible.)
23
Pseudocode for Depth-First Search
Initialize: Let Q = {S}
While Q is not empty
    pull Q1, the first element in Q
    if Q1 is a goal, report(success) and quit
    else child_nodes = expand(Q1)
         eliminate child_nodes which represent loops
         put remaining child_nodes at the front of Q
end
Continue
  • Comments
  • a specific example of the general search tree
    method
  • open nodes are stored in a queue Q of nodes
  • key feature
  • new unexpanded nodes are put at front of the
    queue
  • convention is that nodes are ordered left to
    right
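A direct, runnable Python transcription of this pseudocode (a sketch: it assumes expand(state) returns the children of a state, and it stores each queue entry as the path from S to that node):

def depth_first_search(S, expand, is_goal):
    """DFS: new unexpanded nodes are put at the FRONT of the queue Q."""
    Q = [[S]]                                    # Initialize: Let Q = {S}
    while Q:                                     # While Q is not empty
        Q1 = Q.pop(0)                            # pull Q1, the first element in Q
        if is_goal(Q1[-1]):                      # if Q1 is a goal
            return Q1                            # report(success) and quit
        child_nodes = [Q1 + [c] for c in expand(Q1[-1])
                       if c not in Q1]           # eliminate child_nodes which represent loops
        Q = child_nodes + Q                      # put remaining child_nodes at the front of Q
    return None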

24
Breadth First Search
[Figure: breadth-first search tree for the map problem, rooted at S and expanded level by level.]
(Use the simple heuristic of not generating a child node if that node is a parent, to avoid obvious loops; this clearly does not avoid all loops, and there are other ways to do this.)
25
Pseudocode for Breadth-First Search
Initialize: Let Q = {S}
While Q is not empty
    pull Q1, the first element in Q
    if Q1 is a goal, report(success) and quit
    else child_nodes = expand(Q1)
         eliminate child_nodes which represent loops
         put remaining child_nodes at the back of Q
end
Continue
  • Comments
  • another specific example of the general search
    tree method
  • open nodes are stored in a queue Q of nodes
  • differs from depth-first only in that
  • new unexpanded nodes are put at back of the queue
  • convention again is that nodes are ordered left
    to right
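The corresponding Python sketch differs from the depth-first transcription in a single line: children are appended to the back of Q instead of the front.

def breadth_first_search(S, expand, is_goal):
    """BFS: identical to DFS except new unexpanded nodes go to the BACK of the queue Q."""
    Q = [[S]]
    while Q:
        Q1 = Q.pop(0)
        if is_goal(Q1[-1]):
            return Q1
        child_nodes = [Q1 + [c] for c in expand(Q1[-1])
                       if c not in Q1]
        Q = Q + child_nodes                      # the only change: back of Q, not front
    return None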

26
Summary
  • Intelligent agents can often be viewed as
    searching for problem solutions in a discrete
    state-space
  • Search consists of
  • state space
  • operators
  • start state
  • goal states
  • A Search Tree is an efficient way to represent a
    search
  • There are a variety of general blind search
    techniques, including
  • Depth-First Search
  • Breadth-First Search
  • we will look at several others in the next few
    lectures
  • Assigned Reading: Nilsson Ch. II-7; R&N Ch. 3 (3.1-3.4)

27
Notes 3: Extensions of Blind Search
  • ICS 171 Winter 2001

28
Summary
  • Search basics
  • repeat main definitions
  • search spaces and search trees
  • complexity of search algorithms
  • Some new search strategies
  • depth-limited search
  • iterative deepening
  • bidirectional search
  • Repeated states can lead to infinitely large
    search trees
  • methods for detecting repeated states
  • But all of these are blind algorithms in the
    sense that they do not take into account how far
    away the goal is.

29
Defining Search Problems
  • A statement of a Search problem has 4 components
  • 1. A set of states
  • 2. A set of operators which allow one to get
    from one state to another
  • 3. A start state S
  • 4. A set of possible goal states, or ways to test
    for goal states
  • Search solution consists of
  • a unique goal state G
  • a sequence of operators which transform S into a
    goal state G
  • For now we are only interested in finding any
    path from S to G

30
Important: a State Space and a Search Tree are different
[Figure: a small state space graph (states S, B, C, D) and an example of a search tree generated from it.]
  • A State Space represents all states and operators
    for the problem
  • A Search Tree is what an algorithm constructs as
    it solves a search problem
  • so we can have different search trees for the
    same problem
  • search trees grow in a dynamic fashion until the
    goal is found

31
Why is Search often difficult?
  • At the start of the search, the search algorithm
    does not know
  • the size of the tree
  • the shape of the tree
  • the depth of the goal states
  • How big can a search tree be?
  • say there is a constant branching factor b
  • and a goal exists at depth d
  • a search tree which includes a goal can have b^d different branches in the tree
  • Examples
  • b = 2, d = 10: b^d = 2^10 = 1,024
  • b = 10, d = 10: b^d = 10^10 = 10,000,000,000

32
Quick Review of Complexity
  • In analyzing an algorithm we often look at
    worst-case
  • 1. Time complexity
  • (how many seconds it will take to run)
  • 2. Space complexity
  • (how much memory is required)
  • The complexity will be a function of the size of
    the inputs to the algorithm, e.g., n is the
    length of a list to be sorted
  • we want to know how the algorithm scales with n
  • We use notation O( f(n) ) to denote behavior,
    e.g.,
  • O(n) for linear behavior
  • O(n^2) for quadratic behavior
  • O(c^n) for exponential behavior, where c is some constant
  • In practice we want algorithms which scale
    nicely

33
What is the Complexity of Depth-First Search?
  • Time Complexity
  • assume (worst case) that there is 1 goal leaf at the RHS
  • so DFS will expand all nodes: 1 + b + b^2 + ... + b^d = O(b^d)
  • Space Complexity
  • how many nodes can be in the queue (worst-case)?
  • at each depth l < d we have b-1 nodes
  • at depth d we have b nodes
  • total = (d-1)(b-1) + b = O(bd)

[Figure: search trees (depth levels d = 0, 1, 2, ...) illustrating the worst-case time and space behaviour of DFS, with the goal G at the right-hand side.]
34
What is the Complexity of Breadth-First Search?
  • Time Complexity
  • assume (worst case) that there is 1 goal leaf at the RHS
  • so BFS will expand all nodes: 1 + b + b^2 + ... + b^d = O(b^d)
  • Space Complexity
  • how many nodes can be in the queue (worst-case)?
  • at depth d-1 there can be b^d unexpanded nodes in the Q = O(b^d)

[Figure: search trees (depth levels d = 0, 1, 2) illustrating the worst-case time and space behaviour of BFS, with the goal G at the right-hand side of the deepest level.]
35
Comparing DFS and BFS
  • Same Time Complexity, unless...
  • say we have a search problem with
  • goals at some depth d
  • but paths without goals and which have infinite
    depth (i.e., loops in the search space)
  • in this case DFS may never find a goal!
  • (it stays on an infinite (non-goal) path forever)
  • BFS does not have this problem
  • it will find the finite-depth goals in time O(b^d)
  • Practical considerations
  • if there are no infinite paths, and many possible
    goals in the search tree, DFS will work best
  • For large branching factors b, BFS may run out of
    memory
  • BFS is safer if we know there can be loops

36
Depth-Limited Search
  • This is Depth-first Search with a cutoff on the
    maximum depth of any path
  • i.e., implement the usual DFS algorithm
  • when any path gets to be of length m, then do not
    expand this path any further and backup
  • this will systematically explore a search tree of
    depth m
  • Properties of DLS
  • Time complexity O(b^m), space complexity O(bm)
  • If the goal state is within m steps of S
  • DLS is complete
  • e.g., with N cities, we know that if there is a path to goal state G it can be of length N-1 at most
  • But usually we don't know where the goal is!
  • if the goal state is more than m steps from S, DLS is incomplete!
  • => the big problem is how to choose the value of m
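A recursive Python sketch of depth-limited search (my own rendering, with assumed names; the cutoff m bounds the depth of any path explored):

def depth_limited_search(state, successors, is_goal, m, path=None):
    """Depth-first search that backs up once the current path reaches depth m."""
    path = [state] if path is None else path + [state]
    if is_goal(state):
        return path
    if len(path) - 1 >= m:                   # this node is already at depth m: do not expand
        return None
    for child in successors(state):
        if child not in path:                # avoid loops along the current path
            result = depth_limited_search(child, successors, is_goal, m, path)
            if result is not None:
                return result
    return None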

37
Iterative Deepening Search
  • Basic Idea
  • we can run DFS with a maximum depth constraint, m
  • i.e., DFS algorithm but it backs-up at depth m
  • this avoids the problem of infinite paths
  • But how do we choose m in practice? (say we pick m < d !!)
  • We can run DFS multiple times, gradually
    increasing m
  • this is known as Iterative Deepening Search

Procedure:
for m = 1 to infinity
    if (depth-first search with max-depth m) returns success
        then report(success) and quit
    else continue
end
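A Python sketch of this loop (my own illustration; the "to infinity" bound is replaced by an assumed max_m parameter so the sketch always terminates):

def iterative_deepening_search(S, successors, is_goal, max_m=50):
    """Run depth-limited DFS with m = 1, 2, 3, ... until a goal is found."""
    def dls(path, m):                            # depth-limited DFS over paths
        state = path[-1]
        if is_goal(state):
            return path
        if len(path) - 1 >= m:                   # back up at depth m
            return None
        for child in successors(state):
            if child not in path:                # avoid loops on the current path
                result = dls(path + [child], m)
                if result is not None:
                    return result
        return None

    for m in range(1, max_m + 1):                # "for m = 1 to infinity", bounded here
        result = dls([S], m)
        if result is not None:
            return result                        # report(success)
    return None                                  # no solution found within max_m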
38
Comments on Iterative Deepening Search
  • Complexity
  • Space complexity: O(bd)
  • (since it is like depth-first search run multiple times)
  • Time Complexity
  • 1 + (1+b) + (1+b+b^2) + ... + (1+b+...+b^d) = O(b^d) (i.e., the same as BFS or DFS in the worst case)
  • The overhead of repeatedly searching the same subtrees is small relative to the overall time
  • e.g., for b = 10, it only takes about 11% more time than DFS
  • A useful practical method
  • combines
  • guarantee of finding a solution if one exists (as
    in BFS)
  • space efficiency, O(bd) of DFS

39
Search Method 5: Bidirectional Search
  • Idea
  • simultaneously search forward from S and
    backwards from G
  • stop when both meet in the middle
  • need to keep track of the intersection of 2 open
    sets of nodes
  • What does searching backwards from G mean?
  • need a way to specify the predecessors of G
  • this can be difficult,
  • e.g., predecessors of checkmate in chess?
  • what if there are multiple goal states?
  • what if there is only a goal test, no explicit
    list?
  • Complexity
  • time complexity is O(2 * b^(d/2)) = O(b^(d/2)) steps
  • memory complexity is the same
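A rough Python sketch of the idea (my own illustration, assuming an undirected state space so that the predecessors of a state are just its neighbours):

from collections import deque

def bidirectional_search(S, G, neighbours):
    """Breadth-first search forward from S and backward from G;
    stop as soon as the two frontiers meet."""
    if S == G:
        return [S]
    parents_fwd, parents_bwd = {S: None}, {G: None}   # parent pointers for each direction
    q_fwd, q_bwd = deque([S]), deque([G])

    def expand(queue, parents, other_parents):
        """Expand one node; return a meeting state if the two frontiers intersect."""
        state = queue.popleft()
        for n in neighbours(state):
            if n not in parents:
                parents[n] = state
                queue.append(n)
                if n in other_parents:            # the two searches meet here
                    return n
        return None

    while q_fwd and q_bwd:
        meet = expand(q_fwd, parents_fwd, parents_bwd)
        if meet is None:
            meet = expand(q_bwd, parents_bwd, parents_fwd)
        if meet is not None:
            path = []                             # reconstruct S -> meet from forward parents
            node = meet
            while node is not None:
                path.append(node)
                node = parents_fwd[node]
            path.reverse()
            node = parents_bwd[meet]              # then meet -> G from backward parents
            while node is not None:
                path.append(node)
                node = parents_bwd[node]
            return path
    return None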

40
Repeated States
[Figure: a small state space graph (states S, B, C) and an example of a search tree for it in which the same states appear repeatedly on different paths.]
  • For many problems we can have repeated states in
    the search tree
  • i.e., the same state can be reached by different paths
  • => the same state appears in multiple places in the tree
  • this is inefficient, we want to avoid it
  • How inefficient can this be?
  • a problem with a finite number of states can have
    an infinite search tree!

41
Techniques for Avoiding Repeated States
  • Method 1
  • when expanding, do not allow return to parent
    state
  • (but this will not avoid triangle loops for
    example)
  • Method 2
  • do not create paths containing cycles (loops)
  • i.e., do not keep any child-node which is also an
    ancestor in the tree
  • Method 3
  • never generate a state generated before
  • only method which is guaranteed to always avoid
    repeated states
  • must keep track of all possible states (uses a
    lot of memory)
  • e.g., in the 8-puzzle problem we have 9! = 362,880 states
  • Methods 1 and 2 are most practical, work well on
    most problems
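For contrast with the path-based checks of Methods 1 and 2, here is a sketch of Method 3 as a graph-search version of breadth-first search (my own illustration, with assumed names): every state ever generated is remembered in a visited set.

from collections import deque

def graph_search_bfs(S, successors, is_goal):
    """Method 3: never generate a state generated before.
    Keeps a set of all states seen so far (can use a lot of memory)."""
    visited = {S}                        # every state ever generated
    Q = deque([[S]])
    while Q:
        path = Q.popleft()
        if is_goal(path[-1]):
            return path
        for child in successors(path[-1]):
            if child not in visited:     # skip any repeated state, not just ancestors
                visited.add(child)
                Q.append(path + [child])
    return None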

42
Summary
  • A review of search
  • a search space consists of states and operators; it is a graph
  • a search tree represents a particular exploration
    of search space
  • There are various extensions to standard BFS and
    DFS
  • depth-limited search
  • iterative deepening
  • bidirectional search
  • Repeated states can lead to infinitely large
    search trees
  • we looked at methods for detecting repeated states
  • Reading: Nilsson Ch. II-8; R&N Ch. 3, Ch. 4 (4.1)
  • All of the search techniques so far do not care
    about the cost of the path to the goal. Next we
    will look at algorithms which are optimal in the
    sense they always find a path to goal which has
    minimum cost.