1
Solving Problems by Searching
  • Chapter 3

2
Outline
  • Problem-solving agents
  • Problem types
  • Problem formulation
  • Example problems
  • Basic search algorithms

3
Problem-solving agents
  • Formulate
  • Creation of the model
  • Built-in knowledge for the agent, created at
    design time
  • Formulation of the goal
  • Simplification of the agent's performance measure
  • Search
  • Searching for a solution using an appropriate
    search strategy
  • Execute
  • Execution of the solution

4
Example: Romania
  • On holiday in Romania; currently in Arad.
  • Formulate goal:
  • be in Bucharest
  • Formulate problem:
  • states: various cities
  • actions: drive between cities
  • Find solution
  • sequence of cities, e.g., Arad, Sibiu, Fagaras,
    Bucharest

5
Example: Romania
6
Abstraction (Resolution level)
  • It is important to abstract from unnecessary
    details when modeling the problem
  • For example, when finding the optimal route from
    Arad to Bucharest, the names of the cities and the
    distances between them are enough
  • Abstraction for states
  • What kind of information do we need to represent a
    state?
  • E.g., do we need sightseeing spots?
  • Abstraction for actions
  • What kind of activities should be represented as
    actions?
  • E.g., do we need the details of moving and turning?
  • In general, the information we keep should be
    useful for the problem we are trying to solve

7
Assumptions
  • Environment
  • Static
  • No changes occur during the agent's operation in
    the environment
  • Fully observable
  • The agent can observe at least the initial state
  • Deterministic
  • It is possible to find a solution as a sequence of
    states and corresponding actions; the agent has
    all the knowledge needed to find the optimal
    solution (i.e., the one with the lowest path cost)
  • This is the easiest type of environment; we will
    relax these assumptions later

8
Types of problems
  • Toy problems
  • Easy to model, simplified
  • Very important for comparing and evaluating
    algorithms
  • Real-world problems
  • More difficult, more complex
  • Include the details of the real world

9
Examples
  • Vacuum world
  • Two locations, each with dirt or no dirt
  • States: 2 × 2² = 8
  • Goal: no dirt in A and B
  • Could be represented as the bit string A B AD BD,
    where each of A, B (agent at A/B) and AD, BD
    (dirt at A/B) is 1 or 0
  • e.g., 0101 means the agent is at location B, there
    is no dirt at location A, and there is dirt at
    location B
  • The actions Right, Left, and Suck can be performed
    by the agent in every state (see the sketch below)
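A minimal Python sketch of this bit-string representation and the goal test (not part of the slides; the tuple layout and names are assumptions):

```python
# Sketch of the (A, B, AD, BD) vacuum-world state: agent-at-A, agent-at-B,
# dirt-at-A, dirt-at-B, each 0 or 1. Names are illustrative only.
def goal_test(state):
    _, _, dirt_a, dirt_b = state
    return dirt_a == 0 and dirt_b == 0    # goal: no dirt at A and no dirt at B

s = (0, 1, 0, 1)                # "0101": agent at B, A clean, B dirty
print(goal_test(s))             # False - location B still has dirt
print(goal_test((0, 1, 0, 0)))  # True  - both locations are clean
```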

10
Examples: Toy problems
  • 8-queens problem
  • Place 8 queens on a chessboard so that no two
    queens attack each other
  • Missionaries and Cannibals
  • There are 3 missionaries and 3 cannibals on one
    bank of the river. There is a boat which can
    carry one or two people. The goal is to move all
    missionaries and cannibals to the other bank so
    that cannibals do not outnumber missionaries at
    any time (a minimal state check is sketched below).
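As a small illustration (a sketch under assumed names, not from the slides), a state can be encoded by the numbers on the left bank, and the "cannibals do not outnumber missionaries" constraint becomes a simple check:

```python
# Sketch: state = (missionaries_left, cannibals_left, boat_on_left_bank).
def is_valid(m_left: int, c_left: int) -> bool:
    """Cannibals must not outnumber missionaries on either bank
    (a bank with zero missionaries is allowed)."""
    m_right, c_right = 3 - m_left, 3 - c_left
    left_ok = m_left == 0 or m_left >= c_left
    right_ok = m_right == 0 or m_right >= c_right
    return left_ok and right_ok

print(is_valid(3, 3))  # True: initial state, everyone on the left bank
print(is_valid(1, 2))  # False: the lone missionary on the left is outnumbered
```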

11
Problem Formulation
  • A problem is defined by four items:
  • initial state, e.g., "at Arad"
  • actions or successor function S(x) = set of
    action-state pairs
  • e.g., S(Arad) = {⟨Arad → Zerind, Zerind⟩, …}
  • goal test, can be
  • explicit, e.g., x = "at Bucharest"
  • implicit, e.g., Checkmate(x)
  • Remark: in general this can be an arbitrary
    condition (formula) over the goal state
  • path cost (additive)
  • e.g., sum of distances, number of actions
    executed, etc.
  • c(x, a, y) is the step cost, assumed to be ≥ 0
  • A solution is a sequence of actions leading from
    the initial state to a goal state (see the sketch
    below)
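These four items map naturally onto a small data structure. The following Python sketch uses assumed names (Problem, successors, goal_test, step_cost); it illustrates the definition above and is not code from the slides:

```python
# Sketch: a problem as (initial state, successor function, goal test, step cost).
class Problem:
    def __init__(self, initial_state, successors, goal_test, step_cost):
        self.initial_state = initial_state
        self.successors = successors   # S(x): state -> iterable of (action, next_state)
        self.goal_test = goal_test     # state -> bool (explicit or implicit condition)
        self.step_cost = step_cost     # c(x, a, y) -> non-negative number

    def path_cost(self, states, actions):
        """Additive path cost: sum of step costs along a candidate solution."""
        return sum(self.step_cost(x, a, y)
                   for x, a, y in zip(states, actions, states[1:]))
```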

12
Vacuum world state space graph
  • states? integer dirt and robot location
  • actions? Left, Right, Suck
  • goal test? no dirt at all locations
  • path cost? 1 per action

13
Example: Romania
  • states?
  • location of the agent
  • actions?
  • move along edges
  • goal test?
  • at(Bucharest)
  • path cost?
  • travel distance (see the sketch below)
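For concreteness, here is a fragment of the Romania road map (distances in km taken from the standard AIMA map; the dictionary layout and function names are assumptions) together with the cost of the solution path mentioned on slide 4:

```python
# Sketch: part of the Romania map as an adjacency dict with road distances.
romania_fragment = {
    "Arad":      {"Zerind": 75, "Sibiu": 140},
    "Zerind":    {"Arad": 75},
    "Sibiu":     {"Arad": 140, "Fagaras": 99},
    "Fagaras":   {"Sibiu": 99, "Bucharest": 211},
    "Bucharest": {"Fagaras": 211},
}

def goal_test(state):
    return state == "Bucharest"

def path_cost(path):
    """Sum of edge distances along a route."""
    return sum(romania_fragment[a][b] for a, b in zip(path, path[1:]))

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 140 + 99 + 211 = 450
```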

14
Tree search algorithms
  • Basic, general idea
  • offline, simulated exploration of the state space
    by generating successors of already-explored
    states (a.k.a. expanding states)
  • The algorithm needs to know the expansion function

15
Tree search example
16
Tree search example
17
Tree search example
18
Implementation: general tree search
Fringe = storage/queue; depending on the type of
queue, the algorithm applies a specific search
strategy (see the sketch below)
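The slide's pseudocode/figure is not reproduced here; the following is a minimal sketch of the general tree-search loop (interface names such as tree_search, successors and goal_test are assumptions). The search strategy is determined entirely by how the fringe chooses the next node:

```python
# Sketch of general tree search: expand nodes taken from the fringe until a
# goal state is popped or the fringe is empty.
def tree_search(problem, fringe):
    """fringe: any container with append() and pop(); pop() picks the next node."""
    fringe.append((problem.initial_state, []))        # node = (state, actions so far)
    while fringe:
        state, path = fringe.pop()                    # strategy = order of expansion
        if problem.goal_test(state):
            return path                               # solution: sequence of actions
        for action, successor in problem.successors(state):
            fringe.append((successor, path + [action]))   # expand the node
    return None                                       # fringe empty: no solution found
```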
19
Representation of the state space as a tree:
states vs. nodes
  • A state is a (representation of) a physical
    configuration
  • A node is a data structure constituting part of a
    search tree; it includes state, parent node,
    action, path cost g(x), and depth
  • The Expand function creates new nodes, filling in
    the various fields and using the SuccessorFn of
    the problem to create the corresponding states
    (see the sketch below)
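A sketch of this node structure and of an Expand function (field and function names are assumptions; the problem object is assumed to look like the Problem sketch after slide 11):

```python
# Sketch: search-tree node with state, parent, action, path cost g(x) and depth.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None
    action: Any = None
    path_cost: float = 0.0   # g(x)
    depth: int = 0

def expand(node, problem):
    """Create child nodes via the problem's successor function."""
    return [Node(state=s, parent=node, action=a,
                 path_cost=node.path_cost + problem.step_cost(node.state, a, s),
                 depth=node.depth + 1)
            for a, s in problem.successors(node.state)]
```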

20
Search strategies
  • A search strategy is defined by picking the order
    of node expansion
  • Strategies are evaluated along the following
    dimensions
  • completeness: Does it always find a solution if
    one exists?
  • time complexity: Number of nodes generated
  • space complexity: Maximum number of nodes in
    memory
  • optimality: Does it always find a least-cost
    solution?
  • Time and space complexity are measured in terms
    of
  • b: maximum branching factor of the search tree
  • d: depth of the least-cost solution
  • m: maximum depth of the state space (may be ∞)

21
Uninformed search strategies
  • Uninformed search strategies use only the
    information available in the problem definition
  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening search

22
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation
  • Fringe (storage, queue) is a FIFO queue, i.e.,
    new successors are inserted at the end (see the
    sketch below)
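A minimal sketch of breadth-first tree search on an adjacency-dict graph (names and graph format are assumptions); the FIFO fringe is a collections.deque, with new successors appended at the end and the shallowest node taken from the front:

```python
# Sketch: breadth-first search; like the tree search on the slides it does not
# check for repeated states.
from collections import deque

def breadth_first_search(graph, start, goal):
    fringe = deque([[start]])            # FIFO queue of paths
    while fringe:
        path = fringe.popleft()          # shallowest unexpanded node first
        state = path[-1]
        if state == goal:
            return path
        for successor in graph.get(state, []):
            fringe.append(path + [successor])   # new successors go to the end
    return None

print(breadth_first_search({"A": ["B", "C"], "B": ["D"], "C": [], "D": []},
                           "A", "D"))    # ['A', 'B', 'D']
```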

23
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation
  • fringe is a FIFO queue, i.e., new successors
    inserted at end

24
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation
  • fringe is a FIFO queue, i.e., new successors
    inserted at end

25
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation
  • fringe is a FIFO queue, i.e., new successors
    inserted at end

26
Properties of breadth-first search
  • Complete? Yes (if b is finite)
  • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1))
  • Space? O(b^(d+1)) (keeps every node in memory)
  • Optimal? Yes (if cost = 1 or a fixed constant c
    per step)
  • Space is the bigger problem (more than time)!
  • b: branching factor
  • d: depth of the first solution found

27
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front
    (see the sketch below)
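A minimal sketch of depth-first tree search (assumed names and graph format); the fringe is a Python list used as a LIFO stack. Like the slides' tree search it has no repeated-state check, so it assumes a finite, loop-free search space (cf. slide 39):

```python
# Sketch: depth-first search with a list as LIFO stack (append/pop at the end).
def depth_first_search(graph, start, goal):
    fringe = [[start]]                    # stack of paths
    while fringe:
        path = fringe.pop()               # deepest unexpanded node first
        state = path[-1]
        if state == goal:
            return path
        for successor in graph.get(state, []):
            fringe.append(path + [successor])   # successors go on top of the stack
    return None

print(depth_first_search({"A": ["B", "C"], "B": ["D"], "C": [], "D": []},
                         "A", "D"))       # ['A', 'B', 'D']
```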

28
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

29
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

30
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

31
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

32
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

33
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

34
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

35
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

36
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

37
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

38
Depth-first search
  • Expand deepest unexpanded node
  • Implementation
  • fringe = LIFO queue, i.e., put successors at front

39
Properties of depth-first search
  • Complete? No: fails in infinite-depth spaces and
    in spaces with loops (e.g., the travel example!)
  • Modify to avoid repeated states along loop paths →
    complete in finite spaces, but this doesn't solve
    incompleteness for infinite spaces
  • Time? O(b^m): terrible if m is much larger than d
  • but if solutions are dense, may be much faster
    than breadth-first
  • Space? O(b·m), i.e., linear space!
  • Optimal? No
  • m: maximum depth of the state space

40
Depth-limited search
  • = depth-first search with depth limit l,
  • i.e., nodes at depth l have no successors
  • Recursive implementation (see the sketch below)
  • → Obviously complete if the solution is at depth ≤ l
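A minimal recursive sketch of depth-limited search (names, return convention and graph format are assumptions): it returns a path, the string "cutoff" when the depth limit was hit somewhere, or None when no solution exists within the explored tree:

```python
# Sketch: recursive depth-limited search on an adjacency-dict graph.
def depth_limited_search(graph, state, goal, limit, path=None):
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"                   # ran into the depth limit
    cutoff_occurred = False
    for successor in graph.get(state, []):
        result = depth_limited_search(graph, successor, goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None
```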

41
Iterative deepening search
Simple idea: to regain completeness and avoid the
memory problems of BFS, apply DLS with an increasing
depth limit (see the sketch below)
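A sketch of the idea, reusing the depth_limited_search sketch after slide 40 (both names are assumptions):

```python
# Sketch: iterative deepening = depth-limited search with l = 0, 1, 2, ...
from itertools import count

def iterative_deepening_search(graph, start, goal):
    for limit in count(0):
        result = depth_limited_search(graph, start, goal, limit)
        if result != "cutoff":
            return result      # a path, or None if provably no solution exists

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(iterative_deepening_search(graph, "A", "D"))   # ['A', 'B', 'D']
```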
42
Iterative deepening search, l = 0
43
Iterative deepening search, l = 1
44
Iterative deepening search, l = 2
45
Iterative deepening search, l = 3
46
Iterative deepening search
  • Number of nodes generated in a depth-limited
    search to depth d with branching factor b:
  • NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
  • Number of nodes generated in an iterative
    deepening search to depth d with branching factor
    b:
  • NIDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
  • For b = 10, d = 5:
  • NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  • NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
  • Overhead = (123,456 - 111,111) / 111,111 ≈ 11%
  • i.e., the cost of re-expanding the shallow levels
    is small
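The two counts can be checked with a few lines of Python (variable names are ad hoc):

```python
# Quick check of the node counts above for b = 10, d = 5.
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                    # b^0 + ... + b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))      # (d+1)b^0 + ... + 1*b^d
overhead = (n_ids - n_dls) / n_dls
print(n_dls, n_ids, round(100 * overhead, 1))              # 111111 123456 11.1
```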

47
Properties of iterative deepening search
  • Complete? Yes
  • Time? (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
  • Space? O(b·d)!
  • Optimal? Yes, if the step cost is constant
  • Remark: the trick (uniform-cost search) we used
    for optimality in breadth-first search with
    varying costs doesn't work for iterative
    deepening, but we could apply something similar:
    increasing the path-cost limit instead of the
    search depth. This is called iterative
    lengthening; however, it has more overhead and
    doesn't inherit all the advantages of IDS.

48
Repeated states
  • Failure to detect repeated states can turn a
    linear problem into an exponential one!

49
Example for repeated states: Grid graph
  • BFS is better suited here! Expands each node once
    → O(d^2)
  • DFS and IDS expand each path to a node →
    exponential number of paths!
  • High memory consumption of BFS
  • vs.
  • There is no way to avoid this except by keeping
    visited nodes in memory (see the graph-search
    sketch below).

50
Graph search
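The graph-search figure from the slide is not reproduced here. As a minimal sketch of the idea from the previous slides, the breadth-first searcher below keeps a closed set of already expanded states, so each state is expanded at most once (names and graph format are assumptions):

```python
# Sketch: graph search = tree search plus a closed set of visited states.
from collections import deque

def graph_search_bfs(graph, start, goal):
    fringe = deque([[start]])
    closed = set()                         # states already expanded
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state not in closed:
            closed.add(state)              # remember visited nodes (costs memory)
            for successor in graph.get(state, []):
                fringe.append(path + [successor])
    return None
```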
51
Bidirectional search
  • We have already learned about problems that can be
    solved in the forward direction (searching from
    the initial state towards the goal) or in the
    backward direction (searching from the goal state
    towards the initial state).
  • The idea behind bidirectional search is to run
    these two searches interleaved until they "meet"
    (see the sketch below)
  • Motivation: b^(d/2) + b^(d/2) << b^d
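A minimal sketch of the idea (graph format and names are assumptions): two breadth-first frontiers, one from the start and one from the goal, are expanded layer by layer in turn until a successor generated by one side has already been visited by the other:

```python
# Sketch: bidirectional breadth-first search on an undirected adjacency dict;
# returns a state where the two frontiers meet, or None.
from collections import deque

def bidirectional_meet(graph, start, goal):
    if start == goal:
        return start
    frontiers = [deque([start]), deque([goal])]
    visited = [{start}, {goal}]
    while frontiers[0] and frontiers[1]:
        for side in (0, 1):                       # interleave the two searches
            frontier, seen, other = frontiers[side], visited[side], visited[1 - side]
            for _ in range(len(frontier)):        # expand one full layer
                state = frontier.popleft()
                for successor in graph.get(state, []):
                    if successor in other:
                        return successor          # the searches have met
                    if successor not in seen:
                        seen.add(successor)
                        frontier.append(successor)
    return None

line = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_meet(line, "A", "D"))   # a meeting state, here 'C'
```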

52
Summary
  • Problem formulation usually requires abstracting
    away real-world details to define a state space
    that can feasibly be explored
  • Variety of uninformed search strategies
  • Iterative deepening search uses only linear space
    and not much more time than other uninformed
    algorithms