Midterm Review - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Midterm Review


1
Midterm Review
  • CMSC 421, Fall 2005

2
Outline
  • Review the material covered by the midterm
  • Questions?

3
Subjects covered so far
  • Search: blind and heuristic
  • Constraint Satisfaction
  • Adversarial Search
  • Logic: propositional and FOL

4
and subjects to be covered
  • planning
  • uncertainty
  • learning
  • and a few more

5
Search
6
Stating a Problem as a Search Problem
  • State space S
  • Successor function: x ∈ S → SUCCESSORS(x) ∈ 2^S
  • Arc cost
  • Initial state s0
  • Goal test: x ∈ S → GOAL?(x) ∈ {T, F}
  • A solution is a path joining the initial node to
    a goal node
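
These pieces can be bundled into one object. A minimal Python sketch (the class and field names below are illustrative, not from the slides); the later search examples assume this interface:

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

State = Any

@dataclass
class SearchProblem:
    initial_state: State
    # SUCCESSORS(x): yields (action, next_state, arc_cost) triples
    successors: Callable[[State], Iterable[Tuple[Any, State, float]]]
    # GOAL?(x): True if x satisfies the goal test
    is_goal: Callable[[State], bool]
```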

7
Basic Search Concepts
  • Search tree
  • Search node
  • Node expansion
  • Fringe of search tree
  • Search strategy At each stage it determines
    which node to expand

8
Search Algorithm
  • If GOAL?(initial-state) then return initial-state
  • INSERT(initial-node,FRINGE)
  • Repeat
  • If empty(FRINGE) then return failure
  • n ← REMOVE(FRINGE)
  • s ← STATE(n)
  • For every state s' in SUCCESSORS(s)
  • Create a new node n' as a child of n
  • If GOAL?(s') then return path or goal state
  • INSERT(n',FRINGE)
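
A rough Python sketch of this loop (goal test applied when a successor is generated, as on the slide), using the SearchProblem interface sketched earlier; the Node class and the FIFO/LIFO fringe switch are illustrative choices, not part of the slide:

```python
from collections import deque

class Node:
    """Search node: wraps a state and remembers how it was reached."""
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action

    def path(self):
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

def tree_search(problem, breadth_first=True):
    if problem.is_goal(problem.initial_state):
        return [problem.initial_state]
    fringe = deque([Node(problem.initial_state)])
    while fringe:                                   # empty fringe means failure
        n = fringe.popleft() if breadth_first else fringe.pop()  # FIFO: breadth-first, LIFO: depth-first
        for action, s2, _cost in problem.successors(n.state):
            child = Node(s2, n, action)
            if problem.is_goal(s2):                 # goal test when the successor is generated
                return child.path()
            fringe.append(child)
    return None                                     # failure
```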

9
Performance Measures
  • Completeness: A search algorithm is complete if it
    finds a solution whenever one exists. What about
    the case when no solution exists?
  • Optimality: A search algorithm is optimal if it
    returns an optimal solution whenever a solution
    exists
  • Complexity: It measures the time and amount of
    memory required by the algorithm

10
Blind vs. Heuristic Strategies
  • Blind (or un-informed) strategies do not exploit
    state descriptions to select which node to expand
    next
  • Heuristic (or informed) strategies exploit state
    descriptions to select the most promising node
    to expand

11
Blind Strategies
  • Breadth-first
  • Bidirectional
  • Depth-first
  • Depth-limited
  • Iterative deepening
  • Uniform-cost (variant of breadth-first)

12
Comparison of Strategies
  • Breadth-first is complete and optimal, but has
    high space complexity
  • Depth-first is space efficient, but is neither
    complete, nor optimal
  • Iterative deepening is complete and optimal, with
    the same space complexity as depth-first and
    almost the same time complexity as breadth-first
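
Iterative deepening gets this combination by repeating a depth-limited depth-first search with increasing depth limits. A minimal sketch, again assuming the SearchProblem interface; max_depth is an arbitrary safety bound, not from the slides:

```python
def depth_limited_search(problem, state, limit, path):
    """Depth-first search that never goes deeper than `limit`."""
    if problem.is_goal(state):
        return path + [state]
    if limit == 0:
        return None
    for _action, s2, _cost in problem.successors(state):
        result = depth_limited_search(problem, s2, limit - 1, path + [state])
        if result is not None:
            return result
    return None

def iterative_deepening_search(problem, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, problem.initial_state, limit, [])
        if result is not None:
            return result
    return None        # no solution within max_depth
```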

13
Avoiding Revisited States
  • Requires comparing state descriptions
  • Breadth-first search
  • Store all states associated with generated nodes
    in CLOSED
  • If the state of a new node is in CLOSED, then
    discard the node

14
Avoiding Revisited States
  • Depth-first search
  • Solution 1
  • Store all states associated with nodes in current
    path in CLOSED
  • If the state of a new node is in CLOSED, then
    discard the node
  • Only avoids loops
  • Solution 2
  • Store all generated states in CLOSED
  • If the state of a new node is in CLOSED, then
    discard the node
  • → Same space complexity as breadth-first!

15
Uniform-Cost Search (Optimal)
  • Each arc has some cost c ≥ ε > 0
  • The cost of the path to each fringe node N is
    g(N) = Σ costs of arcs along the path
  • The goal is to generate a solution path of
    minimal cost
  • The queue FRINGE is sorted in increasing cost
  • Need to modify search algorithm

16
Modified Search Algorithm
  • INSERT(initial-node,FRINGE)
  • Repeat
  • If empty(FRINGE) then return failure
  • n ← REMOVE(FRINGE)
  • s ← STATE(n)
  • If GOAL?(s) then return path or goal state
  • For every state s' in SUCCESSORS(s)
  • Create a node n' as a successor of n
  • INSERT(n',FRINGE)

17
Avoiding Revisited States in Uniform-Cost Search
  • When a node N is expanded the path to N is also
    the best path from the initial state to STATE(N)
    if it is the first time STATE(N) is encountered.
  • So
  • When a node is expanded, store its state into
    CLOSED
  • When a new node N is generated
  • If STATE(N) is in CLOSED, discard N
  • If there exists a node N' in the fringe such that
    STATE(N') = STATE(N), discard the node (N or N')
    with the higher-cost path
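
Slides 16 and 17 together amount to a priority-queue search with duplicate detection. A hedged Python sketch combining them (fringe ordered by path cost g, goal test at expansion); instead of literally replacing the higher-cost fringe node, it leaves duplicates on the fringe and skips them when they are popped, which has the same effect:

```python
import heapq
import itertools

def uniform_cost_search(problem):
    counter = itertools.count()            # tie-breaker so heapq never compares states
    fringe = [(0.0, next(counter), problem.initial_state, [problem.initial_state])]
    closed = set()                         # CLOSED: states that have already been expanded
    while fringe:
        g, _, s, path = heapq.heappop(fringe)
        if s in closed:
            continue                       # a cheaper copy of this state was expanded earlier
        closed.add(s)
        if problem.is_goal(s):             # goal test at expansion, as on slide 16
            return path, g
        for _action, s2, cost in problem.successors(s):
            if s2 not in closed:
                heapq.heappush(fringe, (g + cost, next(counter), s2, path + [s2]))
    return None                            # failure
```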

18
Best-First Search
  • It exploits state description to estimate how
    promising each search node is
  • An evaluation function f maps each search node N
    to a positive real number f(N)
  • Traditionally, the smaller f(N), the more
    promising N
  • Best-first search sorts the fringe in increasing f

19
Heuristic Function
  • The heuristic function h(N) estimates the
    distance of STATE(N) to a goal state. Its value
    is independent of the current search tree; it
    depends only on STATE(N) and the goal test
  • Example
  • h1(N) = number of misplaced tiles = 6 (for the
    8-puzzle configuration shown on the original slide)

20
Classical Evaluation Functions
  • h(N) = heuristic function. Independent of the
    search tree
  • g(N) = cost of the best path found so far between
    the initial node and N. Dependent on the search
    tree
  • f(N) = h(N) → greedy best-first search
  • f(N) = g(N) + h(N) → A* search

21
Can we Prove Anything?
  • If the state space is finite and we discard nodes
    that revisit states, the search is complete, but
    in general is not optimal
  • If the state space is finite and we do not
    discard nodes that revisit states, in general the
    search is not complete
  • If the state space is infinite, in general the
    search is not complete

22
Admissible Heuristic
  • Let h*(N) be the cost of the optimal path from N
    to a goal node
  • The heuristic function h(N) is admissible if
    0 ≤ h(N) ≤ h*(N)
  • An admissible heuristic function is always
    optimistic!
  • Note: G is a goal node ⇒ h(G) = 0

23
A* Search (most popular algorithm in AI)
  • f(N) = g(N) + h(N), where
  • g(N) = cost of best path found so far to N
  • h(N) = admissible heuristic function
  • for all arcs: 0 < ε ≤ c(N, N')
  • the modified search algorithm is used
  • → Best-first search is then called A* search
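
A* is the same priority-queue loop as the uniform-cost sketch above, but ordered by f(N) = g(N) + h(N); h is any admissible heuristic supplied by the caller. Note that discarding revisited states with a closed set, as done here, additionally assumes a consistent heuristic:

```python
import heapq
import itertools

def a_star_search(problem, h):
    counter = itertools.count()
    start = problem.initial_state
    fringe = [(h(start), 0.0, next(counter), start, [start])]   # ordered by f = g + h
    closed = set()
    while fringe:
        _f, g, _, s, path = heapq.heappop(fringe)
        if s in closed:
            continue
        closed.add(s)
        if problem.is_goal(s):
            return path, g
        for _action, s2, cost in problem.successors(s):
            if s2 not in closed:
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(s2), g2, next(counter), s2, path + [s2]))
    return None
```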

24
Result 1
  • A* is complete and optimal

25
Experimental Results
  • 8-puzzle with
  • h1 = number of misplaced tiles
  • h2 = sum of distances of tiles to their goal
    positions
  • Random generation of many problem instances
  • Average effective branching factors (numbers of
    expanded nodes in parentheses)

  d    IDS                A*(h1)          A*(h2)
  2    2.45               1.79            1.79
  6    2.73               1.34            1.30
  12   2.78 (3,644,035)   1.42 (227)      1.24 (73)
  16   --                 1.45            1.25
  20   --                 1.47            1.27
  24   --                 1.48 (39,135)   1.26 (1,641)
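
Both heuristics are short to implement. A sketch for the 8-puzzle, assuming a state is a tuple of 9 tile numbers with 0 for the blank (this representation and the goal layout are assumptions, not from the slides):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, blank = 0

def h1(state):
    """Number of misplaced tiles (blank not counted)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    """Sum of Manhattan distances of tiles to their goal positions."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        gi = GOAL.index(tile)
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total
```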
26
Iterative Deepening A* (IDA*)
  • Idea: Reduce memory requirement of A* by applying
    a cutoff on values of f
  • Consistent heuristic h
  • Algorithm IDA*
  • Initialize cutoff to f(initial-node)
  • Repeat
  • Perform depth-first search by expanding all nodes
    N such that f(N) ≤ cutoff
  • Reset cutoff to smallest value f of non-expanded
    (leaf) nodes
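
A recursive sketch of this cutoff loop (the inner search returns either a solution path or the smallest f value that exceeded the cutoff; the function names and the loop check along the current path are illustrative choices):

```python
import math

def ida_star(problem, h):
    def contour(state, g, cutoff, path):
        f = g + h(state)
        if f > cutoff:
            return None, f                        # report the f that exceeded the cutoff
        if problem.is_goal(state):
            return path, f
        next_cutoff = math.inf
        for _action, s2, cost in problem.successors(state):
            if s2 in path:                        # avoid trivial loops along the current path
                continue
            result, t = contour(s2, g + cost, cutoff, path + [s2])
            if result is not None:
                return result, t
            next_cutoff = min(next_cutoff, t)
        return None, next_cutoff

    cutoff = h(problem.initial_state)             # initialize cutoff to f(initial-node)
    while True:
        result, cutoff = contour(problem.initial_state, 0.0, cutoff, [problem.initial_state])
        if result is not None:
            return result
        if cutoff == math.inf:
            return None                           # no solution
```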

27
Local Search
  • Light-memory search method
  • No search tree: only the current state is
    represented!
  • Only applicable to problems where the path is
    irrelevant (e.g., 8-queens), unless the path is
    encoded in the state
  • Many similarities with optimization techniques
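
As one concrete instance of this idea, a minimal hill-climbing sketch: only the current state is kept, and neighbors and value are caller-supplied functions (the names are assumptions, not from the slides):

```python
def hill_climbing(state, neighbors, value):
    """Greedy local search: keep moving to the best neighbor until none improves."""
    while True:
        candidates = list(neighbors(state))
        best = max(candidates, key=value) if candidates else None
        if best is None or value(best) <= value(state):
            return state            # local optimum (or plateau): no improving neighbor
        state = best
```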

28
Search problems
  • Blind search
  • Heuristic search: best-first and A*
  • Variants of A*
  • Construction of heuristics
  • Local search
29
When to Use Search Techniques?
  • The search space is small, and
  • No other technique is available, or
  • Developing a more efficient technique is not
    worth the effort
  • The search space is large, and
  • No other technique is available, and
  • There exist good heuristics

30
Constraint Satisfaction
31
Constraint Satisfaction Problem
  • Set of variables {X1, X2, ..., Xn}
  • Each variable Xi has a domain Di of possible
    values
  • Usually Di is discrete and finite
  • Set of constraints {C1, C2, ..., Cp}
  • Each constraint Ck involves a subset of variables
    and specifies the allowable combinations of
    values of these variables
  • Goal Assign a value to every variable such that
    all constraints are satisfied
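
A concrete representation makes the definition tangible. A small sketch using the standard map-coloring example (variables, finite domains, and binary "different color" constraints); everything below is illustrative, not from the slides:

```python
# Variables and their finite domains
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}

# Binary constraints: adjacent regions must receive different colors
neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}

def consistent(var, value, assignment):
    """A value is allowed if it differs from every assigned neighbor."""
    return all(assignment.get(n) != value for n in neighbors[var])
```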

32
CSP as a Search Problem
  • Initial state: empty assignment
  • Successor function: assign to an unassigned
    variable a value that does not conflict with the
    currently assigned variables
  • Goal test: the assignment is complete
  • Path cost: irrelevant

33
Questions
  1. Which variable X should be assigned a value next?
     → Minimum Remaining Values / most-constrained
       variable
  2. In which order should its domain D be sorted?
     → least-constraining value
  3. How should constraints be propagated?
     → forward checking
     → arc consistency
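
A backtracking sketch tying questions 1 and 3 together, reusing the map-coloring representation above: Minimum Remaining Values picks the next variable, and a simple forward-checking step prunes the domains of unassigned neighbors (least-constraining-value ordering is omitted for brevity):

```python
def backtrack(assignment, domains):
    """Backtracking search with MRV variable selection and forward checking."""
    if len(assignment) == len(variables):
        return assignment                                  # every variable has a value
    # 1. Minimum Remaining Values: unassigned variable with the fewest legal values left
    var = min((v for v in variables if v not in assignment), key=lambda v: len(domains[v]))
    for value in domains[var]:
        if not consistent(var, value, assignment):
            continue
        # 3. Forward checking: remove this value from the domains of unassigned neighbors
        pruned = {n: ([x for x in domains[n] if x != value]
                      if n in neighbors[var] and n not in assignment else domains[n])
                  for n in domains}
        if any(not pruned[n] for n in neighbors[var] if n not in assignment):
            continue                                       # some neighbor would be left with no value
        result = backtrack({**assignment, var: value}, {**pruned, var: [value]})
        if result is not None:
            return result
    return None

# Example call: solution = backtrack({}, domains)
```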

34
Adversarial Search
35
Specific Setting: Two-player, turn-taking,
deterministic, fully observable, zero-sum,
time-constrained game
  • State space
  • Initial state
  • Successor function: it tells which actions can be
    executed in each state and gives the successor
    state for each action
  • MAX's and MIN's actions alternate, with MAX
    playing first in the initial state
  • Terminal test: it tells if a state is terminal
    and, if yes, whether it's a win or a loss for MAX,
    or a draw
  • All states are fully observable

36
Choosing an Action Basic Idea
  • Using the current state as the initial state,
    build the game tree uniformly to the maximal
    depth h (called horizon) feasible within the time
    limit
  • Evaluate the states of the leaf nodes
  • Back up the results from the leaves to the root
    and pick the best action assuming the worst from
    MIN
  • → Minimax algorithm

37
Minimax Algorithm
  • Expand the game tree uniformly from the current
    state (where it is MAX's turn to play) to depth h
  • Compute the evaluation function at every leaf of
    the tree
  • Back-up the values from the leaves to the root of
    the tree as follows
  • A MAX node gets the maximum of the evaluation of
    its successors
  • A MIN node gets the minimum of the evaluation of
    its successors
  • Select the move toward a MIN node that has the
    largest backed-up value
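
A recursive sketch of this backup; game.successors (yielding (move, next state) pairs), game.is_terminal, and evaluate describe an assumed interface, not anything defined on the slides:

```python
def minimax_value(game, state, depth, maximizing, evaluate):
    """Back up evaluation values from depth-limited leaves to `state`."""
    if depth == 0 or game.is_terminal(state):
        return evaluate(state)
    values = [minimax_value(game, s2, depth - 1, not maximizing, evaluate)
              for _move, s2 in game.successors(state)]   # successors: (move, next_state) pairs
    return max(values) if maximizing else min(values)

def minimax_decision(game, state, h, evaluate):
    """MAX picks the move whose successor has the largest backed-up value."""
    best_move, _best_state = max(game.successors(state),
                                 key=lambda ms: minimax_value(game, ms[1], h - 1, False, evaluate))
    return best_move
```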

38
Alpha-Beta Pruning
  • Explore the game tree to depth h in depth-first
    manner
  • Back up alpha and beta values whenever possible
  • Prune branches that can't lead to changing the
    final decision

39
Example
The beta value of a MIN node is an upper bound
on the final backed-up value. It can never
increase
40
Example
The alpha value of a MAX node is a lower bound
on the final backed-up value. It can never
decrease
41
Alpha-Beta Algorithm
  • Update the alpha/beta value of the parent of a
    node N when the search below N has been completed
    or discontinued
  • Discontinue the search below a MAX node N if its
    alpha value is ≥ the beta value of a MIN ancestor
    of N
  • Discontinue the search below a MIN node N if its
    beta value is ≤ the alpha value of a MAX ancestor
    of N
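
The same backup with pruning, following the two discontinuation rules above (alpha is the best value found so far for a MAX ancestor, beta for a MIN ancestor); the interface is the same assumed one as in the minimax sketch:

```python
import math

def alphabeta(game, state, depth, alpha, beta, maximizing, evaluate):
    if depth == 0 or game.is_terminal(state):
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for _move, s2 in game.successors(state):
            value = max(value, alphabeta(game, s2, depth - 1, alpha, beta, False, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:            # alpha has reached the beta of a MIN ancestor: prune
                break
        return value
    else:
        value = math.inf
        for _move, s2 in game.successors(state):
            value = min(value, alphabeta(game, s2, depth - 1, alpha, beta, True, evaluate))
            beta = min(beta, value)
            if beta <= alpha:            # beta has reached the alpha of a MAX ancestor: prune
                break
        return value

# Typical root call when it is MAX's turn:
# best_move, _ = max(game.successors(state),
#                    key=lambda ms: alphabeta(game, ms[1], h - 1, -math.inf, math.inf, False, evaluate))
```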

42
Logical Representations and Theorem Proving
43
Logical Representations
  • Propositional logic
  • First-order logic
  • syntax and semantics
  • models, entailment, etc.

44
The Game
  • Rules
  • Red goes first
  • On their turn, a player must move their piece
  • They must move to a neighboring square or, if
    their opponent is adjacent with a blank square on
    the far side, hop over the opponent
  • The player that makes it to the far side first
    wins.

45
Logical Inference
  • Propositional: truth tables or resolution
  • FOL: resolution + unification
  • strategies
  • shortest clause first
  • set of support
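
For the propositional case, truth-table entailment can be checked by brute-force enumeration of models. A tiny sketch in which sentences are represented as Python functions over a model (a dict from symbol to truth value); the representation is an assumption:

```python
from itertools import product

def tt_entails(kb, query, symbols):
    """KB entails query iff query is true in every model where KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False
    return True

# Example: {P, P => Q} entails Q
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
query = lambda m: m["Q"]
print(tt_entails(kb, query, ["P", "Q"]))   # True
```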

46
Questions?