BISC-SIG-ES Short Course Fuzzy Logic and GA-Fuzzy - PowerPoint PPT Presentation
1
HEURISTIC SEARCH
  • Heuristics
  • Rules for choosing the branches in a state space
    that are most likely to lead to an acceptable
    problem solution.
  • Rules that provide guidance in decision making
  • Often improves the decision making
  • Shopping: choosing the shortest queue in a
    supermarket does not necessarily mean that you
    will get out of the market earlier
  • Used when
  • information has inherent ambiguity
  • computational costs are high

2
Finding Heuristics
Tic-Tac-Toe: which one to choose?
Heuristic: calculate winning lines and move to the
state with the most winning lines.
3
Calculating winning lines
X
3 winning lines
4 winning lines
3 winning lines
Always choose the state with the maximum heuristic
value: maximizing heuristics
4
Heuristic: choosing the city with minimum distance
Always choose the city with the minimum heuristic
value: minimizing heuristics (hi < hi-1)
5
Finding Heuristic functions
  • 8-puzzle
  • Avg. solution cost is about 22 steps (branching
    factor about 3)
  • Exhaustive search to depth 22: 3.1 × 10^10 states.
  • A good heuristic function can reduce the search
    process.
  • CAN YOU THINK OF A HEURISTIC ?

6
8 Puzzle
  • Two commonly used heuristics
  • h1: the number of misplaced tiles
  • h1(s) = 8
  • h2: the sum of the distances of the tiles from
    their goal positions (Manhattan distance).
  • h2(s) = 3+1+2+2+2+3+3+2 = 18

7
Heuristics Quality
  • Admissibility of Heuristics
  • Heuristic function should never overestimate the
    actual cost to the goal
  • Effective Branching factor
  • Heuristic that has lower effective branching
    factor is better
  • More Informed Heuristics
  • Heuristic that has a higher value is more
    informed compared to the others

8
Algorithms for Heuristic Search
9
Hill Climbing: only if a node is better do you
proceed to that node. When is a node better?
Apply a heuristic to compare the nodes.
Simplified Algorithm:
1. Start with current-state (cs) = initial state.
2. Until cs = goal-state or there is no change in
cs, do: (a) Get the successors of cs and use the
EVALUATION FUNCTION to assign a score to each
successor. (b) If one of the successors has a
better score than cs, then set the new state to
be the successor with the best score.
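A minimal Python sketch of this loop (the maximizing variant; `successors` and `score` are hypothetical problem-specific callables, not part of the original slides):

```python
# Sketch of the simplified hill-climbing loop (maximizing variant).
def hill_climbing(initial_state, successors, score):
    current = initial_state
    while True:
        candidates = successors(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current          # no successor improves on cs: stop
        current = best

# Toy usage: climb toward the peak of f(x) = -(x - 3)**2 over the integers.
peak = hill_climbing(0,
                     successors=lambda x: [x - 1, x + 1],
                     score=lambda x: -(x - 3) ** 2)
```

Starting from 0 the loop moves right one step at a time and stops at x = 3, where neither neighbor scores higher.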
10
Navigation Problem
Choose the closest city to travel
11
Navigation Problem
Choose the closest city to travel: hi < hi-1
12
GETS STUCK
No backtracking
Navigation Problem
Choose the closest city to travel
13
Hill Climbing
Node label = heuristic value of the node. In A10,
A is the node name, 10 is the heuristic evaluation
14
Hill Climbing
Compare B with C
Node label = heuristic value of the node. In A10,
A is the node name, 10 is the heuristic evaluation
15
Hill Climbing
C is better so move
Node label = heuristic value of the node. In A10,
A is the node name, 10 is the heuristic evaluation
16
Hill-climbing search
  • is a loop that continuously moves in the
    direction of increasing value (for maximizing
    heuristic), or in the direction of decreasing
    value (for minimizing heuristic)
  • It terminates when a peak is reached
  • Hill climbing does not look ahead of the
    immediate neighbors of the current state.
  • Hill-climbing chooses randomly among the set of
    best successors, if there is more than one.
  • Hill-climbing a.k.a. greedy local search

17
Hill-climbing search: Algorithm
  • function HILL-CLIMBING(problem) returns a state
    that is a local maximum
  • inputs: problem, a problem
  • local variables: current, a node;
  • neighbor, a node
  • current ← MAKE-NODE(INITIAL-STATE[problem])
  • loop do
  • neighbor ← a highest-valued successor of
    current
  • if VALUE[neighbor] ≤ VALUE[current] then
    return STATE[current]
  • current ← neighbor

18
Hill-climbing example
  • 8-queens problem (complete-state formulation).
  • Successor function: move a single queen to
    another square in the same column.
  • Heuristic function h(n): the number of pairs of
    queens that are attacking each other (directly or
    indirectly).

19
Hill-climbing example
a)
b)
  • a) shows a state with h = 17 and the h-value for
    each possible successor.
  • b) A local minimum in the 8-queens state space
    (h = 1).

20
Drawback
  • Ridge: a sequence of local maxima that is difficult
    for hill climbing to navigate
  • Plateau: an area of the state space where the
    evaluation function is flat.
  • GETS STUCK 86% OF THE TIME.

21
Hill-climbing variations
  • Stochastic hill-climbing
  • Random selection among the uphill moves.
  • The selection probability can vary with the
    steepness of the uphill move.
  • First-choice hill-climbing
  • Stochastic hill climbing by generating successors
    randomly until a better one is found.
  • Random-restart hill-climbing
  • Tries to avoid getting stuck in local
    maxima/minima.

22
Reading Assignment on Simulated Annealing: study
the methods to get rid of the local-minima
problem, especially simulated annealing. You can
consult the textbooks (no need to turn in any
report)
23
Best-First Search
  • It exploits the state description to estimate how
    promising each search node is
  • An evaluation function f maps each search node N
    to a positive real number f(N)
  • Traditionally, the smaller f(N), the more
    promising N
  • Best-first search sorts the nodes in increasing f;
    random order is assumed among nodes with equal
    values of f
  • "Best" only refers to the value of f, not to the
    quality of the actual path. Best-first search
    does not generate optimal paths in general

24
Summary Best-first search
  • General approach
  • Best-first search: a node is selected for expansion
    based on an evaluation function f(n)
  • Idea: the evaluation function measures distance to
    the goal.
  • Choose the node which appears best based on the
    heuristic value
  • Implementation
  • A queue is sorted in decreasing order of
    desirability.
  • Special cases: greedy search, A* search

25
Evaluation function
  • Same as Hill Climbing
  • Heuristic Evaluation
  • f(n) = h(n) = estimated cost of the cheapest path
    from node n to the goal node.
  • If n = goal then h(n) = f(n) = 0

26
Best First Search Method: Algorithm
1. Start with agenda (priority queue) = [initial-state].
2. While the agenda is not empty, do:
(a) remove the best node from the agenda
(b) if it is the goal node then return with success;
otherwise find its successors
(c) assign the successor nodes a score using the
evaluation function and add the scored nodes to the agenda
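A minimal sketch of this agenda loop in Python, using a heap as the priority queue. The graph and h-values below are reconstructed from the slide-28 trace, so treat them as an illustration rather than the original figure:

```python
# Sketch of agenda-based best-first search with a heap as the priority queue.
import heapq

def best_first_search(initial, goal, successors, h):
    agenda = [(h(initial), initial)]           # priority queue keyed on h
    closed = set()
    while agenda:
        _, state = heapq.heappop(agenda)       # (a) remove the best node
        if state == goal:                      # (b) goal test
            return state
        if state in closed:
            continue
        closed.add(state)
        for succ in successors(state):         # (c) score successors, add to agenda
            heapq.heappush(agenda, (h(succ), succ))
    return None

# Graph of slide 28, as far as it can be reconstructed from the trace:
graph = {'A': ['B', 'C'], 'C': ['F'], 'B': ['D', 'E'], 'E': ['G']}
h_val = {'A': 10, 'B': 5, 'C': 3, 'D': 4, 'E': 2, 'F': 6, 'G': 0}
found = best_first_search('A', 'G', lambda s: graph.get(s, []), h_val.get)
```

Running this reproduces the expansion order of the trace on slide 28: A, C, B, E, then the goal G.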
27
Breadth-First vs Depth-First vs Hill Climbing
28
Best First Search Method
  • 1. Open = [A10]; Closed = []
  • Evaluate A10
  • 2. Open = [C3, B5]; Closed = [A10]
  • Evaluate C3
  • 3. Open = [B5, F6]; Closed = [C3, A10]
  • Evaluate B5
  • 4. Open = [E2, D4, F6]; Closed = [C3, B5, A10]
  • Evaluate E2
  • 5. Open = [G0, D4, F6]; Closed = [E2, C3, B5, A10]
  • Evaluate G0
  • the solution / goal is reached

29
Comments Best First Search Method
  • If the evaluation function is good, best-first
    search may drastically cut the amount of search
    required otherwise.
  • The first move may not be the best one.
  • If the evaluation function is heavy / very
    expensive, the benefits may be outweighed by the
    cost of assigning a score

30
Romania with step costs in km
  • hSLD = straight-line distance heuristic.
  • hSLD can NOT be computed from the problem
    description itself
  • In this example f(n) = h(n)
  • Expand the node that is closest to the goal
  • Greedy best-first search

31
Greedy search example
Open = [Arad(366)]
Arad (366)
  • Assume that we want to use greedy search to solve
    the problem of travelling from Arad to Bucharest.
  • The initial state = Arad

32
Greedy search example
Open = [Sibiu(253), Timisoara(329), Zerind(374)]
Arad
Zerind(374)
Sibiu(253)
Timisoara (329)
  • The first expansion step produces
  • Sibiu, Timisoara and Zerind
  • Greedy best-first will select Sibiu.

33
Greedy search example
Open = [Fagaras(176), Rimnicu Vilcea(193), Arad(366), Oradea(380)]
Arad
Sibiu
Rimnicu Vilcea (193)
Arad (366)
Fagaras (176)
Oradea (380)
  • If Sibiu is expanded we get
  • Arad, Fagaras, Oradea and Rimnicu Vilcea
  • Greedy best-first search will select Fagaras

34
Greedy search example
Open = [Bucharest(0)], goal achieved
Arad
Sibiu
Fagaras

Sibiu (253)
Bucharest (0)
  • If Fagaras is expanded we get
  • Sibiu and Bucharest
  • Goal reached !!
  • Yet not optimal (see Arad, Sibiu, Rimnicu Vilcea,
    Pitesti)

35
Greedy search, evaluation
  • Completeness: NO (cf. DF-search)
  • Check on repeated states
  • With Oradea as GOAL and start state Iasi, what
    would be the path?

Iasi to Neamt to Iasi, and so on
36
8-Puzzle
f(N) = h(N) = number of misplaced tiles
Total nodes expanded: 16
37
8-Puzzle
f(N) = h(N) = Σ distances of tiles to goal
Savings: 25%
Total nodes expanded: 12
38
Robot Navigation
39
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to
the goal
40
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to
the goal
(figure: grid annotating each free cell with its h value, the Manhattan distance to the goal)
Not optimal at all
41
Greedy search, evaluation
  • Completeness: NO (cf. DF-search)
  • Time complexity?
  • Worst case: O(b^m), same as DF-search
  • (with m the maximum depth of the search space)
  • A good heuristic can give dramatic improvement.

42
Greedy search, evaluation
  • Completeness: NO (cf. DF-search)
  • Time complexity: O(b^m)
  • Space complexity: O(b^m)
  • Keeps all nodes in memory

43
Greedy search, evaluation
  • Completeness: NO (cf. DF-search)
  • Time complexity: O(b^m)
  • Space complexity: O(b^m)
  • Optimality? NO
  • Same as DF-search

44
Robot Navigation Other Examples
(figure: robot workspace with goal at (xg, yg))
45
Robot Navigation Other Examples
(figure: robot workspace with goal at (xg, yg))
46
A* Algorithm
  • Problems with Best-First Search:
  • it reduces the cost to the goal, but
  • it is neither optimal nor complete
  • Uniform-cost search is optimal and complete, but
    inefficient

48
A* Search: Evaluation function
f(n) = g(n) + h(n) = path cost to node n + heuristic cost at n
Constraints: h(n) ≤ h*(n) (admissible, studied
earlier); g(n) ≥ g*(n) (coverage)
49
Coverage: g(n) ≥ g*(n)
Goal will never be reached
50
Observations
h        g        Remarks
h = h*   -        Immediate convergence; A* goes straight to the goal (no search)
0        0        Random search
0        1        Breadth-first search
h > h*   -        No convergence
h < h*   -        Admissible search
-        g < g*   No convergence
51
Example of A*
Graph (node = h value, edge = cost): A(10) -2- B(8),
A(10) -2- C(9), B(8) -5- D(6), C(9) -3- G(3),
D(6) -3- E(4), E(4) -4- F(0), G(3) -2- F(0)

Path (P1), Best First / Hill Climbing: A-B-D-E-F,
cost(P1) = 2+5+3+4 = 14 (not optimal). For the A* algorithm:
f(A) = 0+10 = 10, f(B) = 2+8 = 10, f(C) = 2+9 = 11, expand
B; f(D) = (2+5)+6 = 13, f(C) = 11, expand
C; f(G) = (2+3)+3 = 8, f(D) = 13, expand
G; f(F) = (2+3+2)+0 = 7, GOAL achieved. Path: A-C-G-F,
cost(P2) = 7 (optimal). Path admissibility: cost(P2) <
cost(P1), hence P2 is the admissible path
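This trace can be reproduced with a small A* implementation. A sketch in Python; the h-values and edge costs below are read off the f-value computations on this slide (the graph itself is an image in the original, so the reconstruction is an assumption):

```python
# Sketch of A* on the example graph reconstructed from the slide's f-values.
import heapq

h = {'A': 10, 'B': 8, 'C': 9, 'D': 6, 'E': 4, 'G': 3, 'F': 0}
edges = {'A': {'B': 2, 'C': 2}, 'B': {'D': 5}, 'C': {'G': 3},
         'D': {'E': 3}, 'E': {'F': 4}, 'G': {'F': 2}}

def a_star(start, goal):
    # frontier entries: (f, g, node, path so far)
    frontier = [(h[start], 0, start, [start])]
    best_g = {}                      # cheapest g seen per expanded node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best_g.get(node, float('inf')) <= g:
            continue                 # already expanded more cheaply
        best_g[node] = g
        for succ, cost in edges.get(node, {}).items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

path, cost = a_star('A', 'F')
```

This returns the optimal path A-C-G-F with cost 7, matching the slide.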
52
Explanation
(graph as on slide 51)

For the A* algorithm, path (P2): now let's start from A.
Should we move from A to B or from A to C? Let's
check the path cost: SAME (2). So let's see the total
cost: f(B) = h(B)+g(B) = 8+2 = 10, f(C) = h(C)+g(C) = 9+2 = 11,
hence moving through B looks better. Next move to D: the total
path cost to D is 2+5 = 7, and the heuristic cost is
6, so the total is 7+6 = 13. On the other side, if you move
through C to G, then the path cost is 2+3 = 5, and the
heuristic cost is 3, total 5+3 = 8, which is much
better than moving through state D. So now we
choose to change path and move through G
53
Explanation
(graph as on slide 51)

Now from G we move to the goal node F. The total path
cost via G is 2+3+2 = 7, and the total path cost via D
is 2+5+3+4 = 14. Hence moving through G is much
better and will give the optimal path.
54
For A*: never throw away unexpanded nodes. Always
compare paths through expanded and unexpanded
nodes. Avoid expanding paths that are already
expensive
55
A* search
  • Best-known form of best-first search.
  • Idea: avoid expanding paths that are already
    expensive.
  • Evaluation function f(n) = g(n) + h(n)
  • g(n): the cost (so far) to reach the node.
  • h(n): estimated cost to get from the node to the
    goal.
  • f(n): estimated total cost of the path through n to
    the goal.

56
A* search
  • A* search uses an admissible heuristic
  • A heuristic is admissible if it never
    overestimates the cost to reach the goal
  • Admissible heuristics are optimistic
  • Formally:
  • 1. h(n) ≤ h*(n), where h*(n) is the true cost
    from n
  • 2. h(n) ≥ 0, so h(G) = 0 for any goal G.
  • e.g. hSLD(n) never overestimates the actual road
    distance

57
Romania example
58
A* search example
  • Starting at Arad
  • f(Arad) = c(Arad,Arad) + h(Arad) = 0+366 = 366

59
A* search example
  • Expand Arad and determine f(n) for each node:
  • f(Sibiu) = c(Arad,Sibiu) + h(Sibiu) = 140+253 = 393
  • f(Timisoara) = c(Arad,Timisoara) + h(Timisoara) = 118+329 = 447
  • f(Zerind) = c(Arad,Zerind) + h(Zerind) = 75+374 = 449
  • Best choice is Sibiu

60
A* search example
  • Previous paths:
  • f(Timisoara) = c(Arad,Timisoara) + h(Timisoara) = 118+329 = 447
  • f(Zerind) = c(Arad,Zerind) + h(Zerind) = 75+374 = 449
  • Expand Sibiu and determine f(n) for each node:
  • f(Arad) = c(Sibiu,Arad) + h(Arad) = 280+366 = 646
  • f(Fagaras) = c(Sibiu,Fagaras) + h(Fagaras) = 239+176 = 415
  • f(Oradea) = c(Sibiu,Oradea) + h(Oradea) = 291+380 = 671
  • f(Rimnicu Vilcea) = c(Sibiu,Rimnicu Vilcea)
    + h(Rimnicu Vilcea) = 220+193 = 413
  • Best choice is Rimnicu Vilcea

61
A* search example
  • f(Timisoara) = c(Arad,Timisoara) + h(Timisoara) = 118+329 = 447
  • f(Zerind) = c(Arad,Zerind) + h(Zerind) = 75+374 = 449
  • f(Arad) = c(Sibiu,Arad) + h(Arad) = 280+366 = 646
  • f(Fagaras) = c(Sibiu,Fagaras) + h(Fagaras) = 239+176 = 415
  • f(Oradea) = c(Sibiu,Oradea) + h(Oradea) = 291+380 = 671
  • Expand Rimnicu Vilcea and determine f(n) for each
    node:
  • f(Craiova) = c(Rimnicu Vilcea,Craiova) + h(Craiova) = 360+160 = 526
  • f(Pitesti) = c(Rimnicu Vilcea,Pitesti) + h(Pitesti) = 317+100 = 417
  • f(Sibiu) = c(Rimnicu Vilcea,Sibiu) + h(Sibiu) = 300+253 = 553
  • Best choice is Fagaras

62
A* search example
  • f(Timisoara) = c(Arad,Timisoara) + h(Timisoara) = 118+329 = 447
  • f(Zerind) = c(Arad,Zerind) + h(Zerind) = 75+374 = 449
  • f(Arad) = c(Sibiu,Arad) + h(Arad) = 280+366 = 646
  • f(Fagaras) = c(Sibiu,Fagaras) + h(Fagaras) = 239+176 = 415
  • f(Oradea) = c(Sibiu,Oradea) + h(Oradea) = 291+380 = 671
  • f(Craiova) = c(Rimnicu Vilcea,Craiova) + h(Craiova) = 360+160 = 526
  • f(Pitesti) = c(Rimnicu Vilcea,Pitesti) + h(Pitesti) = 317+100 = 417
  • f(Sibiu) = c(Rimnicu Vilcea,Sibiu) + h(Sibiu) = 300+253 = 553
  • Expand Fagaras and determine f(n) for each node:
  • f(Sibiu) = c(Fagaras,Sibiu) + h(Sibiu) = 338+253 = 591
  • f(Bucharest) = c(Fagaras,Bucharest) + h(Bucharest) = 450+0 = 450
  • Best choice is Pitesti !!!

63
A* search example
  • Expand Pitesti and determine f(n) for each node:
  • f(Bucharest) = c(Pitesti,Bucharest) + h(Bucharest) = 418+0 = 418
  • Best choice is Bucharest !!!
  • Optimal solution (only if h(n) is admissible)
  • Note the values along the optimal path !!

64
Optimality of A* (standard proof)
  • Suppose a suboptimal goal G2 is in the queue.
  • Let n be an unexpanded node on a shortest path to
    the optimal goal G.
  • f(G2) = g(G2), since h(G2) = 0
  • > g(G), since G2 is suboptimal
  • ≥ f(n), since h is admissible
  • Since f(G2) > f(n), A* will never select G2 for
    expansion

65
BUT: graph search
  • Discards new paths to a repeated state.
  • The previous proof breaks down
  • Solution:
  • Add extra bookkeeping, i.e. remove the more
    expensive of the two paths.
  • Ensure that the optimal path to any repeated state
    is always followed first.
  • Extra requirement on h(n): consistency
    (monotonicity)

66
Consistency
  • A heuristic is consistent if
    h(n) ≤ c(n, a, n') + h(n') for every successor n' of n
  • If h is consistent, we have f(n') ≥ f(n)
  • i.e. f(n) is non-decreasing along any path.
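Written out, with c(n, a, n') the cost of the step from n to n' via action a, the standard derivation behind the non-decreasing claim is:

```latex
% Consistency (a triangle inequality on h):
h(n) \le c(n, a, n') + h(n')
% Hence f is non-decreasing along any path:
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n')
      \ge g(n) + h(n) = f(n)
```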

67
Optimality of A* (more useful)
  • A* expands nodes in order of increasing f value
  • Contours can be drawn in the state space
  • Uniform-cost search adds circles.
  • f-contours are gradually added:
  • 1) nodes with f(n) < C*
  • 2) some nodes on the goal
  • contour (f(n) = C*).
  • Contour i has all
  • nodes with f ≤ fi, where
  • fi < fi+1.

68
A* search, evaluation
  • Completeness: YES
  • Since bands of increasing f are added
  • Unless there are infinitely many nodes with
    f < f(G)

69
A* search, evaluation
  • Completeness: YES
  • Time complexity:
  • The number of nodes expanded is still exponential
    in the length of the solution.

70
A* search, evaluation
  • Completeness: YES
  • Time complexity: exponential in the path length
  • Space complexity:
  • It keeps all generated nodes in memory
  • Hence space is the major problem, not time

71
A* search, evaluation
  • Completeness: YES
  • Time complexity: exponential in the path length
  • Space complexity: all nodes are stored
  • Optimality: YES
  • Cannot expand fi+1 until fi is finished.
  • A* expands all nodes with f(n) < C*
  • A* expands some nodes with f(n) = C*
  • A* expands no nodes with f(n) > C*
  • Also optimally efficient (not including ties)

72
Quality of Heuristics
  • Admissibility
  • Effective Branching factor

73
Admissible heuristics
  • A heuristic h(n) is admissible if for every node
    n, h(n) ≤ h*(n), where h*(n) is the true cost to
    reach the goal state from n.
  • An admissible heuristic never overestimates the
    cost to reach the goal, i.e., it is optimistic

74
Navigation Problem
Shows Actual Road Distances
75
Heuristic: straight-line distance between each city
and the goal city (aerial)
Is the new heuristic admissible?
76
Heuristic: straight-line distance between each city
and the goal city (aerial)
Is the new heuristic admissible? Check hSLD(n) ≤ h*(n).
Consider n = Sibiu: hSLD(Sibiu) = 253.
h*(Sibiu) = 80+97+101 = 278 (actual cost through Pitesti)
h*(Sibiu) = 99+211 = 310 (actual cost through
Fagaras). hSLD ≤ h*: admissible (never
overestimates the actual road distance)
77
Which Heuristic is Admissible?
  • h1: the number of misplaced tiles
  • h1(s) = 8
  • h2: the sum of the distances of the tiles from
    their goal positions (Manhattan distance).
  • h2(s) = 3+1+2+2+2+3+3+2 = 18
  • BOTH

78
New Heuristic: Permutation Inversions
  • Let the goal be the configuration shown (figure)
  • Let nI be the number of tiles J < I that appear
    after tile I (reading from left to right and top to
    bottom)
  • h3 = n2 + n3 + ... + n15 + row number of the empty tile

n2 = 0, n3 = 0, n4 = 0, n5 = 0, n6 = 0, n7 = 1, n8 =
1, n9 = 1, n10 = 4, n11 = 0, n12 = 0, n13 = 0, n14 =
0, n15 = 0
⇒ h3 = 7 + 4 = 11
Is h3 admissible?
79
New Heuristic Permutation Inversions
Is h3 admissible?
h3 = 7 + 4 = 11; h* = actual moves required to achieve
the goal. If h3 ≤ h* then it is admissible,
otherwise not.
Find out yourself
80
New Heuristic Permutation Inversions
STATE(N)
Is h3 admissible here? h3 = 6 + 1 = 7: 6 (tiles 3 and 6
require 2 jumps each) + 1 (the empty tile is in row
1). h* = 2 (actual moves). So h3 is not admissible
81
Example
Example: the goal contains only partial information;
the sequence in the first row is what matters.
82
Finding admissible heuristics
  • Relaxed Problem
  • Admissible heuristics can be derived from the
    exact solution cost of a relaxed version of the
    problem
  • Relaxed 8-puzzle for h1: a tile can move
    anywhere. As a result, h1(n) gives the length of
    the shortest solution of the relaxed problem.
  • Relaxed 8-puzzle for h2: a tile can move to any
    adjacent square.
  • As a result, h2(n) gives the length of the
    shortest solution of that relaxed problem.
  • The optimal solution cost of a relaxed problem is
    no greater than the optimal solution cost of the
    real problem.

83
Relaxed Problem Example
  • By solving relaxed problems at each node
  • In the 8-puzzle, the sum of the distances of each
    tile to its goal position (h2) corresponds to
    solving 8 simple problems
  • It ignores negative interactions among tiles
  • Store the solution pattern in the database

84
Relaxed Problem Example
(figure: h = h8 + h5 + h6, summing the subproblem costs for tiles 8, 5 and 6)
85
Complex Relaxed Problem
  • Consider two more complex relaxed problems:
  • h = d(1,2,3,4) + d(5,6,7,8) (disjoint pattern
    heuristic)
  • These distances could have been pre-computed and
    stored in a database

86
Relaxed Problem
Several order-of-magnitude speedups for the 15- and
24-puzzle have been obtained through the
application of relaxed problems
87
Finding admissible heuristics
  • Find an admissible heuristic through experience
  • Solve lots of puzzles
  • An inductive learning algorithm can be employed to
    predict costs for new states that may arise
    during search.

88
Summary Admissible Heuristic
  • Defining and Solving Sub-problem
  • Admissible heuristics can also be derived from
    the solution cost of a sub-problem of a given
    problem.
  • This cost is a lower bound on the cost of the
    real problem.
  • Pattern databases store the exact solution for
    every possible sub-problem instance.
  • The complete heuristic is constructed using the
    patterns in the DB
  • Learning through experience

89
More on Heuristic functions
  • 8-puzzle
  • Avg. solution cost is about 22 steps (branching
    factor about 3)
  • Exhaustive search to depth 22: 3.1 × 10^10 states.
  • A good heuristic function can reduce the search
    process.
  • CAN YOU THINK OF A HEURISTIC ?

90
8 Puzzle
  • Two commonly used heuristics
  • h1: the number of misplaced tiles
  • h1(s) = 8
  • h2: the sum of the distances of the tiles from
    their goal positions (Manhattan distance).
  • h2(s) = 3+1+2+2+2+3+3+2 = 18

91
Heuristic quality
  • Effective branching factor b*
  • is the branching factor that a uniform tree of
    depth d would have in order to contain N+1 nodes:
    N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d
  • A good heuristic should have b* as low as
    possible
  • This measure is fairly constant for sufficiently
    hard problems.
  • It can thus provide a good guide to the heuristic's
    overall usefulness.
  • A good value of b* is (close to) 1.
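b* can be computed numerically from N and d. A bisection sketch, solving 1 + b + b^2 + ... + b^d = N + 1:

```python
# Sketch: solve for the effective branching factor b* by bisection.
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    def total(b):
        # Number of nodes in a uniform tree of branching b and depth `depth`.
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_nodes)     # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For the classic worked example (a solution found at depth d = 5 after expanding N = 52 nodes), this gives b* ≈ 1.92.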

92
Heuristic quality and dominance
  • The effective branching factor of h2 is lower than
    that of h1
  • If h2(n) ≥ h1(n) for all n, then h2 dominates h1
    and is better for search (e.g. values of h2 and h1:
    18 vs 8)