
- Do you drive? Have you thought about how the route plan is created for you in the GPS system?
- How would you implement a noughts-and-crosses (tic-tac-toe) computer program?

Artificial Intelligence Search Algorithms

- Dr Rong Qu
- School of Computer Science
- University of Nottingham
- Nottingham, NG8 1BB, UK
- rxq@cs.nott.ac.uk
- Course Introduction
- Tree Search

Problem Space

- Many problems exhibit no detectable regular structure to be exploited; they appear chaotic and do not yield to efficient algorithms
- The concept of search plays an important role in science and engineering
- In one way, any problem whatsoever can be seen as a search for the right answer

Problem Space

- Search space
- Set of all possible solutions to a problem
- Search algorithms
- Take a problem as input
- Return a solution to the problem

Problem Space

- Search algorithms
- Exact (exhaustive) tree search
- Uninformed search algorithms
- Depth First, Breadth First
- Informed search algorithms
- A*, Minimax
- Meta-heuristic search
- Modern AI search algorithms
- Genetic algorithm, Simulated Annealing

References

- Negnevitsky, Artificial Intelligence: A Guide to Intelligent Systems. Addison-Wesley, 2002
- Good AI textbook
- Easy to read while in depth
- Available from the library

References

- Talbi, Metaheuristics: From Design to Implementation, 2009
- Quite recent
- Seems quite complete, from theory to design and implementation

References

- Russell and Norvig, Artificial Intelligence: A Modern Approach (AIMA), 1995, 2003
- "Artificial Intelligence (AI) is a big field and this is a big book" (Preface to AIMA)
- Most comprehensive textbook in AI
- Web site: http://aima.cs.berkeley.edu/
- Textbook in many courses
- Better used as a reference book (especially for tree search)
- You don't have to learn and read the whole book

Other Teaching Resources

- Introduction to AI: http://www.cs.nott.ac.uk/rxq/g51iai.htm
- Uniform cost search
- Depth limited search
- Iterative deepening search
- alpha-beta pruning game playing
- Other basics of AI incl. tree search
- AI Methods: http://www.cs.nott.ac.uk/rxq/g52aim.htm
- Search algorithms

Brief intro to search space

Problem Space

- Often we can't simply write down and solve the equations for a problem
- Exhaustive search of large state spaces appears to be the only viable approach

How?

The Travelling Salesman Problem

- TSP
- A salesperson has to visit a number of cities
- (S)He can start at any city and must finish at that same city
- The salesperson must visit each city only once
- The cost of a solution is the total distance traveled
- Given a set of cities and distances between them
- Find the optimal route, i.e. the shortest possible route
- The number of possible routes is (n - 1)!/2 for n cities

Combinatorial Explosion

A 10 city TSP has 181,440 possible solutions
A 20 city TSP has 10,000,000,000,000,000 (10^16) possible solutions
A 50 city TSP has around 10^62 possible solutions

There are 1,000,000,000,000,000,000,000 (10^21) litres of water on the planet

Michalewicz, Z., Evolutionary Algorithms for Constrained Optimization Problems, CEC 2000 (Tutorial)

Combinatorial Explosion

A 50 City TSP has 1.52 × 10^64 possible solutions

A 10GHz computer might do 10^9 tours per second

Running since the start of the universe, it would still only have done about 10^26 tours. Not even close to evaluating all tours! One of the major unsolved theoretical problems in Computer Science
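The counts above can be checked in a few lines; this sketch assumes the usual (n - 1)!/2 formula for a symmetric TSP (fix the start city, and count each direction of a cycle once):

```python
from math import factorial

def tsp_route_count(n_cities):
    """Distinct tours of a symmetric TSP: (n - 1)! / 2."""
    return factorial(n_cities - 1) // 2

# 10 cities: 181,440 routes
print(tsp_route_count(10))
# 20 cities: about 6 x 10^16; 50 cities: about 3 x 10^62
print(f"{tsp_route_count(20):.2e}", f"{tsp_route_count(50):.2e}")
```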

Tree searches

Search Trees

- Does the tree under the following root contain a node I?
- All you get to see at first is the root
- and a guarantee that it is a tree
- The rest is up to you to discover during the process of search

(Diagram: a root node labelled F)

Search Trees

- A tree is a graph that
- is connected but becomes disconnected by removing any edge (branch)
- has precisely one path between any two nodes
- Unique path
- makes them much easier to search
- so we will start with search on trees

Search Trees

- Depth of a node
- Depth of a tree
- Examples: TSP vs. game
- Tree size
- Branching factor b = 2 (binary tree)
- Depth d

d    nodes at depth d (2^d)    total nodes (2^(d+1) - 1)
0    1                         1
1    2                         3
2    4                         7
3    8                         15
4    16                        31
5    32                        63
6    64                        127

Exponential growth - combinatorial explosion

Search Trees

- Heart of search techniques
- Nodes: states of the problem
- Root node: initial state of the problem
- Branches: moves by operators
- Branching factor: number of neighbours (successor states)

Search Trees Example I

Search Trees Example II

- 1st level: 1 root node (empty board)
- 2nd level: 8 nodes
- 3rd level: 6 nodes for each of the nodes on the 2nd level (?)

Implementing a Search

- State
- the state in the state space to which this node corresponds
- Parent-Node
- the node that generated the current node
- Operator
- the operator that was applied to generate this node
- Path-Cost
- the path cost from the initial state to this node
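The four fields above map directly onto a small record type; the Python names here are illustrative, not from the course:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object                    # the state this node corresponds to
    parent: Optional["Node"] = None  # Parent-Node: the node that generated it
    operator: Optional[str] = None   # the operator applied to generate it
    path_cost: float = 0.0           # path cost from the initial state

def solution_path(node):
    """Follow Parent-Node links back to the root to recover the path."""
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return path[::-1]

root = Node("A")
leaf = Node("L", parent=Node("D", parent=root, operator="expand",
                             path_cost=1), operator="expand", path_cost=2)
```

Once a goal node is found, walking the parent links recovers the whole route from the initial state.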

Blind searches

- Breadth First Search

Breadth First Search - Method

- Expand Root Node First
- Expand all nodes at level 1 before expanding level 2
- OR
- Expand all nodes at level d before expanding nodes at level d+1
- Queuing function
- Adds nodes to the end of the queue

Breadth First Search - Implementation

node <- REMOVE-FRONT(nodes)
If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
nodes <- QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
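With a FIFO queuing function, that loop can be sketched in Python as follows (the example tree mirrors the node set used in the walkthrough below):

```python
from collections import deque

def breadth_first_search(initial, goal_test, expand):
    """GENERAL-SEARCH with a FIFO queue: revealed nodes go to the END."""
    nodes = deque([initial])
    while nodes:
        node = nodes.popleft()            # REMOVE-FRONT(nodes)
        if goal_test(node):               # GOAL-TEST succeeds?
            return node
        nodes.extend(expand(node))        # QUEUING-FN adds to the end

# The example node set: A's children are B..F, B's are G and H, etc.
tree = {"A": ["B", "C", "D", "E", "F"], "B": ["G", "H"], "C": ["I", "J"],
        "D": ["K", "L"], "E": ["M", "N"], "F": ["O", "P"]}
goal = breadth_first_search("A", lambda s: s == "L",
                            lambda s: tree.get(s, []))
```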

The example node set

(Diagram: an example tree with root A, children B to F, grandchildren G to P, and further nodes Q to Z; the initial state is A and the goal state is node L, at depth 2.)

A BFS of the example node set proceeds as follows:

- We begin with our initial state, the node labelled A
- This node is expanded to reveal further (unexpanded) nodes
- Node A is removed from the queue, and each revealed node is added to the END of the queue
- The search then moves to the first node in the queue: node B is expanded then removed from the queue, and the revealed nodes are added to the END of the queue
- We then backtrack to expand node C, and the process continues level by level
- Node L is located and the search returns a solution

The queue evolves as follows (11 nodes expanded in total):

Queue: A
Queue: B, C, D, E, F
Queue: C, D, E, F, G, H
Queue: D, E, F, G, H, I, J
Queue: E, F, G, H, I, J, K, L
Queue: F, G, H, I, J, K, L, M, N
Queue: G, H, I, J, K, L, M, N, O, P
Queue: H, I, J, K, L, M, N, O, P, Q
Queue: I, J, K, L, M, N, O, P, Q, R
Queue: J, K, L, M, N, O, P, Q, R, S
Queue: K, L, M, N, O, P, Q, R, S, T
Queue: L, M, N, O, P, Q, R, S, T, U  (goal L is at the front: FINISHED SEARCH)

BREADTH-FIRST SEARCH PATTERN

Evaluating Search Algorithms

- Evaluating against four criteria
- Optimal
- Complete
- Time complexity
- Space complexity

Evaluating Breadth First Search

- Evaluating against four criteria
- Complete?
- Yes
- Optimal?
- Yes

Evaluating Breadth First Search

- Evaluating against four criteria
- Space Complexity
- O(b^d), i.e. the number of leaves
- Time Complexity
- 1 + b + b^2 + b^3 + ... + b^d, i.e. O(b^d)
- b: the branching factor
- d: the depth of the search tree
- Note: the space/time complexity could be less, as the solution could be found anywhere before the dth level.

Evaluating Breadth First Search

Time and memory requirements for breadth-first search, assuming a branching factor of 10, 100 bytes per node and searching 1000 nodes/second
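Figures of this kind can be regenerated from the assumptions just stated (b = 10, 100 bytes per node, 1000 nodes per second); the depths printed below are illustrative:

```python
def bfs_cost(depth, b=10, bytes_per_node=100, nodes_per_sec=1000):
    """Total nodes 1 + b + b^2 + ... + b^depth, plus time and memory."""
    nodes = sum(b ** level for level in range(depth + 1))
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

for d in (0, 2, 4, 6, 8):
    n, seconds, memory = bfs_cost(d)
    print(f"depth {d}: {n} nodes, {seconds:.1f} s, {memory} bytes")
```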

Breadth First Search - Observations

- Very systematic
- If there is a solution, BFS is guaranteed to find it
- If there are several solutions, then BFS
- will always find the shallowest goal state first, and
- if the cost of a solution is a non-decreasing function of the depth, then it will always find the cheapest solution

Breadth First Search - Observations

- Space is more of a factor to breadth first search than time
- Time is still an issue
- Who has 35 years to wait for an answer to a level 12 problem (or even 128 days for a level 10 problem)?
- It could be argued that as technology gets faster then exponential growth will not be a problem
- But even if technology is 100 times faster
- we would still have to wait 35 years for a level 14 problem, and what if we hit a level 15 problem!

Blind searches

- Depth First Search

Depth First Search - Method

- Expand Root Node First
- Explore one branch of the tree before exploring another branch
- Queuing function
- Adds nodes to the front of the queue
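Changing only the queuing function turns the same search skeleton into depth-first search; a minimal sketch, reusing the example tree:

```python
def depth_first_search(initial, goal_test, expand):
    """Same loop as BFS, but revealed nodes go to the FRONT of the queue,
    so one branch is fully explored before another (the queue acts as a stack)."""
    nodes = [initial]
    while nodes:
        node = nodes.pop(0)                    # REMOVE-FRONT(nodes)
        if goal_test(node):
            return node
        nodes = list(expand(node)) + nodes     # children go to the front

tree = {"A": ["B", "C", "D", "E", "F"], "B": ["G", "H"], "C": ["I", "J"],
        "D": ["K", "L"], "E": ["M", "N"], "F": ["O", "P"]}
goal = depth_first_search("A", lambda s: s == "L",
                          lambda s: tree.get(s, []))
```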

Depth First Search - Observations

- Space complexity
- Only needs to store the path from the root to the leaf node, as well as the unexpanded nodes
- For a state space with a branching factor of b and a maximum depth of m, DFS requires storage of b·m nodes
- Time complexity
- is b^m in the worst case

Depth First Search - Observations

- If DFS goes down an infinite branch, it will not terminate if it does not find a goal state
- If it does find a solution, there may be a better solution at a lower level in the tree
- Therefore, depth first search is neither complete nor optimal

The example node set

Exercise: show the queue status of BFS and DFS

(Diagram: the same example tree as before, nodes A to Z; the initial state is A at the root and the goal state is node L.)

Heuristic searches

Blind Search vs. Heuristic Searches

- Blind search
- Randomly chooses where to search in the search tree
- When problems get large, no longer practical
- Heuristic search
- Explores the node which is more likely to lead to the goal state

Heuristic Searches - Characteristics

- Heuristic searches work by deciding which is the next best node to expand
- Has some domain knowledge
- Use a function to tell us how close the node is to the goal state
- Usually more efficient than blind searches
- Sometimes called an informed search
- There is no guarantee that it is the best node
- Heuristic searches estimate the cost to the goal from the current position. It is usual to denote the heuristic evaluation function by h(n)

Heuristic Searches Example

Go to the city which is nearest to the goal city

h_SLD(n) = straight-line distance between n and the goal location

Heuristic Searches - Greedy Search

- So named as it takes the biggest bite it can out of the problem
- That is, it seeks to minimise the estimated cost to the goal by expanding the node estimated to be closest to the goal state
- Implementation is achieved by sorting the nodes based on the evaluation function f(n) = h(n)

Heuristic Searches - Greedy Search

- It is only concerned with short term aims
- It is possible to get stuck in an infinite loop
- It is not optimal
- It is not complete
- Time and space complexity is O(b^m), where m is the depth of the search tree
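A minimal greedy best-first sketch (the graph and h values below are made up for illustration); note there is no visited set, which is exactly how greedy search can fall into an infinite loop on a graph with cycles:

```python
import heapq

def greedy_search(start, goal_test, expand, h):
    """Always expand the frontier node with the smallest h(n): f(n) = h(n)."""
    frontier = [(h(start), start)]
    while frontier:
        _, state = heapq.heappop(frontier)
        if goal_test(state):
            return state
        for child in expand(state):
            heapq.heappush(frontier, (h(child), child))

# Hypothetical example: S's neighbours are A and B; only A leads to G,
# but B looks closer, so greedy search tries B first.
graph = {"S": ["A", "B"], "A": ["G"], "B": []}
h = {"S": 3, "B": 1, "A": 2, "G": 0}.get
found = greedy_search("S", lambda s: s == "G",
                      lambda s: graph.get(s, []), h)
```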

Greedy Search

Performed well, but not optimal

Heuristic Searches vs. Blind Searches

- Want to achieve this but stay
- complete
- optimal
- If we bias the search too much, then we could miss goals or miss shorter paths

Heuristic searches

- A* Search

The A* algorithm

- Combines the cost so far and the estimated cost to the goal
- That is, evaluation function f(n) = g(n) + h(n)
- An estimated cost of the cheapest solution via n

The A* algorithm

- A search algorithm to find the shortest path through a search space to a goal state using a heuristic
- f = g + h
- f: a function that gives an evaluation of the state
- g: the cost of getting from the initial state to the current state
- h: the cost of getting from the current state to a goal state

The A* algorithm: admissible heuristics

- It can be proved to be optimal and complete, providing that the heuristic is admissible
- That is, the heuristic must never overestimate the cost to reach the goal
- h(n) must provide a valid lower bound on the cost to the goal
- But the number of nodes that have to be searched still grows exponentially
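A compact A* sketch; the demo uses a fragment of the Romania data from the slides that follow (step costs Sibiu-Rimnicu 80, Rimnicu-Pitesti 97, Pitesti-Bucharest 101, Sibiu-Fagaras 99, Fagaras-Bucharest 211, with the straight-line distances as h):

```python
import heapq

def a_star(start, goal_test, neighbours, h):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start)]        # entries are (f, g, state)
    best_g = {start: 0}
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if goal_test(state):
            return g                          # cost of the cheapest route found
        for nxt, step in neighbours(state):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt))

roads = {"Sibiu": [("Rimnicu", 80), ("Fagaras", 99)],
         "Rimnicu": [("Pitesti", 97)], "Pitesti": [("Bucharest", 101)],
         "Fagaras": [("Bucharest", 211)]}
sld = {"Sibiu": 253, "Rimnicu": 193, "Pitesti": 98,
       "Fagaras": 178, "Bucharest": 0}
cost = a_star("Sibiu", lambda s: s == "Bucharest",
              lambda s: roads.get(s, []), sld.get)
```

The route found is Sibiu, Rimnicu, Pitesti, Bucharest at 80 + 97 + 101 = 278 miles, and the expansion order matches the animation on the later slide.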

Straight Line Distances to Bucharest

Town SLD

Arad 366

Bucharest 0

Craiova 160

Dobreta 242

Eforie 161

Fagaras 178

Giurgiu 77

Hirsova 151

Iasi 226

Lugoj 244

Town SLD

Mehadia 241

Neamt 234

Oradea 380

Pitesti 98

Rimnicu 193

Sibiu 253

Timisoara 329

Urziceni 80

Vaslui 199

Zerind 374

We can use straight line distances as an admissible heuristic, as they will never overestimate the cost to the goal. This is because there is no shorter distance between two cities than the straight line distance.

ANIMATION OF A*

Nodes Expanded:

1. Sibiu
2. Rimnicu
3. Pitesti
4. Fagaras
5. Bucharest (f = 278) GOAL!!

(Diagram: the Romania road map, starting at Sibiu with Bucharest as the goal; the fringe is shown in RED, visited nodes in BLUE, and each node is annotated g/h/f. The optimal route found runs Sibiu, Rimnicu, Pitesti, Bucharest: (80 + 97 + 101) = 278 miles.)

The A* algorithm

- Clearly the expansion of the nodes is much more directed towards the goal
- The number of expansions is significantly reduced
- Exercise
- Draw the search tree of A* for the 8-puzzle using the two heuristics

The A* Algorithm: An example

Initial State

Goal State

The 8-puzzle problem. Online demo of the A* algorithm for the 8-puzzle: Noyes Chapman's 15-puzzle

The A* Algorithm: An example

Possible Heuristics in the A* Algorithm

- H1
- the number of tiles that are in the wrong position
- H2
- the sum of the distances of the tiles from their goal positions, using the Manhattan Distance
- We need admissible heuristics (never overestimate)
- Both are admissible, but which one is better?
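Both heuristics are a few lines each. Boards are written as 9-tuples read left-to-right, top-to-bottom, with 0 for the blank; the state below is the one from the example slides, and the goal board assumed here is the classic 1 2 3 / 8 _ 4 / 7 6 5, which reproduces the slide's values H1 = 4 and H2 = 5:

```python
def h1(state, goal):
    """H1: count of tiles (not the blank) out of position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """H2: sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for pos, tile in enumerate(state):
        if tile == 0:
            continue
        gpos = goal.index(tile)
        total += abs(pos // 3 - gpos // 3) + abs(pos % 3 - gpos % 3)
    return total

state = (1, 3, 4, 8, 6, 2, 7, 0, 5)   # 1 3 4 / 8 6 2 / 7 _ 5
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # 1 2 3 / 8 _ 4 / 7 6 5
```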

The A* Algorithm: An example

(Diagram: the search tree of 8-puzzle board states grown from the start board 1 3 4 / 8 6 2 / 7 _ 5, each state annotated with its heuristic value.)

- H1: the number of tiles that are in the wrong position (4)
- H2: the sum of the distances of the tiles from their goal positions, using the Manhattan Distance (5)

What's wrong with the search? Is it implementing the A* search?

The A* Algorithm: An example

- A* is optimal and complete, but it is not all good news
- It can be shown that the number of nodes that are searched is still exponential in the size of most problems
- This has implications not only for the time taken to perform the search, but also the space required
- Of these two problems, the space complexity is more serious

Game tree searches

- MiniMax Algorithm

Game Playing

- Up till now we have assumed the situation is not going to change whilst we search
- Shortest route between two towns
- The same goal board of the 8-puzzle, n-Queens
- Game playing is not like this
- Not sure of the state after your opponent's move
- The goal of your opponent is to prevent your goal, and vice versa

Game Playing

Wolfgang von Kempelen: The Turk, 18th Century Chess Automaton (1770-1854)

Game Playing - Minimax

- Game Playing
- An opponent tries to thwart your every move
- 1944 - John von Neumann outlined a search method (Minimax)
- maximise your position whilst minimising your opponent's

Game Playing - Minimax

- In order to implement it, we need a method of measuring how good a position is
- Often called a utility function
- Initially this will be a value that describes our position exactly

Assume we can generate the full search tree. The idea is the computer wants to force the opponent to lose, and maximise its own chance of winning. Of course, for larger problems it is not possible to draw the entire tree.

(Diagram: a game tree rooted at A, with alternating MAX (agent) and MIN (opponent) levels down to the terminal positions; assume positive utilities mean the computer wins.)

- The game starts with the computer making the first move
- Then the opponent makes the next move
- At the terminal positions we can decide who wins the game, so we know absolutely who will win by following a branch
- Now the computer is able to play a perfect game: at each move it will move to a state of the highest value
- Question: who will win this game, if both players play a perfect game?

Game Playing - Minimax

- Nim
- Start with a pile of tokens
- At each move the player must divide the tokens into two non-empty, non-equal piles

Game Playing - Minimax

- Nim
- Starting with 7 tokens, draw the complete search tree
- At each move the player must divide the tokens into two non-empty, non-equal piles

(Diagram: the root of the tree, a single pile of 7 tokens.)
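The move rule can be written directly; a position is a sorted tuple of pile sizes, and from the 7-token start there are exactly three opening moves:

```python
def splits(n):
    """All ways to divide n tokens into two non-empty, non-equal piles."""
    return [(a, n - a) for a in range(1, (n + 1) // 2)]

def nim_moves(position):
    """Successor positions reachable in one move (position: sorted tuple)."""
    result = set()
    for i, pile in enumerate(position):
        rest = position[:i] + position[i + 1:]
        for a, b in splits(pile):
            result.add(tuple(sorted(rest + (a, b))))
    return sorted(result)

print(nim_moves((7,)))   # the three opening moves
```

A position with no legal moves (every pile is of size 1 or 2) is terminal, which is where the utility function is applied.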

Game Playing - Minimax

- Conventionally, in discussions of minimax, we have two players, MAX and MIN
- The utility function is taken to be the utility for MAX
- Larger values are better for MAX
- Assuming MIN plays first, complete the MIN/MAX tree
- Assume a utility function of
- 0: a win for MIN
- 1: a win for MAX

Game Playing - Minimax

- Player MAX is going to take the best move available
- Will select the next state to be the one with the highest utility
- Hence, the value of a MAX node is the MAXIMUM of the values of the next possible states
- i.e. the maximum of its children in the search tree

Game Playing - Minimax

- Player MIN is going to take the best move available for MIN, i.e. the worst available for MAX
- Will select the next state to be the one with the lowest utility
- higher utility values are better for MAX and so worse for MIN
- Hence, the value of a MIN node is the MINIMUM of the values of the next possible states
- i.e. the minimum of its children in the search tree

Game Playing - Minimax

- A MAX move takes the best move for MAX
- so takes the MAX utility of the children
- A MIN move takes the best move for MIN
- hence the worst for MAX
- so takes the MIN utility of the children
- Games alternate in play between MIN and MAX
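The two rules above are the whole algorithm; a generic sketch over a tiny hand-made tree (the tree and its leaf utilities are purely illustrative):

```python
def minimax(node, is_max, moves, utility):
    """Value of `node` for MAX, with MAX to move iff `is_max`."""
    children = moves(node)
    if not children:                          # terminal position
        return utility(node)
    values = [minimax(c, not is_max, moves, utility) for c in children]
    return max(values) if is_max else min(values)

# Illustrative tree: MAX at the root A, MIN below; leaf utilities for MAX.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
leaf = {"D": 3, "E": 5, "F": 2, "G": 9}
best = minimax("A", True, lambda n: tree.get(n, []), leaf.__getitem__)
```

MIN holds B to min(3, 5) = 3 and C to min(2, 9) = 2, so MAX at the root takes value 3.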

(Diagram: the Nim game tree with the 0/1 utility values propagated up from the terminal states by minimax.)

Game Playing - Minimax

- Efficiency of the search
- Game trees are very big
- Evaluation of positions is time-consuming
- How can we reduce the number of nodes to be evaluated?
- alpha-beta search

Game Playing - Minimax

- Consider a variation of the two player game Nim
- The game starts with a stack of 5 tokens. At each move a player removes one, two or three tokens from the pile, leaving the pile non-empty. A player who has to remove the last token loses the game.
- (a) Draw the complete search tree for this variation of Nim.
- (b) Assume two players, MIN and MAX. MAX plays first.
- If a terminal state in the search tree developed above is a win for MIN, a utility function of -1 is assigned to that state. A utility function of 1 is assigned to a state if MAX wins the game.
- Apply the minimax algorithm to the search tree.

Appendix

- A brief history of AI game playing

Game Playing

- Game Playing has been studied for a long time
- Babbage (1791-1871)
- Analytical Engine
- tic-tac-toe
- Turing (1912-1954)
- Chess playing program
- "Within 10 years a computer will be a chess champion" - Herbert Simon, 1957

Game Playing

- Why study game playing in AI?
- Games are intelligent activities
- It is very easy to measure success or failure
- They do not require large amounts of knowledge
- They were thought to be solvable by straightforward search from the starting state to a winning position

Game Playing - Checkers

- Arthur Samuel
- 1952 - first checkers program, written for an IBM 701
- 1954 - re-wrote it for an IBM 704
- 10,000 words of main memory

Game Playing - Checkers

- Arthur Samuel
- Added a learning mechanism that learnt its own evaluation function by playing against itself
- After a few days it could beat its creator
- And compete on equal terms with strong human players

Game Playing - Checkers

- Jonathan Schaeffer: Chinook, 1996
- In 1992 Chinook won the US Open
- Plays a perfect end game by means of a database
- And challenged for the world championship
- http://www.cs.ualberta.ca/chinook/

Game Playing - Checkers

- Jonathan Schaeffer: Chinook, 1996
- Dr Marion Tinsley
- World champion for over 40 years, only losing three games in all that time
- Against Chinook he suffered his fourth and fifth defeats
- But ultimately won 21.5 to 18.5

Game Playing - Checkers

- Jonathan Schaeffer: Chinook, 1996
- Dr Marion Tinsley
- In August 1994 there was a re-match, but Marion Tinsley withdrew for health reasons
- Chinook became the official world champion

Game Playing - Checkers

- Jonathan Schaeffer: Chinook, 1996
- Uses Alpha-Beta search
- Did not include any learning mechanism
- Schaeffer claimed Chinook was rated at 2814
- The best human players are rated at 2632 and 2625

Game Playing - Checkers

- Chellapilla and Fogel, 2000
- Learnt how to play a good game of checkers
- The program used a population of games, with the best competing for survival
- Learning was done using a neural network, with the synapses being changed by an evolutionary strategy
- Input: current board position
- Output: a value used in minimax search

Game Playing - Checkers

- Chellapilla and Fogel, 2000
- During the training period the program is given
- no information other than whether it won or lost (it is not even told by how much)
- No strategy and no database of opening and ending positions
- The best program beats a commercial application 6-0
- The program was presented at CEC 2000 (San Diego) and the prize remains unclaimed

Game Playing - Chess

- "No computer can play even an amateur-level game of chess" - Hubert Dreyfus, 1960s

Game Playing - Chess

- Shannon - March 9th 1949 - New York
- Size of search space: 10^120 (an average of 40 moves)
- 10^120 > number of atoms in the universe
- At 200 million positions/second, it would take 10^100 years to evaluate all possible games
- Age of the universe: 10^10 years
- Searching to depth 40, at one state per microsecond, it would take 10^90 years to make its first move

Game Playing - Chess

- 1957: AI pioneers Newell and Simon predicted that a computer would be chess champion within ten years
- Simon: "I was a little far-sighted with chess, but there was no way to do it with machines that were as slow as the ones way back then"
- 1958 - the first computer to play chess was an IBM 704
- about one millionth the capacity of Deep Blue

Game Playing - Chess

- 1967: Mac Hack competed successfully in human tournaments
- 1983: Belle attained expert status from the United States Chess Federation
- Mid 80s: scientists at Carnegie Mellon University started work on what was to become Deep Blue
- Sun workstation, 50K positions per second
- The project moved to IBM in 1989

Game Playing - Chess

- May 11th 1997: Garry Kasparov lost a six game match to Deep Blue, IBM Research
- 3.5 to 2.5
- Two wins for Deep Blue, one win for Kasparov and three draws

(http://www.research.ibm.com/deepblue/meet/html/d.3.html)

Game Playing - Chess

- Still receives a lot of research interest
- Computer programs that learn how to play chess, rather than being told how they should play
- Research on game playing at the School of CS, Nottingham

Game Playing Go

- A significant challenge to computer programmers, not yet much helped by fast computation
- Search methods successful for chess and checkers do not work for Go, due to many qualities of the game
- Larger area of the board (five times the chess board)
- A new piece appears every move - the position becomes progressively more complex

wikipedia: http://en.wikipedia.org/wiki/Go_(game)

Game Playing Go

- A significant challenge to computer programmers, not yet much helped by fast computation
- Search methods successful for chess and checkers do not work for Go, due to many qualities of the game
- A material advantage in Go may just mean that short-term gain has been given priority
- A very high degree of pattern recognition is involved in the human capacity to play well

wikipedia: http://en.wikipedia.org/wiki/Go_(game)

Appendix

- Alpha-Beta pruning

(Diagram: a game tree rooted at A with alternating MAX (agent) and MIN (opponent) levels; the leaf utilities discovered are 6, 5, 8, 2 and 1.)

- On discovering util(D) = 6, we know that util(B) <= 6
- On discovering util(J) = 8, we know that util(E) >= 8
- STOP! What else can you deduce now!?
- We can stop the expansion of E, as best play will not go via E: the value of K is irrelevant, so prune it!
- Similarly, util(A) >= 6 and util(C) <= 2, so the remaining child of C is irrelevant and util(A) = 6

Alpha-beta Pruning

(Diagram: the same tree with the cutoffs marked: a beta cutoff at the MIN node valued <= 2, and an alpha cutoff at the MAX node valued >= 8.)
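The cutoffs in the diagrams correspond to the alpha/beta tests below; the demo tree loosely mirrors the one in the diagrams (leaf utilities 6, 5, 8 and 2, with 99 standing in for the pruned leaves, whose values never influence the result):

```python
def alphabeta(node, is_max, moves, utility,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with pruning: stop expanding a node once alpha >= beta."""
    children = moves(node)
    if not children:
        return utility(node)
    if is_max:
        value = float("-inf")
        for c in children:
            value = max(value, alphabeta(c, False, moves, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:     # beta cutoff: MIN above will not allow this
                break
        return value
    value = float("inf")
    for c in children:
        value = min(value, alphabeta(c, True, moves, utility, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:         # alpha cutoff: MAX above will not come here
            break
    return value

# MAX at A, MIN at B and C, MAX at D and E; e2 and G get pruned.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["d1", "d2"], "E": ["e1", "e2"]}
leaf = {"d1": 6, "d2": 5, "e1": 8, "e2": 99, "F": 2, "G": 99}
value = alphabeta("A", True, lambda n: tree.get(n, []), leaf.__getitem__)
```

As in the diagrams, e1 = 8 triggers the cutoff below E, F = 2 triggers the cutoff below C, and the root value comes out as 6.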

Appendix

- General Search

General Search

- Function GENERAL-SEARCH (problem, QUEUING-FN) returns a solution or failure
- nodes <- MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
- Loop do
- If nodes is empty then return failure
- node <- REMOVE-FRONT(nodes)
- If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
- nodes <- QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
- End
- End Function
