CS3020: Robotics and Learning - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
CS3020 Robotics and Learning
PLANNING
Russell and Norvig, Sections 11.1 to 11.3
2
Example planning situations
Initial state: at home, no milk, no fruit, no hammer
Goal: at home, have milk, have fruit, have hammer
planner produces:
Plan: go to corner shop, buy milk, buy bananas, go to DIY shop, buy hammer, go home
Result: at home, have milk, fruit, hammer
3
Example manufacturing planning
Initial state: have many machines, have lump of steel
Goal: have Land Rover "upper swivel pin"
planner produces:
... manufacturing plan ...
Result: have many machines, have Land Rover upper swivel pin, have waste material
4
What is planning?
A planner is something which has a set of goals
or objectives, and tries to construct a sequence
of actions which, when executed, will achieve
the goals.
Start: world in state s
Goal: put world in state s'
Planner produces: a sequence of actions taking the world from state s to state s'
5
Planning
  • INPUT
  • State of the world
  • conjunctions of terms describing this state at a
    specific time (snapshot)
  • likely to be limited world with simplifying
    assumptions
  • Operators
  • actions which can be taken in particular
    circumstances to change state
  • Goals
  • conjunction of terms describing properties of the
    world which must be true in any acceptable final
    state
  • OUTPUT
  • plan sequence of actions generating a state
    which satisfies goals

6
Example - House
study
table
hall
bed
lounge
fred
bedroom
  • 4 rooms, connected by doors
  • 2 objects (table and bed) that can be moved from
    room to room
  • 1 person (fred) who can take two types of action
  • move himself from room to room
  • carry an object from room to room

7
Representational issues - state
  • State of the world
  • Changing
  • (in fred hall),
  • (in table study),
  • Unchanging
  • (adjacent study lounge)
  • (adjacent lounge hall)
  • (adjacent ?x ?y) → (adjacent ?y ?x)
  • (object table), (object bed)
  • (room hall), (room lounge), (room study), (room
    bedroom)
  • Goal
  • ((in table hall) (in bed bedroom))

8
Representational issues - actions
All possible actions: when they can be done, and what the effects are.

go to work, go home, go to newsagent, go to pub, go to DIY shop, go to corner shop, buy tuna fish, buy bananas, buy newspaper, buy beer, wash car, play football

E.g. buy bananas:
before: be in a place selling bananas; have > 20p money
after: have bananas; have less money than before
9
Representational issues - actions
  • fred can carry the bed from the hall into the
    bedroom
  • (carry bed hall bedroom)
  • (carry ?object1 ?room1 ?room2)
  • When can the action be applied?
  • (in fred ?room1)
  • (in ?object1 ?room1)
  • (adjacent ?room1 ?room2)
  • What are the effects of executing the action?
  • retract (in fred ?room1)
  • retract (in ?object1 ?room1)
  • add (in fred ?room2)
  • add (in ?object1 ?room2)

10
STRIPS style planning
  • STRIPS was an early planner, written in 1970 at
    the Stanford Research Institute (SRI)
  • Every action is specified by a rule consisting of
    four parts a summary of the action, and three
    lists of terms.
  • (Action, Preconditions, Delete, Add)
  • An action can be applied if the items in the
    Preconditions are in the current state of the
    world
  • When an action is executed, the terms in the
    Delete list are removed from the current state,
    and the statements in the Add list are added to
    the current state.
  • Delete and Add lists are sometimes combined into
    an Effects list (with Delete effects being
    negations).
  • Distinguish STRIPS representation from the STRIPS
    algorithm.

11
Example STRIPS rule
  • Action (carry ?object1 ?room1 ?room2)
  • Preconditions (in ?object1 ?room1)
  • (in fred ?room1)
  • (adjacent ?room1 ?room2)
  • Delete-list (in fred ?room1)
  • (in ?object1 ?room1)
  • Add-list (in ?object1 ?room2)
  • (in fred ?room2)
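The carry rule above can be written down as STRIPS-style data. A minimal Python sketch (the `Action` class and `apply_action` helper are illustrative names of mine, not part of STRIPS):

```python
# A STRIPS-style action, sketched as a Python dataclass.
# States are sets of ground terms, each term a tuple of strings.
from dataclasses import dataclass

@dataclass
class Action:
    name: tuple
    preconditions: frozenset
    delete: frozenset
    add: frozenset

def apply_action(action, state):
    """Apply an action to a state; return the new state,
    or None if the preconditions are not met."""
    if not action.preconditions <= state:
        return None
    return (state - action.delete) | action.add

# The (carry bed hall bedroom) instance of the rule from the slide:
carry = Action(
    name=("carry", "bed", "hall", "bedroom"),
    preconditions=frozenset({("in", "bed", "hall"),
                             ("in", "fred", "hall"),
                             ("adjacent", "hall", "bedroom")}),
    delete=frozenset({("in", "fred", "hall"), ("in", "bed", "hall")}),
    add=frozenset({("in", "fred", "bedroom"), ("in", "bed", "bedroom")}),
)

state = {("in", "fred", "hall"), ("in", "bed", "hall"),
         ("adjacent", "hall", "bedroom")}
new_state = apply_action(carry, state)
```

Applying the rule retracts the two "in" facts, adds the new ones, and leaves the unchanging (adjacent ...) fact alone.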


12
Problems with simple actions
  • Qualification problem: Have we stated all the
    preconditions?
  • What if I am too ill to go shopping?
  • What if it is 3 a.m. on a Sunday?
  • What if the fruit shop burnt down last night?
  • Ramification problem: Have we stated all the
    effects?
  • Does the shopkeeper now have the money?
  • Is the money in the till, or was it thrown in the
    bin?
  • STRIPS style planners assume that they have
    perfect knowledge of the world and all the
    actions.

13
The FRAME problem
  • Frame problem: How do we know what stays the same
    from one situation to the next?
  • In the description of an action we have not
    explicitly said anything about the effects of
    objects not involved in the action. E.g. if we
    carry the bed from the hall to the bedroom
  • what happens to the table?
  • what happens to the cat sleeping on the bed?
  • STRIPS ASSUMPTION: anything not mentioned in an
    action's Delete or Add lists is assumed unchanged.
  • STRIPS style planners thus side-step the frame
    problem.

14
The BLOCKS world
Terms to describe state:
(on ?x ?y), (on ?x t): ?x (a block) is on ?y (a block) or t (the table)
(clear ?z): the top of block ?z is clear

Possible action:
(move ?x ?y ?z): move block ?x from block ?y to block ?z
Preconditions: ((on ?x ?y) (clear ?x) (clear ?z))
Delete list: ((on ?x ?y) (clear ?z))
Add list: ((on ?x ?z) (clear ?y))
15
STRIPS rule
  • (move ?x ?y ?z) action
  • ((on ?x ?y) (clear ?x) (clear ?z))
    preconditions
  • ((on ?x ?y) (clear ?z)) delete
    list
  • ((on ?x ?z) (clear ?y)) add list

[diagram: (move a b c) - before, a on b and c on the table; after, a on c]

(move a b c)                      action
((on a b) (clear a) (clear c))    preconditions
((on a b) (clear c))              delete list
((on a c) (clear b))              add list
16
Special cases for table
  • Table is always clear
  • (move ?x ?y t): move ?x from ?y to table
  • Preconditions: ((on ?x ?y) (clear ?x)), no (clear t)
  • Delete list: ((on ?x ?y)), no (clear t)
  • Add list: ((on ?x t) (clear ?y))
  • (move ?x t ?y): move ?x from table to ?y
  • Preconditions: ((on ?x t) (clear ?x) (clear ?y))
  • Delete list: ((on ?x t) (clear ?y))
  • Add list: ((on ?x ?y)), no (clear t)
  • NB overloading of the move action

17
Example STRIPS problem
[diagram: initial state, a on b with b on the table and c on the table; goal, b on c]

(initial_state (on b t) (on a b) (clear a) (on c t) (clear c))
(goal (on b c))
(plan (move a b t) (move b t c))
18
Level of detail
  • We could include a robot arm and extend the state
    representation by
  • (holding ?x) ?x is a block
  • (emptyhand): not holding anything
  • and replace the actions by
  • (grasp ?x) (release ?x) ?x is a block
  • (lift ?x) (lower ?x) ?x is a block
  • (go ?x ?y) ?x and ?y are locations

19
Search and Planning
Planning vs. Search:
  Initial state of the world = Start state
  Actions = Operators to produce successor states
  "Are we close to our goal?" = Evaluation heuristics
  Goals which must be true in the state of the world achieved by the plan = Goal state
All planning involves search, but not all search
involves planning.
20
Increased complexity
  • Large branching factor, i.e. often a large number
    of possible actions.
  • when I walk out of my front door, there are a
    million things I could do!
  • Goals are often complex.
  • I want to buy a paper, get my hair cut, go past
    HMT and see what's on, and have change left from
    £10

21
Remember - Container stacking ...
22
... as a Search problem
[diagram: blocks-world states (arrangements of a, b, c) linked by legal actions]
23
Planning as blind forwards search
  • View STRIPS actions as operators to move through
    space of states of the world, starting from the
    initial state.
  • Goal is a set of sub-goals.
  • Success when the goal is a subset of the current
    state.
  • Select an action (choice) and instantiate
    variables in that action (choice); the
    preconditions of the action (as instantiated)
    must be a subset of the current world state.
  • Execute the action to get the next state.
  • Search strategy can be dfs or bfs.

24
Forward search
  • let L be a list of states of the world
    initialised to the initial state of the world
  • let GS specify the set of sub-goals that we are
    trying to achieve
  • while L is not empty do
  • let N be the first member of L
  • if GS is a subset of N
  • then return the plan (sequence of moves)
  • else remove N from the front of L
  • generate all the moves from N (checking
    for looping)
  • add them to the front of L (dfs) or the
    back of L (bfs)
  • return failure
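The loop above can be sketched in runnable Python for the blocks world. The state encoding (sets of tuples) and helper names are my own choices, not from the slides:

```python
# Blind forward search (BFS) over blocks-world states, per slide 24.
from collections import deque

def moves(state, blocks):
    """Yield (action, next_state) for every legal (move x y z)."""
    for x in blocks:
        if ("clear", x) not in state:
            continue                      # x must be clear to move it
        y = next(p[2] for p in state if p[:2] == ("on", x))
        for z in list(blocks) + ["t"]:
            if z == x or z == y:
                continue
            if z != "t" and ("clear", z) not in state:
                continue                  # destination block must be clear
            delete = {("on", x, y)}
            add = {("on", x, z)}
            if y != "t":
                add.add(("clear", y))     # source block becomes clear
            if z != "t":
                delete.add(("clear", z))  # destination no longer clear
            yield ("move", x, y, z), frozenset((state - delete) | add)

def forward_search(initial, goals, blocks):
    """BFS over world states; success when goals are a subset of the state."""
    start = frozenset(initial)
    L = deque([(start, [])])
    seen = {start}                        # loop check
    while L:
        state, plan = L.popleft()         # BFS: take from the front
        if goals <= state:
            return plan
        for action, nxt in moves(state, blocks):
            if nxt not in seen:
                seen.add(nxt)
                L.append((nxt, plan + [action]))
    return None                           # failure

# Slide 17's problem: a on b, b and c on the table; goal (on b c).
initial = {("on", "b", "t"), ("on", "a", "b"), ("clear", "a"),
           ("on", "c", "t"), ("clear", "c")}
plan = forward_search(initial, {("on", "b", "c")}, ["a", "b", "c"])
```

BFS with the seen-set loop check finds the two-step plan from slide 17: move a off b, then move b onto c.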

25
... as a Search problem
[diagram: the same blocks-world state space, with the initial state and goal state marked]
26
Blind forwards search
[diagram: forward search tree from the initial state; first-level moves (move c t a), (move a b c), (move a b t); second-level moves (move b t c), (move a t b), (move a t c); one branch reaches the Goal]
27
Refinements: Move towards goal
  • How can we improve efficiency?
  • No obvious continuous measure of closeness to the
    goal.
  • However if there is an action which contains at
    least one of the goals in its add list, which is
    not already in the current state, then prefer it
    to other actions e.g.
  • (initial_state (on b t) (on a b) (clear a) (on c
    t) (clear c))
  • (goal (on a t) (on b c) (on c t))
  • It is obvious to prefer the action (move a b t)
    over any other.
  • However, this heuristic does not always work:
    consider the following example.

28
Refinements: Move towards goal

(initial_state (on c b) (on b t) (on a t) (clear c) (clear a))
(goal (on c b) (on b a) (on a t) (clear c))
  • The only goal which is not present in Initial
    state is (on b a).
  • However there is no action which can be taken in
    State which has (on b a) in its Add list.
  • Direction only works when close to the goal;
    revert to blind search otherwise.

see next slide
29
... as a Search problem
[diagram: the same state space; the initial state and goal state are far apart]
30
Refinements: Inefficient moves
  • There may be domain-specific tests that allow us
    to rule out sequences of moves which are clearly
    inefficient in that domain.
  • Getting into repeated states is an obvious
    example
  • In blocks world, it makes no sense to move the
    same block twice in succession.
  • but this isn't true in the furniture-moving
    domain
  • The approach is general, but the solutions are
    domain specific.

31
Problems with forwards search
At home, no milk, bananas or hammer.
Goal: at home, have milk, have bananas, have hammer.

First-level choices: go to work, go home, go to pub, go to corner shop, go to DIY shop, wash car, watch TV, ...
Second-level choices: go ..., start work, go ..., buy tuna, buy bananas, ...

Too great a choice of moves!
32
Forwards vs. Backwards search
  • In planning, the forward branching factor is
    usually very large - many possible actions.
  • Goals are normally few - so limited number of
    actions to achieve them.
  • Most planning systems work backwards, sometimes
    called 'regressive' planning.

33
Blind backwards search (1)
Reverse the start and goal states, plan forwards, then reverse the individual actions and their order.

[diagram: the reversed problem; the goal state becomes the start state]

Plan found forwards on the reversed problem: ((move b c t) (move a t b))
Reversing the actions and their order gives: ((move a b t) (move b t c))
BUT actions may not be reversible (e.g. one-way
streets)
34
Problems with forwards search
(repeat of the shopping-choices diagram from slide 31)
35
Blind backwards search (2)
  • Work backwards from Goals.
  • Select a goal from the list of goals (choice).
  • Select an action which contains that goal in its
    Add list (choice) and instantiate any other
    variables in that action (choice).
  • Add the Preconditions to the revised list of
    goals and remove the item(s) in the Add-list.
  • The preconditions must be added to the goal list,
    because they must have been satisfied before you
    could have applied the action.
  • Other goals may be satisfied as a by-product.
  • Choice means search - strategy can be dfs or bfs.
  • Success when the revised goal list is (or is a
    subset of) the start state.
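This regression loop can be sketched as runnable Python for the blocks world. The encoding and names are illustrative; note the delete-list check also skips actions that would delete a remaining goal, anticipating the clobbering discussion on later slides:

```python
# Blind backward (regression) search for the blocks world, per slide 35.
from collections import deque

def all_actions(blocks):
    """Every ground (move x y z) with Precondition/Delete/Add lists."""
    places = list(blocks) + ["t"]
    acts = []
    for x in blocks:
        for y in places:
            for z in places:
                if x in (y, z) or y == z:
                    continue
                pre = {("on", x, y), ("clear", x)}
                delete = {("on", x, y)}
                add = {("on", x, z)}
                if z != "t":              # destination must be clear ...
                    pre.add(("clear", z))
                    delete.add(("clear", z))
                if y != "t":              # ... and the source becomes clear
                    add.add(("clear", y))
                acts.append((("move", x, y, z), frozenset(pre),
                             frozenset(delete), frozenset(add)))
    return acts

def backward_search(initial, goals, blocks):
    """BFS over goal sets; success when the goal set is in the start state."""
    initial = frozenset(initial)
    frontier = deque([(frozenset(goals), [])])
    seen = {frozenset(goals)}
    while frontier:
        gs, plan = frontier.popleft()
        if gs <= initial:
            return plan                   # plan is already in execution order
        for name, pre, delete, add in all_actions(blocks):
            # relevant: adds a goal; safe: deletes no remaining goal
            if not add & gs or delete & gs:
                continue
            new_gs = frozenset((gs - add) | pre)
            if new_gs not in seen:
                seen.add(new_gs)
                frontier.append((new_gs, [name] + plan))
    return None

# Slide 36's problem: all three blocks on the table; goal b on a, c on b.
initial = {("on", "a", "t"), ("on", "b", "t"), ("on", "c", "t"),
           ("clear", "a"), ("clear", "b"), ("clear", "c")}
plan = backward_search(initial, {("on", "b", "a"), ("on", "c", "b")},
                       ["a", "b", "c"])
```

Prepending each chosen action (`[name] + plan`) means the returned plan is in execution order, matching slide 39.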

36
Example of blind backwards search
(initial_state (on a t) (on b t) (on c t) (clear a) (clear b) (clear c))
(goal (on b a) (on c b))

Remember this is BLIND search!
Work on the goals; the slide underlines the goal being worked on:
(goal (on b a) (on c b))
37
One choice of goal to work on
  • (goal (on b a) (on c b))    arbitrary choice of goal
  • (move b t a)    arbitrary choice of action
  • (goal (on b t) (clear b) (clear a) (on c b))    arbitrary choice
  • ...
  • (move b t t)    inefficient move
  • (goal (on b t) (clear b) ...)
  • (move b a t)    inefficient move
  • (goal (on b a) ...)    (on b a) was already a goal!
  • ...

(goal (on b t) (clear b) (clear a) (on c b))
(clear b) and (on c b) are conflicting goals -
see later
38
Another choice of goal
  • (goal (on b a) (on c b))
  • (move c t b) stumble on right choice
  • (goal (on b a) (on c t) (clear c) (clear b))
  • (move b t a) stumble on right choice again!
  • (goal (on b t) (clear a) (clear b) (on c t)
    (clear c))
  • Goal is (subset of) initial_state so stop
  • (initial_state (on a t) (on b t) (on c t) (clear
    a) (clear b) (clear c))

39
Search tree
(goal (on b a) (on c b) )
(move c t b)
(goal (on b a) (on c t) (clear c) (clear b))
(move b t a)
(goal (on b t) (clear b) (clear a) (on c t)
(clear c))
Plan consists of operators in search space in
reverse order (since we are searching
backwards) (plan (move b t a) (move c t b))
40
Refinements: Move towards initial_state
  • Work backwards from Goals.
  • Select a goal from the list of goals (choice)
    which has an action which
  • contains that goal in its Add list (choice)
  • if possible, has at least one of the items in the
    start state (S) in its Preconditions list; prefer
    actions with more preconditions in S
  • Instantiate any other variables in that action
    (choice) .
  • Add the Preconditions to the current list of
    goals and remove the items in the Add list.
  • The preconditions must be added to the goal list,
    because they must have been satisfied before you
    could have applied the action.
  • Other goals may be satisfied as a by-product.
  • Success when the current goal list is (or is a
    subset of) S.

41
Refinements: Example

(initial_state (on a t) (on b t) (on c t) (clear a) (clear b) (clear c))
(goal (on b a) (on c b))

(goal (on b a) (on c b))
(move c t b)
(goal (on b a) (on c t) (clear c) (clear b))
(move b t a)
(goal (on b t) (clear a) (on c t) (clear b) (clear c))
42
Refinements: Domain heuristics
  • You may find heuristics specific to the domain
    that reduce the search time.
  • In the furniture domain, we can achieve a
    specific location of fred either by having him
    move there (on his own) or have him carry
    something there.
  • In general (unless there is also a goal to have
    the thing at the location) it is better to have
    fred move rather than carry (as the preconditions
    will generally be easier to achieve)
  • If actions are tried from the top down, put move
    before carry (or give them a higher salience).

43
Refinement: Incompatible goals
  • Choosing (on c b) as the initial goal to work on
    is in principle no more rational than choosing
    (on b a)

(goal (on b a) (on c b))
(move b t a)
(goal (on b t) (clear a) (clear b) (on c b))
  • However we can recognise that (clear b) and (on c
    b) are incompatible goals
  • an action to achieve (clear b) will inevitably
    have (on ? b) in its Delete list
  • an action to achieve (on ? b) will have (clear b)
    in its Delete list

44
Refinement: Incompatible goals
  • Look for domain specific tests to identify these.
  • E.g. in blocks world
  • a block can't be clear and have a block on it
  • (on a b)
  • (clear b)
  • two different blocks can't be on the same block
  • (on a c)
  • (on b c)
  • two blocks can't be on each other
  • (on a b)
  • (on b a)
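These blocks-world tests can be written down directly. A sketch, with goal terms as tuples (the function name and encoding are mine):

```python
def incompatible(g1, g2):
    """True if two blocks-world goal terms can never hold together."""
    # a block can't be clear and have a block on it
    if g1[0] == "on" and g2[0] == "clear" and g1[2] == g2[1]:
        return True
    if g1[0] == "clear" and g2[0] == "on" and g1[1] == g2[2]:
        return True
    if g1[0] == "on" and g2[0] == "on" and g1 != g2:
        # two different blocks can't be on the same block (the table is fine)
        if g1[2] == g2[2] and g1[2] != "t":
            return True
        # two blocks can't be on each other
        if g1[1] == g2[2] and g1[2] == g2[1]:
            return True
    return False
```

A planner can call this over pairs in a goal set and prune branches that contain an incompatible pair, as slide 43 suggests.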

45
.. but ..
  • Successful goal-driven search doesn't necessarily
    generate a feasible plan

46
Sussman anomaly (1)
[diagram: initial state, c on a with a and b on the table; goal, a on b on c]

  • Initial_state: (initial_state (clear c) (on c a) (on a t) (clear b) (on b t))
  • Goals: (goal (on a b) (on b c))
    (move a t b)
    (goal (clear a) (clear b) (on a t) (on b c))
    (move c a t)
    (goal (clear c) (on c a) (clear b) (on a t) (on b c))
    (move b t c)
    (goal (clear c) (on c a) (clear b) (on a t) (on b t))
  • Plan: (plan (move b t c) (move c a t) ??

47
Sussman anomaly (2)
[diagram: the same initial state and goal]

  • Initial_state: (initial_state (clear c) (on c a) (on a t) (clear b) (on b t))
  • Goals: (goal (on a b) (on b c))
    (move b t c)
    (goal (on a b) (clear c) (clear b) (on b t))
    (move a t b)
    (goal (clear a) (on a t) (clear c) (clear b) (on b t))
    (move c a t)
    (goal (on c a) (on a t) (clear c) (clear b) (on b t))
  • Plan: (plan (move c a t) (move a t b) (move b t c)) ???

48
Goal clobbering
We can look at incompatible goals from a different perspective.

Search tree (ignoring incompatibility):
(goal (on b a) (on c b))
(move b t a)
(goal (on b t) (clear b) (clear a) (on c b))
(move c t b)
(goal (on b t) (clear b) (clear a) (on c t) (clear c))

Plan: (plan (move c t b) (move b t a) ?)
49
Goal clobbering
... but (plan (move c t b) (move b t a)) doesn't work, because we are moving b when it is not clear.

(move c t b): Precon (on c t) (clear c) (clear b); Delete (on c t) (clear b); Add (on c b)
(move b t a): Precon (on b t) (clear b) (clear a); Delete (on b t) (clear a); Add (on b a)

(goal (on b a) (on c b))
50
Goal clobbering
  • The action (move c t b) is said to clobber the
    move (move b t a), because that move required
    (clear b) as a precondition before it could be
    applied, but when we moved c onto b, we stopped b
    from being clear.
  • (move c t b): Precon (on c t) (clear c) (clear b);
    Delete (on c t) (clear b); Add (on c b)
  • (move b t a): Precon (on b t) (clear b) (clear a);
    Delete (on b t) (clear a); Add (on b a)

51
Sussman Anomaly again
Interleave work on the goals (on a b) and (on b c):
  Start work on (on a b): achieve (clear a)
  Achieve (on b c)
  Resume work on (on a b)
52
Linear and non-linear planning
  • The problem with planning so far is that plans
    can only be extended by adding new actions to the
    beginning or the end of the existing plan.
  • This is called linear planning.
  • We have a very local view of the problem (the
    beginning or end of the plan).
  • Search will get there in the end, but it may take
    a long time, and a lot of backtracking.
  • We may rediscover plan fragments many times as
    we backtrack.
  • Take a more global view - non-linear planning

53
Searching through states
  • So far we have searched through states (forwards)
    or goal sets (backwards).
  • Nodes are states (or goal sets)
  • Arcs are actions
  • A plan is a sequence of actions along the arcs
    from one state to another.
  • A complete plan is one which provides a path from
    the initial state to a state which satisfies the
    goals.
  • A partial plan is a plan which isn't (yet) a
    complete plan and is extended by adding a new
    action at the end (forwards) or at the beginning
    (backwards).

54
Ordering within plans
  • So far plans have been totally ordered: we
    always know the precise temporal relationships
    between the actions (A1 ... An); Ai is after
    Ai-1 and before Ai+1.
  • In partially ordered plans (complete or partial)
    we are not obliged to specify the temporal
    relationships between all the actions.
  • E.g. if we are trying to achieve (on b a) and (on
    c b) we may have an action that will achieve (on
    b a) and an action for (on c b) but no decision
    as to which is to be executed first.

55
Partially ordered → totally ordered
  • Eventually we have to place constraints on the
    ordering.

56
More complex example
[diagram: six-block example, blocks a to f in initial and goal configurations]
Complete partially ordered plan
(move f e t)
(move c b t)
(move b t a)
(move e t d)
(move c t b)
(move f t e)
57
Linearisations
In order to execute the plan with a single agent,
we must decide on a total ordering
(linearisation). However many linearisations are
possible
Example linearisations:
1. (move c b t) (move b t a) (move c t b) (move f e t) (move e t d) (move f t e)
2. (move c b t) (move f e t) (move b t a) (move e t d) (move c t b) (move f t e)
3. (move c b t) (move f e t) (move e t d) (move f t e) (move b t a) (move c t b)
etc.
Note that with two agents we don't need to
linearise to this extent. Each agent can execute
its own part of the plan.
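Enumerating the linearisations of a partial order can be sketched by brute force over permutations. The four-action example with two independent two-step chains is illustrative (not the slide's six-move plan), as are all the names:

```python
from itertools import permutations

def linearisations(actions, orderings):
    """Yield every total order consistent with the ordering pairs.
    Brute force over permutations; fine for small plans."""
    for perm in permutations(actions):
        pos = {a: i for i, a in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in orderings):
            yield list(perm)

# Two independent two-step chains, like two towers rebuilt separately.
acts = ["a1", "a2", "b1", "b2"]
O = {("a1", "a2"), ("b1", "b2")}
count = sum(1 for _ in linearisations(acts, O))
```

Even this tiny partial order admits several interleavings, which is why a single-agent executor has to commit to one of them.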
58
Searching through plans
  • Our search is now through a space of partially
    ordered partial plans.
  • In such a search space
  • Nodes are partially ordered partial plans
  • Arcs are refinements
  • adding an action
  • constraining the temporal relationships between
    two actions.
  • Initial node is the empty plan.
  • Goal is a plan which when executed (observing any
    constraints on temporal ordering) takes the
    initial state to a state which satisfies the
    goals - this is the same definition as before.
  • We are still searching: we may reach a point when
    we haven't achieved the goal and can't make
    further progress.

59
Same example
(initial_state (on a t) (on b t) (on c t) (clear a) (clear b) (clear c))
(goal (on c b) (on b a))
60
Start and finish actions
  • Empty plan consists of the actions (start)
    (finish)
  • Start action - A0
  • a trivial action with no preconditions which
    doesn't do anything, but whose Add list defines
    the initial state, e.g. ((start), ( ), ( ),
    ((on a t) (on b t) (on c t) (clear a) (clear b) (clear c)))
  • Finish action - Ainf
  • another trivial action which doesn't do anything;
    its Preconditions specify the goals, e.g.
    ((finish), ((on c b) (on b a)), ( ), ( ))
  • Any plan (set of actions) will begin with (start)
    and end with (finish)

61
Representation for partial order planning (1)
  • PLAN (A, O, L)
  • A is a set (non-ordered) of actions
  • e.g. ( A0 A1 A2 Ainf )
  • actions more important than states
  • O is a set of orderings on actions
  • e.g. ((A1 < A2) (A3 < A4) ...)
  • for all i, i = 1..inf: A0 < Ai
  • for all i, i = 0..j, j < inf: Ai < Ainf
  • the orderings must be consistent (i.e. at least
    one linearisation must exist which satisfies O)

62
Representation for partial order planning (2)
  • L is a set of causal links (housekeeping)
  • represents dependencies between actions.
  • e.g. (move c t b) achieves (on c b) which is a
    precondition for finish the causal link is
    ((move c t b), (on c b), finish)
  • in general if Q is a precondition for Aneed (the
    action that needs the precondition) and Q is
    added by Aadd then the causal link is
    (Aadd, Q, Aneed )
  • the following ordering is always added:
    (Aadd < Aneed)

63
Threats (1)
  • An action threatens (might clobber) a causal link
    when it might delete the precondition specified
    in that link.
  • e.g. (move c b a) has (on c b) in its delete
    list. It therefore threatens the link
    ((move c t b), (on c b), finish)
  • Ai threatens (Aadd, Q, Aneed) if
  • it has Q in its delete-list
  • it might intervene between Aadd and Aneed
  • We cannot simply add such a threatening action to
    a plan and ignore the consequences.
  • The threatening action must not come between Aadd
    and Aneed
  • We must remove the threat by making sure that Ai
    comes earlier than Aadd(demotion) or later than
    Aneed (promotion).

64
Threats (2)
[diagram: an action A with its preconditions and effects (delete, add); arcs show an ordering, a causal link, and a threat]
65
Threats (3)
[diagram: Demotion places Ai (which deletes Q, i.e. asserts not Q) before Aadd; Promotion places Ai after Aneed; in both cases the causal link Aadd -Q-> Aneed is protected]
NB Ordering and links do not mean direct
adjacency
66
Threats (4)
  • Demotion: add an ordering which ensures that the
    threatening action occurs before the action which
    provides the link.
  • i.e. if Ai threatens (Aadd, Q, Aneed) then add
    Ai < Aadd
  • Promotion: add an ordering which ensures that
    the threatening action occurs after the action
    which needed the link: Aneed < Ai
  • The choice of demotion or promotion may be
    limited by the need to maintain a consistent
    ordering:
  • if Aneed is Ainf we can't have Ainf < Ai, so only
    demotion is possible.
  • if Aadd is A0 we can't have Ai < A0, so only
    promotion is possible.
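Threat resolution under a consistency check can be sketched as follows. Representing the ordering O as a set of (before, after) pairs and checking consistency with a topological sort are my own choices; the action names match the running example:

```python
# Demotion/promotion with a consistency check, per slides 63-66.
from collections import defaultdict, deque

def consistent(actions, orderings):
    """True if at least one linearisation satisfies the orderings,
    i.e. the ordering graph has no cycle (topological sort succeeds)."""
    indeg = {a: 0 for a in actions}
    succ = defaultdict(list)
    for a, b in orderings:
        succ[a].append(b)
        indeg[b] += 1
    q = deque(a for a in actions if indeg[a] == 0)
    done = 0
    while q:
        a = q.popleft()
        done += 1
        for b in succ[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                q.append(b)
    return done == len(actions)

def resolve_threat(ai, link, actions, orderings):
    """Try demotion (ai < Aadd), then promotion (Aneed < ai);
    return the extended ordering set, or None on failure."""
    a_add, q, a_need = link
    for extra in ((ai, a_add), (a_need, ai)):
        new_o = orderings | {extra}
        if consistent(actions, new_o):
            return new_o
    return None

# The lecture's situation: (move c t b) threatens the (clear b) link.
acts = {"start", "finish", "move c t b", "move b t a"}
O = {("start", "finish"), ("start", "move c t b"),
     ("move c t b", "finish"), ("start", "move b t a"),
     ("move b t a", "finish")}
link = ("start", "(clear b)", "move b t a")
new_o = resolve_threat("move c t b", link, acts, O)
```

Demotion would require (move c t b) < (start), which creates a cycle with (start) < (move c t b), so the resolver falls through to promotion, just as slide 76 argues.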

67
POP algorithm for partial order planning
  • Basic ideas of non-linear planning are due to
    Sacerdoti (NOAH).
  • POP (Partial Order Planning) algorithm due to
    Weld.
  • We also have an Agenda: a set of (Q, Aneed) pairs
    where Q is a precondition of Aneed, equivalent
    to a goal set (so we are effectively planning
    backwards).
  • Note that we still have search at the choice
    points (in italics) and the possibility of
    failure.
  • However the choice about which goal to select
    from the Agenda is not very important, as we are
    not committing to an ordering.

68
POP ((A,O,L), Agenda)
  • Initial plan and agenda:
    Plan (A, O, L) = (((start) (finish)), (((start) < (finish))), ( ))
    Agenda = ((Q1, Afinish) ... (Qi, Afinish)) where
    the Qi are the preconditions of Afinish
  • while not empty(Agenda) do
  • Goal selection: select an item (Q, Aneed) from
    the Agenda.
  • Action selection: select Aadd, an action that
    adds Q (failure?).
  • A ← A ∪ {Aadd} (Aadd may already be in A)
  • O ← O ∪ {(Aadd < Aneed)}; check that O is still
    consistent (failure?)
  • L ← L ∪ {(Aadd, Q, Aneed)}

Key: select = choice point; failure? = point where the
operation may not be possible.
69
POP ((A,O,L), Agenda)
  • Causal link protection
  • check all existing causal links to see if Aadd
    threatens any of them.
  • check all existing actions to see if they
    threaten the new causal link.
  • select demotion and/or promotion to remove any
    threats, making sure that the consistency of O is
    preserved (failure?).
  • Linearisation check
  • ensure that some linear sequence of the
    actions in A can be generated subject to the
    temporal constraints in O
  • Update Agenda: Agenda ← (Agenda - {(Q, Aneed)})
    ∪ {(Q1, Aadd) ... (Qi, Aadd)} where the Qi are
    the preconditions of Aadd. So we are still
    backward chaining.
  • return (A,O,L)

70
Example of POP algorithm (1)
[diagram: all three blocks on the table; goal, c on b on a]

Actions:
start: ((start), ( ), ( ), ((on a t) (on b t) (on c t) (clear a) (clear b) (clear c)))
finish: ((finish), ((on c b) (on b a)), ( ), ( ))

Plan: (((start) (finish)), (((start) < (finish))), ( ))
Agenda: (((on c b), (finish)) ((on b a), (finish)))
71
Example of POP algorithm (2)
  • chose (move c t b) to add (on c b)
  • Plan: ((start) (finish) (move c t b)),
  •   (((start) < (finish)) ((move c t b) < (finish))),
  •   (((move c t b), (on c b), (finish)))
  • Agenda: ((on c t), (move c t b))
  •   ((clear c), (move c t b))
  •   ((clear b), (move c t b))
  •   ((on b a), (finish))

72
Example of POP algorithm (3)
  • start already adds (on c t), (clear c), (clear b)
  • Plan: ((start) (finish) (move c t b)),
  •   (((start) < (finish)) ((start) < (move c t b) < (finish))),
  •   (((move c t b), (on c b), (finish))
  •   ((start), (on c t), (move c t b))
  •   ((start), (clear c), (move c t b))
  •   ((start), (clear b), (move c t b)))
  • Agenda: ((on b a), (finish))

73
Example of POP algorithm (4)
  • chose (move b t a) to add (on b a)
  • Plan: ((start) (finish) (move c t b) (move b t a)),
  •   (((start) < (finish)) ((start) < (move c t b) < (finish))
  •   ((move b t a) < (finish))),
  •   (((move c t b), (on c b), (finish))
  •   ((start), (on c t), (move c t b))
  •   ((start), (clear c), (move c t b))
  •   ((start), (clear b), (move c t b))
  •   ((move b t a), (on b a), (finish)))
  • Agenda: ((on b t), (move b t a))
  •   ((clear b), (move b t a))
  •   ((clear a), (move b t a))

74
Example of POP algorithm (5)
  • start already adds (on b t), (clear b) and (clear a)
  • Plan: ((start) (finish) (move c t b) (move b t a)),
  •   (((start) < (finish)) ((start) < (move c t b) < (finish))
  •   ((start) < (move b t a) < (finish))),
  •   (((move c t b), (on c b), (finish))
  •   ((start), (on c t), (move c t b))
  •   ((start), (clear c), (move c t b))
  •   ((start), (clear b), (move c t b))
  •   ((move b t a), (on b a), (finish))
  •   ((start), (on b t), (move b t a))
  •   ((start), (clear b), (move b t a))
  •   ((start), (clear a), (move b t a)))
  • Agenda: ( )

75
Example of POP algorithm (6)
  • The plan and agenda are as on the previous slide,
    but note the threat: the causal link
    ((start), (clear b), (move b t a)) is threatened
    by (move c t b).
76
Example of POP algorithm (7)
  • BUT
  • (clear b) is in the delete-list for (move c t b)
  • the timing constraints are such that (move c t b)
    could come before (move b t a)
  • so (move c t b) threatens (clear b)
  • we can't demote (move c t b) (Aadd is (start))
  • we must promote it: ((move b t a) < (move c t b))

77
Example of POP algorithm (8)
  • Plan: ((start) (finish) (move c t b) (move b t a)),
  •   (((start) < (finish)) ((start) < (move c t b) < (finish))
  •   ((start) < (move b t a) < (finish))
  •   ((move b t a) < (move c t b))),
  •   (((move c t b), (on c b), (finish))
  •   ((start), (on c t), (move c t b))
  •   ((start), (clear c), (move c t b))
  •   ((start), (clear b), (move c t b))
  •   ((move b t a), (on b a), (finish))
  •   ((start), (on b t), (move b t a))
  •   ((start), (clear b), (move b t a))
  •   ((start), (clear a), (move b t a)))
  • Linearisation: (start), (move b t a), (move c t b), (finish)

78
Example of POP algorithm (9)
[diagram: the final causal-link graph. (start) supplies (on a t) (on b t) (on c t) (clear a) (clear b) (clear c); (move b t a) uses (on b t) (clear a) (clear b); (move c t b) uses (on c t) (clear b) (clear c) but deletes (clear b), hence the "not (clear b)" threat; (finish) needs (on c b) and (on b a)]
79
Exercise
  • Work though the Sussman anomaly problem with POP

[diagram: the Sussman anomaly initial and goal states]