1
Intelligent Systems III
Lecture 9: Logical Agents
  • Russell and Norvig,
  • Artificial Intelligence: A Modern Approach
  • Chapter 7

2
Material
  • Slides on wiki
  • Chapter 7-10 of AIMA
  • Chapter 7 available
  • http://aima.cs.berkeley.edu/

3
Outline
  • Inference rules and theorem proving
  • Forward chaining, backward chaining
  • Soundness and completeness proofs
  • Knowledge-based agents
  • Wumpus world
  • Model-checking and entailment

4
Forward and backward chaining
  • Horn Form (restricted)
  • KB = conjunction of Horn clauses
  • Horn clause =
  • proposition symbol, or
  • (conjunction of symbols) ⇒ symbol
  • E.g., C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B)
  • Modus Ponens (for Horn Form): complete for Horn
    KBs
  • From α1, …, αn and α1 ∧ … ∧ αn ⇒ β,
  • infer β
  • Can be used with forward chaining or backward
    chaining.
  • These algorithms are very natural and run in
    linear time
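As a concrete (purely illustrative) encoding, a Horn clause can be held as a (premise-set, conclusion) pair, and one Modus Ponens step fires when every premise is already known; a minimal sketch, not AIMA's code:

```python
def modus_ponens(clause, known):
    """Fire a Horn clause: if every premise is already known,
    return its conclusion, else None."""
    premises, conclusion = clause
    return conclusion if premises <= known else None

# The slide's KB: fact C, plus rules B => A and C ∧ D => B
# (facts are clauses with an empty premise set)
kb = [(set(), "C"), ({"B"}, "A"), ({"C", "D"}, "B")]
known = {"C", "D"}   # assumption for illustration: D has also been told
print(modus_ponens(({"C", "D"}, "B"), known))  # B
print(modus_ponens(({"B"}, "A"), known))       # None (B not yet derived)
```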

5
Forward chaining
  • Idea: fire any rule whose premises are satisfied
    in the KB,
  • add its conclusion to the KB, until the query is found

6
Forward chaining algorithm
  • Forward chaining is sound and complete for Horn KB
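The algorithm can be sketched in Python as follows (a hedged sketch of the PL-FC-ENTAILS? idea; the clause set is the standard AIMA example, used here because the example figures are not in this transcript):

```python
from collections import deque

def fc_entails(clauses, facts, query):
    """Forward chaining for a Horn KB: fire any rule whose premises
    are all known, add its conclusion, until the query appears."""
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1          # one fewer premise still unknown
                if count[i] == 0:
                    agenda.append(concl)
    return False

# AIMA-style Horn KB: P => Q, L^M => P, B^L => M, A^P => L, A^B => L
clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(fc_entails(clauses, ["A", "B"], "Q"))  # True
```

Each symbol is processed once and each clause premise decremented once, which is where the linear running time comes from.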

7
Forward chaining example
  (Slides 7-14 step through the proof-graph figure one inference at a time; the images are not included in this transcript.)
15
Exercise
  • Prove soundness and completeness of forward
    chaining

16
Proof of completeness
  • FC derives every atomic sentence that is entailed
    by the KB
  • FC reaches a fixed point where no new atomic
    sentences are derived
  • Consider the final state as a model m, assigning
    true/false to symbols
  • Every clause in the original KB is true in m
  • Proof: if some clause α1 ∧ … ∧ αk ⇒ β were false
    in m, all its premises would be true and β false
    in m, so the rule could still fire, contradicting
    the fixed point
  • Hence m is a model of KB
  • If KB ⊨ q, then q is true in every model of KB,
    including m

17
Backward chaining
  • Idea: work backwards from the query q
  • To prove q by BC:
  • check if q is known already, or
  • prove by BC all premises of some rule concluding
    q
  • Avoid loops: check if new subgoal is already on
    the goal stack
  • Avoid repeated work: check if new subgoal
  • has already been proved true, or
  • has already failed
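The same Horn KB can be queried goal-first; a minimal sketch (loop avoidance via the goal stack; the cache of already-proved and already-failed subgoals is omitted for brevity):

```python
def bc_entails(clauses, facts, query, stack=frozenset()):
    """Backward chaining: prove query by proving all premises of
    some rule concluding it; open goals on `stack` block loops."""
    if query in facts:
        return True
    if query in stack:        # already a pending subgoal: avoid looping
        return False
    for prem, concl in clauses:
        if concl == query and all(
                bc_entails(clauses, facts, p, stack | {query})
                for p in prem):
            return True
    return False

# Same AIMA-style Horn KB as in the forward-chaining sketch
clauses = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(bc_entails(clauses, {"A", "B"}, "Q"))  # True
```

Adding a memo table of subgoals that already succeeded or failed would give the "avoid repeated work" optimization from the slide.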

18
Backward chaining example
  (Slides 18-27 step through the goal-directed proof figure; the images are not included in this transcript.)
28
Forward vs. backward chaining
  • FC is data-driven, automatic, unconscious
    processing,
  • e.g., object recognition, routine decisions
  • May do lots of work that is irrelevant to the
    goal
  • BC is goal-driven, appropriate for
    problem-solving,
  • e.g., Where are my keys? How do I get into a PhD
    program?
  • Complexity of BC can be much less than linear in
    size of KB

29
Knowledge bases
  • Knowledge base = set of sentences in a formal
    language
  • Declarative approach to building an agent (or
    other system):
  • Tell it what it needs to know
  • Then it can Ask itself what to do; answers
    should follow from the KB
  • Agents can be viewed at the knowledge level
  • i.e., what they know, regardless of how
    implemented
  • Or at the implementation level
  • i.e., data structures in KB and algorithms that
    manipulate them

30
A simple knowledge-based agent
  • The agent must be able to
  • Represent states, actions, etc.
  • Incorporate new percepts
  • Update internal representations of the world
  • Deduce hidden properties of the world
  • Deduce appropriate actions
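The agent's control loop (after AIMA's KB-AGENT pseudocode) can be sketched with a toy KB; `SimpleKB`, the sentence strings, and the placeholder action policy are illustrative assumptions, not the book's code:

```python
class SimpleKB:
    """Toy sentence store: tell adds a sentence, ask checks membership.
    A real agent would do logical inference here."""
    def __init__(self):
        self.sentences = set()
    def tell(self, sentence):
        self.sentences.add(sentence)
    def ask(self, query):
        return query in self.sentences

class KBAgent:
    """Generic knowledge-based agent loop:
    TELL the percept, ASK what to do, TELL the action taken."""
    def __init__(self, kb):
        self.kb, self.t = kb, 0
    def step(self, percept):
        self.kb.tell(f"Percept({percept}, {self.t})")   # incorporate percept
        # Placeholder policy: a real ASK would deduce the best action
        action = ("Grab" if self.kb.ask(f"Percept(Glitter, {self.t})")
                  else "Forward")
        self.kb.tell(f"Action({action}, {self.t})")     # record the action
        self.t += 1
        return action

agent = KBAgent(SimpleKB())
print(agent.step("Breeze"))   # Forward
print(agent.step("Glitter"))  # Grab
```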

31
Wumpus World PEAS description
  • Performance measure
  • gold +1000, death -1000
  • -1 per step, -10 for using the arrow
  • Environment
  • Squares adjacent to wumpus are smelly
  • Squares adjacent to pit are breezy
  • Glitter iff gold is in the same square
  • Shooting kills wumpus if you are facing it
  • Shooting uses up the only arrow
  • Grabbing picks up gold if in same square
  • Releasing drops the gold in same square
  • Sensors Stench, Breeze, Glitter, Bump, Scream
  • Actuators Left turn, Right turn, Forward, Grab,
    Release, Shoot

32
Wumpus world characterization
  • Fully observable? No: only local perception
  • Deterministic? Yes: outcomes exactly specified
  • Episodic? No: sequential at the level of actions
  • Static? Yes: wumpus and pits do not move
  • Discrete? Yes
  • Single-agent? Yes: the wumpus is essentially a
    natural feature

33
Exploring a wumpus world
  (Slides 33-40 step through the agent's exploration figures; the images are not included in this transcript.)
41
Logic in general
  • Logics are formal languages for representing
    information such that conclusions can be drawn
  • Syntax defines the sentences in the language
  • Semantics defines the "meaning" of sentences
  • i.e., defines the truth of a sentence in a world
  • E.g., the language of arithmetic:
  • x + 2 ≥ y is a sentence; x2 + y > is not a
    sentence
  • x + 2 ≥ y is true iff the number x + 2 is no less
    than the number y
  • x + 2 ≥ y is true in a world where x = 7, y = 1
  • x + 2 ≥ y is false in a world where x = 0, y = 6

42
Entailment
  • Entailment means that one thing follows from
    another:
  • KB ⊨ α
  • Knowledge base KB entails sentence α if and only
    if α is true in all worlds where KB is true
  • E.g., the KB containing "the Dutch won" and "the
    Belgians won" entails "either the Dutch won or
    the Belgians won"
  • E.g., x + y = 4 entails 4 = x + y
  • Entailment is a relationship between sentences
    (i.e., syntax) that is based on semantics

43
Models
  • Logicians typically think in terms of models,
    which are formally structured worlds with respect
    to which truth can be evaluated
  • We say m is a model of a sentence α if α is true
    in m
  • M(α) is the set of all models of α
  • Then KB ⊨ α iff M(KB) ⊆ M(α)
  • E.g., KB = "Dutch won" and "Belgians won";
    α = "Dutch won"

44
Entailment in the wumpus world
  • Situation after detecting nothing in [1,1],
    moving right, breeze in [2,1]
  • Consider possible models for KB assuming only
    pits
  • 3 Boolean choices ⇒ 8 possible models

45
Wumpus models
46
Wumpus models
  • KB = wumpus-world rules + observations

47
Wumpus models
  • KB = wumpus-world rules + observations
  • α1 = "[1,2] is safe"; KB ⊨ α1, proved by model
    checking

48
Wumpus models
  • KB = wumpus-world rules + observations

49
Wumpus models
  • KB = wumpus-world rules + observations
  • α2 = "[2,2] is safe"; KB ⊭ α2

50
Wumpus world sentences
  • Let Pi,j be true if there is a pit in [i, j].
  • Let Bi,j be true if there is a breeze in [i, j].
  • ¬P1,1
  • ¬B1,1
  • B2,1
  • "Pits cause breezes in adjacent squares":
  • B1,1 ⇔ (P1,2 ∨ P2,1)
  • B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

51
Truth tables for inference
52
Exercise 7.3 (page 236)
  • Suppose the agent has progressed to the point
    shown in Figure 7.4(a) (slide 36), having
    perceived nothing in [1,1], a breeze in [2,1],
    and a stench in [1,2], and is now concerned with
    the contents of [1,3], [2,2], and [3,1]. Each of
    these can contain a pit, and at most one can
    contain a wumpus.
  • Following the example of Figure 7.5, construct
    the set of possible worlds. (You should find 32
    of them.) Mark the worlds in which the KB is true
    and those in which each of the following
    sentences is true:
  • α2 = "There is no pit in [2,2]."
  • α3 = "There is a wumpus in [1,3]."
  • Hence show that KB ⊨ α2 and KB ⊨ α3.

53
Inference by enumeration
  • Depth-first enumeration of all models is sound
    and complete
  • For n symbols, time complexity is O(2^n), space
    complexity is O(n)
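A sketch of such depth-first enumeration (TT-ENTAILS?-style) on the wumpus sentences from slide 50, with sentences encoded as Python predicates over a model; the encoding and names are illustrative:

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Check KB |= alpha by enumerating all 2^n models: alpha must
    hold in every model that satisfies the KB (sound and complete)."""
    return all(alpha(m)
               for vals in product([True, False], repeat=len(symbols))
               for m in [dict(zip(symbols, vals))]
               if kb(m))

# Rules: no pit in [1,1]; breezes iff adjacent pits.
# Observations: no breeze in [1,1], breeze in [2,1].
def kb(m):
    return (not m["P11"]
            and m["B11"] == (m["P12"] or m["P21"])
            and m["B21"] == (m["P11"] or m["P22"] or m["P31"])
            and not m["B11"] and m["B21"])

alpha1 = lambda m: not m["P12"]   # "[1,2] is safe"
alpha2 = lambda m: not m["P22"]   # "[2,2] is safe"
syms = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]
print(tt_entails(kb, alpha1, syms))  # True:  KB |= alpha1
print(tt_entails(kb, alpha2, syms))  # False: KB does not entail alpha2
```

This reproduces the model-checking result of slides 47 and 49: [1,2] is provably safe, [2,2] is not.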

54
Proof methods
  • Proof methods divide into (roughly) two kinds
  • Application of inference rules
  • Legitimate (sound) generation of new sentences
    from old
  • Proof = a sequence of inference rule
    applications; can use inference rules as
    operators in a search algorithm
  • Typically require transformation of sentences
    into a normal form
  • Model checking
  • truth table enumeration (always exponential in n)
  • improved backtracking, e.g., Davis-Putnam-Logemann-Loveland
    (DPLL)
  • heuristic search in model space (sound but
    incomplete)
  • e.g., min-conflicts-like hill-climbing
    algorithms
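The DPLL idea mentioned above can be sketched compactly (unit propagation plus splitting; the pure-literal heuristic is omitted), with literals as (symbol, polarity) pairs; an illustrative sketch, not the full algorithm:

```python
def dpll(clauses, model=None):
    """Return a satisfying model (dict) or None if unsatisfiable."""
    model = dict(model or {})
    simplified = []
    for c in clauses:
        # drop clauses already satisfied by the partial model
        if any(model.get(s) == pos for s, pos in c):
            continue
        # keep only literals over still-unassigned symbols
        rest = [(s, pos) for s, pos in c if s not in model]
        if not rest:
            return None            # empty clause: contradiction
        simplified.append(rest)
    if not simplified:
        return model               # every clause satisfied
    for c in simplified:           # unit clause rule
        if len(c) == 1:
            s, pos = c[0]
            return dpll(simplified, {**model, s: pos})
    s, _ = simplified[0][0]        # split on an unassigned symbol
    return (dpll(simplified, {**model, s: True})
            or dpll(simplified, {**model, s: False}))

# Clauses of B11 <=> (P12 v P21) in CNF (see the CNF conversion slide)
cnf = [[("B11", False), ("P12", True), ("P21", True)],
       [("P12", False), ("B11", True)],
       [("P21", False), ("B11", True)]]
print(dpll(cnf) is not None)                                    # True
print(dpll(cnf + [[("B11", False)], [("P12", True)]]) is None)  # True
```

The second query adds the units ¬B11 and P12, which unit propagation alone refutes.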

55
Inference-based agents in the wumpus world
  • A wumpus-world agent using propositional logic:
  • ¬P1,1
  • ¬W1,1
  • Bx,y ⇔ (Px,y+1 ∨ Px,y-1 ∨ Px+1,y ∨ Px-1,y)
  • Sx,y ⇔ (Wx,y+1 ∨ Wx,y-1 ∨ Wx+1,y ∨ Wx-1,y)
  • W1,1 ∨ W1,2 ∨ … ∨ W4,4
  • ¬W1,1 ∨ ¬W1,2
  • ¬W1,1 ∨ ¬W1,3
  • ⇒ 64 distinct proposition symbols, 155 sentences

57
Expressiveness limitation of propositional logic
  • KB contains "physics" sentences for every single
    square
  • For every time t and every location [x, y]:
  • L^t_x,y ∧ FacingRight^t ∧ Forward^t ⇒ L^t+1_x+1,y
  • Rapid proliferation of clauses
58
Summary
  • Resolution is complete for propositional logic
  • Forward and backward chaining are linear-time and
    complete for Horn clauses
  • Logical agents apply inference to a knowledge
    base to derive new information and make decisions
  • Wumpus world requires the ability to represent
    partial and negated information, reason by cases,
    etc.
  • Propositional logic lacks expressive power

59
Exercise
  • Show the following using resolution:
  • KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
  • α = ¬P1,2
  • KB ⊨ α

60
Resolution algorithm
  • Proof by contradiction: show KB ∧ ¬α
    unsatisfiable

61
Conversion to CNF
  • B1,1 ⇔ (P1,2 ∨ P2,1)
  • 1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧
    (β ⇒ α):
  • (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
  • 2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
  • (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
  • 3. Move ¬ inwards using de Morgan's rules and
    double-negation:
  • (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
  • 4. Apply the distributivity law (∨ over ∧) and
    flatten:
  • (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨
    B1,1)

62
Resolution example
  • KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
  • α = ¬P1,2
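A sketch of the saturation-style resolution procedure applied to this example, using the CNF clauses from slide 61 (literals are (symbol, polarity) pairs; the encoding is illustrative):

```python
from itertools import combinations

def resolve(ci, cj):
    """All resolvents of two clauses: cancel one complementary pair."""
    out = []
    for s, pos in ci:
        if (s, not pos) in cj:
            out.append((ci - {(s, pos)}) | (cj - {(s, not pos)}))
    return out

def pl_resolution(kb_clauses, neg_alpha_clauses):
    """Proof by contradiction: saturate KB plus the negated query under
    resolution; deriving the empty clause means KB |= alpha."""
    clauses = {frozenset(c) for c in kb_clauses + neg_alpha_clauses}
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolve(ci, cj):
                if not r:
                    return True      # empty clause derived
                new.add(frozenset(r))
        if new <= clauses:
            return False             # saturated without contradiction
        clauses |= new

# KB = (B1,1 <=> (P1,2 v P2,1)) ^ !B1,1, already in CNF (slide 61)
kb = [{("B11", False), ("P12", True), ("P21", True)},
      {("P12", False), ("B11", True)},
      {("P21", False), ("B11", True)},
      {("B11", False)}]
# alpha = !P1,2, so the negated query is the unit clause P1,2
print(pl_resolution(kb, [{("P12", True)}]))  # True: KB |= !P1,2
```

The refutation is short: P1,2 resolves with ¬P1,2 ∨ B1,1 to give B1,1, which resolves with ¬B1,1 to the empty clause.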