Title: COM1070: Introduction to Artificial Intelligence: week 6

Description: COM1070: Introduction to Artificial Intelligence: week 6.
Yorick Wilks, Computer Science Department, University of Sheffield.
www.dcs.shef.ac.uk/~yorick

Transcript and Presenter's Notes

1
COM1070 Introduction to Artificial Intelligence:
week 6. Yorick Wilks, Computer Science
Department, University of Sheffield.
www.dcs.shef.ac.uk/~yorick
2
Two traditions of knowledge representation based
on meaning representations
  • First, the Expert Systems tradition, which we can
    link to the meaning and knowledge representations
    in SHRDLU, Winograd's (1972) MIT system
  • Secondly, the tradition of scripts and frames as
    representational objects, which can be linked
    back to Conceptual Dependency theory (Schank,
    Stanford, 1971)

3
Expert Systems
  • SHRDLU, and the blocks microworld.
    Domain-specific knowledge (as opposed to
    domain-general knowledge).
  • Understood a substantial subset of English by
    representing and reasoning about a very
    restricted domain: the MIT Blocksworld.
  • The program was too complex to be extended to
    real-world scenarios.
  • Expert systems also relied on depth of knowledge
    of a constrained domain.
  • But commercially exploitable. Real
    applications: medical domains, computer layouts,
    etc.

4
(No Transcript)
5
  • SHRDLU: Terry Winograd (1972) at MIT.
  • Natural language understanding program.
  • Unlike ELIZA and PARRY, it knows about its
    environment, and can reason, plan, and learn.
  • Simulation of a robot arm and an environment of
    coloured play blocks.
  • Communication via keyboard.
  • You saw it in the video two weeks ago--it knows
    more than PARRY but would be a far more boring
    desert island companion!

6
Typical and untypical SHRDLU
  • Pick up the green block and put it in the box
  • I HAVE DONE SO
  • Pick up the small red block and put it in the box
  • BY SMALL I ASSUME YOU MEAN THE RED BLOCK ON THE
    BLUE CUBE. I HAVE DONE SO.
  • Put the red block on the cube in the box.

7
  • Pick up the steeple.
  • SORRY, I DON'T KNOW THE WORD STEEPLE.
  • A steeple is a stack that contains two green
    cubes and a pyramid.
  • I UNDERSTAND
  • Trade you the steeple for three red cubes.
  • SORRY, I DON'T KNOW THE WORD FREE.
  • Sorry, I thought you were smarter than you are.
  • SORRY, I DON'T KNOW THE WORD SORRY.

8
Does the little block that the hatched pyramid's
support supports support anything black?
9
Winograd's method was based on logic and the notion
that words point to sets of things in the world:
meaning is extracted by looking at what is in common
between sets of objects.
Label  Type     Size   x-position  y-position
A      Box      Large  3           4
B      Box      Small  2           2
C      Ball     Large  4           2
D      Ball     Small  3           3
E      Pyramid  Large  2           3
F      Pyramid  Small  2           6
10
  • Database for a toy world.
  • Pick up the ball to the right of the small box
  • to the right of the small box: the set of objects
    that have an x-position greater than that of the
    small box at (c,d), i.e. with x > c
  • Sentence scanned for a known instruction (pick up)
  • Then look for an object that satisfies all the
    constraints: a ball at (p,q) where p > c
  • If ambiguous, preprogrammed to ask.
  • Retrieve the x,y coordinates and grasp the object
    (sketched in code below).
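A minimal sketch of this lookup in Python (rather than SHRDLU's actual
Micro-Planner); the data is the table above, and the function names are
illustrative assumptions, not Winograd's code:

    # Toy-world database from the table above: (label, type, size, x, y)
    WORLD = [
        ("A", "box",     "large", 3, 4),
        ("B", "box",     "small", 2, 2),
        ("C", "ball",    "large", 4, 2),
        ("D", "ball",    "small", 3, 3),
        ("E", "pyramid", "large", 2, 3),
        ("F", "pyramid", "small", 2, 6),
    ]

    def find(kind, size=None, right_of_x=None):
        """Objects of the given kind that satisfy every stated constraint."""
        out = []
        for label, k, s, x, y in WORLD:
            if k != kind:
                continue
            if size is not None and s != size:
                continue
            if right_of_x is not None and x <= right_of_x:
                continue
            out.append((label, x, y))
        return out

    # "Pick up the ball to the right of the small box"
    _, c, d = find("box", size="small")[0]   # the small box is at (c, d)
    balls = find("ball", right_of_x=c)       # balls with x > c
    if len(balls) == 1:
        _, p, q = balls[0]                   # retrieve coordinates and grasp
    else:
        print("Which ball do you mean?")     # ambiguous: preprogrammed to ask
                                             # (here both balls have x > 2)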

11
SEMANTICS IN WINOGRAD
  • Object semantic structure for A RED CUBE
    (Micro-Planner code):
  • (GOAL (IS X BLOCK)
          (EQDIM X)
          (COLOR X RED))
  • (BLOCK MANIP PHYSOB THING) -- features


12
  • TRUTH CONDITIONS (on the other screen) are
    questions to be asked after finding a big NP
  • Below is what SHRDLU knows about the blocks at a
    given moment.

Pyramid 2: is a block, is pyramidal, is green
Block 7:   is a block, is blue
Pyramid 2 is above Block 7

These facts make all the truth conditions true
(if X is Pyramid 2 and Y is Block 7) -- SO THE
BIG NP fits this world! The little NP wouldn't.
13
  • RSSs Blocks used in Parsing.
  • Put pyramid
  • Blocks tries to prove either
  • (ON BLOCK PYRAMID)
  • Or
  • (IN BOX BLOCK)

14
Put the green pyramid on the block in the box
[Diagram: two worlds for the two readings. World I:
prove (IN BOX BLOCK); World II: prove (ON BLOCK
PYRAMID). G marks the green pyramid in each world.]
15
  • Actions (verbs which are commands, not expressed
    as RSSs)
  • grasp -> no (CMEANS etc.), just
    (VB ((TRANS (GRASP))))
  • grasp ->
    (DEFTHEOREM TC-GRASP
      (THCONSE (X Y) (GRASP ?X)
        (THGOAL (MANIP ?X))
        (THCOND
          ((THGOAL (GRASPING ?X)) ...)
          ...)
        (THGOAL (CLEARTOP ?X) (THUSE TC-CLEARTOP))
        ...))
  • This is the influential definition of grasp that
    carries out grasping. Note the assumption of two
    disjoint classes of verbs. All inferential
    definitions reduce to GRASP, UNGRASP, MOVETO.
16
  • Microworld Approach
  • Precursor to Expert Systems.
  • Knowledge: possible to make a distinction between
    domain-specific and domain-independent knowledge.
  • Domain-specific: expertise in a specific domain.
  • Domain-independent: more general-purpose
    knowledge.
  • Minsky supervised several students who looked at
    microworlds.
  • E.g. Daniel Bobrow's STUDENT program (1967), which
    solved algebra story problems such as:

17
  • If the number of customers Tom gets is twice the
    square of 20 percent of the number of
    advertisements he runs, and the number of
    advertisements he runs is 45, what is the number
    of customers Tom gets? (worked below)
  • Tom Evans's ANALOGY program (1968) solved
    geometric analogy problems from IQ tests.
  • Blocks world: the most famous microworld.
  • A set of solid blocks placed on a tabletop. The
    task is to rearrange the blocks in a certain way,
    using a robot hand.
  • SHRDLU (limited domain knowledge) shows
    understanding through answering questions and
    carrying out actions.
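What STUDENT had to recover from that sentence is a single equation. A
worked sketch in Python (the variable names are mine, not STUDENT's):

    ads = 45                           # "the number of advertisements he runs is 45"
    customers = 2 * (0.20 * ads) ** 2  # "twice the square of 20 percent of ..."
    print(customers)                   # 162.0: 20% of 45 is 9, 9 squared is 81,
                                       # and twice 81 is 162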

18
  • Impressive: the language used is like English,
    the conversation is interactive, and the robot
    understands in the sense of doing what is
    required of it.
  • But the SHRDLU approach relies on dealing with
    small sets.
  • Supposing it could deal with larger sets: to
    understand a large grey mammal it would have to
    find the set of all grey things, all large
    things, and all mammals, and then their common
    element (sketched below). But humans probably
    immediately think of elephants.
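A sketch of that set-intersection method, with invented toy sets (the
slide's point is that realistic versions of these sets would be huge):

    large_things = {"elephant", "whale", "oak"}
    grey_things  = {"elephant", "whale", "mouse"}
    mammals      = {"elephant", "whale", "mouse", "dog"}

    # "a large grey mammal" = whatever the three sets have in common
    print(large_things & grey_things & mammals)   # {'elephant', 'whale'}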

19
  • Issues of Knowledge Representation
  • Clearly important to have stored knowledge.
  • Problem with ELIZA and PARRY: no knowledge.
  • Main formalisms for knowledge representation:
  • Predicate logic
  • Frames and semantic networks
  • Rule-based systems

20
  • Lisp and Prolog
  • Functions vs. predicates/truth functions
    (sketched below).
  • Contrast
  • mother(john) -> mary
  • mother(john, mary)
  • which is true or false
  • Contrast
  • CONVENTIONAL LANGUAGE: (A + B) / (A - B)
  • LISP: (QUOTIENT (SUM A B) (DIFFERENCE A B)) ->
    ANSWER
  • PROLOG:
  • sum(A, B, T1), difference(A, B, T2), quotient(T1,
    T2, T3)
  • which is true or false for some T1, T2 and a
    (returned) T3
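The same contrast in Python, as an illustrative sketch (the mother
example is the slide's; the data structures are assumptions of mine):

    # Function style (as in Lisp): mother(john) evaluates to a value
    mother_of = {"john": "mary"}
    def mother(person):
        return mother_of[person]
    print(mother("john"))                   # -> mary

    # Relation style (as in Prolog): mother(john, mary) is true or false
    mother_rel = {("john", "mary")}
    print(("john", "mary") in mother_rel)   # -> True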

21
  • SHRDLU was never extended and became a key
    example of a type of expert system that was also
    toy--very few words or rules expressing its
    information.
  • Disillusionment:
  • Methods that worked for demonstrations on simple
    problems failed when tried on wider selections,
    or on more difficult problems.
  • Lighthill Report (Lighthill, 1973): criticisms of
    Artificial Intelligence.

22
  • The General Problem Solver (Newell, Shaw and
    Simon, CMU, 1970s)
  • GPS was designed to be a general problem solver:
    it does not contain knowledge of problem
    domains--unlike SHRDLU. Performs means-end
    analysis, guided by heuristics about which
    subgoal should be achieved first. But many
    problems don't lend themselves to means-end
    analysis. Also can't extend the method of giving
    programs heuristics to larger problem domains.
  • Its knowledge is stored in rules, plus an
    interpreter for running them: X -> Y Z, or
  • IF X THEN DO Y AND Z (interpreter sketched below)
  • like the rules of grammar and logic.
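A minimal sketch of the rules-plus-interpreter idea: forward chaining
over IF X THEN DO Y AND Z rules. The rules here are invented for
illustration and are not GPS's own:

    RULES = [
        ({"at_home", "hungry"}, {"go_to_kitchen"}),   # IF X THEN DO Y
        ({"go_to_kitchen"}, {"make_food", "eat"}),    # IF X THEN DO Y AND Z
    ]

    def run(facts):
        """Fire every rule whose condition holds, until nothing new is added."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for condition, actions in RULES:
                if condition <= facts and not actions <= facts:
                    facts |= actions      # the rule's actions become new facts
                    changed = True
        return facts

    print(run({"at_home", "hungry"}))   # adds go_to_kitchen, make_food, eat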

23
Knowledge Representation, Slot and Filler
structures, and Scripts
  • Why are we capable of performing so many
    difficult tasks? One answer is the knowledge
    that we have built up of the world.
  • In order to have intelligent computers, or
    robots, they will need to be able to understand
    the world, and to understand it, they must have
    knowledge of it.
  • Symbolic AI: emphasis on giving computers
    knowledge about the world. Raises the question of
    how this knowledge will be represented.

24
KNOWLEDGE AND SYNTACTIC STRUCTURES
  • The departure of Mr. Whitelaw from N. Ireland at
    this time has amazed Irish political leaders.
    While there was no official comment in Dublin, it
    would appear that the Government was not informed
    in advance of MR. WHITELAW'S MOVE.

25
The man drove down the road in a car.
((The man) (drove (down the road) (in a car)))
((The man) (drove (down (the road (in a car)))))
26
  • Symbolic AI emphasis on particular kind of
    knowledge. Can be contrasted to different view of
    knowledge evident in Adaptive Behaviour approach,
    and in Neural Computing.
  • Expert system: knowledge represented in the form
    of if-then procedural rules. Clearly we have
    knowledge that is not represented in this manner.
    (Dreyfus's criticisms of the whole expertise view
    of the world in What Computers Can't Do.)
  • We have other kinds of knowledge, including
    vague knowledge
  • We have acquired it in certain ways

27
  • What is required of a knowledge representation
    language?
  • Representational adequacy: It should allow you to
    represent all the knowledge you need to reason
    with
  • Inferential adequacy: It should allow new
    knowledge to be inferred from a basic set of
    facts
  • Inferential efficiency: Inferences should be made
    efficiently
  • Clear syntax and semantics: We should know what
    the allowable expressions of the language are and
    what they mean
  • Naturalness: The language should be reasonably
    natural and easy to use

28
  • Useful form of inference: property inheritance
  • Semantic Networks, and Frames support property
    inheritance. (Slot and Filler structures are a
    more specific form of frames).
  • Semantic Networks and Frames use different
    notations but are effectively the same.
  • They provide a simple and intuitive way of
    representing facts about objects, and essentially
    semantic networks are just diagrammatic forms of
    frames.

29
[Semantic network diagram: reptile and mammal are
subclasses of animal; animal has-part head; elephant
is a subclass of mammal, with size large and colour
grey; clyde and nellie are instances of elephant;
nellie likes apples.]
30
(No Transcript)
31
Mammal:   subclass: Animal;  has_part: yes;  furry: yes
Elephant: subclass: Mammal;  has_trunk: yes;  colour: grey;
          size: large;  furry: no
Clyde:    instance: Elephant;  colour: pink;  owner: Fred
Nellie:   instance: Elephant;  size: small
32
  • Can represent subclass and instance relationships
    (both sometimes called ISA)
  • We can represent the same idea as a frame
  • Properties (e.g. colour and size) can be referred
    to as slots, and slot values (e.g. grey, large) as
    slot fillers.
  • Objects can inherit all properties of a parent
    class (therefore Nellie is grey and large).
  • But some properties are only typical (usually
    called defaults, here starred), and can be
    overridden (see the sketch below).
  • For example, an animal is typically furry, but this
    is not so for an elephant.
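A minimal frame system with inheritance and starred defaults, assuming
the slots from the frames above (the lookup strategy is one simple
choice among several):

    # "*" marks a default (typical) value; a plain value lower down overrides it
    FRAMES = {
        "animal":   {"has_part": "head"},
        "mammal":   {"subclass": "animal", "*furry": "yes"},
        "elephant": {"subclass": "mammal", "*colour": "grey", "*size": "large",
                     "furry": "no"},               # overrides the mammal default
        "nellie":   {"instance": "elephant", "size": "small"},
    }

    def get(frame, slot):
        """Climb instance/subclass links; the first plain or default value wins."""
        while frame is not None:
            f = FRAMES[frame]
            if slot in f:
                return f[slot]
            if "*" + slot in f:
                return f["*" + slot]
            frame = f.get("instance") or f.get("subclass")
        return None

    print(get("nellie", "colour"))   # grey: inherited default from elephant
    print(get("nellie", "size"))     # small: own value overrides the default
    print(get("nellie", "furry"))    # no: elephant overrides mammal's default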

33
  • The situation can be complicated by multiple
    inheritance, where an object or class may have
    more than one parent class. This may result in
    conflict: for example, if Nellie is both an
    elephant and a circus animal, from elephant we
    would expect Nellie's habitat to be the jungle,
    but from circus animal we would expect it to be a
    tent. Could set a further precedence order to
    resolve this, or might need a further class for
    Circus-elephant.

34
Nixon diamond inheritance!
  • Nixon was a Quaker and a Republican
  • Quakers are usually pacifists
  • Republicans are usually not pacifists
  • What to inherit for Nixon?
  • He wasn't!

[Diagram: the Nixon diamond. nixon is an instance of
both quakers and republicans; quakers link to
pacifists, republicans to not-pacifists. Which does
nixon inherit?]
35
Semantic networks give transitive
inference--but...
  • Tweety is an elephant, an elephant is a mammal:
    so Tweety is a mammal.
  • The US President is elected every 4 years, Bush
    is US President: so Bush is elected every 4
    years!!!!
  • My car is a Ford, Ford is a car company: so my
    car is a car company!!!! (see the sketch below)
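The trap is treating every link as transitive. In a sketch (relations
invented for illustration), chains of one instance-of link plus
subclass-of links are sound, but a fact attached to a class is not a
property of each instance:

    SUBCLASS = {"elephant": "mammal", "mammal": "animal"}
    INSTANCE = {"tweety": "elephant"}

    def isa(x, cls):
        """Sound inference: one instance-of link, then subclass-of links."""
        cur = INSTANCE.get(x, x)
        while cur is not None:
            if cur == cls:
                return True
            cur = SUBCLASS.get(cur)
        return False

    print(isa("tweety", "mammal"))   # True
    # "Elected every 4 years" describes the office of US President (the
    # class), not each holder, so it must not be inherited by Bush; likewise
    # "is a car company" describes Ford the maker, not my car.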

36
  • Slot and Filler structures
  • Semantic networks (which can be written as
    frames) are very general systems. They can be
    seen as examples of slot and filler structures,
    but it is possible to have slot and filler
    structures which embody specific notions of what
    types of objects and relations are permitted.
  • Conceptual dependency and scripts are examples of
    strong slot-and-filler structures.

37
  • Conceptual dependency (CD) slot and filler
    structures used to represent the kind of
    knowledge about events that is usually conveyed in
    natural language sentences.
  • Goal is to represent knowledge so as to
  • Facilitate drawing inferences from sentences
  • Be independent of the language in which the
    sentences were originally stated.

38
  • Conceptual Dependency claim
  • For any two sentences that are identical in
    meaning, regardless of language, there should be
    only one representation.
  • Any information in a sentence that is implicit
    must be made explicit in the representation of
    the meaning of that sentence.

39
Schank CD diagrams
  • John <=> INGEST <--D-- body
                           food
40
  • Conceptual Dependency: 11 primitive acts.
  • ATRANS: Transfer of an abstract relationship
    (e.g. give)
  • PTRANS: Transfer of the physical location of an
    object (e.g. go)
  • PROPEL: Application of physical force to an
    object (e.g. push)
  • MOVE: Movement of a body part by its owner (e.g.
    kick)
  • GRASP: Grasping of an object by an actor (e.g.
    clutch)
  • INGEST: Ingestion of an object by an animal (e.g.
    eat)

41
  • EXPEL: Expulsion of something from the body of an
    animal (e.g. cry)
  • MTRANS: Transfer of mental information (e.g.
    tell)
  • MBUILD: Building new information out of old (e.g.
    decide)
  • SPEAK: Production of sounds (e.g. say)
  • ATTEND: Focusing of a sense organ towards a
    stimulus (e.g. listen)

42
  • 4 primitive conceptual categories to build
    dependency structures:
  • ACTs: Actions
  • PPs: Objects (picture producers)
  • AAs: Modifiers of actions (action aiders)
  • PAs: Modifiers of PPs (picture aiders)
  • Dependencies among conceptualisations correspond
    to semantic relations among underlying concepts.

43
  • Rule 1 describes the relationship between an
    actor and the event he or she causes. This is a
    two-way dependency since neither actor nor event
    can be considered primary. The letter p above the
    dependency link indicates past tense.
  • Rule 2 describes the relationship between a PP
    and a PA that is being asserted to describe it.
    Many state descriptions, such as height, are
    represented in CD as numeric scales.
  • Rule 3 describes the relationship between two
    PPs, one of which belongs to the set defined by
    the other.
  • Rule 4 describes the relationship between a PP
    and an attribute that has already been predicated
    of it. The direction of the arrow is toward the
    PP being described.
  • Rule 5 describes the relationship between two
    PPs, one of which provides a particular kind of
    information about the other. The three most
    common types of information to be provided in
    this way are possession (shown as POSS-BY),
    location (shown as LOC), and physical containment
    (shown as CONT). The direction of the arrow is
    again toward the concept being described.

44
(No Transcript)
45
  • Rule 6 describes the relationship between an ACT
    and the PP that is the object of that ACT. The
    direction of the arrow is toward the ACT since
    the context of the specific ACT determines the
    meaning of the object relation.
  • Rule 7 describes the relationship between an ACT
    and the source and the recipient of the ACT.
  • Rule 8 describes the relationship between an ACT
    and the instrument with which it is performed.
    The instrument must always be a full
    conceptualisation (i.e. it must contain an ACT),
    not just a single physical object.
  • Rule 9 describes the relationship between an ACT
    and its physical source and destination.
  • Rule 10 describes the relationship between a PP
    and a state in which it started and another in
    which it ended.
  • Rule 11 describes the relationship between one
    conceptualisation and another that causes it.
    Notice that the arrows indicate dependency of one
    conceptualisation on another and so point in the
    opposite direction of the implication arrows. The
    two forms of the rule describe the cause of an
    action and the cause of a state change.

46
  • Rule 12 describes the relationship between a
    conceptualisation and the time at which the event
    it describes occurred.
  • Rule 13 describes the relationship between one
    conceptualisation and another that is the time of
    the first. The example for this rule also shows
    how CD exploits a model of the human information
    processing system: see is represented as the
    transfer of information between the eyes and the
    conscious processor.
  • Rule 14 describes the relationship between a
    conceptualisation and the place at which it
    occurred.

47
  • Advantages of Conceptual Dependency primitives
  • Easier to describe the inference rules by which
    knowledge can be manipulated. Rules can be
    represented once for each primitive ACT rather
    than for all words that describe that ACT; e.g.
    Give, Take, Steal and Donate are all instances of
    ATRANS. Can make the same inferences about who has
    the object, and who once had the object (sketched
    below).
  • To construct a CD representation we make explicit
    some information that was not stated in the text;
    e.g. for John took the book from Mary, we make
    explicit the information that Mary no longer has
    the book. This might make it easier to understand
    a subsequent statement, e.g. Mary had nothing to
    read.
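A sketch of one inference rule serving a whole primitive: any ATRANS
verb (give, take, steal, donate) transfers possession. The fact
representation below is an assumption of mine, not Schank's notation:

    def atrans(obj, source, recipient, facts):
        """After an ATRANS, the recipient has the object; the source does not."""
        facts.discard(("has", source, obj))
        facts.add(("has", recipient, obj))
        return facts

    facts = {("has", "mary", "book")}
    facts = atrans("book", "mary", "john", facts)  # "John took the book from Mary"
    print(("has", "john", "book") in facts)        # True
    print(("has", "mary", "book") in facts)        # False: made explicit, which
                                                   # is why "Mary had nothing to
                                                   # read" is easy to follow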

48
  • Disadvantages of Conceptual Dependency primitives
  • Requires all knowledge to be decomposed into a
    set of primitives.
  • Not always easy: e.g. it would take two pages of
    CD forms to represent John bet Sam fifty dollars
    that the Mets would win the World Series.
  • Also an emphasis on events, not other knowledge,
    such as our knowledge about physical objects.
  • Problem of finding the right set of primitives

49
  • Scripts
  • Reference: Schank, R. & Abelson, R. (1977).
    Scripts, Plans, Goals and Understanding.
    Hillsdale, NJ: Lawrence Erlbaum.
  • SCRIPTS were predefined sequences of CD
    structures designed to capture the stereotypical
    sequence of events in a story.
  • If a story could be matched against a script, then
    inferences present in the script could be drawn
    to make implicit aspects of the story
    explicit--as a way of understanding it by machine.

50
Sample Schank story
  • John went to New York by bus. On the bus he
    talked to an old lady. When he left the bus, he
    thanked the driver. He took the subway to
    Leone's. On the subway his pocket was picked. He
    got off the train and entered Leone's. He had
    some lasagne. When the check came, he discovered
    he couldn't pay. The management told him he would
    have to wash dishes. When he left, he caught a
    bus to New Haven.

51
  • Script: a detailed description of a stereotypical
    episode unfolding through time.
  • The Leone's story uses the restaurant script, bus
    script, and subway script.
  • Can show that people have knowledge of a number
    of scripts: sequences of actions that are
    typically performed in certain situations.
  • Schank's student Wendy Lehnert programmed SAM
    (Script Applier Mechanism) to match the story and
    answer questions.
  • The key notion is separating out what you do NOT
    expect in a script for an activity.

52
  • How did John get to Leone's?
  • JOHN TOOK A BUS TO NEW YORK AND THEN HE TOOK THE
    SUBWAY TO LEONE'S.
  • Did anything unusual happen on the bus?
  • I DON'T REMEMBER ANYTHING.
  • Did anything unusual happen on the subway?
  • A THIEF PICKED JOHN'S POCKET.
  • What did John order?
  • JOHN ORDERED A LASAGNE.
  • Did anything unusual happen at the restaurant?
  • JOHN DISCOVERED THAT HE COULDN'T PAY THE CHECK SO
    HE HAD TO WASH THE DISHES.

53
  • Why didn't John have any money?
  • BECAUSE A THIEF PICKED JOHN'S POCKET.
  • How did John get back to New Haven?
  • JOHN WENT FROM LEONE'S BY SUBWAY AND THEN HE WENT
    TO NEW HAVEN BY BUS.
  • How was John able to take the bus to New Haven?
  • PROBABLY JOHN HAD A TICKET.

54
  • The restaurant script illustrates the important
    components of a script (represented in code below):
  • Entry conditions: Conditions that must, in
    general, be satisfied before the events described
    in the script can occur.
  • Result: Conditions that will in general be true
    after the events in the script have occurred.
  • Props: Slots representing objects that are
    involved in the events described in the script.
  • Roles: Slots representing people involved in the
    events. Their presence can be inferred if they
    are not mentioned explicitly; if specific people
    are mentioned they can be inserted into the
    slots.
  • Track: The specific variation on a more general
    pattern that is represented by this particular
    script.
  • Scenes: The actual sequence of events that
    occur. The events are represented in the CD
    formalism.
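A stripped-down restaurant script with those components, as one
possible data structure (the entries are illustrative, not Schank and
Abelson's full script):

    RESTAURANT_SCRIPT = {
        "entry_conditions": ["customer is hungry", "customer has money"],
        "results":          ["customer is not hungry",
                             "customer has less money"],
        "props":            ["table", "menu", "food", "check", "money"],
        "roles":            ["customer", "waiter"],
        "track":            "cafe",                 # variation on the pattern
        "scenes": ["enter", "sit down", "read menu", "order",
                   "eat", "get check", "pay", "leave"],  # CD forms in SAM
    }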

55
(No Transcript)
56
  • Implicit claim here about understanding language:
  • Not enough to know the individual meanings of the
    words, or even the meanings of individual
    sentences.
  • An important part of understanding language is
    filling in missing information.
  • This is not all there is to understanding
    language: we also have to be able to work out the
    point or message that a writer/speaker is trying
    to get across, and to understand non-literal
    speech (metaphor etc.).
  • To understand text, we have to make inferences
    about things that are not mentioned.

57
  • Scripts avoid the problem of inference explosion:
    if all possible inferences were made, it would be
    impossible to know when to stop. Instead, a
    limited set of inferences is made.
  • Example of filling in missing information (making
    script-based inferences, sketched in code below):
  • John went to a restaurant. He asked the waiter
    for coq au vin. He left a large tip.
  • Could be expanded to:
  • John went to a restaurant. He sat down at a
    table. He read a menu. He ordered coq au vin from
    the waitress. She brought it to his table. He ate
    the coq au vin. He left a large tip. He paid the
    check. He exited from the restaurant.
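A minimal sketch of that expansion: match the story's events against
the script's scene sequence and infer the unmentioned scenes in between
(the scene list is a toy assumption):

    SCENES = ["enter", "sit down", "read menu", "order", "food arrives",
              "eat", "get check", "tip", "pay", "leave"]

    def expand(story_events):
        """If every event fits the script, assume the unmentioned scenes up
        to the last mentioned one also happened (a fuller version would add
        the closing scenes too)."""
        if not all(e in SCENES for e in story_events):
            return story_events          # the story does not fit this script
        last = max(SCENES.index(e) for e in story_events)
        return SCENES[: last + 1]

    # "John went to a restaurant. He asked for coq au vin. He left a large tip."
    print(expand(["enter", "order", "tip"]))
    # -> enter, sit down, read menu, order, food arrives, eat, get check, tip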

58
  • Can also spot events that do not fit, e.g. not
    paying in a restaurant.
  • Scripts: getting at common-sense knowledge --
    knowledge that is so boring that writers don't
    include it in the texts. Scripts are an approach
    to giving such knowledge to a computer program.
  • McCarthy always claimed that the problem for AI
    was boring common-sense knowledge that a child
    learns and is not explicit. (Consider the
    instructions in the phone booth!)

59
Schank and his students at Yale created a range
of programs in the 1970s.
  • English Analysis Program (translates English
    into Conceptual Dependency).
  • Script Applier (SAM)
  • Attempts to locate inputs in script contexts.
  • Uses pattern matching to identify the script.
  • Use of human planning structures (PAM)
  • Summariser, Question-answerer, Generator