1
Some Special Purpose Knowledge Representations
  • Bob McKay
  • School of Computer Science and Engineering
  • College of Engineering
  • Seoul National University

2
Outline
  • Knowledge Representation for Knowledge Acquisition
  • Ripple-Down Rules
  • Model-Based Knowledge Acquisition: MORE
  • Case-Based Reasoning
  • Qualitative Reasoning: QSIM

3
Ripple-Down Rules
  • Based on work at UNSW (Kensington)
  • Rules with exceptions
  • Reverses the usual conflict resolution: the last matching conclusion is used
  • When the system reaches a conclusion the expert disagrees with
  • The case is generalised as far as appropriate
  • And added to the knowledge base as an exception rule
  • The system eventually consists of exceptions to exceptions to exceptions... (a minimal sketch follows)
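
A minimal sketch of this exception structure in Python (hypothetical class and case names; not the UNSW implementation):

```python
# Minimal ripple-down-rules sketch: a rule node fires if its condition holds,
# and any firing exception overrides the node's own conclusion, so the
# deepest matching rule (the last one added) supplies the answer.

class RDRNode:
    def __init__(self, condition, conclusion, exceptions=None):
        self.condition = condition          # predicate over a case (a dict)
        self.conclusion = conclusion
        self.exceptions = exceptions or []  # exception rules added later

    def evaluate(self, case):
        if not self.condition(case):
            return None
        result = self.conclusion
        for exc in self.exceptions:
            overriding = exc.evaluate(case)
            if overriding is not None:
                result = overriding         # exception's conclusion wins
        return result

# Default rule, plus one exception added after an expert disagreed on a case.
root = RDRNode(lambda c: True, "normal",
               exceptions=[RDRNode(lambda c: c["TSH"] > 10, "hypothyroid")])

print(root.evaluate({"TSH": 12}))  # -> hypothyroid (exception applies)
print(root.evaluate({"TSH": 2}))   # -> normal     (default conclusion)
```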

4
Ripple-Down Rules
5
Key features
  • Automatic rule placement and matching inference
  • The expert identifies features that distinguish the case from
  • A single past case
  • A selection of past cases
  • All seen cases
  • Stored cases are shown to the expert one by one
  • Case-by-case development while in use

6
Problem Types
  • Single Classification: Preston, Srinivasan
  • Multiple Classification: Kang, Preston
  • Configuration: Preston, Ramadan
  • Resource allocation: Richards
  • Heuristic search: Beydoun, Hoffman
  • Document management: Kang, Ho, Wobcke
  • Image processing: Singh, Mishra, Sowmya
  • Genetic Algorithms: Beckman, Hoffman
  • Information extraction: Hoffman, Kang
  • Ontology development: Cao, Martinez-Bejar
  • Planning (soccer): Finlayson
  • Translation: Hoffman

7
Viewpoint
  • No system is ever complete
  • A robust case-by-case maintenance system is required
  • It needs the domain expert (the user)
  • Build the whole system using the maintenance system
  • Does this apply to all systems?

8
Model-Based Elicitation
  • We will look at an example system, MORE
  • Model-based systems have two knowledge representations
  • The domain model
  • A representation tailored for easy expression of the domain
  • The one the expert interacts with
  • The performance model
  • A representation tailored for simple reasoning, explanation, etc.
  • The one in which the actual reasoning occurs
  • Usually some form of logic rules

9
MORE Domain
  • MORE deals with heuristic classification tasks
  • Specifically, maintenance of drilling fluid in
    systems such as oil rigs

10
MORE Models
11
MORE Domain Models
  • Symptoms
  • Items the user may observe during diagnosis and seek to explain
  • Attributes
  • Serve to further discriminate symptoms
  • E.g. a rapid increase or decrease in some property
  • Hypotheses
  • Events that are possible causes of symptoms
  • And therefore serve as hypotheses
  • Background Conditions
  • Conditions that make symptoms more or less likely
  • Given the occurrence of a hypothesised cause
  • Tests
  • Can be used to determine the presence or absence of background conditions
  • Test Conditions
  • Conditions that have a bearing on the accuracy of the tests

12
MORE Performance Representation
  • The MORE performance representation
  • The form the system actually reasons with
  • Maintained as production rules
  • The rules fall into three classes (a schematic encoding follows the list)
  • Diagnostic Rules
  • Heuristic mappings between symptoms and hypotheses
  • Worked out from the causal links in the domain model
  • Symptom Confidence Rules
  • Give estimates of test reliability under different background conditions
  • Hypothesis Expectancy Rules
  • Give estimates of the prior probability of hypotheses under different background conditions
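
A schematic encoding of the three rule classes (the rule contents and confidence factors below are invented for illustration, not taken from MORE):

```python
# Sketch of MORE-style production rules with confidence factors (CFs).

diagnostic_rules = [
    # Heuristic symptom -> hypothesis mappings from the causal links
    {"if": "decrease_in_density", "then": "water_influx", "cf": 0.6},
]
symptom_confidence_rules = [
    # Estimates of test reliability under background conditions
    {"test": "density_measurement", "given": "gauge_calibrated", "cf": 0.9},
]
hypothesis_expectancy_rules = [
    # Prior likelihood of hypotheses under background conditions
    {"hypothesis": "shale_contamination", "given": "drilling_in_shale", "cf": 0.7},
]

for rule in diagnostic_rules:
    print(rule["if"], "->", rule["then"], "with CF", rule["cf"])
```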

13
MORE KA Strategies
  • Differentiation
  • Seek symptoms that distinguish between hypotheses
  • for example, symptoms that have only one cause
  • Applied when MORE discovers a pair of hypotheses,
    H1 and H2, that have no differentiating symptom
  • MORE elicits a symptom from the expert, and adds
    it to the event model with appropriate links
  • Frequency Conditionalisation
  • Determine any background conditions that make a
    particular hypothesis more or less likely
  • Applied when a family of rules (i.e. rules with the same conclusion) lacks any rules with either high or low, positive or negative, confidence factors
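
The differentiation trigger above can be sketched as a check over the event model; a hypothetical fragment (the causal data is invented):

```python
# Find hypothesis pairs with identical symptom sets: no observation can
# distinguish them, so MORE would elicit a differentiating symptom.
from itertools import combinations

causes = {  # hypothesis -> symptoms it can cause (invented data)
    "water_influx": {"decrease_in_density", "increase_in_volume"},
    "shale_contamination": {"decrease_in_density", "increase_in_volume"},
    "barite_settling": {"decrease_in_density"},
}

for h1, h2 in combinations(causes, 2):
    if causes[h1] == causes[h2]:
        print(f"No differentiating symptom for {h1} vs {h2}: elicit one")
```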

14
MORE KA Strategies
  • Symptom Distinction
  • Identify special properties that indicate the
    underlying cause
  • E.g. a rapid decrease in density suggests an influx of water, rather than shale contamination
  • Applied by MORE when a family of rules contains
    no members with high positive confidence factors
  • Symptom Conditionalisation
  • Find out the conditions under which different
    symptoms might be expected to manifest
    themselves, given a particular disorder
  • Such conditions set up expectations which may
    serve to rule out hypotheses if they are not
    confirmed
  • Applied when there are no rules in a family that
    have a high negative confidence factor

15
MORE KA Strategies
  • Path Division
  • Attempt to uncover intermediate events between a
    hypothesised disorder and an expected symptom
  • which have a higher conditional probability of
    occurrence than the symptom itself
  • If you don't observe that intermediate event, you
    have stronger evidence against the hypothesis
    than just failing to observe the symptom
  • Applied when a rule family lacks any rule that
    associates a high negative confidence factor with
    failure to observe a symptom of that family's
    hypothesis
  • MORE seeks an intermediate event that is caused
    by the hypothesis
  • failure to observe it constitutes stronger
    evidence against the hypothesis

16
MORE KA Strategies
  • Path Differentiation
  • Try to split up causal pathways between disorder
    and symptom
  • As in path division
  • The motivation is to discover intermediate events
    that will allow us to differentiate between
    disorders that have similar symptoms.
  • Applied when a symptom is linked to two different
    hypotheses
  • MORE elicits from the expert an intermediate
    event that causes the symptom, and is caused by
    one hypothesis, but not the other
  • Test Differentiation
  • Determine the degree of confidence to be placed
    in test results
  • Evidence is normally the result of tests with
    varying reliability
  • Applied when a rule family lacks rules with
    either high or low positive or negative
    confidence factors
  • Will ultimately generate symptom confidence rules

17
MORE KA Strategies
  • Test Conditionalisation
  • Determine the background conditions that affect
    the reliability of tests
  • This information has a bearing on the
    significance of observations in particular cases
  • Applied when a rule family lacks rules with
    either high or low positive or negative
    confidence factors
  • Will ultimately generate symptom confidence rules

18
MORE Reanalysis
  • Confidence Factors Trigger Reanalysis
  • MORE has some expectations about the
    relationships between confidence factors
  • When its expectations are violated, MORE consults
    with the user to try to rectify the rules
  • Inference Path Lengths
  • Suppose that a disorder D leads to symptom S1,
    and S1 leads to another symptom S2
  • MORE expects that the confidence factor
    associated with the rule mapping S1 to D will be
    greater than or equal to that associated with the
    rule mapping S2 to D
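
The path-length expectation can be expressed as a simple check (the confidence factors below are invented):

```python
# If D causes S1 and S1 causes S2, MORE expects CF(S1 -> D) >= CF(S2 -> D):
# the more direct symptom should be at least as strong evidence for D.

cf = {("S1", "D"): 0.7, ("S2", "D"): 0.8}   # symptom -> hypothesis CFs

if cf[("S2", "D")] > cf[("S1", "D")]:
    print("Expectation violated: consult the expert to rectify the rules")
```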

19
MORE Reanalysis
  • Diagnostic Significance
  • Suppose that symptom S1 is only caused by D1,
    whereas S2 is caused by a number of disorders
    including D1
  • MORE expects that the confidence factor
    associating S1 with D1 is greater than that
    associating S2 with D1
  • Rule Families
  • MORE has expectations regarding the relative
    values of confidence factors associated with
    rules in the same rule family
  • For example, adding a symptom condition to a rule
    family that increases the conditional likelihood
    of the symptom should result in rules that have
    greater negative confidence factors than their
    constituent rules
  • That is, the more we anticipate a particular
    symptom, given a particular hypothesis, the
    greater our shift towards disbelief in the
    hypothesis if that symptom is not observed

20
Case-Based Reasoning
  • In KBS applications discussed previously
  • human heuristic knowledge and experience are
    compiled and transformed into a condensed
    form
  • However, human experience can be used directly
    and explicitly as well
  • A motor mechanic noticed some blue smoke coming
    out of the muffler of a car. He recalled a
    previous case where an engine problem caused the
    same kind of smoke. So he decided to check the
    engine first.
  • This is a typical example of case-based reasoning
  • A stored, similar case is retrieved to derive a
    solution for the new case

21
Case-Based Reasoning
  • A case-based reasoning system consists of three
    major components
  • A library of historical cases
  • A means of using the key elements of the present
    problem to find and retrieve the most similar
    case(s) from the library
  • A means of modifying the proposed solution when
    the retrieved case is not identical to the
    current one
  • The key issue in case-based reasoning is how to
    organise the library so that efficient search for
    a similar case can be achieved
  • It is ideally suited for the situation where many
    well-documented problems and solutions exist
  • Legal cases as precedents
  • Predicting weather from previous similar
    situations

22
Components
  • Representation
  • Retrieval
  • Matching engine retrieves cases similar to target
    case.
  • Adaptation
  • Remembering

23
Case-Based Reasoning Cycle
24
Breathalyser
  • Example cases
  • Duration is the duration of the drinking session.
  • Perhaps elapsed time should be added as a case
    feature?

25
Case Representation
  • The knowledge engineering task is focused on
    deciding how to represent cases
  • what features best characterise cases
  • i.e. predictive features
  • may require expert analysis
  • e.g. for image classification the bitmap may need
    to be converted to an edge map.
  • e.g. height and weight may not be useful in themselves for classifying apples and pears, but the height/weight ratio is.

26
Case retrieval
  • Based on some similarity measure.
  • e.g. number of matching features
  • e.g. distance measure based on difference between
    numeric features
  • Indexes may be used to speed the retrieval
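
A minimal nearest-neighbour retrieval sketch (the feature names, cases, and similarity scheme are hypothetical, loosely following the breathalyser example):

```python
# Similarity = count of matching symbolic features plus closeness of
# numeric ones; the most similar stored case supplies the proposed solution.

def similarity(target, stored, numeric=("units", "duration")):
    score = 0.0
    for key in target:
        if key in numeric:
            score += 1.0 / (1.0 + abs(target[key] - stored[key]))
        elif target[key] == stored[key]:
            score += 1.0
    return score

library = [  # invented cases: blood alcohol count (bac) is the solution
    {"sex": "m", "units": 6, "duration": 3, "bac": 0.08},
    {"sex": "f", "units": 3, "duration": 2, "bac": 0.05},
]
target = {"sex": "m", "units": 5, "duration": 3}

best = max(library, key=lambda stored: similarity(target, stored))
print(best["bac"])  # solution retrieved from the most similar case
```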

27
Case indexing - Example
28
k-Decision Tree
29
Case Adaptation
  • Breathalyser
  • if actual consumption is 2 more than in the retrieved case, add 0.5 to the blood alcohol count
  • Property Valuation
  • for extra bedroom add x to price
  • More complex adaptation may be needed where
    solutions are plans or designs, rather than
    single values.
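
The breathalyser adjustment above can be written as a one-rule adaptation step (the case values are invented; the "2 more units, add 0.5" figure is from the slide):

```python
# Adapt the retrieved solution for the difference in consumption between
# the target case and the retrieved case: +0.5 BAC per 2 extra units.

def adapt(target, retrieved):
    extra_units = target["units"] - retrieved["units"]
    return retrieved["bac"] + (extra_units / 2) * 0.5

print(adapt({"units": 8}, {"units": 6, "bac": 0.6}))  # -> 1.1
```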

30
CBR Advantages and Disadvantages
  • Advantages
  • allows the reasoner to propose solutions quickly
  • allows the reasoner to propose solutions in
    domains not completely understood by the reasoner
  • cases are useful in interpreting open-ended and
    ill-defined concepts
  • lends itself to analogical reasoning
  • knowledge acquisition process is considerably
    simplified
  • Disadvantages
  • solutions may be biased towards stored cases
  • solutions might be used blindly without proper
    validation, especially by novices

31
CBR Applications
  • Early Software Cost Estimation
  • www.cs.tcd.ie/publications/tech-reports/reports.99/TCD-CS-1999-36.pdf
  • Weather Prediction
  • www.mcs.vuw.ac.nz/comp/Publications/CS-TR-93-7.abs.html
  • www.aic.nrl.navy.mil/papers/2001/AIC-01-003/ws5/ws5toc1.pdf
  • Travel Check-in
  • http://www.check-in.com/
  • Building Design
  • http://nathan.gmd.de/projects/fabel/fabel.html
  • And dozens of other applications
  • http://www.cbr-web.org/CBR-Web/?infoprojectsmenupp

32
QSIM: A Qualitative Reasoning Language
  • An example of a very narrowly focussed domain
    language
  • Attempts to use physical constraints to describe
    what qualitative states are possible for a
    physical system being simulated
  • Parameters
  • Qualitative states (QS) are defined as a set of
    parameters
  • each of which is either
  • Increasing
  • Decreasing
  • Steady
  • Critical Points
  • Each physical parameter is represented as a function of time that has a finite number of critical points
  • A critical point is where the parameter's direction of change alters, i.e. where its derivative is zero
  • Landmark Values
  • A totally ordered set of landmark values
    represents all the values of the physical
    parameter at its critical points

33
QSIM: A Qualitative Reasoning Language
  • Time
  • Time is represented by a set of distinguished
    time points in the time line
  • Only two types of time elements can be defined
  • the time at a point
  • the time interval between two points
  • Initial State
  • The initial state is defined by the qualitative
    values of various physical parameters at a
    particular time point or time interval
  • Qualitative Behaviour
  • The basis of the qualitative simulation is the
    determination of all the possible combinations
    of values for all the physical parameters at a
    point or interval in time
  • A sequence of qualitative states describes a
    qualitative behaviour of the system

34
QSIM: A Qualitative Reasoning Language
  • Constraints
  • The number of possible combinations of values
    for all the physical parameters is huge
  • Constraints of the model must be used to reduce
    the search space
  • If more than one qualitative change remains, then
    the current qualitative state has multiple
    successors
  • The simulation will produce a tree
  • Objective
  • The objective of qualitative simulation is to
    determine the next feasible qualitative state (or
    states) of the physical parameters
  • If the current QS is defined at a point,
    then the next QS would be defined at an
    interval starting from this point
  • If the current QS is defined at an interval, then
    the next QS would be defined at a point that is
    the end of the interval

35
QSIM: A Qualitative Reasoning Language
  • Transitions
  • A physical parameter is restricted to a
    predetermined set of possible transitions from
    one QS to the next
  • There are two types of transitions
  • p-transitions (from point to interval)
  • i-transitions (from interval to point)
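
The transition tables on the next slide are small enough to encode directly as data; a schematic Python sketch (qualitative values written as (magnitude, direction) pairs; "l*" marks a newly created landmark):

```python
# QSIM transition tables as lookup dicts (schematic encoding of the
# point->interval and interval->point tables on the following slide).

P_TRANSITIONS = {  # from a time point to the following interval
    ("l", "std"): [("l", "std"), ("(l,l+1)", "inc"), ("(l-1,l)", "dec")],
    ("l", "inc"): [("(l,l+1)", "inc")],
    ("l", "dec"): [("(l-1,l)", "dec")],
    ("(l,l+1)", "inc"): [("(l,l+1)", "inc")],
    ("(l,l+1)", "dec"): [("(l,l+1)", "dec")],
}

I_TRANSITIONS = {  # from an interval to its ending time point
    ("l", "std"): [("l", "std")],
    ("(l,l+1)", "inc"): [("l+1", "std"), ("l+1", "inc"),
                         ("(l,l+1)", "inc"), ("l*", "std")],
    ("(l,l+1)", "dec"): [("l", "std"), ("l", "dec"),
                         ("(l,l+1)", "dec"), ("l*", "std")],
}

print(I_TRANSITIONS[("(l,l+1)", "dec")])  # possible successors (I5-I7, I9)
```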

36
QSIM Example
  • We describe a qualitative simulation of a ball
    being thrown upwards
  • The system is determined by three physical
    parameters
  • acceleration
  • velocity
  • height

37
QSIM Example Transition Table
  • p-transitions: from QS(f, ti) to QS(f, ti, ti+1)
  • P1: <lj, std> → <lj, std>
  • P2: <lj, std> → <(lj, lj+1), inc>
  • P3: <lj, std> → <(lj-1, lj), dec>
  • P4: <lj, inc> → <(lj, lj+1), inc>
  • P5: <(lj, lj+1), inc> → <(lj, lj+1), inc>
  • P6: <lj, dec> → <(lj-1, lj), dec>
  • P7: <(lj, lj+1), dec> → <(lj, lj+1), dec>
  • i-transitions: from QS(f, ti, ti+1) to QS(f, ti+1)
  • I1: <lj, std> → <lj, std>
  • I2: <(lj, lj+1), inc> → <lj+1, std>
  • I3: <(lj, lj+1), inc> → <lj+1, inc>
  • I4: <(lj, lj+1), inc> → <(lj, lj+1), inc>
  • I5: <(lj, lj+1), dec> → <lj, std>
  • I6: <(lj, lj+1), dec> → <lj, dec>
  • I7: <(lj, lj+1), dec> → <(lj, lj+1), dec>
  • I8: <(lj, lj+1), inc> → <l*, std>
  • I9: <(lj, lj+1), dec> → <l*, std>
  • (l* denotes a newly created landmark value inside the interval)

38
QSIM Example
  • The first state, at time t0, is when the ball is initially thrown upwards
  • The second QS can be described over an interval
  • QS(A, t0, t1) = (g, std), where
  • A is acceleration
  • g is a landmark value
  • std means steady
  • QS(V, t0, t1) = ((0, ∞), dec), where
  • V is velocity, which is decreasing (dec)
  • The qualitative value is somewhere between zero and the initial velocity
  • which could be any positive value
  • QS(Y, t0, t1) = ((0, ∞), inc), where
  • Y is height, which is increasing (inc)
  • The qualitative value is between the initial height (0) and the maximum height
  • which could be any positive value

39
QSIM Example
  • The time interval (t0, t1) is when the ball is moving up
  • The next objective is to compute the next QS, QS(F, t1)
  • The following steps are involved in computing the
    next QS.
  • Determine all possible i-transitions from the current QS using the transition table. For example
  • A: I1
  • V: I5, I6, I7, I9
  • Y: I4, I8 (note: Y = ∞ is impossible, ruling out I2, I3)
  • Apply various constraints to filter out
    infeasible transitions
  • I4 for Y is incompatible with I5, I6, I9 for V
  • Based on the relationships between velocity and
    position
  • I8 for Y is incompatible with I5, I7, I9 for V
  • Based on the relationships between velocity and
    position
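
The filtering step can be sketched as a cross product over each parameter's candidate transitions, pruned by the incompatibilities listed above (the incompatibility pairs are hand-coded from this slide rather than derived from QSIM's general constraints):

```python
# Generate successor combinations for the thrown ball and prune pairs of
# (height, velocity) transitions that violate the velocity/position link.
from itertools import product

candidates = {"A": ["I1"], "V": ["I5", "I6", "I7", "I9"], "Y": ["I4", "I8"]}
incompatible = {("I4", "I5"), ("I4", "I6"), ("I4", "I9"),   # (Y, V) pairs
                ("I8", "I5"), ("I8", "I7"), ("I8", "I9")}

for a, v, y in product(candidates["A"], candidates["V"], candidates["Y"]):
    if (y, v) not in incompatible:
        print(a, v, y)   # prints I1 I6 I8 and I1 I7 I4, as on the next slide
```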

40
QSIM Example
  • After this step, two combinations remain
  • A: I1, V: I7, Y: I4
  • A: I1, V: I6, Y: I8
  • The first is the same as the current QS
  • thus filtered out
  • The second will be the next QS
  • That is
  • QS(A, t1) = (g, std)
  • QS(V, t1) = (0, dec)
  • QS(Y, t1) = (Ynew, std)

41
QSIM Example
  • Now to obtain the following state, QS(F, t1, t2)
  • Determine all possible p-transitions
  • A: P1, P2, P3
  • V: P6
  • Y: P1, P2, P3
  • Physical constraints rule out P1, P2 for Y
  • Since V is zero and decreasing
  • Physical constraints rule out P2, P3 for A
  • The new state is
  • QS(A, t1, t2) = (g, std)
  • QS(V, t1, t2) = ((-∞, 0), dec)
  • QS(Y, t1, t2) = ((0, Ynew), dec)

42
QSIM Applications
  • http://www.cs.utexas.edu/users/qr/qsim-users.html (QSIM-apps)
  • Structural Design
  • Classification Analysis of Materials
  • Chemical Hazard Identification
  • Modelling Chemical Reactions
  • Genetic Regulatory Networks
  • Verifying Controller Behaviour
  • Air Conditioner Faults
  • Etc

43
Summary
  • Representation for Knowledge Acquisition
  • Ripple-Down approach provides maintainable
    knowledge
  • Separation of Domain and Performance
    representation in MORE
  • Domain model supports knowledge acquisition
  • Performance representation supports reasoning,
    explanation
  • Case-Based Reasoning
  • Reasoning with reduced explicit knowledge
    modelling
  • Reasoning for Special Purposes
  • QSIM for Qualitative Simulation of the Real World

44
Summary
  • It's clear that people don't follow probabilistic laws in reasoning
  • The probabilists' response: people are just wrong
  • An alternative response: people have learnt effective methods to reason with complex dependencies
  • Dempster-Shafer theory
  • Distinguishes between uncertainty and lack of knowledge
  • Consistent, systematic treatment of lack of knowledge
  • Fuzzy logic
  • Deals with fuzzy concepts rather than probability
  • Emphasises efficient computation