Multistrategy Rule Learning - PowerPoint PPT Presentation

About This Presentation
Title:

Multistrategy Rule Learning

Description:

Find the hypothesis H that explains D better than other hypotheses ... Natural Language. Logic. Who or what is a strategically critical industrial civilization ... – PowerPoint PPT presentation

Slides: 103
Provided by: learningag
Learn more at: http://lalab.gmu.edu

Transcript and Presenter's Notes

Title: Multistrategy Rule Learning


1
CS 785, Fall 2001
Multistrategy Rule Learning
Gheorghe Tecuci, tecuci@cs.gmu.edu, http://lalab.gmu.edu/
Learning Agents Laboratory, Department of Computer Science, George Mason University
2
Overview
A gentle introduction to Machine Learning
Rule learning
Agent teaching
Hands-on experience
Illustration of rule learning in other domains
Required reading
3
A gentle introduction to Machine Learning
What is Learning?
Empirical inductive learning from examples
Explanation-based learning
Learning by analogy
Abductive learning
Multistrategy learning
4
What is Learning?

Learning is a very general term denoting the way
in which people and computers
  • acquire and organize knowledge (by building,
    modifying and organizing internal representations
    of some external reality),
  • discover new knowledge and theories (by creating
    hypotheses that explain some data or phenomena),
    or
  • acquire skills (by gradually improving their
    motor or cognitive skills through repeated
    practice, sometimes involving little or no
    conscious thought).

Learning results in changes in the agent (or
mind) that improve its competence and/or
efficiency.
5
Representative learning strategies
  • Instance-based learning
  • Reinforcement learning
  • Neural networks
  • Genetic algorithms and evolutionary
    computation
  • Bayesian Learning
  • Multistrategy learning
  • Rote learning
  • Learning from instruction
  • Learning from examples
  • Explanation-based learning
  • Conceptual clustering
  • Quantitative discovery
  • Abductive learning
  • Learning by analogy

6
Empirical inductive learning from examples

The learning problem
Given: a set of positive examples (E1, ..., En) of a concept, a set of negative examples (C1, ..., Cm) of the same concept, a learning bias, and other background knowledge.
Determine: a concept description which is a generalization of the positive examples that does not cover any of the negative examples.
Purpose of concept learning: predict if an instance is an example of the learned concept.
7
General approach
Compares the positive and the negative examples
of a concept, in terms of their similarities and
differences, and learns the concept as a
generalized description of the similarities of
the positive examples. This allows the agent to
recognize other entities as being instances of
the learned concept.
Illustration: from positive examples of cups (P1, P2, ...) and negative examples (N1, ...), learn a description of the cup concept, e.g. has-handle(x), ...
8
The concept learning problem (illustration)
Positive examples:
COA511 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 10
COA51 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 5
Negative examples:
COA31 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 0
COA61 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 30
Cautious learner, Concept1:
?O1 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION ?N1, ?N1 IS-IN [5 .. 10]
9
The concept learning problem (illustration)
Positive examples:
COA511 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 10
COA51 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 5
Negative examples:
COA31 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 0
COA61 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION 30
Aggressive learner, Concept2:
?O1 IS COA-SPECIFICATION-MICROTHEORY, TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION ?N1, ?N1 IS-IN [1 .. 29]
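The cautious/aggressive contrast above can be sketched for a single integer feature. A hedged illustration only: the interval representation and the function names are assumptions, not part of the slides' system.

```python
# Learn an interval concept over one integer feature, as in the
# TOTAL-NBR-OFFENSIVE-ACTIONS-FOR-MISSION example.

def cautious_interval(positives):
    # Cautious learner: tightest interval covering the positive examples.
    return (min(positives), max(positives))

def aggressive_interval(positives, negatives):
    # Aggressive learner: widest integer interval covering the positives
    # while excluding every negative example.
    lo, hi = min(positives), max(positives)
    below = [n for n in negatives if n < lo]
    above = [n for n in negatives if n > hi]
    lo = max(below) + 1 if below else lo   # stop just above the closest negative
    hi = min(above) - 1 if above else hi   # stop just below the closest negative
    return (lo, hi)

print(cautious_interval([10, 5]))             # (5, 10)
print(aggressive_interval([10, 5], [0, 30]))  # (1, 29)
```

On the slide's data both learners are consistent with the examples; they differ only in how far they generalize beyond them.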
10
Basic idea of version space concept learning
Consider the examples E1, E2, ... in sequence.
Initialize the lower bound (LB) to the first positive example (LB = E1) and the upper bound (UB) to the most general generalization of E1.
If the next example is a positive one, then generalize LB as little as possible to cover it.
If the next example is a negative one, then specialize UB as little as possible to uncover it and to remain more general than LB.
Repeat the above two steps with the rest of the examples until UB = LB. This is the learned concept.
11
Generalization with a positive example
A difficulty in learning is that there are many
ways in which a concept can be generalized to
cover a new positive example.

(Figure: the concept ((1 green) (2 yellow)) and candidate generalizations such as ((?x yellow) (2 green)) covering the new positive example.)
12
Specialization with a negative example
Another difficulty in learning is that there are
many ways in which a concept can be specialized
to uncover a negative example.
(Figure: the concept ((1 green) (2 yellow)), a negative example, and candidate specializations such as ((?x yellow) (2 green)).)
13
The Version Space learning method (Mitchell, 1978)
Because there are many ways to generalize or to
specialize concepts based on the examples, LB and
UB are not single elements but sets of
elements. The sets LB and UB are very large for
complex problems. The method requires an
exhaustive set of examples.
14
The version space method (cont.)
The learning method
1. Initialize S to the first positive example and G to its most general generalization.
2. Accept a new training instance I.
   If I is a positive example, then:
   - remove from G all the concepts that do not cover I
   - generalize the elements in S as little as possible to cover I
   If I is a negative example, then:
   - remove from S all the concepts that cover I
   - specialize the elements in G as little as possible to uncover I and be more general than at least one element from S
3. Repeat step 2 until G = S and this set contains a single concept description.
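The LB/UB loop of the preceding slides can be sketched for one-dimensional interval hypotheses. This is a deliberately simplified view, with a single LB and a single UB rather than the sets S and G of the full method; the representation is an assumption for illustration.

```python
# Minimal version-space sketch over 1-D integer interval hypotheses.

def learn(examples):
    # examples: list of (value, is_positive) pairs, first one positive
    values = iter(examples)
    v, pos = next(values)
    assert pos, "first example must be positive"
    lb = (v, v)                         # LB: the first positive example itself
    ub = (float("-inf"), float("inf"))  # UB: most general generalization
    for v, pos in values:
        if pos:
            # generalize LB as little as possible to cover v
            lb = (min(lb[0], v), max(lb[1], v))
        else:
            # specialize UB as little as possible to exclude v,
            # staying more general than LB
            if v < lb[0]:
                ub = (max(ub[0], v + 1), ub[1])
            elif v > lb[1]:
                ub = (ub[0], min(ub[1], v - 1))
    return lb, ub

lb, ub = learn([(10, True), (5, True), (0, False), (30, False)])
print(lb, ub)   # (5, 10) (1, 29)
```

With the COA examples from the earlier slides, LB converges to the cautious concept and UB to the aggressive one; further examples would bring the two bounds together.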

15
Illustration of the version space learning method
16
General features of the empirical inductive
methods
Require many examples
Do not need much domain knowledge
Improve the competence of the agent
The version space method is too computationally
intensive to have practical applications
Practical empirical inductive learning methods
(such as ID3) rely on heuristic search to
hypothesize the concept.
17
Explanation-based learning (EBL)
The EBL problem

Given: a concept example
  cup(o1) ← color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ...
Goal: the learned concept should have only operational features (e.g. features present in the examples)
BK:
  cup(x) ← liftable(x), stable(x), open-vessel(x).
  liftable(x) ← light(x), graspable(x).
  stable(x) ← has-flat-bottom(x).
  ...
Determine: an operational concept definition
  cup(x) ← made-of(x, y), light-mat(y), has-handle(x), has-flat-bottom(x), up-concave(x).
18
Explanation-based learning (cont.)
Learns to recognize more efficiently the examples of a concept by proving that a specific instance is an example of it, and thus identifying the characteristic features of the concept.
An example of a cup: cup(o1), with color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ...
The proof identifies the characteristic features:
- made-of(o1, plastic) is needed to prove cup(o1)
- has-handle(o1) is needed to prove cup(o1)
- color(o1, white) is not needed to prove cup(o1)
Proof generalization generalizes them:
- made-of(o1, plastic) is generalized to made-of(x, y); the material need not be plastic.
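The cup proof can be sketched with a toy rule base: expand the goal through the background rules and keep the operational leaves of the proof tree. The tiny prover below is an assumed simplification with no variables or unification, so the made-of(o1, plastic) → made-of(x, y) generalization step is not modeled.

```python
# Hedged EBL sketch: the operational leaves of the proof of the goal
# form the learned (operational) concept definition.

RULES = {                       # head predicate -> list of body predicates
    "cup": ["liftable", "stable", "open-vessel"],
    "liftable": ["light", "graspable"],
    "stable": ["has-flat-bottom"],
}
OPERATIONAL = {"light", "graspable", "has-flat-bottom", "open-vessel"}

def explain(goal):
    # Recursively expand non-operational predicates through the rules.
    if goal in OPERATIONAL:
        return [goal]
    leaves = []
    for sub in RULES[goal]:
        leaves += explain(sub)
    return leaves

print(explain("cup"))   # ['light', 'graspable', 'has-flat-bottom', 'open-vessel']
```

Note how color never appears: a feature of the example that is not used in the proof is dropped from the learned definition.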
19
General features of Explanation-based learning
Needs only one example
Requires complete knowledge about the concept
(which makes this learning strategy
impractical).
Improves agent's efficiency in problem solving
20
Learning by analogy
The learning problem
Learns new knowledge about an input entity by
transferring it from a known similar entity.
The learning method

ACCESS: find a known entity that is analogous with the input entity.
MATCHING: match the two entities and hypothesize knowledge.
EVALUATION: test the hypotheses.
LEARNING: store or generalize the new knowledge.
21
Learning by analogy illustration
Illustration The hydrogen atom is like our solar
system.
22
Learning by analogy illustration
Illustration: The hydrogen atom is like our solar system. The Sun has a greater mass than the Earth and attracts it, causing the Earth to revolve around the Sun. The nucleus also has a greater mass than the electron and attracts it. Therefore it is plausible that the electron also revolves around the nucleus.
General idea of analogical transfer: similar causes have similar effects.
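The plausible transfer can be sketched as copying relationships through the Sun→nucleus, Earth→electron correspondence. The triple representation and function name are assumptions for illustration.

```python
# Analogical transfer sketch: map source facts onto the target entities.

mapping = {"Sun": "nucleus", "Earth": "electron"}
source_facts = [("Sun", "attracts", "Earth"),
                ("Earth", "revolves_around", "Sun")]

def transfer(facts, mapping):
    # Hypothesize the corresponding facts about the target entities.
    return [(mapping.get(s, s), rel, mapping.get(o, o)) for s, rel, o in facts]

print(transfer(source_facts, mapping))
# [('nucleus', 'attracts', 'electron'), ('electron', 'revolves_around', 'nucleus')]
```

The transferred facts are only hypotheses; the EVALUATION step of the method decides which of them to keep.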
23
Abductive learning
The learning problem
Finds the hypothesis that best explains an observation (or collection of data) and assumes it is true.
The learning method
Let D be a collection of data.
1. Find all the hypotheses that (causally) explain D.
2. Find the hypothesis H that explains D better than the other hypotheses.
3. Assert that H is true.

Illustration: There is smoke in the East building. Fire causes smoke. Hypothesize that there is a fire in the East building.
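The three steps can be sketched as follows, assuming each causal rule carries a plausibility score; the rule list and the scores below are illustrative inventions, not from the slides.

```python
# Abduction sketch: among the hypotheses that causally explain the
# observation, assert the most plausible one.

def abduce(observation, causal_rules):
    # causal_rules: list of (hypothesis, effect, plausibility) triples
    candidates = [(h, p) for (h, e, p) in causal_rules if e == observation]
    if not candidates:
        return None
    return max(candidates, key=lambda hp: hp[1])[0]

rules = [
    ("fire_in_east_building", "smoke_in_east_building", 0.8),
    ("smoke_machine_test", "smoke_in_east_building", 0.2),
]
print(abduce("smoke_in_east_building", rules))   # fire_in_east_building
```

Abduction is unsound inference: the asserted hypothesis is only the best available explanation and may later be retracted.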
24
Multistrategy learning
Multistrategy learning is concerned with
developing learning agents that synergistically
integrate two or more learning strategies in
order to solve learning tasks that are beyond the
capabilities of the individual learning
strategies that are integrated.
25
Complementariness of learning strategies
                      Learning from    Explanation-based    Multistrategy
                      examples         learning             learning
Examples needed       many             one                  several
Knowledge needed      very little      complete             incomplete
                      knowledge        knowledge            knowledge
Type of inference     induction        deduction            induction and/or
                                                            deduction
Effect on agent's     improves         improves             improves competence
behavior              competence       efficiency           and/or efficiency
26
Overview
A gentle introduction to Machine Learning
Rule learning
Agent teaching
Hands-on experience
Illustration of rule learning in other domains
Required reading
27
Rule learning
General presentation of the rule learning method
Task formalization
Explanation of an example
Hint-based explanation generation
Rule generalization by analogy
Characterization of the learned rule
More on explanation generation by analogy
28
The rule learning problem definition

GIVEN:
- an example of a problem solving episode
- a knowledge base that includes an object ontology and a set of problem solving rules
- an expert that understands why the given example is correct and may answer the agent's questions
DETERMINE:
- a plausible version space rule that is an analogy-based generalization of the specific problem solving episode
29
The rule learning problem input example

IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN
  industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943
30
The rule learning problem Learned PVS rule

INFORMAL STRUCTURE OF THE RULE
IF: Identify the strategic COG candidates with respect to the industrial civilization of ?O1
Question: Who or what is a strategically critical industrial civilization element in ?O1?
Answer: ?O2
THEN: ?O2 is a strategic COG candidate for ?O1

FORMAL STRUCTURE OF THE RULE
IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation
  ?O1 has_as_industrial_factor ?O2
  ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Plausible Lower Bound Condition
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943
THEN: A strategic COG relevant factor is strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2
31
Basic steps of the rule learning method
1. Formalize and learn the tasks
2. Find a formal explanation of why the example
is correct. This explanation is the best
possible approximation of the question and the
answer, in the object ontology.
3. Rewrite the example and the explanation into a
very specific rule.
4. Generalize the condition of the specific rule
and generate a plausible version space rule.
5. Refine the rule by learning from additional
positive and negative examples (according to the
rule refinement method).
32
1. Formalize the tasks

IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN
  industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943

is formalized as

IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is US_1943
THEN
  A strategic COG relevant factor is strategic COG candidate for a force
  The force is US_1943
  The strategic COG relevant factor is Industrial_capacity_of_US_1943
33
Task learning
INFORMAL STRUCTURE OF THE TASK
The informal task
  Identify the strategic COG candidates with respect to the industrial civilization of US_1943
is generalized, by replacing the instance US_1943 with <object>, into
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is US_1943

FORMAL STRUCTURE OF THE TASK
Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
Plausible upper bound condition: ?O1 IS Force
Plausible lower bound condition: ?O1 IS US_1943

(Figure: the ontology fragment used for the generalization. Force has the subconcepts Multi_state_force, Opposing_force, and Single_state_force; Multi_state_alliance is a subconcept of Multi_state_force, and Equal_partners_multi_state_alliance a subconcept of Multi_state_alliance. Anglo_allies_1943 is an instance of Equal_partners_multi_state_alliance with component_state US_1943 and Britain_1943. A related task form is "Identify the strategic COG candidates with respect to the civilization of ?O2 which is a member of ?O1".)
34
2. Find an explanation of why the example is
correct

IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN
  industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943

The explanation is the best possible approximation of the question and the answer, in the object ontology. These facts justify why the industrial capacity is a strategically critical element:
  US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943
  Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943
35
3. Rewrite the example and the explanation as a
specific rule
IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is US_1943
Because
  US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943
  Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943
THEN
  A strategic COG relevant factor is strategic COG candidate for a force
  The force is US_1943
  The strategic COG relevant factor is Industrial_capacity_of_US_1943

is rewritten as the specific rule

IF Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation
  ?O1 has_as_industrial_factor ?O2
  ?O2 is_a_major_generator_of ?O3
Condition
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943
THEN A strategic COG relevant factor is strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2
36
4. Generalize the condition and generate the PVS
rule

IF Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation
  ?O1 has_as_industrial_factor ?O2
  ?O2 is_a_major_generator_of ?O3
Condition (to be generalized)
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943
Plausible Upper Bound Condition (the generalization)
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Plausible Lower Bound Condition
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943
THEN A strategic COG relevant factor is strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2
37
5. Refine the rule
Refine the rule by learning from additional
positive and negative examples (according to the
rule refinement method to be presented in the
next session). During rule refinement the two
conditions will converge toward one another.
38
Rule learning
General presentation of the rule learning method
Task formalization
Explanation of an example
Hint-based explanation generation
Rule generalization by analogy
Characterization of the learned rule
More on explanation generation by analogy
39
The task formalization method
  • Sample formalization rule
  • obtain the task name by replacing each specific instance with a more general concept
  • for each replaced instance define a task feature of the form "The <concept> is <instance>"

Informal task:
Identify the strategic COG candidates with respect to the controlling element of Britain_1943 which is a member of Anglo_allies_1943

Task name:
Identify the strategic COG candidates with respect to the controlling element of a state which is a member of an opposing force
Task features:
The state is Britain_1943
The opposing force is Anglo_allies_1943
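The sample formalization rule can be sketched as follows. The instance-to-concept mapping is an assumed input that would come from the object ontology, and `formalize`/`article` are illustrative names, not part of the described system.

```python
# Task formalization sketch: replace each specific instance in the task name
# with a more general concept and record one task feature per replacement.

def article(concept):
    # Naive indefinite-article choice for the generalized task name.
    return "an" if concept[0].lower() in "aeiou" else "a"

def formalize(task_name, instance_concepts):
    # instance_concepts: mapping instance -> more general concept
    features = []
    for instance, concept in instance_concepts.items():
        task_name = task_name.replace(instance, f"{article(concept)} {concept}")
        features.append(f"The {concept} is {instance}")
    return task_name, features

name, features = formalize(
    "Identify the strategic COG candidates with respect to the controlling "
    "element of Britain_1943 which is a member of Anglo_allies_1943",
    {"Britain_1943": "state", "Anglo_allies_1943": "opposing force"},
)
print(name)      # ... of a state which is a member of an opposing force
print(features)  # ['The state is Britain_1943',
                 #  'The opposing force is Anglo_allies_1943']
```

The result satisfies the two acceptability criteria of the next slide: the task name contains no instance, and every replaced instance reappears in a task feature.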
40
The task formalization method (cont.)
  • Any other formalization is acceptable if
  • the task name does not contain any instance or constant
  • each instance from the informal task appears in a feature of the formalized task.

Informal task:
Identify the strategic COG candidates with respect to the controlling element of Britain_1943 which is a member of Anglo_allies_1943

Task name:
Identify the strategic COG candidates corresponding to the controlling element of a member of an opposing force
Task features:
The member is Britain_1943
The force is Anglo_allies_1943
41
Sample task formalizations
I need to
  Identify the strategic COG candidates for the Sicily_1943 scenario
formalized as
  Identify the strategic COG candidates for a scenario
  The scenario is Sicily_1943
Which is an opposing force in the Sicily_1943 scenario? Anglo_allies_1943
Therefore I need to
  Identify the strategic COG candidates for Anglo_allies_1943
formalized as
  Identify the strategic COG candidates for an opposing force
  The opposing force is Anglo_allies_1943
Is Anglo_allies_1943 a single-member force or a multi-member force? Anglo_allies_1943 is a multi-member force
Therefore I need to
  Identify the strategic COG candidates for the Anglo_allies_1943 which is a multi-member force
formalized as
  Identify the strategic COG candidates for an opposing force which is a multi-member force
  The opposing force is Anglo_allies_1943
42
Rule learning
General presentation of the rule learning method
Task formalization
Explanation of an example
Hint-based explanation generation
Rule generalization by analogy
Characterization of the learned rule
More on explanation generation by analogy
43
Explanation of the example
Natural language: Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Logic: Identify the strategic COG candidates with respect to the industrial civilization of a force; The force is US_1943

Natural language: Who or what is a strategically critical industrial civilization element in US_1943? Industrial_capacity_of_US_1943
Logic: explanation: US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943; Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943

Natural language: industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943
Logic: A strategic COG relevant factor is strategic COG candidate for a force; The force is US_1943; The strategic COG relevant factor is Industrial_capacity_of_US_1943
44
What is the form of the explanation?
The explanation is a sequence of object
relationships that correspond to fragments of the
object ontology.
explanation
  US_1943 has_as_industrial_factor Industrial_capacity_of_US_1943
  Industrial_capacity_of_US_1943 is_a_major_generator_of War_materiel_and_transports_of_US_1943
(Figure: the corresponding ontology fragment: US_1943 --has_as_industrial_factor--> Industrial_capacity_of_US_1943 --is_a_major_generator_of--> War_materiel_and_transports_of_US_1943.)
45
Automatic generation of plausible explanations
IF
  Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN
  industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943

The agent uses analogical reasoning heuristics and general heuristics to generate a list of plausible explanations from which the expert has to select the correct ones:
  US_1943 --has_as_industrial_factor--> Industrial_capacity_of_US_1943
  US_1943 <--component_state-- Anglo_Allies_1943
46
Sample strategies for explanation generation
Analogical reasoning heuristic
1. Look for a rule Rk that reduces the current task T1.
2. Extract the explanations Eg from the rule Rk.
3. Look for explanations of the current task reduction that are similar to Eg.
General heuristics
- Look for the relationships between the objects from the question and the answer.
- Look for the relationships between an object from the IF task and an object from the question or the answer.
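The first general heuristic can be sketched over a toy triple store, an assumed stand-in for the object ontology; the `plausible_explanations` name is illustrative.

```python
# General heuristic sketch: propose as plausible explanations the ontology
# relationships that link an object of the question to an object of the answer.

TRIPLES = [
    ("US_1943", "has_as_industrial_factor", "Industrial_capacity_of_US_1943"),
    ("Industrial_capacity_of_US_1943", "is_a_major_generator_of",
     "War_materiel_and_transports_of_US_1943"),
    ("Anglo_allies_1943", "component_state", "US_1943"),
]

def plausible_explanations(question_objs, answer_objs):
    # Keep triples connecting a question object to an answer object,
    # in either direction.
    q, a = set(question_objs), set(answer_objs)
    return [t for t in TRIPLES
            if (t[0] in q and t[2] in a) or (t[0] in a and t[2] in q)]

print(plausible_explanations(["US_1943"], ["Industrial_capacity_of_US_1943"]))
```

The expert then filters this candidate list, as on the previous slide; the heuristic only narrows the search.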
47
Rule learning
General presentation of the rule learning method
Task formalization
Explanation of an example
Hint-based explanation generation
Rule generalization by analogy
Characterization of the learned rule
More on explanation generation by analogy
48
User hint selecting an object from the example

IF
  Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN
  industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943

The expert selects an object from the example. The agent generates a list of plausible explanations containing that object. The expert selects the correct explanation(s).
Hint: Industrial_capacity_of_US_1943
Generated candidate explanations:
  Industrial_capacity_of_US_1943 IS ...
  Industrial_capacity_of_US_1943 <--has_as_industrial_factor-- US_1943
  Industrial_capacity_of_US_1943 --is_a_major_generator_of--> War_materiel_and_transports_of_US_1943
49
Hint refinement

IF
  Identify the strategic COG candidates for Anglo_allies_1943
Question: Is Anglo_allies_1943 a single-member force or a multi-member force?
Answer: Anglo_allies_1943 is a multi-member force
THEN
  Identify the strategic COG candidates for the Anglo_allies_1943 which is a multi-member force

The expert provides a hint: Anglo_allies_1943. The agent generates a list of plausible explanations containing that object. The expert selects one expression that ends in "..." and clicks on EXPAND:
  Anglo_allies_1943 IS ...
  Anglo_allies_1943 --component_state--> US_1943
The agent generates the list of possible expansions of the expression and the expert selects the explanation:
  Anglo_allies_1943 IS Strategic_COG_relevant_factor
  Anglo_allies_1943 IS Force
  Anglo_allies_1943 IS Military_factor
  Anglo_allies_1943 IS Multi_member_force
  Anglo_allies_1943 IS Multi_state_force
  Anglo_allies_1943 IS Multi_state_alliance
  Anglo_allies_1943 IS Equal_partners_multi_state_alliance
50
Hint refinement (another example)
Hint: Britain_1943
Generate:
  Britain_1943 --has_as_civilization--> Civilization_of_Britain_1943
  Britain_1943 <--component_state-- Anglo_allies_1943
  Britain_1943 --has_as_governing_body--> Governing_body_of_Britain_1943
  ...
Select and expand:
  Britain_1943 --has_as_governing_body--> Governing_body_of_Britain_1943
  Governing_body_of_Britain_1943 --has_as_political_leader--> PM_Churchill
  Governing_body_of_Britain_1943 --has_as_dominant_psychosocial_factor--> the will of the people
  Governing_body_of_Britain_1943 --has_as_ruling_political_party--> Conservative_party
  ...
Select explanation:
  Britain_1943 --has_as_governing_body--> Governing_body_of_Britain_1943 --has_as_dominant_psychosocial_factor--> will of the people
51
No explanation necessary

IF
  Identify the strategic COG candidates for Britain_1943, a member of Anglo_allies_1943
Question: What type of strategic COG candidates should I consider for Britain_1943?
Answer: I consider strategic COG candidates with respect to the governing element of Britain_1943
THEN
  Identify the strategic COG candidates with respect to the governing element of Britain_1943, a member of Anglo_allies_1943

Sometimes no formal explanation is necessary. In this example, for instance, each time I want to identify the strategic COG candidates for a state, such as Britain, I would also like to consider the candidates with respect to the governing element of this state. We need to invoke Rule Learning, but then quit it without selecting any explanation. The agent will generalize this example to a rule.
52
Rule learning
General presentation of the rule learning method
Task formalization
Explanation of an example
Hint-based explanation generation
Rule generalization by analogy
Characterization of the learned rule
More on explanation generation by analogy
53
Analogical reasoning
Analogy criterion (the shared, more general explanation pattern):
  ?O1 (IS Force) --has_as_industrial_factor--> ?O2 (IS Industrial_factor) --is_a_major_generator_of--> ?O3 (IS Product)

explanation (less general than the criterion):
  US_1943 --has_as_industrial_factor--> Industrial_capacity_of_US_1943 --is_a_major_generator_of--> War_materiel_and_transports_of_US_1943
similar explanation (less general than the criterion):
  Germany_1943 --has_as_industrial_factor--> Industrial_capacity_of_Germany_1943 --is_a_major_generator_of--> War_materiel_and_fuel_of_Germany_1943

The explanation explains the initial example:
IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is US_1943
THEN
  A strategic COG relevant factor is strategic COG candidate for a force
  The force is US_1943
  The strategic COG relevant factor is Industrial_capacity_of_US_1943

The similar explanation plausibly explains the similar example:
IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is Germany_1943
THEN
  A strategic COG relevant factor is strategic COG candidate for a force
  The force is Germany_1943
  The strategic COG relevant factor is Industrial_capacity_of_Germany_1943
54
Generalization by analogy
The explanation
  US_1943 --has_as_industrial_factor--> Industrial_capacity_of_US_1943 --is_a_major_generator_of--> War_materiel_and_transports_of_US_1943
is generalized to
  ?O1 (IS Force) --has_as_industrial_factor--> ?O2 (IS Industrial_factor) --is_a_major_generator_of--> ?O3 (IS Product)

The explanation explains the example:
IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is US_1943
THEN
  A strategic COG relevant factor is strategic COG candidate for a force
  The force is US_1943
  The strategic COG relevant factor is Industrial_capacity_of_US_1943

The generalized explanation explains the generalized example:
IF the task to accomplish is
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
THEN
  A strategic COG relevant factor is strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2

Knowledge-base constraints on the generalization:
Any value of ?O1 should be an instance of DOMAIN(has_as_industrial_factor) ∩ RANGE(The force is ?O1) = Force
Any value of ?O2 should be an instance of RANGE(has_as_industrial_factor) ∩ DOMAIN(is_a_major_generator_of) = Industrial_factor ∩ Economic_factor = Industrial_factor
Any value of ?O3 should be an instance of RANGE(is_a_major_generator_of) = Product
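The intersection constraints above can be sketched over a toy ontology fragment. The `DOMAIN`, `RANGE`, and `PARENT` tables, and the assumption that the intersected concepts lie on one subsumption chain, are simplifications for illustration.

```python
# Sketch: each variable's upper bound is the intersection of the domains and
# ranges of the features it participates in.

DOMAIN = {"has_as_industrial_factor": "Force",
          "is_a_major_generator_of": "Economic_factor"}
RANGE = {"has_as_industrial_factor": "Industrial_factor",
         "is_a_major_generator_of": "Product"}
# subconcept -> parent concept, for a tiny fragment of the ontology
PARENT = {"Industrial_factor": "Economic_factor"}

def most_specific(concepts):
    # Intersection of concept constraints = the most specific concept,
    # assuming the constraints lie on one subsumption chain.
    def depth(c):
        d = 0
        while c in PARENT:
            c, d = PARENT[c], d + 1
        return d
    return max(concepts, key=depth)

# ?O2 appears as the range of has_as_industrial_factor and the domain of
# is_a_major_generator_of:
print(most_specific([RANGE["has_as_industrial_factor"],
                     DOMAIN["is_a_major_generator_of"]]))   # Industrial_factor
```

This reproduces the slide's constraint for ?O2: Industrial_factor ∩ Economic_factor = Industrial_factor, because Industrial_factor is the more specific of the two.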
55
Rule learning
General presentation of the rule learning method
Task formalization
Explanation of an example
Hint-based explanation generation
Rule generalization by analogy
Characterization of the learned rule
More on explanation generation by analogy
56
Learned plausible version space rule

INFORMAL STRUCTURE OF THE RULE
IF: Identify the strategic COG candidates with respect to the industrial civilization of ?O1
Question: Who or what is a strategically critical industrial civilization element in ?O1?
Answer: ?O2
THEN: ?O2 is a strategic COG candidate for ?O1

FORMAL STRUCTURE OF THE RULE
IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation
  ?O1 has_as_industrial_factor ?O2
  ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Plausible Lower Bound Condition
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943
THEN: A strategic COG relevant factor is strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2
57
Characterization of the learned rule
(Figure: in the universe of instances, the plausible lower bound condition is included in the plausible upper bound condition, and the exact condition Eh lies between them.)
58
Rule learning
General presentation of the rule learning method
Task formalization
Explanation of an example
Hint-based explanation generation
Rule generalization by analogy
Characterization of the learned rule
More on explanation generation by analogy
59
Analogical reasoning heuristic
1. Look for a rule Rk that reduces the current task T1.
2. Extract the explanations Eg from the rule Rk.
3. Look for explanations of the current task reduction that are similar to Eg.

Previously learned rule Rk:
IF the task to accomplish is T1g
Explanation Eg
PUB condition
PLB condition
THEN accomplish T11g, ..., T1ng

Example to be explained:
IF the task to accomplish is T1 THEN accomplish T1a, ..., T1d
Look for explanations that are similar to Eg.
60
Justification of the heuristic
This heuristic is based on the observation that
the explanations of the alternative reductions of
a task tend to have similar structures. The same
factors are considered, but the relationships
between them are different.
(Figure: a task T1 with question Q has alternative answers Aa, Ab, ..., Ae, leading to the reductions T1a, T1b, ..., T1e with explanations Ea, Eb, ..., Ee.)
61
Illustration of the heuristic in the COA domain
ASSESS-MASS-FOR-COA-WITH-DECISIVE-POINT FOR-COA ?O1 FOR-DECISIVE-POINT ?O2
Question: Does the main effort act on ?O2 with an adequate force ratio?

Answer: No, it acts with a force ratio of ?N1
ASSESS-MASS-WHEN-MAIN-EFFORT-ACTS-ON-DECISIVE-POINT-WITH-POOR-FORCE-RATIO FOR-COA ?O1 FOR-DECISIVE-POINT ?O2 FOR-UNIT ?O3 FOR-EFFORT ?O4 FOR-FORCE-RATIO ?N1 FOR-RECOMMENDED-FORCE-RATIO ?N2
Explanations: ?O3 ASSIGNMENT ?O4; ?O3 TASK ?O5 OBJECT-ACTED-ON ?O2; ?O5 RECOMMENDED-FORCE-RATIO ?N2; ?O5 FORCE-RATIO ?N1 < ?N2

Answer: Yes, it acts with a force ratio of ?N1
ASSESS-MASS-WHEN-MAIN-EFFORT-ACTS-ON-DECISIVE-POINT-WITH-GOOD-FORCE-RATIO FOR-COA ?O1 FOR-DECISIVE-POINT ?O2 FOR-UNIT ?O3 FOR-EFFORT ?O4 FOR-FORCE-RATIO ?N1 FOR-RECOMMENDED-FORCE-RATIO ?N2
Explanations: ?O3 ASSIGNMENT ?O4; ?O3 TASK ?O5 OBJECT-ACTED-ON ?O2; ?O5 RECOMMENDED-FORCE-RATIO ?N2; ?O5 FORCE-RATIO ?N1 > ?N2
62
Illustration (cont.)
63
Illustration from the workaround domain
WORKAROUND UNMINED DESTROYED BRIDGE WITH FIXED BRIDGE OVER GAP
Question: What engineering technique to use?
Answers: minor preparations / gap reduction / slope reduction
Reductions: USE FIXED BRIDGE WITH MINOR PREPARATION OVER GAP; USE FIXED BRIDGE WITH GAP REDUCTION OVER GAP; USE FIXED BRIDGE WITH SLOPE REDUCTION OVER GAP

Explanations (one set per reduction):
SITE103 HAS-WIDTH 25, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 < 25 SITE103 BED
SITE106 HAS-WIDTH 12, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 > 12
UNIT91010 MAX-WHEELED-MLC 20, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 ≥ 20
UNIT91010 MAX-TRACKED-MLC 40, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 ≥ 40
SITE103 BANK1 SITE105 MAX-SLOPE 190, UNIT91010 DEFAULT-NEGOCIABLE-SLOPE 25 < 190
SITE103 BANK2 SITE104 MAX-SLOPE 200, UNIT91010 DEFAULT-NEGOCIABLE-SLOPE 25 < 200

SITE103 HAS-WIDTH 25, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 < 25
SITE103 HAS-WIDTH 25, AVLB-EQ CAN-BUILD AVLB70 MAX-REDUCIBLE-GAP 26 ≥ 25
UNIT91010 MAX-WHEELED-MLC 20, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 ≥ 20
UNIT91010 MAX-TRACKED-MLC 40, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 ≥ 40

SITE203 HAS-WIDTH 12, AVLB-EQ CAN-BUILD AVLB70 MAX-GAP 17 ≥ 12
UNIT91010 MAX-WHEELED-MLC 20, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 ≥ 20
UNIT91010 MAX-TRACKED-MLC 40, AVLB-EQ CAN-BUILD AVLB70 MLC-RATING 70 ≥ 40
64
Another heuristic
1. Look for a rule Rk that reduces a similar task
to similar subtasks. 2. Extract the explanations
Eg from the rule Rk. 3. Look for explanations of
the current task reduction that are similar to
Eg.
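The three steps above can be sketched in code. This is a simplified illustration, not the Disciple implementation: tasks are name-plus-arguments tuples, explanations are sets of (object, relation, value) triples, and "similar" is reduced to sharing the task name. All data names are invented for the example.

```python
def similar(task_a, task_b):
    """Crude similarity test: two tasks are similar if they share the
    same task name (a stand-in for structural similarity)."""
    return task_a[0] == task_b[0]

def candidate_explanations(current_task, rules):
    """Steps 1-2: find rules Rk whose task is similar to the current
    task and extract their explanation templates Eg. Step 3 (searching
    the knowledge base for explanations similar to Eg) would use the
    returned templates as search patterns."""
    templates = []
    for rule in rules:
        if similar(rule["task"], current_task):
            templates.extend(rule["explanations"])
    return templates

# Illustrative rule base: one workaround rule, one COA-critiquing rule.
rules = [
    {"task": ("WORKAROUND", "fixed-bridge"),
     "explanations": [("?O3", "MAX-SLOPE", "?N2")]},
    {"task": ("ASSESS-MASS", "coa411"),
     "explanations": [("?O5", "FORCE-RATIO", "?N1")]},
]
print(candidate_explanations(("WORKAROUND", "floating-bridge"), rules))
```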
65
Justification of the heuristic
This heuristic is based on the observation that
similar problem solving episodes tend to have
similar explanations.
66
Illustration from the workaround domain
WORKAROUND UNMINED DESTROYED BRIDGE WITH FIXED
BRIDGE OVER GAP
What engineering technique to use?
minor preparations
gap reduction
slope reduction
USE FIXED BRIDGE WITH MINOR PREPARATION OVER GAP
USE FIXED BRIDGE WITH GAP REDUCTION OVER GAP
USE FIXED BRIDGE WITH SLOPE REDUCTION OVER GAP
[Diagram: the two tasks and their reductions are similar, and the
explanations of the corresponding reductions are similar.]
WORKAROUND UNMINED DESTROYED BRIDGE WITH
FLOATING BRIDGE OVER GAP
What engineering technique to use?
minor preparations
slope reduction
USE FLOATING BRIDGE WITH MINOR PREPARATION OVER
GAP
USE FLOATING BRIDGE WITH SLOPE REDUCTION OVER GAP
67
Yet another heuristic
1. Look for a rule Rk that reduces a task that is
similar to the current task, even if the subtasks
are not similar. 2. Extract the explanations Eg
from the rule Rk. 3. Look for explanations of the
current task reduction that are similar to Eg.
The plausible explanations found by the agent can
be ordered by their plausibility (based on the
heuristics used).
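The last sentence — ordering plausible explanations by the heuristic that proposed them — can be sketched as a simple weighted sort. The heuristic names and weights below are illustrative assumptions, not values from the slides:

```python
# Stronger heuristics (more context matched) rank their explanations higher.
HEURISTIC_WEIGHT = {
    "similar-reduction": 3,  # rule reduces a similar task to similar subtasks
    "similar-task": 2,       # rule reduces a similar task only
    "ontology-search": 1,    # explanation found by unguided ontology search
}

def order_by_plausibility(candidates):
    """candidates: list of (explanation, heuristic-name) pairs.
    Returns them ordered from most to least plausible."""
    return sorted(candidates,
                  key=lambda c: HEURISTIC_WEIGHT[c[1]],
                  reverse=True)

cands = [("E1", "ontology-search"),
         ("E2", "similar-reduction"),
         ("E3", "similar-task")]
print(order_by_plausibility(cands))
```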
68
Overview
A gentle introduction to Machine Learning
Rule learning
Agent teaching Hands-on experience
Illustration of rule learning in other domains
Required reading
69
Agent teaching hands-on experience
Modeling, Task Formalization, Rule Learning
70
(No Transcript)
71
(No Transcript)
72
(No Transcript)
73
(No Transcript)
74
(No Transcript)
75
(No Transcript)
76
(No Transcript)
77
(No Transcript)
78
(No Transcript)
79
Overview
A gentle introduction to Machine Learning
Rule learning
Agent teaching Hands-on experience
Illustration of rule learning in other domains
Required reading
80
Illustration of rule learning in other domains
Illustration in the course of action critiquing
domain
Illustration in the workaround generation domain
Illustration in the manufacturing domain
Illustration in the assessment and tutoring domain
81
The COA critiquing domain
The input example

IF the task to accomplish is
Assess security wrt countering enemy
reconnaissance for-coa COA411
THEN accomplish the task
Assess security when enemy recon is
present for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
82
The learned rule

RASWCER-001
IF the task to accomplish is
  ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE
    FOR-COA ?O1

Question: Is an enemy recon unit present in ?O1?
Answer: Yes, the enemy unit ?O2 is performing the action ?O3,
  which is a reconnaissance action.

Explanation:
  ?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE
  ?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK

Plausible Upper Bound Condition:
  ?O1 IS COA-SPECIFICATION-MICROTHEORY
  ?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
      SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
      TASK ?O3
  ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
  ?O4 IS ALLEGIANCE-OF-UNIT

Plausible Lower Bound Condition:
  ?O1 IS COA411
  ?O2 IS RED-CSOP1
      SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
      TASK ?O3
  ?O3 IS SCREEN1
  ?O4 IS RED--SIDE

THEN accomplish the task
  ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT
    FOR-COA ?O1  FOR-UNIT ?O2  FOR-RECON-ACTION ?O3
83
Language to logic translation
Analogical reasoning
Limited natural language processing
Hint-based interactions
Natural Language
Logic
Assess security for COA411 wrt countering enemy
reconnaissance
Assess security wrt countering enemy
reconnaissance for-coa COA411
Question: Is an enemy reconnaissance unit present?
Answer: Yes, RED-CSOP1, which is performing the reconnaissance action SCREEN1.

Explanation: RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE
  RED-CSOP1 TASK SCREEN1
  SCREEN1 IS INTELLIGENCE-COLLECTION--MILITARY-TASK

Learn the rule
Assess security for COA411 when enemy recon
RED-CSOP1 is present and performs
SCREEN1 for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
Assess security when enemy recon is
present for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
84
What is the form of the explanation?
The explanation is a sequence of object
relationships that correspond to a fragment of
the object ontology.
Explanation: RED-CSOP1 SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE
  RED-CSOP1 TASK SCREEN1
  SCREEN1 IS INTELLIGENCE-COLLECTION--MILITARY-TASK

[Ontology fragment: RED-CSOP1, an instance of
MODERN-MILITARY-UNIT--DEPLOYABLE, has TASK SCREEN1, an instance of
SCREEN--MILITARY-TASK, which is a subclass of
INTELLIGENCE-COLLECTION--MILITARY-TASK; RED-CSOP1 has
SOVEREIGN-ALLEGIANCE-OF-ORG RED--SIDE.]
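Since an explanation is a set of object relationships drawn from the object ontology, a minimal way to model it is as triples, with validity meaning every asserted relationship is actually present in the ontology fragment. This sketch uses the objects from the slide but is an illustration, not the system's representation:

```python
# The ontology fragment from the slide, as (subject, relation, value) triples.
ONTOLOGY = {
    ("RED-CSOP1", "INSTANCE-OF", "MODERN-MILITARY-UNIT--DEPLOYABLE"),
    ("RED-CSOP1", "TASK", "SCREEN1"),
    ("RED-CSOP1", "SOVEREIGN-ALLEGIANCE-OF-ORG", "RED--SIDE"),
    ("SCREEN1", "INSTANCE-OF", "SCREEN--MILITARY-TASK"),
    ("SCREEN--MILITARY-TASK", "SUBCLASS-OF",
     "INTELLIGENCE-COLLECTION--MILITARY-TASK"),
}

def is_valid_explanation(triples):
    """An explanation is valid if every relationship it asserts
    is part of the object ontology."""
    return all(t in ONTOLOGY for t in triples)

explanation = [
    ("RED-CSOP1", "SOVEREIGN-ALLEGIANCE-OF-ORG", "RED--SIDE"),
    ("RED-CSOP1", "TASK", "SCREEN1"),
]
print(is_valid_explanation(explanation))  # True
```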
85
User guided explanation generation

IF the task to accomplish is
Is an enemy reconnaissance unit present?
Assess security wrt countering enemy
reconnaissance for-coa COA411
THEN accomplish the task
Yes, RED-CSOP1 which is performing the
reconnaissance action SCREEN1
Assess security when enemy recon is
present for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
86
Types of hints the BETWEEN hint
IF the task to accomplish is
Is an enemy reconnaissance unit present?
Assess security wrt countering enemy
reconnaissance for-coa COA411
THEN accomplish the task
Yes, RED-CSOP1 which is performing the
reconnaissance action SCREEN1
Assess security when enemy recon is
present for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
87
Types of hints the FROM hint

IF the task to accomplish is
Is an enemy reconnaissance unit present?
Assess security wrt countering enemy
reconnaissance for-coa COA411
THEN accomplish the task
Yes, RED-CSOP1 which is performing the
reconnaissance action SCREEN1
Assess security when enemy recon is
present for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
88
Types of hints the IS hint

IF the task to accomplish is
Is an enemy reconnaissance unit present?
Assess security wrt countering enemy
reconnaissance for-coa COA411
THEN accomplish the task
Yes, RED-CSOP1 which is performing the
reconnaissance action SCREEN1
Assess security when enemy recon is
present for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
89
Types of hints

IF the task to accomplish is
Is an enemy reconnaissance unit present?
Assess security wrt countering enemy
reconnaissance for-coa COA411
THEN accomplish the task
Yes, RED-CSOP1 which is performing the
reconnaissance action SCREEN1
Assess security when enemy recon is
present for-coa COA411 for-unit
RED-CSOP1 for-recon-action SCREEN1
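The BETWEEN, FROM, and IS hints on the preceding slides let the expert narrow the agent's search for explanation pieces. A sketch of hint-based filtering over candidate triples; the exact hint semantics here are a simplification invented for the example:

```python
def filter_by_hint(candidates, hint):
    """candidates: (subject, relation, value) triples.
    ('FROM', o)       keeps triples whose subject is o;
    ('BETWEEN', a, b) keeps triples linking a and b (either direction);
    ('IS', c)         keeps type statements asserting membership in c."""
    kind, *args = hint
    if kind == "FROM":
        return [t for t in candidates if t[0] == args[0]]
    if kind == "BETWEEN":
        a, b = args
        return [t for t in candidates if {t[0], t[2]} == {a, b}]
    if kind == "IS":
        return [t for t in candidates
                if t[1] == "INSTANCE-OF" and t[2] == args[0]]
    return candidates

triples = [
    ("RED-CSOP1", "TASK", "SCREEN1"),
    ("SCREEN1", "INSTANCE-OF", "INTELLIGENCE-COLLECTION--MILITARY-TASK"),
]
print(filter_by_hint(triples, ("FROM", "RED-CSOP1")))
```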
90
Analogical reasoning
91
Generalization by analogy
92
Learned plausible version space rule

RASWCER-001
IF the task to accomplish is
  ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE
    FOR-COA ?O1

Question: Is an enemy recon unit present in ?O1?
Answer: Yes, the enemy unit ?O2 is performing the action ?O3,
  which is a reconnaissance action.

Justification:
  ?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE
  ?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK

Plausible Upper Bound Condition:
  ?O1 IS COA-SPECIFICATION-MICROTHEORY
  ?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
      SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
      TASK ?O3
  ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
  ?O4 IS ALLEGIANCE-OF-UNIT

Plausible Lower Bound Condition:
  ?O1 IS COA411
  ?O2 IS RED-CSOP1
      SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
      TASK ?O3
  ?O3 IS SCREEN1
  ?O4 IS RED--SIDE

THEN accomplish the task
  ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT
    FOR-COA ?O1  FOR-UNIT ?O2  FOR-RECON-ACTION ?O3
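The rule above keeps two conditions: instantiations satisfying the plausible lower bound are considered correct, those satisfying only the plausible upper bound are merely plausible, and anything outside the upper bound does not match. A minimal sketch of that three-way test, with class membership simplified to explicit type sets; all data below is illustrative (COA412 in particular is an invented, unseen COA):

```python
# Each object's set of names/classes it belongs to (simplified ontology).
TYPES = {
    "COA411": {"COA411", "COA-SPECIFICATION-MICROTHEORY"},
    "COA412": {"COA412", "COA-SPECIFICATION-MICROTHEORY"},  # unseen COA
    "RED-CSOP1": {"RED-CSOP1", "MODERN-MILITARY-UNIT--DEPLOYABLE"},
    "SCREEN1": {"SCREEN1", "INTELLIGENCE-COLLECTION--MILITARY-TASK"},
    "RED--SIDE": {"RED--SIDE", "ALLEGIANCE-OF-UNIT"},
}

UPPER = {"?O1": "COA-SPECIFICATION-MICROTHEORY",
         "?O2": "MODERN-MILITARY-UNIT--DEPLOYABLE",
         "?O3": "INTELLIGENCE-COLLECTION--MILITARY-TASK",
         "?O4": "ALLEGIANCE-OF-UNIT"}
LOWER = {"?O1": "COA411", "?O2": "RED-CSOP1",
         "?O3": "SCREEN1", "?O4": "RED--SIDE"}

def satisfies(bound, bindings):
    """True if every variable's object belongs to the bound's concept."""
    return all(bound[v] in TYPES[obj] for v, obj in bindings.items())

def classify(bindings):
    if satisfies(LOWER, bindings):
        return "matches lower bound"
    if satisfies(UPPER, bindings):
        return "plausible (upper bound only)"
    return "no match"

print(classify({"?O1": "COA411", "?O2": "RED-CSOP1",
                "?O3": "SCREEN1", "?O4": "RED--SIDE"}))
```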
93
Illustration of rule learning in other domains
Illustration in the course of action critiquing
domain
Illustration in the workaround generation domain
Illustration in the manufacturing domain
Illustration in the assessment and tutoring domain
94
Task reduction example
Prepare river banks at site103 using
bulldozer-unit201 to allow fording by unit10
What bank needs to be prepared?
Both site107 and site105 need to be reduced
because their slopes are too steep for unit10
Report that bulldozer-unit201 has been obtained
by unit10
Reduce the slope of site107 by direct
cut using bulldozer-unit201 to allow the
fording by unit10
Reduce the slope of site105 by direct
cut using bulldozer-unit201 to allow the
fording by unit10
Ford bulldozer-unit201 at site103
95
Task formalization and learning
Task Example
Reduce the slope of site107, by direct cut, using
bulldozer-unit201, to allow the fording of unit10.
Learned Task
REDUCE-THE-SLOPE
  OF ?O1                    PUB: GEOGRAPHICAL-REGION            PLB: SITE107
  BY-DIRECT-CUT
  USING ?O2                 PUB: EQUIPMENT                      PLB: BULLDOZER-UNIT201
  TO-ALLOW-FORDING-BY ?O3   PUB: MODERN-MILITARY-ORGANIZATION   PLB: UNIT10

NL description: Reduce the slope of ?O1, by direct cut, using ?O2, to
allow the fording of ?O3.
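The formalization step above replaces each specific object in the example with a variable that carries a plausible upper bound (PUB, a generalization from the ontology) and a plausible lower bound (PLB, the example's own object). A minimal sketch of that variabilization, with an invented `formalize` helper and a hard-coded generalization table standing in for the ontology:

```python
EXAMPLE = {"OF": "SITE107",
           "USING": "BULLDOZER-UNIT201",
           "TO-ALLOW-FORDING-BY": "UNIT10"}

# Stand-in for ontology lookup: the most general concept for each object.
GENERALIZATION = {"SITE107": "GEOGRAPHICAL-REGION",
                  "BULLDOZER-UNIT201": "EQUIPMENT",
                  "UNIT10": "MODERN-MILITARY-ORGANIZATION"}

def formalize(example):
    """Return (pattern, table): the pattern maps each task role to a
    variable; the table records the PUB and PLB for each variable."""
    pattern, table = {}, {}
    for i, (role, obj) in enumerate(example.items(), start=1):
        var = f"?O{i}"
        pattern[role] = var
        table[var] = {"PUB": GENERALIZATION[obj], "PLB": obj}
    return pattern, table

pattern, table = formalize(EXAMPLE)
print(pattern["OF"], table[pattern["OF"]])
```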
96
Explaining the example
Learn the rule
Prepare river banks at site103 using
bulldozer-unit201 to allow fording by unit10
What bank needs to be prepared?
Unit10 located-at site108 and Site103 right-approach site108 and Site103 right-bank site107.
Unit10 has a default-negotiable-slope of 25 and Site107 has a max-slope of 200 > 25.
Both site107 and site105 need to be reduced
because their slopes are too steep for unit10
Site107 is on the opposite-side of site105. Site105 is a bank.
Unit10 has a default-negotiable-slope of 25 and Site105 has a max-slope of 200 > 25.
Report that bulldozer-unit201 has been obtained
by unit10
Reduce the slope of site107 by direct
cut using bulldozer-unit201 to allow the
fording by unit10
Reduce the slope of site105 by direct
cut using bulldozer-unit201 to allow the
fording by unit10
Ford bulldozer-unit201 at site103
97
Learned Rule
IF the task to accomplish is
  Prepare river banks at ?O1 using ?O2 to allow fording by ?O3
and the question
  What bank needs to be prepared?
has the answer
  Both ?O5 and ?O6 need to be reduced because their slopes are too steep for ?O3
because
  ?O3 located-at ?O4 and ?O1 right-approach ?O4 and ?O1 right-bank ?O5
  ?O3 has a default-negotiable-slope of ?N1 and ?O5 has a max-slope of ?N2 > ?N1
  ?O5 is on the opposite-side of ?O6 and ?O6 is a bank
  ?O3 has a default-negotiable-slope of ?N1 and ?O6 has a max-slope of ?N3 > ?N1
Plausible Version Space Condition:

Plausible Upper Bound:
  ?O1 is site
      right-approach ?O4
      right-bank ?O5
  ?O2 is military-equipment
  ?O3 is military-unit
      located-at ?O4
      default-negotiable-slope ?N1
  ?O4 is approach
  ?O5 is geographical-bank
      max-slope ?N2
      opposite-site ?O6
  ?O6 is geographical-bank
      max-slope ?N3
  ?N1 ∈ [0.0, 200.0]
  ?N2 ∈ (0.0, 1000.0], > ?N1
  ?N3 ∈ (0.0, 1000.0], > ?N1

Plausible Lower Bound:
  ?O1 is site103
      right-approach ?O4
      right-bank ?O5
  ?O2 is bulldozer-unit201
  ?O3 is unit10
      located-at ?O4
      default-negotiable-slope ?N1
  ?O4 is site108
  ?O5 is site107
      max-slope ?N2
      opposite-site ?O6
  ?O6 is site105
      max-slope ?N3
  ?N1 = 25.0
  ?N2 = 200.0, > ?N1
  ?N3 = 200.0, > ?N1

then decompose the task into the subtasks
  ?T1: Report that ?O2 has been obtained by ?O3
  ?T2: Reduce the slope of ?O5 by direct cut using ?O2 to allow the fording by ?O3  (AFTER ?T1)
  ?T3: Reduce the slope of ?O6 by direct cut using ?O2 to allow the fording by ?O3  (AFTER ?T2)
  Ford ?O2 at ?O1
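The numeric part of the plausible upper bound in the rule above constrains each ?N variable to an interval and requires both max-slope values to exceed the negotiable slope. A sketch of that check; the interval bounds are taken from the slide, while the function name is illustrative:

```python
def numeric_upper_bound_holds(n1, n2, n3):
    """Check the numeric constraints of the plausible upper bound:
    ?N1 in [0.0, 200.0], ?N2 and ?N3 in (0.0, 1000.0],
    and ?N2 > ?N1, ?N3 > ?N1 (slope too steep for the unit)."""
    in_n1 = 0.0 <= n1 <= 200.0
    in_n2 = 0.0 < n2 <= 1000.0
    in_n3 = 0.0 < n3 <= 1000.0
    return in_n1 and in_n2 and in_n3 and n2 > n1 and n3 > n1

# The training example: negotiable slope 25, both banks with max-slope 200.
print(numeric_upper_bound_holds(25.0, 200.0, 200.0))  # True
```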
98
Illustration of rule learning in other domains
Illustration in the course of action critiquing
domain
Illustration in the workaround generation domain
Illustration in the manufacturing domain
Illustration in the assessment and tutoring domain
99
Illustration in the manufacturing domain
See
G. Tecuci, Building Intelligent Agents, Academic
Press, 1998, pp. 13-23, pp. 79-101 (required
reading).
100
Illustration of rule learning in other domains
Illustration in the course of action critiquing
domain
Illustration in the workaround generation domain
Illustration in the manufacturing domain
Illustration in the assessment and tutoring domain
101
Illustration in the assessment and tutoring domain
See
G. Tecuci, Building Intelligent Agents, Academic
Press, 1998, pp. 23-32, pp. 179-228 (required
reading).
102
Required reading
G. Tecuci, Building Intelligent Agents, Academic Press, 1998,
pp. 13-32, pp. 79-101, pp. 179-228 (required).

Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by
Murray Burke, "An Innovative Application from the DARPA Knowledge
Bases Programs: Rapid Development of a High Performance Knowledge Base
for Course of Action Critiquing," invited paper for the special IAAI
issue of AI Magazine, Volume 22, No. 2, Summer 2001, pp. 43-61.
http://lalab.gmu.edu/publications/data/2001/COA-critiquer.pdf (required).

Tecuci G., Boicu M., Wright K., Lee S.W., Marcu D., and Bowman M.,
"A Tutoring Based Approach to the Development of Intelligent Agents,"
in Teodorescu H.N., Mlynek D., Kandel A., and Zimmermann H.J. (eds.),
Intelligent Systems and Interfaces, Kluwer Academic Press, 2000.
http://lalab.gmu.edu/cs785/Chapter-1-book-2000.doc (required).