Explaining Cognitive Assistants that Learn (PowerPoint presentation transcript)
1
Explaining Cognitive Assistants that Learn
  • Deborah McGuinness1, Alyssa Glass1,2, Michael Wolverton2
  • 1Knowledge Systems, AI Laboratory, Stanford University
  • {dlm, glass}@ksl.stanford.edu
  • 2SRI International
  • mjw@ai.sri.com
  • Thanks to Paulo Pinheiro da Silva, Li Ding, Cynthia Chang, Honglei Zeng, Vasco Furtado, Jim Blythe, Karen Myers, Ken Conley, David Morley

2
General Motivation
Provide an interoperable knowledge provenance infrastructure that supports explanations of sources, assumptions, learned information, and answers as an enabler for trust.
  • Interoperability: as systems use varied sources and multiple information manipulation engines, they benefit more from encodings that are shareable and interoperable
  • Provenance: if users (humans and agents) are to use and integrate data from unknown, unreliable, or evolving sources, they need provenance metadata for evaluation
  • Explanation/Justification: if information has been manipulated (e.g., by sound deduction or by heuristic processes), a trace of that manipulation should be available
  • Trust: if some sources are more trustworthy than others, representations should be available to encode, propagate, combine, and (appropriately) display trust values

3
Inference Web Infrastructure (primary collaborators: Ding, Chang, Pinheiro da Silva, Zeng, Fikes)
  • Framework for explaining question answering tasks by abstracting, storing, exchanging, combining, annotating, filtering, segmenting, comparing, and rendering proofs and proof fragments provided by question answerers (a toy filtering sketch follows).
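To make those proof-fragment operations concrete, here is a minimal Python sketch, assuming a justification can be viewed as a tree of conclusions; ProofNode and filter_by_engine are hypothetical names for illustration, not Inference Web's actual API.

# Hypothetical sketch: a proof fragment as a tree of conclusions, plus one
# of the listed operations (filtering) that keeps only steps produced by
# trusted question answerers.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProofNode:
    conclusion: str
    engine: str                                  # which engine derived this step
    antecedents: list = field(default_factory=list)

def filter_by_engine(node: ProofNode, trusted: set) -> Optional[ProofNode]:
    """Return a pruned copy of the proof keeping only trusted engines' steps."""
    if node.engine not in trusted:
        return None
    kept = [k for a in node.antecedents
            if (k := filter_by_engine(a, trusted)) is not None]
    return ProofNode(node.conclusion, node.engine, kept)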

4
ICEE: Integrated Cognitive Explanation Environment
  • Improve trust in cognitive assistants that learn by providing transparency concerning:
  • provenance
  • information manipulation
  • task processing
  • learning

5
ICEE Architecture
[Architecture diagram; components: Collaboration Agent, Task Manager (TM), Explanation Dispatcher, TM Wrapper, TM Explainer, Justification Generator]
6
During the demo, notice
  • User can ask questions at any time
  • Responses are context-sensitive
  • Dependent on current task processing state and on provenance of the underlying process
  • Explanations are generated completely automatically
  • No additional work required from the user to supply information
  • Follow-up questions provide additional detail at the user's discretion
  • Avoids needless distraction

7
  • Example Usage
  • Video Clip

8
Task Explanation
  • Ability to ask "Why?" at any point
  • Context-appropriate follow-up questions are presented

9
Explainer Strategy (for cognitive assistants)
  • Present:
  • the query
  • the answer
  • an abstraction of the justification (using PML encodings)
  • Provide access to meta-information
  • Suggest drill-down options (also provide feedback options); a sketch of this present-then-drill-down loop follows
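A minimal sketch of that strategy, assuming a justification arrives as an ordered list of step descriptions; abstract, present, and the option labels are illustrative stand-ins, not ICEE's actual interface.

def abstract(justification, depth=1):
    """Collapse a justification trace to its first `depth` steps (toy abstraction)."""
    head = "; ".join(justification[:depth])
    return head + (" ..." if len(justification) > depth else "")

def present(query, answer, justification):
    """Show query, answer, and abstracted justification; offer drill-downs."""
    print(f"Q: {query}")
    print(f"A: {answer}")
    print(f"Because: {abstract(justification)}")
    return ["meta-information", "drill down", "give feedback"]   # follow-up options

A caller would loop, re-presenting with greater depth whenever the user picks "drill down", which matches the follow-up-at-user's-discretion behavior noted in the demo slide.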

10
Sample Introspective Predicates: Provenance (a record sketch follows the citation below)
  • Author
  • Modifications
  • Algorithm
  • Addition date/time
  • Data used
  • Collection time span for data
  • Author comment
  • Delta from previous version
  • Link to original

Glass, A., and McGuinness, D.L. 2006. Introspective Predicates for Explaining Task Execution in CALO. Technical Report KSL-06-04, Knowledge Systems, AI Laboratory, Stanford University.
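As a reading aid only, the sketch below packs the predicates listed above into one record; the Python field names are illustrative, not the CALO predicate names.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProvenanceRecord:
    """One learned item's provenance, mirroring the predicate list above."""
    author: str
    modifications: list            # history of edits
    algorithm: str                 # how the item was produced
    added_at: datetime             # addition date/time
    data_used: list
    collection_span: tuple         # (start, end) of data collection
    author_comment: str
    delta_from_previous: str
    link_to_original: str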
11
Task Action Schema
  • Wrapper extracts portions of the task intention structure through introspective predicates
  • Stores the extracted information in an action schema
  • Designed to achieve three criteria:
  • Salience
  • Reusability
  • Generality

12
SupportsTopLevelGoal(x) ∧ IntentionPreconditionMet(x) ∧ TerminationConditionNotMet(x) ⇒ Executing(x)
TopLevelGoal(y) ∧ Supports(x,y) ⇒ SupportsTopLevelGoal(x)
ParentOf(x,y) ∧ Supports(y,z) ⇒ Supports(x,z)
Supports(x,x)
(Abbreviations: GS = GetSignature, BL = BuyLaptop, GA = GetApproval)
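A naive forward-chaining reading of these rules, as a sketch only: facts are plain tuples, and we assume ParentOf(x,y) reads "y is the parent of x", so that a subtask such as GS inherits support for the top-level goal BL; the facts at the bottom are invented for illustration.

def forward_chain(facts):
    """Naive forward chaining over the four Horn rules above."""
    facts = set(facts)
    while True:
        derived = set()
        consts = {a for f in facts for a in f[1:]}
        derived |= {("Supports", t, t) for t in consts}          # Supports(x,x)
        for (_, x, y) in {f for f in facts if f[0] == "ParentOf"}:
            for (_, y2, z) in {f for f in facts if f[0] == "Supports"}:
                if y == y2:
                    derived.add(("Supports", x, z))
        for (_, g) in {f for f in facts if f[0] == "TopLevelGoal"}:
            for (_, x, y) in {f for f in facts if f[0] == "Supports"}:
                if y == g:
                    derived.add(("SupportsTopLevelGoal", x))
        for (_, x) in {f for f in facts if f[0] == "SupportsTopLevelGoal"}:
            if ("IntentionPreconditionMet", x) in facts and \
               ("TerminationConditionNotMet", x) in facts:
                derived.add(("Executing", x))
        if derived <= facts:
            return facts
        facts |= derived

# Illustrative facts (assumed reading: GS's parent is GA, GA's parent is BL):
facts = {("TopLevelGoal", "BL"), ("ParentOf", "GA", "BL"), ("ParentOf", "GS", "GA"),
         ("IntentionPreconditionMet", "GS"), ("TerminationConditionNotMet", "GS")}
# forward_chain(facts) then contains ("Executing", "GS").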
13
User Trust Study
  • Interviewed 10 Critical Learning Period (CLP)
    participants
  • Programmers, researchers, administrators
  • Focus of study:
  • Trust
  • Failures, surprises, and other sources of confusion
  • Desired questions to ask CALO
  • Initial results:
  • Explanations are required in order to trust agents that learn
  • To build trust, users want transparency and provenance
  • Identified the question types most important to CALO users -> motivation for future work

14
Future Directions
  • We will leverage results from our trust study to focus and prioritize our strategies for explaining cognitive assistants, e.g., learning-specific provenance
  • We will expand our explanations of learning to augment learning by instruction, and design and implement explanation of learning by demonstration (initially focusing on LAPDOG)
  • We will expand our initial design for explaining preferences in PTIME
  • Write up and distribute the user trust study to CALO participants
  • Consider using conflicts to drive learning and explanations, e.g., "I have not finished because x has not completed"
  • Advanced dialogues exploiting TOWEL and other CALO components
  • Potentially exploit our work on IW Trust, a method for representing, propagating, and presenting trust, within the CALO setting; we already have results in intelligence analyst tools and integration with text analytics and Wikipedia, and it is likely to be used in IL, etc.

15
Explaining Learning by Demonstration
  • General Motivation
  • LAPDOG (Learning Assistant Procedures from Demonstration, Observation, and Generalization) generalizes the user's demonstration to learn a procedure
  • While LAPDOG's generalization process is designed to produce reasonable procedures, it will occasionally get it wrong
  • Specifically, it will occasionally overgeneralize
  • Generalize the wrong variables, or too many variables
  • Produce too general a procedure because of a coarse-grained type hierarchy
  • ICEE needs to explain the relevant aspects of the generalization process in a user-friendly format
  • To help the user identify and correct overgeneralizations
  • To help the user understand and trust the learned procedures
  • Specific elements of LAPDOG reasoning to explain
  • Ontology-based parameter generalization (a toy sketch follows after this list)
  • The variables (elements of the user's demonstration) that LAPDOG chooses to generalize
  • The type hierarchy on which the generalization is based
  • Procedure completion
  • The knowledge-producing actions that were added to the demonstration
  • The generalization done on those actions
  • Background knowledge that biases the learning
  • E.g., rich information about the email, calendar events, files, web pages, and other objects upon which it executes its actions
  • Primarily for future versions of LAPDOG
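A toy illustration, not LAPDOG's actual algorithm: lift demonstrated parameter types to their most specific common ancestor in a type hierarchy, which also shows how a coarse hierarchy forces the overgeneralization described above. The hierarchy and type names are invented.

# Toy type hierarchy (invented): child -> parent.
PARENT = {"PDFFile": "File", "DocFile": "File", "File": "Object", "Email": "Object"}

def ancestors(t):
    """Chain from a type up to the root, inclusive."""
    chain = [t]
    while t in PARENT:
        t = PARENT[t]
        chain.append(t)
    return chain

def generalize(types):
    """Most specific type covering all observed parameter types."""
    common = set(ancestors(types[0]))
    for t in types[1:]:
        common &= set(ancestors(t))
    for t in ancestors(types[0]):     # walk up from most specific
        if t in common:
            return t

# generalize(["PDFFile", "DocFile"]) -> "File"
# generalize(["PDFFile", "Email"])   -> "Object"  (the overgeneralization an
# explanation should surface, here forced by the coarse hierarchy)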

16
Explaining Preferences
  • General Motivation
  • PLIANT (Preference Learning through Interactive
    Advisable Non-intrusive Training) uses
    user-elicited preferences and past choices to
    learn user scheduling preferences for PTIME,
    using a Support Vector Machine.
  • Inconsistent user preferences, over-constrained
    schedules, and necessity of exploring the
    preference space result in user confusion about
    why a schedule is being presented.
  • Lack of user understanding of PLIANTs updates
    creates confusion, mistrust, and the appearance
    that preferences are being ignored.
  • ICEE needs to provide justifications of PLIANTs
    schedule suggestions, in a user-friendly format,
    without requiring the user to understand SVM
    learning.
  • Providing Transparency into Preference Learning
  • Augment PLIANT to gather additional
    meta-information about the SVM itself
  • Support vectors identified by SVM
  • Support vectors nearest to the query point
  • Margin to the query point
  • Average margin over all data points
  • Non-support vectors nearest to the query point
  • Kernel transformation used, if any
  • Represent SVM learning and meta-information as
    justification in PML, using added SVM rules
  • Design abstraction strategies for presenting
    justification to user as a similarity-based
    explanation
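A sketch of gathering that meta-information, using scikit-learn's SVC as a stand-in for PLIANT's learner (an assumption; PLIANT's own interfaces are not shown here).

import numpy as np
from sklearn.svm import SVC

def svm_meta(X, y, query):
    """Fit an SVM and collect explanation-relevant meta-information."""
    clf = SVC(kernel="linear").fit(X, y)
    sv = clf.support_vectors_
    d = np.linalg.norm(sv - query, axis=1)        # distance of each SV to query
    return {
        "num_support_vectors": len(sv),
        "nearest_support_vector": sv[np.argmin(d)],
        "query_margin": float(clf.decision_function([query])[0]),
        "avg_margin": float(np.mean(np.abs(clf.decision_function(X)))),
        "kernel": clf.kernel,
    }

A similarity-based explanation could then say, in effect, "this schedule resembles the nearest choices you previously accepted", without exposing the SVM itself.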

17
Advantages to ICEE Approach
  • Unified framework for explaining task execution
    and deductive reasoning.
  • Architecture for reuse among many task execution
    systems.
  • Introspective predicates and software wrapper
    that extract explanation-relevant information
    from task reasoner.
  • Reusable action schema for representing task
    reasoning.
  • A version of Inference Web for generating formal
    justifications.

18
Resources
  • Overview of ICEE
  • McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. Explaining Task Processing in Cognitive Assistants that Learn. AAAI 2007 Spring Symposium on Interaction Challenges for Intelligent Assistants (to appear). (An updated version will appear in FLAIRS 2007.)
  • Introspective predicates
  • Glass, A., and McGuinness, D.L. Introspective Predicates for Explaining Task Execution in CALO. Technical Report KSL-06-04, Knowledge Systems, AI Laboratory, Stanford University, 2006.
  • Video demonstration of ICEE
  • http://iw.stanford.edu/2006/10/ICEE.640.mov
  • Explanation interfaces
  • McGuinness, D.L., Ding, L., Glass, A., Chang, C., Zeng, H., and Furtado, V. Explanation Interfaces for the Semantic Web: Issues and Models. 3rd International Semantic Web User Interaction Workshop (SWUI 2006), co-located with the International Semantic Web Conference, Athens, Georgia, 2006.
  • Inference Web (including the above publications)
  • http://iw.stanford.edu/

19
Extra

20
How PML Works
[Diagram: the PML justification structure, roughly as follows. A Question (input-language question) isQueryFor a Query (formal, internally structured query); the Query hasAnswer NodeSets. Each NodeSet (hasConclusion ...) hasLanguage a Language and isConsequentOf one or more InferenceSteps. An InferenceStep hasInferenceEngine an InferenceEngine, hasRule an InferenceRule, hasAntecedent further NodeSets, hasVariableMapping a Mapping, and hasSourceUsage a SourceUsage (hasSource a Source, with a usageTime). fromQuery and fromAnswer tie the justification trace back to the query and answer, and shared entities are registered in the IWBase.]
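To make the diagram concrete, here is a minimal rendering of the two central PML concepts as Python classes; PML itself is an OWL vocabulary, so this is only a reading aid, with comments mapping fields to the relation names above.

from dataclasses import dataclass, field

@dataclass
class InferenceStep:
    rule: str                    # hasRule -> InferenceRule
    engine: str                  # hasInferenceEngine -> InferenceEngine
    antecedents: list = field(default_factory=list)   # hasAntecedent -> NodeSet
    source_usage: str = ""       # hasSourceUsage (source plus usage time)

@dataclass
class NodeSet:
    conclusion: str              # hasConclusion
    language: str                # hasLanguage -> Language
    steps: list = field(default_factory=list)         # isConsequentOf -> InferenceStep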
21
Sample Task Hierarchy: Purchase equipment
  • Purchase equipment
  • Collect requirements
  • Get quotes
  • Do research
  • Choose set of quotes
  • Pick single item
  • Get approval
  • Place order

22
Sample Task Hierarchy: Get travel authorization
  • Get travel authorization
  • Collect requirements
  • Get approval, if necessary
  • Note: this conditional step was added to the original procedure through learning by instruction
  • Submit travel paperwork

23
PML in Swoop
24
Explaining Extracted Entities
Source: fbi_01.txt; source usage: span from 01 to 78
Same conclusion from multiple extractors; conflicting conclusion from one extractor
This extractor decided that Person_fbi-01.txt_46 is a Person and not an Occupation
25
Task Management Framework


[Diagram: Task Management Framework. Components: Activity Recognizer; Location Estimator; Time Manager (PTIME), exchanging advice and preferences; Task Manager (SPARK), holding Process Models, a Procedure Execution Monitor, a Predictor (ProPL), and a Task Explainer (ICEE); and Learners (Tailor, LAPDOG, PrimTL, PLOW).]