Answering Complex Questions and Performing Deep Reasoning in Advanced QA Systems - PowerPoint PPT Presentation

About This Presentation
Title:

Answering Complex Questions and Performing Deep Reasoning in Advanced QA Systems

Description:

Answering Complex Questions and Performing Deep Reasoning in Advanced QA Systems: Reasoning with logic and probability. Chitta Baral, Arizona State University – PowerPoint PPT presentation

Slides: 16
Provided by: asue93
Transcript and Presenter's Notes

Title: Answering Complex Questions and Performing Deep Reasoning in Advanced QA Systems


1
Answering Complex Questions and Performing Deep
Reasoning in Advanced QA Systems: Reasoning with
logic and probability
  • Chitta Baral
  • Arizona State University
  • Co-Investigators:
  • Michael Gelfond and
  • Richard Scherl
  • Other collaborator:
  • Nelson Rushton

2
Developing formalisms for reasoning with logic
and probability
  • One of the research issues that we are working
    on.
  • A core component of our deep reasoning goals.
  • What-if reasoning
  • Reasoning about counterfactuals
  • Reasoning about causes, etc.
  • In parallel
  • We are also using existing languages and
    formalisms (AnsProlog) to represent domain
    knowledge and do reasoning.

3
Reasoning with counterfactuals.
  • Text (from Pearl's Causality)
  • If the court orders the execution then the
    captain gives a signal.
  • If the captain gives a signal then the rifleman A
    shoots and rifleman B shoots.
  • Rifleman A may shoot out of nervousness.
  • If either of rifleman A or B shoots then the
    prisoner dies.
  • The probability that the court orders the
    execution is p.
  • The probability that A shoots out of nervousness
    is q.
  • A counterfactual question
  • What is the probability that the prisoner would
    be alive if A were not to have shot, given that
    the prisoner is in fact dead?

4
Pictorial representation
  • U: court orders execution (p)
  • C: Captain gives the signal
  • A: Rifleman A shoots
  • B: Rifleman B shoots
  • V: Rifleman A is nervous (q)
  • D: The prisoner dies
  • Logic
  • U causes C
  • C causes A
  • V causes A
  • C causes B
  • A causes D
  • B causes D
  • Probability
  • Pr(U) = p
  • Pr(V) = q

[Causal diagram: U → C; C, V → A; C → B; A, B → D]
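The counterfactual query from slide 3 can be checked by brute-force enumeration using Pearl's three-step recipe (abduction, action, prediction). The structural equations and the parameters p and q come from the slides above; the Python rendering and the function name are an illustrative sketch of ours, not part of the presentation.

```python
# Pearl's three steps for the firing-squad counterfactual:
# 1. Abduction: condition on the evidence (prisoner dead) to update
#    the exogenous variables U (court order) and V (nervousness).
# 2. Action: sever A's equation and force A = 0 (do(A = 0)).
# 3. Prediction: compute the prisoner's fate in the modified model.

def counterfactual_alive_given_dead(p, q):
    num = 0.0   # weight of worlds: dead in fact, alive under do(A = 0)
    den = 0.0   # weight of worlds where the prisoner is in fact dead
    for u in (0, 1):
        for v in (0, 1):
            w = (p if u else 1 - p) * (q if v else 1 - q)
            # Factual model: C = U, A = C or V, B = C, D = A or B
            c = u
            a = c or v
            b = c
            d = a or b
            if d:                   # evidence: the prisoner is dead
                den += w
                # Counterfactual model: force A = 0, keep U, V as abduced
                a_cf = 0
                d_cf = a_cf or b    # B still fires exactly when C (= U) does
                if not d_cf:
                    num += w
    return num / den

p, q = 0.7, 0.2
result = counterfactual_alive_given_dead(p, q)
# Agrees with the closed form (1-p)*q / (p + q - p*q): the prisoner
# would be alive only in worlds where the court did not order the
# execution (U = 0) but A fired out of nervousness (V = 1).
```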
5
Bayes Nets, Causal Bayes Nets, and Pearl's
structural causal models
  • Bayes nets do not have to respect causality
  • They only succinctly represent the joint
    probability distribution
  • One cannot reason about causes and effects with
    Bayes nets. (One can with causal Bayes nets.)
  • Even then, one needs to distinguish between doing
    and observing.
  • This distinction is important for counterfactual
    reasoning.
  • Pearl's structural causal models give an
    algorithm for counterfactual reasoning. But
  • The logical language is weak.
  • (We need a more general knowledge representation
    language that can express defaults, normative
    statements, etc.)
  • The variables with probabilities are assumed to
    be independent.

6
Further motivation for a richer language: the
Monty Hall problem
  • http://www.remote.org/frederik/projects/ziege/beweis.html
  • A player is given the opportunity to select one
    of three closed doors, behind one of which there
    is a prize; the other two doors hide nothing.
  • Once the player has made a selection, Monty is
    obligated to open one of the remaining closed
    doors, revealing that it does not contain the
    prize.
  • Monty gives the player the choice to switch to
    the other unopened door if he wants.
  • Question: Does it matter if the player switches?
    Or: which unopened door has the higher probability
    of containing the prize?

7
Illustration-1
  • First, let us assume that the car is behind door
    no. 1.
  • We can do this without reducing the validity of
    our proof, because if the car were behind door
    no. 2, we would only have to exchange all
    occurrences of "door 1" with "door 2" and vice
    versa, and the proof would still hold.
  • The candidate has three choices of doors.
  • Because he has no additional information, he
    randomly selects one.
  • The probability of choosing each of the doors 1,
    2, or 3 is 1/3:

    Candidate chooses   p
    Door 1              1/3
    Door 2              1/3
    Door 3              1/3
    Sum                 1

8
Illustration-2
  • Going on from this table, we have to split the
    cases depending on the door opened by the host.
  • Since we assume that the car is behind door no.
    1, the host has a choice if and only if the
    candidate selects the first door - because
    otherwise there is only one "goat door" left!
  • We assume that if the host has a choice, he will
    randomly select the door to open.

    Candidate chooses   Host opens   p
    Door 1              Door 2       1/6
    Door 1              Door 3       1/6
    Door 2              Door 3       1/3
    Door 3              Door 2       1/3
    Sum                              1

9
Illustration-3
  • A candidate who always sticks to his original
    choice no matter what happens:

    Candidate chooses   Host opens   Final choice   Win   p
    Door 1              Door 2       Door 1         yes   1/6
    Door 1              Door 3       Door 1         yes   1/6
    Door 2              Door 3       Door 2         no    1/3
    Door 3              Door 2       Door 3         no    1/3
    Sum = 1             Sum of cases where candidate wins: 1/3

  • A candidate who always switches to the other door
    whenever he gets the chance:

    Candidate chooses   Host opens   Final choice   Win   p
    Door 1              Door 2       Door 3         no    1/6
    Door 1              Door 3       Door 2         no    1/6
    Door 2              Door 3       Door 1         yes   1/3
    Door 3              Door 2       Door 1         yes   1/3
    Sum = 1             Sum of cases where candidate wins: 2/3
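The case analysis above can also be checked mechanically. The following Monte Carlo sketch (ours, not part of the presentation) simulates the game under both policies and confirms that sticking wins about 1/3 of the time and switching about 2/3.

```python
import random

def play(switch, rng):
    """One round of Monty Hall; returns True if the candidate wins."""
    doors = [1, 2, 3]
    prize = rng.choice(doors)
    chosen = rng.choice(doors)
    # The host opens a door that is neither chosen nor the prize door;
    # if he has a choice of two "goat doors", he picks one at random.
    openable = [d for d in doors if d != chosen and d != prize]
    opened = rng.choice(openable)
    if switch:
        chosen = next(d for d in doors if d != chosen and d != opened)
    return chosen == prize

n = 100_000
rng = random.Random(0)
stick_rate = sum(play(False, rng) for _ in range(n)) / n
rng = random.Random(0)
switch_rate = sum(play(True, rng) for _ in range(n)) / n
# stick_rate is close to 1/3, switch_rate close to 2/3
```

Reusing the same seed makes the two runs see identical prize/choice/host draws, so the two rates are exactly complementary: the switcher wins precisely the rounds the sticker loses.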

10
Key Issues
  • The existing languages of probability do not
    really give us the syntax to express certain
    knowledge about the problem.
  • A lot of the reasoning is done by the human.
  • Our goal: Develop a knowledge representation
    language and a reasoning system such that, once we
    express our knowledge in that language, the system
    can do the desired reasoning.
  • P-log is such an attempt.

11
Representing the Monty Hall problem in P-log.
  • doors = {1, 2, 3}.
  • open, selected, prize : doors.
  • ¬can_open(D) :- selected = D.
  • ¬can_open(D) :- prize = D.
  • can_open(D) :- not ¬can_open(D).
  • pr(open = D |c can_open(D), can_open(D1), D ≠ D1)
    = 1/2.
  • By default pr(open = D |c can_open(D)) = 1 when
    there is no D1 such that
    can_open(D1) and D ≠ D1.
  • random(prize). random(selected).
  • random(open : {X : can_open(X)}).
  • pr(prize = D) = 1/3.
  • pr(selected = D) = 1/3.
  • obs(selected = 1). obs(open = 2). obs(prize ≠ 2).
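The intended reading of this encoding can be sketched in ordinary Python: enumerate the possible worlds (assignments to prize, selected, open that respect can_open), weight each by the product of the pr-atoms, condition on the observations, and read off the answer. This is our own illustrative rendering of the semantics, not a P-log interpreter.

```python
from fractions import Fraction

doors = (1, 2, 3)

def can_open(d, selected, prize):
    # Monty never opens the selected door or the prize door.
    return d != selected and d != prize

# Build the possible worlds with their unnormalized measures.
worlds = []  # (prize, selected, opened, measure)
for prize in doors:
    for selected in doors:
        openable = [d for d in doors if can_open(d, selected, prize)]
        for opened in openable:
            # pr(prize) * pr(selected) * pr(open | can_open ...):
            # 1/2 when Monty has two openable doors, 1 when only one.
            m = Fraction(1, 3) * Fraction(1, 3) * Fraction(1, len(openable))
            worlds.append((prize, selected, opened, m))

# Condition on the observations: selected = 1, open = 2, prize != 2.
obs = [w for w in worlds if w[1] == 1 and w[2] == 2 and w[0] != 2]
total = sum(w[3] for w in obs)
p_switch_wins = sum(w[3] for w in obs if w[0] == 3) / total
p_stick_wins = sum(w[3] for w in obs if w[0] == 1) / total
# Exact rationals: p_switch_wins = 2/3, p_stick_wins = 1/3
```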

12
General Syntax of P-log
  • Sorted signature:
  • objects and function symbols (term-building
    functions and attributes)
  • Declarations:
  • definition of sorts and typing information for
    attributes
  • E.g., doors = {1, 2, 3}. open, selected, prize :
    doors.
  • Regular part: collection of AnsProlog rules
  • Random selection:
  • [r] random(a(t) : {Y : p(Y)}) :- B.
  • Probabilistic information:
  • pr_r(a(t) = y |c B) = v.
  • Observations and actions:
  • obs(l).
  • do(a(t) = y).

13
Semantics: the main ideas
  • The logical part is translated into an AnsProlog
    program.
  • The answer sets correspond to possible worlds.
  • The probabilistic part is used to define a
    measure over the possible worlds.
  • It is then used in defining the probability of
    formulas, and
  • conditional probabilities.
  • Consistency conditions.
  • Sufficiency conditions for consistency.
  • Bayes nets and Pearl's causal models are
    special cases of P-log programs.

14
Rifleman Example -- various P-log encodings
(Encoding 1)
  • U, V, C, A, B, D : boolean
  • random(U). random(V). random(C). random(A).
    random(B). random(D).
  • pr(U) = p. pr(V) = q.
  • pr(C |c U) = 1. pr(A |c C) = 1. pr(A |c V) = 1.
  • pr(B |c C) = 1. pr(D |c A) = 1. pr(D |c B) = 1.
  • pr(C |c U) = 1. pr(A |c V, C) = 1.
  • pr(B |c C) = 1. pr(D |c A, B) = 1.
  • Prediction and explanation work as expected.
  • Formulating counterfactual reasoning: work in
    progress.
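The prediction and explanation queries mentioned above can likewise be answered by enumerating the four worlds over U and V, using the structural equations from slide 4. The closed forms in the comments follow from those equations; the function name is ours, a sketch rather than the system's actual output.

```python
def rifleman_queries(p, q):
    """Two queries over the firing-squad model:
    prediction:  P(D = 1)          -- how likely is the prisoner to die?
    explanation: P(U = 1 | D = 1)  -- given the death, did the court order it?
    """
    p_d = 0.0
    p_u_and_d = 0.0
    for u in (0, 1):
        for v in (0, 1):
            w = (p if u else 1 - p) * (q if v else 1 - q)
            c = u              # C = U
            a = c or v         # A = C or V
            b = c              # B = C
            d = a or b         # the prisoner dies iff U or V holds
            if d:
                p_d += w
                if u:
                    p_u_and_d += w
    return p_d, p_u_and_d / p_d

p, q = 0.7, 0.2
p_d, p_u_given_d = rifleman_queries(p, q)
# Closed forms: P(D) = p + q - p*q, and P(U | D) = p / (p + q - p*q),
# since U = 1 already guarantees the prisoner's death.
```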

15
Conclusion: highlights of progress
  • Modules and generalizations
  • Travel module: first milestone reached.
  • Generalization of the methodology: in progress.
  • Developing a couple of other modules: about to
    start.
  • Further generalizing the process: next step.
  • AnsProlog enhancements and their use in various
    kinds of reasoning
  • P-log and reasoning using it -- in progress.
  • CR-Prolog (consistency-restoring Prolog): in
    progress.
  • GUIs, Modular AnsProlog, etc.: in progress.