PAC Learning - PowerPoint PPT Presentation

About This Presentation
Title: PAC Learning

Description: Provides insight into what can be learned, given computational constraints ... Hard: induction of the finite automaton given example strings that it accepts ...

Number of Views:60
Avg rating:3.0/5.0
Slides: 16
Provided by: omida
Transcript and Presenter's Notes

1
PAC Learning
2
Computational Learning Theory
  • Modeling and understanding capabilities of
    computational learning systems
  • Provides insight into what can be learned, given
    computational constraints
  • Identifies limits of learning (for particular
    learning approaches)
  • Suggests fruitful avenues of research

3
Overview
  • PAC learning
  • Two examples of analysis of learnability
  • Boolean functions on n variables
  • k-bounded decision lists

4
PAC learning
  • PAC = Probably Approximately Correct
  • A subfield of computational learning theory
  • Explains why/how learning works and the limits of
    some forms of learning
  • Motivation: how may a machine learn to cope with
    a world too complex to describe or understand?

5
PAC learning (cont.)
  • One possibility: just learn a function/behavior
    that is usually successful
  • But we don't know the target function
  • How can we be sure the learned function will
    usually be correct?

6
PAC (cont.)
  • Answer: a hypothesis that is successful on the
    training (seen) examples is likely to be
    successful on unseen ones
  • Underlying assumption: training examples are
    representative of future (test) examples
  • Formally: the training and test examples are
    drawn from the same distribution

7
Formulating Success in Learning
  • Assume a Boolean classification problem
  • f() is the ideal (target) function
  • error(h) = probability that hypothesis h
    misclassifies an example
  • error(h) = Pr(h(x) ≠ f(x)), where x is drawn
    from distribution D
  • h() is approximately correct if error(h) < ε, for
    some tiny error tolerance ε > 0
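The definition above can be made concrete by estimating error(h) empirically: sample inputs from the distribution D and count disagreements between h and f. A minimal sketch, where the target f, the hypothesis h, and the sampler are all illustrative choices, not from the slides:

```python
import random

# Illustrative target and hypothesis over two Boolean attributes.
def f(x):
    return x[0] and x[1]      # "ideal" target function

def h(x):
    return x[0]               # learned hypothesis

def empirical_error(h, f, draw, m=100_000):
    """Estimate error(h) = Pr_{x~D}[h(x) != f(x)] by sampling m inputs."""
    return sum(h(x) != f(x) for x in (draw() for _ in range(m))) / m

random.seed(0)
draw = lambda: (random.random() < 0.5, random.random() < 0.5)  # D = uniform
err = empirical_error(h, f, draw)
# h errs exactly when x[0] is True and x[1] is False, so err is near 0.25
```

Here D is taken to be uniform over the four inputs; with any other D the same estimator applies, which is why the PAC guarantee is stated relative to a fixed but arbitrary distribution.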

8
Desiderata (constraints)
  • The learning algorithm should need only a small
    number of training examples
  • Take little time to process the examples and
    produce a hypothesis
  • Produce a hypothesis that is likely (e.g., with
    probability at least 1 − δ) approximately correct
  • What is small or fast? Polynomial (in the number
    of examples, the number of features, and the
    error tolerance)

9
Learnability
  • Existence of such algorithms depends on:
  • The expressiveness of the hypothesis space that
    the algorithm searches through
  • Whether that class contains f() (or a good
    approximation to it)
  • A hypothesis class C is PAC-learnable if such an
    algorithm exists for every target f() in C

10
Number of Examples
  • Strategy: compute the number of examples m such
    that any hypothesis consistent with all m
    examples is probably approximately correct
  • What is a good upper bound on m?
  • (derivation in class)
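The standard result of this derivation, for a finite hypothesis class H and a learner that outputs any hypothesis consistent with the sample, is the bound m ≥ (1/ε)(ln|H| + ln(1/δ)). A sketch (the function name and the example class are illustrative):

```python
import math

def sample_bound(hyp_count, eps, delta):
    """Smallest m with m >= (1/eps) * (ln|H| + ln(1/delta)).

    Any hypothesis consistent with m such examples then has
    error < eps with probability at least 1 - delta.
    """
    return math.ceil((math.log(hyp_count) + math.log(1 / delta)) / eps)

# e.g. conjunctions over n = 10 Boolean variables: |H| = 3^10
# (each variable appears positive, negative, or not at all)
m = sample_bound(3 ** 10, eps=0.1, delta=0.05)
```

Note that m grows only logarithmically in |H|, which is why classes with singly-exponential size (like conjunctions) need only polynomially many examples.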

11
Unrestricted Boolean Functions
  • Too many Boolean functions!
  • Too many remain consistent with a polynomial-size
    set of examples
  • Solution: assume the target function is in a
    smaller class of functions
  • What if it wasn't? (Try another class)
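"Too many" can be made precise: there are 2^(2^n) Boolean functions on n variables, so ln|H| = 2^n ln 2 and the consistent-learner sample bound becomes exponential in n. A quick check of the count (function name is illustrative):

```python
# Each of the 2^n input rows of the truth table can be labeled 0 or 1
# independently, giving 2^(2^n) distinct Boolean functions.
def num_boolean_functions(n):
    return 2 ** (2 ** n)

counts = [num_boolean_functions(n) for n in range(1, 5)]
# n = 1..4 gives 4, 16, 256, 65536 functions
```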

12
Example Learnable Classes
  • Disjunctions or conjunctions
  • Linear threshold functions (perceptrons)
  • k-DLs (k-bounded decision lists)
  • Not known: disjunctive normal form (DNF)
  • Hard: induction of a finite automaton given
    example strings that it accepts (but learnable if
    the learner is allowed to ask queries)

13
Decision Lists
  • Similar to decision trees
  • A list of conjunctive implications (rules)
  • Go through the list until a conjunct matches
  • Unrestricted decision lists can represent any
    Boolean function
  • Restricted class: each conjunct tests no more
    than k attributes (e.g., k = 2)
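The "go through the list until a conjunct matches" semantics can be sketched directly. The representation here, a list of (conjunct, label) pairs with a default label, where a conjunct maps attribute indices to required Boolean values, is an assumption for illustration, not taken from the slides:

```python
# A k-DL: each conjunct is a dict {attribute index: required value}
# with at most k entries; the first matching rule decides the label.
def evaluate_dl(dlist, default, x):
    """Return the label of the first conjunct satisfied by input x."""
    for conjunct, label in dlist:
        if all(x[i] == v for i, v in conjunct.items()):
            return label
    return default  # no rule matched

# A 2-DL: "if x0 and not x1 -> True; elif x2 -> False; else True"
dl = [({0: True, 1: False}, True), ({2: True}, False)]
evaluate_dl(dl, True, [True, False, True])   # first rule matches -> True
evaluate_dl(dl, True, [False, False, True])  # second rule matches -> False
```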

14
Learnability of K-DL
  • The number of k-DLs is limited
  • So the number of bad k-DLs is limited
  • A polynomial number of examples suffices to rule
    out the bad hypotheses
  • Intuitively: limit the amount of interaction
    among features
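The counting behind "the number of k-DLs is limited" can be sketched: a conjunct chooses at most k of the n attributes, each appearing positive or negative, so there are O(n^k) possible conjuncts for fixed k, and hence the log of the number of k-DLs is polynomial in n, which is what the sample bound needs. A small check (function name is illustrative):

```python
import math

def num_conjunctions(n, k):
    """Conjunctions of at most k literals over n Boolean attributes:
    choose i <= k attributes, each negated or not."""
    return sum(math.comb(n, i) * 2 ** i for i in range(k + 1))

# The number of k-DLs is at most 3^c * c! where c = num_conjunctions(n, k)
# (each conjunct labeled yes/no/absent, then ordered), so ln of the count
# is polynomial in n for fixed k.
c = num_conjunctions(10, 2)  # n = 10, k = 2
```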

15
Summary
  • PAC learning formalizes/quantifies inductive
    learnability
  • Brings into focus the trade-off between
    expressiveness of a hypothesis language and
    learnability
  • Some classes are (purely inductively) learnable,
    but many other basic ones may not be
  • Need other theories to explain learning phenomena
    where pure inductive learning is hard