Chapter 8: Instance Based Learning
Provided by: csSung
1
Chapter 8 Instance Based Learning
2
Abstract
  • Learning method
  • store the training examples vs. construct an
    explicit general target function
  • Nearest neighbor
  • Locally weighted regression
  • Radial basis functions
  • Case-based reasoning
  • Lazy learning method
  • local estimation of the target function
  • cf. eager methods estimate one target function
    for the entire instance space

3
INTRODUCTION
  • Learning = storing the presented training data
  • Local approximation
  • a complex target function is approximated by a
    collection of less complex local functions
  • Disadvantage
  • the cost of classifying new instances can be high

4
Basic Schema
  • Concept: attribute-value pairs
  • Instance: a point described by those attributes
  • Prototypes: stored in the same form as instances

5
K-Nearest Neighbor Learning
  • Instance
  • a point in the n-dimensional space
  • feature vector <a1(x), a2(x), ..., an(x)>
  • distance (Euclidean)
  • d(xi, xj) = sqrt(Σr (ar(xi) − ar(xj))²)
  • target function: discrete-valued or real-valued

6
  • Training algorithm
  • For each training example (x, f(x)), add the
    example to the list training_examples
  • Classification algorithm
  • Given a query instance xq to be classified,
  • Let x1 ... xk denote the k instances from
    training_examples that are nearest to xq
  • Return the majority vote
  • f̂(xq) = argmax_v Σi δ(v, f(xi)),
    where δ(a, b) = 1 if a = b, else 0
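The training and classification steps above can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides; the names `euclidean` and `knn_classify` are mine.

```python
import math
from collections import Counter

def euclidean(xi, xj):
    # d(xi, xj) = sqrt(sum_r (a_r(xi) - a_r(xj))^2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))

def knn_classify(training_examples, xq, k=3):
    # training_examples: list of (feature_vector, label) pairs;
    # "training" is just storing this list (lazy learning)
    nearest = sorted(training_examples, key=lambda ex: euclidean(ex[0], xq))[:k]
    # return the most common label among the k nearest neighbors
    return Counter(label for _, label in nearest).most_common(1)[0][0]

examples = [((0, 0), 'neg'), ((1, 0), 'neg'),
            ((5, 5), 'pos'), ((6, 5), 'pos'), ((5, 6), 'pos')]
print(knn_classify(examples, (4, 4), k=3))  # 'pos'
```

Note that all generalization work happens at query time: `knn_classify` scans the stored examples for every new xq, which is exactly the classification-cost disadvantage mentioned earlier.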

7
Distance-Weighted k-NN Algorithm
  • Give greater weight to closer neighbors
  • discrete case
  • f̂(xq) = argmax_v Σi wi δ(v, f(xi)),
    with wi = 1 / d(xq, xi)²
  • real case
  • f̂(xq) = Σi wi f(xi) / Σi wi
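Both weighted variants can be sketched as follows (a hedged illustration, not the slides' code; if xq coincides with a stored instance, its value is returned directly to avoid division by zero).

```python
import math
from collections import defaultdict

def dw_knn_discrete(examples, xq, k=3):
    # f̂(xq) = argmax_v Σi wi δ(v, f(xi)), with wi = 1 / d(xq, xi)²
    nearest = sorted(examples, key=lambda ex: math.dist(ex[0], xq))[:k]
    votes = defaultdict(float)
    for x, label in nearest:
        d = math.dist(x, xq)
        if d == 0:
            return label            # exact match: use f(xi) directly
        votes[label] += 1.0 / d ** 2
    return max(votes, key=votes.get)

def dw_knn_real(examples, xq, k=3):
    # f̂(xq) = Σi wi f(xi) / Σi wi  (distance-weighted average)
    nearest = sorted(examples, key=lambda ex: math.dist(ex[0], xq))[:k]
    num = den = 0.0
    for x, f in nearest:
        d = math.dist(x, xq)
        if d == 0:
            return f
        w = 1.0 / d ** 2
        num += w * f
        den += w
    return num / den
```

For example, `dw_knn_real([((0.0,), 0.0), ((2.0,), 2.0)], (1.0,), k=2)` weights both neighbors equally and returns 1.0.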

8
Remarks on the k-NN Algorithm
  • Robust to noisy training data
  • Effective given a sufficiently large set of
    training data
  • Distance uses all instance attributes, so it can
    be dominated by irrelevant attributes
  • remedy: weight each attribute differently
  • Efficient retrieval: index the stored training
    examples
  • e.g., with a kd-tree
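The attribute-weighting remedy amounts to stretching each axis of the instance space by a per-attribute factor. A minimal sketch (my own illustration; `weighted_distance` and the sample values are assumptions):

```python
import math

def weighted_distance(xi, xj, attr_weights):
    # stretch axis r by z_r; z_r = 0 eliminates an irrelevant attribute:
    # d(xi, xj) = sqrt(sum_r z_r * (a_r(xi) - a_r(xj))^2)
    return math.sqrt(sum(z * (a - b) ** 2
                         for z, a, b in zip(attr_weights, xi, xj)))

# attribute 1 is informative; attribute 2 is pure noise
x1, x2 = (1.0, 50.0), (1.1, -40.0)
print(weighted_distance(x1, x2, (1.0, 1.0)))  # noise axis dominates
print(weighted_distance(x1, x2, (1.0, 0.0)))  # ~0.1, noise eliminated
```

In practice the stretch factors can be chosen by cross-validation on the training data.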

9
Locally Weighted Regression
  • Local
  • the function is approximated based only on data
    near the query point.
  • Weighted
  • the contribution of each training example is
    weighted by its distance from the query point.
  • Regression
  • approximating a real-valued target function

10
Locally Weighted Linear Regression
  • linear hypothesis
  • f̂(x) = w0 + w1 a1(x) + ... + wn an(x),
    where ai(x) is the ith attribute of instance x
  • minimize the squared error sum
  • E = ½ Σ_{x ∈ D} (f(x) − f̂(x))²,
    where D is the training set
  • gradient descent training rule
  • Δwj = η Σ_{x ∈ D} (f(x) − f̂(x)) aj(x),
    where η is a learning rate

11
A Local Approximation
  • 1. minimize the squared error over the k nearest
    neighbors
  • E1(xq) = ½ Σ_{x ∈ kNN(xq)} (f(x) − f̂(x))²
  • 2. minimize the squared error over the entire set
    D, weighting each example by a kernel K of its
    distance from xq
  • E2(xq) = ½ Σ_{x ∈ D} (f(x) − f̂(x))² K(d(xq, x))
  • 3. combine 1 and 2
  • E3(xq) = ½ Σ_{x ∈ kNN(xq)} (f(x) − f̂(x))² K(d(xq, x))
  • with training rule
  • Δwj = η Σ_{x ∈ kNN(xq)} K(d(xq, x)) (f(x) − f̂(x)) aj(x)
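Instead of gradient descent, the kernel-weighted squared error E2 can be minimized directly with weighted least squares. A sketch of that approach (my own, assuming a Gaussian kernel and the hypothetical names `gaussian_kernel` and `lwr_predict`):

```python
import numpy as np

def gaussian_kernel(d, sigma=1.0):
    # K(d) = exp(-d^2 / (2 sigma^2)): contribution decays with distance
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def lwr_predict(X, y, xq, sigma=1.0):
    # augment with a constant column for the intercept w0
    A = np.hstack([np.ones((len(X), 1)), X])
    d = np.linalg.norm(X - xq, axis=1)
    K = gaussian_kernel(d, sigma)                  # per-example weights
    # minimize Σ K(d(xq,x)) (f(x) − w·a(x))² via sqrt-weighted least squares
    sw = np.sqrt(K)
    w, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return w[0] + w[1:] @ xq

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])                 # f(x) = x
print(lwr_predict(X, y, np.array([1.5])))          # ≈ 1.5
```

Because a fresh set of weights w is fitted for every query point xq, this is a local approximation: the linear model is only trusted near xq.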

12
Radial Basis Functions
  • Combines distance-weighted regression and ANNs
  • f̂(x) = w0 + Σ_{u=1..k} wu Ku(d(xu, x)),
    where xu is an instance from X and
    Ku(d(xu, x)) is a kernel function
  • The contribution from each Ku(d(xu, x)) term is
    localized to a region near the point xu; a common
    choice is the Gaussian
  • Ku(d(xu, x)) = exp(−d(xu, x)² / (2σu²))
  • Corresponding two-layer network
  • the first layer computes the values of the
    various Ku(d(xu, x))
  • the second layer computes a linear combination of
    the first-layer unit values

13
RBF Networks
  • Training
  • 1. choose the kernel functions (number, centers
    xu, and widths σu)
  • 2. adjust the weights wu to fit the training data
  • RBF networks provide a global approximation to
    the target function, represented by a linear
    combination of many local kernel functions

14
Case-Based Reasoning
  • CBR
  • a lazy learning method
  • classifies new query instances by retrieving
    similar stored instances (cases)
  • cases use rich symbolic descriptions, not points
    in an n-dimensional space
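Since cases are symbolic rather than numeric, "nearest" must be defined over symbolic structure. One simple (assumed, not from the slides) similarity measure is the fraction of attribute-value pairs two cases share:

```python
def case_similarity(case_a, case_b):
    # cases as dicts of symbolic attribute-value pairs (not points in R^n):
    # similarity = fraction of attributes on which the two cases agree
    attrs = set(case_a) | set(case_b)
    matches = sum(1 for a in attrs if case_a.get(a) == case_b.get(a))
    return matches / len(attrs)

# hypothetical design cases in the spirit of CADET's water-flow devices
query  = {'function': 'regulate-flow', 'medium': 'water', 'actuation': 'manual'}
stored = {'function': 'regulate-flow', 'medium': 'water', 'actuation': 'float'}
print(case_similarity(query, stored))  # 2 of 3 attributes match
```

Real CBR systems such as CADET use much richer matching over graph-structured descriptions; this flat attribute comparison only illustrates the idea of symbolic similarity.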

15
The CADET System (case-based design of simple
mechanical devices, e.g., water faucets)
16
Contd
  • Library: cases stored as functional descriptions
  • Search
  • match subgraphs of the design problem's function
    graph against stored cases
  • rewrite rules based on general knowledge can
    transform the graph to enable further matches
  • merging the retrieved fragments into one design
    remains a hard problem
  • Target function
  • f maps function graphs to the structures that
    implement them

17
Remarks on Lazy and Eager Learning
  • Lazy learning: generalization at query time
  • k-NN
  • locally weighted regression
  • case-based reasoning
  • Eager learning: generalization at training time
  • radial basis function networks
  • Back-Propagation

18
Differences
  • Computation time
  • training: eager > lazy
  • query: eager < lazy
  • Classifications produced for new queries differ
  • Target function
  • eager: a single linear function that covers the
    entire instance space
  • lazy: a combination of many local approximations,
    formed at query time

19
Summary
  • Instance-based learning
  • forms a different local approximation for each
    query instance
  • k-NN
  • the target value is estimated from the known
    values of the k nearest neighbors
  • Locally weighted regression
  • builds an explicit local approximation to the
    target function
  • RBF networks
  • an ANN constructed from localized kernel functions
  • CBR
  • instances represented by symbolic (logical)
    descriptions