Transcript and Presenter's Notes

Title: Vehicle Detection Using Adaboost


1
Vehicle Detection Using Adaboost
  • Shane Brennan
  • UCSC
  • June 2005

2
Interface
  • Input: A set of features (user-defined), an
    array of positive image samples, an array of
    negative image samples, a distance function, and
    either a target detection rate for the cascade OR
    a target false detection rate for the cascade.
  • Output: A strong classifier using each of the
    features given as input (a sketch of this
    interface follows below).
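A minimal Python sketch of this interface, assuming the names below (train_cascade, WeakLearner, DistanceFn); they are illustrative, not the presenter's actual code:

from dataclasses import dataclass
from typing import Callable, List, Optional

import numpy as np

Image = np.ndarray        # a grayscale image sample
Feature = np.ndarray      # a K-bin reference histogram (see slide 3)
DistanceFn = Callable[[Image, Feature], float]

@dataclass
class WeakLearner:
    feature: Feature
    threshold: float
    parity: int           # +1 or -1 (see slide 7)
    error: float          # used to order learners in the cascade

def train_cascade(features: List[Feature],
                  positives: List[Image],
                  negatives: List[Image],
                  distance: DistanceFn,
                  target_detection_rate: Optional[float] = None,
                  target_false_positive_rate: Optional[float] = None) -> List[WeakLearner]:
    """Return one weak learner per input feature, ordered by error."""
    ...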

3
Feature Representation
  • Features are not Haar-type filters.
  • Instead, find the edge map of an image (using
    Sobel), and transform this edge map into a
    directional-edge histogram.
  • The feature will be a histogram consisting of K
    bins. Each bin will contain the relative
    frequency of edges whose direction falls within
    an angular range of 360 / K degrees (see the
    sketch below).
  • This will (hopefully) work because cars have
    strong edges in only two directions, while the
    background will have edges in many directions.
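One way this histogram could be computed, as a rough sketch using SciPy's Sobel filters; the bin count K and the edge-magnitude threshold are assumptions, not values from the slides:

import numpy as np
from scipy import ndimage

def edge_orientation_histogram(image: np.ndarray, k: int = 16,
                               mag_threshold: float = 0.1) -> np.ndarray:
    """K-bin directional-edge histogram of an image (slide 3).

    Sobel gradients give an edge magnitude and angle per pixel; pixels whose
    magnitude exceeds an (assumed) threshold vote for the bin covering their
    angle, and the histogram is normalised to relative frequencies.
    """
    img = image.astype(float)
    gy = ndimage.sobel(img, axis=0)
    gx = ndimage.sobel(img, axis=1)
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0     # angles in [0, 360)

    strong = magnitude > mag_threshold * magnitude.max()
    bins = (angle[strong] // (360.0 / k)).astype(int)
    hist = np.bincount(bins, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)                 # relative frequencies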

4
Features and Distance Function
  • Features are selected before Adaboost training.
  • These features can be selected by the user to
    take advantage of the brain's ability to
    discriminate between objects and non-objects.
  • The distance function will return a scalar result
    based on the distance of a given image from a
    given feature (one possible metric is sketched
    below).
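The slides do not say which metric is used; as one plausible choice, the sketch below takes the Euclidean distance between the image's edge-orientation histogram (from the slide-3 sketch) and the feature histogram:

import numpy as np

def histogram_distance(image: np.ndarray, feature_hist: np.ndarray) -> float:
    """Scalar distance between an image and a feature.

    Euclidean distance between K-bin histograms is an assumption; any metric
    returning a scalar would fit the interface described above.
    """
    img_hist = edge_orientation_histogram(image, k=feature_hist.size)
    return float(np.linalg.norm(img_hist - feature_hist))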

5
The Creation of a Weak Learner
  • Given: A single feature, the sets of positive
    and negative images, and a positive detection
    rate or a false positive rate.
  • Provide: A weak learner with a threshold that
    satisfies the conditions given. Also return an
    error value which will be used later to rank the
    weak learners (an outline follows below).
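A hypothetical outline of this weak-learner constructor, tying together the helpers sketched on the following slides; only the target-detection-rate path is shown, and the error definition (fraction of misclassified training samples) is an assumption:

def train_weak_learner(feature, positives, negatives, distance,
                       detection_rate=0.9):
    """Build one weak learner for a single feature (slides 6-10)."""
    pos_d, neg_d, med_pos, med_neg = median_distances(
        feature, positives, negatives, distance)
    parity = choose_parity(med_pos, med_neg)
    threshold = initial_threshold(pos_d, parity, detection_rate)
    threshold = relax_threshold(threshold, parity, neg_d)
    # Assumed error definition: fraction of training samples misclassified
    # under the rule parity * distance >= parity * threshold.
    wrong = (sum(parity * d < parity * threshold for d in pos_d) +
             sum(parity * d >= parity * threshold for d in neg_d))
    error = wrong / (len(pos_d) + len(neg_d))
    return WeakLearner(feature, threshold, parity, error)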

6
Distances
  • The algorithm: Call the distance function on each
    positive and negative image. From this, create
    two arrays: negative_distances and
    positive_distances.
  • These arrays are then sorted (using any sorting
    algorithm; quicksort is presently used).
  • From the sorted arrays, the median distance for
    positive and negative samples is found by simply
    reading the midpoint of the array, so
    median_positive = positive_distances[size / 2]
    (see the sketch below).
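A minimal sketch of this step; numpy's sort stands in for the quicksort mentioned above:

import numpy as np

def median_distances(feature, positives, negatives, distance):
    """Sorted distance arrays and their medians for one feature (slide 6)."""
    positive_distances = np.sort([distance(img, feature) for img in positives])
    negative_distances = np.sort([distance(img, feature) for img in negatives])
    median_positive = positive_distances[len(positive_distances) // 2]
    median_negative = negative_distances[len(negative_distances) // 2]
    return positive_distances, negative_distances, median_positive, median_negative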

7
Parity
  • Given the median values of the arrays, the
    parity of the weak learner can be determined.
  • The parity simply indicates whether the positive
    samples have a higher distance value than the
    negative samples, or vice versa.
  • Parity = 1 indicates that the median of the
    positive values is greater than the median of the
    negative values.
  • Parity = -1 indicates that the median of the
    positive values is lower than the median of the
    negative values.
  • This aids in thresholding (see the sketch below).
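The parity choice reduces to a single comparison of the two medians; a sketch:

def choose_parity(median_positive: float, median_negative: float) -> int:
    """+1 when positives lie at larger distances than negatives, else -1."""
    return 1 if median_positive > median_negative else -1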

8
(No Transcript)
9
Thresholding
  • The threshold is created using the sorted
    distance arrays (negative_distances and
    positive_distances) and either a target detection
    rate or a target false positive rate.
  • For a target detection rate of 90%, the threshold
    is initially set to positive_distances[size *
    0.9].
  • This ensures that 90% of the positive training
    samples would be classified correctly.
  • However, this is a rather tight threshold, and
    can be improved upon (see the sketch below).
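A sketch of the initial threshold. The slide gives only the positive_distances[size * 0.9] index; handling both parities as below is my reading, not something stated explicitly:

import numpy as np

def initial_threshold(positive_distances: np.ndarray, parity: int,
                      detection_rate: float = 0.9) -> float:
    """Threshold classifying the requested fraction of positives correctly.

    With parity = -1 (positives at smaller distances) the slide's index
    formula is used directly; with parity = +1 it is mirrored so that the
    same fraction of positives satisfies parity * d >= parity * threshold.
    """
    n = len(positive_distances)          # array is assumed sorted ascending
    if parity == -1:
        idx = min(int(n * detection_rate), n - 1)
    else:
        idx = max(int(n * (1.0 - detection_rate)), 0)
    return float(positive_distances[idx])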

10
Thresholding, continued
  • If the positive and negative samples are
    distributed far apart, the detection rate can be
    increased without increasing the false detection
    rate.
  • The threshold is incremented (or decremented,
    depending on the parity) by small amounts until a
    negative sample is misclassified (changing the
    false detection rate).
  • The final threshold is the midpoint between the
    initial threshold and this new threshold.
  • This will hopefully provide better classification
    (a sketch follows below).
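A sketch of the relaxation step; the step size is an assumption, and the loop assumes at least one negative sample so it terminates:

import numpy as np

def relax_threshold(threshold: float, parity: int,
                    negative_distances: np.ndarray,
                    step: float = 1e-3) -> float:
    """Move the threshold toward the negatives until one would be
    misclassified, then return the midpoint between the initial and the
    new threshold (slide 10)."""
    t = threshold
    # A negative is misclassified when parity * d >= parity * t.
    while not np.any(parity * negative_distances >= parity * t):
        t -= parity * step     # parity = -1: raise t; parity = +1: lower t
    return 0.5 * (threshold + t)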

11
(No Transcript)
12
Creating a Cascade
  • From the list of features, call the Weak Learner
    function for each feature.
  • Order the Weak Learners based on their error,
    with the lowest error being first in the list.
  • The first item in the cascade will be the most
    discriminative (lowest-error) feature, the second
    item will be the second most discriminative
    feature, and so on (see the sketch below).
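Ordering the weak learners by their training error is a one-line sort; a sketch:

def build_cascade(weak_learners):
    """Most discriminative (lowest-error) weak learner first (slide 12)."""
    return sorted(weak_learners, key=lambda wl: wl.error)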

13
Evaluation
  • To classify an input image as containing an
    object, the image must pass through each weak
    learner in the cascade.
  • To pass through a weak learner, the distance of
    the input image must satisfy the following
    condition: parity * distance >= parity *
    threshold.
  • As can be seen, I consider images that have a
    distance equal to the threshold to be positive
    (see the sketch below).
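A sketch of cascade evaluation under the rule above; an image is labelled positive only if every weak learner accepts it:

def classify(image, cascade, distance) -> bool:
    """Pass the image through each weak learner in the cascade (slide 13)."""
    for wl in cascade:
        d = distance(image, wl.feature)
        if not (wl.parity * d >= wl.parity * wl.threshold):
            return False          # rejected as soon as one stage fails
    return True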

14
Making the Detector Invariant
  • The feature representation is orientation
    invariant, so we need not create separate
    cascades for each possible orientation.
  • The feature representation is also invariant to
    changes in lighting, assuming that we can
    successfully detect edges at the given lighting
    level.
  • The image is scanned at many different
    resolutions to detect vehicles at different
    scales. This need not be too precise, as the
    feature representation is scale invariant (a
    sketch of the scan follows below).
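A rough sketch of the multi-resolution scan; the window size, decimation factors, and stride are assumptions, and simple decimation stands in for proper resampling:

import numpy as np

def scan_at_scales(image: np.ndarray, cascade, distance,
                   window: int = 64, decimations=(1, 2, 4), stride: int = 16):
    """Slide a fixed window over the image at several resolutions and
    return the windows the cascade accepts, in original-image coordinates."""
    detections = []
    for d in decimations:
        small = image[::d, ::d]                  # crude downscaling
        h, w = small.shape
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                patch = small[y:y + window, x:x + window]
                if classify(patch, cascade, distance):
                    detections.append((x * d, y * d, window * d))
    return detections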

15
Future Work
  • The thresholding scheme could use improvement; I
    am still thinking about ideas for this.
  • Instead of finding parity (and the threshold)
    based on median distance values, I could do it
    using mean distance values and standard
    deviations. This would have the undesirable
    consequence of making the weak learners more
    vulnerable to outliers, as well as to mislabeled
    training images.

16
Future Work, continued
  • We wish to use image-based features (parts of the
    vehicle) as well as structural-based features
    (relationships between the parts of the vehicle).
    However, it is very difficult to obtain negative
    images for the structural-based features, so we
    cannot use traditional AdaBoost for the training
    of these types of features. Instead, we must find
    another method to create classifiers for these
    features.

17
  • Thank You