1
Using Image Priors in Maximum Margin Classifiers
  • Tali Brayer
  • Margarita Osadchy
  • Daniel Keren

2
Object Detection
  • Problem: locate instances of an object category in a given image.

Background                                          Object (category)
Very large                                          Relatively small
Complex (thousands of categories)                   Simple (single category)
Large prior to appear in an image                   Small prior
Easy to collect (not easy to learn from examples)   Hard to collect
3
Intuition
  • Denote H to be the acceptance region of a classifier. We propose to
    minimize Pr(all images) (≈ Pr(background)) inside H, except for the
    object samples (sketched below).

[Figure: the space of all images (≈ background), with the acceptance region H containing the object samples]
We have a prior on the distribution of all natural images.
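
A compact way to write this idea (a sketch, not necessarily the exact formulation from the talk): assume a prior density p(x) over natural images and object training samples x_1, ..., x_m that the classifier must accept; then the goal is roughly

    \min_{H}\ \int_{H} p(x)\,dx
    \qquad \text{subject to} \qquad x_i \in H,\quad i = 1,\dots,m,

where, for a linear classifier, H = {x : wᵀx + b ≥ 0}.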
4
Other work
  • Combine a small labeled training set with a large unlabeled set
    (semi-supervised learning): EM with generative mixture models, Fisher
    kernel, self-training, co-training, transductive SVM, and graph-based
    methods.
  • All good for the symmetric case, but we have more information: the
    marginal background distribution.
5
Distribution of Natural Images: Boltzmann-like
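
This claim can be written generically in Boltzmann (Gibbs) form; the specific energy used in the talk is not given in this transcript, so the quadratic energy below is only a common illustrative choice:

    p(x) \;\propto\; \exp\!\big(-E(x)\big),
    \qquad \text{e.g.}\quad E(x) = x^{\top} A\, x \ \ (A \succeq 0).

With a quadratic energy the prior is a zero-mean Gaussian, which is the approximation used in the sketches further below.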
6
Linear SVM
Maximal-margin classifier; assumes enough training data.
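
For reference, the standard soft-margin linear SVM that the following slides modify (textbook formulation, not specific to this work):

    \min_{w,b,\xi}\ \tfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{m}\xi_i
    \quad \text{s.t.}\quad y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0 .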
7
Linear SVM
[Figure: two classes separated by a maximal-margin hyperplane]
8
MM classifier with Prior
[Figure: the same two classes, with the maximum-margin boundary adjusted by the image prior]
9
  • Minimize the probability of natural images over H.
  • After some manipulations it reduces to an expression involving a
    quadratic form Q in the classifier parameters (a sketch under a Gaussian
    assumption follows).
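
One way to see why a quadratic form appears, assuming (for this sketch only) that the Boltzmann-like prior is approximated by a zero-mean Gaussian with covariance Σ and that H = {x : wᵀx + b ≥ 0}:

    \Pr_{x\sim\mathcal{N}(0,\Sigma)}\big(w^{\top}x + b \ge 0\big)
    = \Phi\!\left(\frac{b}{\sqrt{w^{\top}\Sigma\,w}}\right),

where Φ is the standard normal CDF; minimizing this probability amounts to controlling the quadratic form wᵀΣw (the role played by Q) together with the bias b.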
10

Relation between the number of random natural images falling in the positive
half-space and the integral of the prior over {x : w·x + b > 0} (an empirical
check is sketched below).
Random w with unit norm; random b drawn from [-0.5, 0.5].
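
A small Monte-Carlo check of this relation, assuming (only for this sketch) a zero-mean Gaussian prior with an illustrative covariance Sigma; it compares the empirical fraction of samples falling in the half-space w·x + b > 0 with the closed-form integral:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 64                                          # dimension of the vectorized patches
    Sigma = np.diag(1.0 / (1.0 + np.arange(n)))     # assumed prior covariance (illustrative)

    # Draw "natural images" from the assumed Gaussian prior.
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=20000)

    # Random classifier: unit-norm w, b uniform in [-0.5, 0.5], as on the slide.
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    b = rng.uniform(-0.5, 0.5)

    # Empirical fraction of samples in the positive half-space w.x + b > 0.
    empirical = np.mean(X @ w + b > 0)

    # Closed-form mass of the Gaussian prior over the same half-space.
    analytic = norm.cdf(b / np.sqrt(w @ Sigma @ w))

    print(f"empirical {empirical:.3f}  vs  analytic {analytic:.3f}")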
11
Training Algorithm
Probability constraint: the prior probability of the positive half-space is bounded by a small d > 0 (a sketch follows).
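
A sketch of what such a training problem could look like, written under the Gaussian approximation of the prior (the exact algorithm in the talk may differ): the soft-margin SVM is kept, and the mass of the prior in the positive half-space is bounded by d,

    \min_{w,b,\xi}\ \tfrac{1}{2}\|w\|^{2} + C\sum_i \xi_i
    \quad \text{s.t.}\quad
    y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \
    \Phi\!\left(\frac{b}{\sqrt{w^{\top}\Sigma\,w}}\right) \le d .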
12
Convex Constraint
The probability constraint can be written in a convex form (see the sketch below).
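
Under the same Gaussian assumption, the probability constraint is equivalent to b + Φ⁻¹(1−d)·||Σ^(1/2) w|| ≤ 0 (for d < 1/2), a second-order cone constraint, so the whole problem is convex. The sketch below sets this up with cvxpy; the toy data, the helper name train_mm_prior_linear, and Sigma_sqrt are placeholders, not the authors' code:

    import numpy as np
    import cvxpy as cp
    from scipy.stats import norm

    def train_mm_prior_linear(X, y, Sigma_sqrt, C=1.0, d=0.05):
        """Soft-margin linear SVM with a bound d on the (assumed Gaussian)
        prior mass of the positive half-space {x : w.x + b >= 0}."""
        m, n = X.shape
        w = cp.Variable(n)
        b = cp.Variable()
        xi = cp.Variable(m, nonneg=True)

        k = norm.ppf(1.0 - d)        # positive for d < 0.5, so the constraint is a cone
        constraints = [
            cp.multiply(y, X @ w + b) >= 1 - xi,      # margin constraints
            b + k * cp.norm(Sigma_sqrt @ w, 2) <= 0,  # probability constraint
        ]
        objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
        cp.Problem(objective, constraints).solve()
        return w.value, b.value

    # Toy usage: a few "object" samples (+1) and many "background" samples (-1),
    # with an identity prior covariance.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(2.0, 1.0, (50, 5)), rng.normal(-2.0, 1.0, (200, 5))])
    y = np.hstack([np.ones(50), -np.ones(200)])
    w_opt, b_opt = train_mm_prior_linear(X, y, Sigma_sqrt=np.eye(5))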
13
Results
  • Tested categories: cars (side view) and faces.
  • Training: 5/10/20/60/(all available) object images, plus all available
    background images.
  • Test:
  • Face set: 472 faces, 23,573 background images.
  • Car set: 299 cars, 10,111 background images.
  • Ran 50 trials for each set with different random choices of training
    data.
  • Weighted SVM was used to deal with the asymmetry in class sizes (a
    minimal example follows this slide).

Datasets: CBCL (faces), UIUC (cars, side view).
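
Weighted SVM here presumably means per-class weighting of the misclassification penalty; a minimal scikit-learn illustration with synthetic data (not the authors' setup):

    import numpy as np
    from sklearn.svm import SVC

    # Toy data: few object samples (+1), many background samples (-1).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(1.0, 1.0, (10, 4)), rng.normal(-1.0, 1.0, (500, 4))])
    y = np.hstack([np.ones(10), -np.ones(500)])

    # 'balanced' sets class weights inversely proportional to class frequencies,
    # so errors on the rare object class cost more than errors on background.
    clf = SVC(kernel="linear", C=1.0, class_weight="balanced").fit(X, y)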
14
Average recognition rate (%), faces (columns: number of training object images)

Method                 5      10     60     all
Weighted Linear SVM    70     72.5   75.2   77
Weighted Kernel SVM    69.7   72.6   79.6   83
MM_prior Linear        72.7   75     78     80.3
MM_prior Kernel        71.7   75.2   79.1   -
15
Average recognition rate (%), cars (columns: number of training object images)

Method                 5      10     60     all
Weighted Linear SVM    89.24  91.8   92.8   93.7
Weighted Kernel SVM    90     92.9   95.4   96
MM_prior Linear        91.3   93     94.3   95.3
MM_prior Kernel        89.4   93.2   95.8   -
16
Future Work
  • Video.
  • Explore additional and more robust features.
  • Refine the priors (using background examples).
  • Kernelization.