Foreground Focus: Finding Meaningful Features in Unlabeled Images

Transcript and Presenter's Notes
1
Foreground Focus: Finding Meaningful Features in Unlabeled Images
  • Yong Jae Lee and Kristen Grauman
  • University of Texas at Austin

2
  • Supervised learning methods yield good
    recognition performance in practice, but:
  • Supervision is expensive: collecting training
    examples, labeling, segmentation, etc.
  • Supervision has bias: the variability of the
    target data may not be captured (i.e., not
    general enough)

We propose an Unsupervised Foreground Detection
and Category Learning method based on image
clustering
3
Related Work
  • Unsupervised Category Discovery
  • Topic models: pLSA, LDA
  • - Fergus et al., Sivic et al., Quelhas et al.,
    ICCV 2005; Fei-Fei & Perona, CVPR 2005; Liu &
    Chen, ICCV 2007
  • Image Clustering
  • - Grauman & Darrell, CVPR 2006; Dueck & Frey,
    ICCV 2007
  • Image Clustering with Localization
  • - Kim et al., CVPR 2008
  • Supervised Feature Selection / Part Discovery
  • Discriminative Feature Selection
  • - Dorko & Schmid, ICCV 2003; Quack et al., ICCV
    2007
  • Weakly Supervised Learning
  • - Weber et al., ECCV 2000; Fergus et al., CVPR
    2003; Chum & Zisserman, CVPR 2007
  • Query Expansion
  • - Chum et al., ICCV 2007

4
Problem
5
Mutual Relationship between Foreground Features
and Clusters
  • If we have only foreground features, we can form
    good clusters

Clusters formed from full image matches
6
Mutual Relationship between Foreground Features
and Clusters
  • If we have good clusters, we can detect the
    foreground

7
Mutual Relationship between Foreground Features
and Clusters
  • If we have good clusters, we can detect the
    foreground
  • If we have only foreground features, we can form
    good clusters

8
Our Approach
  • Unsupervised task that iteratively seeks the
    mutual support between discovered objects and
    their defining features

Refine feature weights given current clusters
Update clusters based on weighted semi-local
feature matches
9
Sets of local features
10
Optimal Partial Matching
X = {(f_1(X), w_1), (f_2(X), w_2), ..., (f_n(X), w_n)}
Y = {(f_1(Y), w_1), (f_2(Y), w_2), ..., (f_m(Y), w_m)}

Earth Mover's Distance (Rubner et al., IJCV 2000):

EMD(X, Y) = ( Σ_{i,j} f_{ij} d_{ij} ) / ( Σ_{i,j} f_{ij} )

where f_i(X), f_j(Y) are features from sets X and Y, d_{ij} is the
distance between the descriptors, and f_{ij} are scalars giving the
amount of weight mapped from f_i(X) to f_j(Y).
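As a rough illustration of this matching (not the authors' implementation), the sketch below computes a greedy approximation of the weighted partial match between two feature sets: weight is routed along the cheapest descriptor distances first, and the cost is normalized by the total flow, mirroring the EMD above. An exact solution would instead solve the transportation problem of Rubner et al.; the function name greedy_partial_match is hypothetical.

```python
import numpy as np

def greedy_partial_match(X, wx, Y, wy):
    """Greedy approximation of the EMD-style partial matching.

    X, Y  : (n, d) and (m, d) arrays of feature descriptors
    wx, wy: per-feature weights (the totals need not be equal)
    Returns the flow matrix and the flow-normalized matching cost.
    """
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # descriptor distances d_ij
    wx, wy = wx.astype(float).copy(), wy.astype(float).copy()
    flow = np.zeros_like(D)
    # Route weight along the cheapest available pairs first (greedy, not exact EMD).
    for i, j in sorted(((i, j) for i in range(len(wx)) for j in range(len(wy))),
                       key=lambda ij: D[ij]):
        m = min(wx[i], wy[j])
        if m > 0:
            flow[i, j] = m
            wx[i] -= m
            wy[j] -= m
    total = flow.sum()
    cost = (flow * D).sum() / total if total > 0 else 0.0
    return flow, cost
```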
11
Feature Contribution to Match
[Figure: features f1(X), f2(X), f3(X) of image X matched to features f1(Y), f2(Y) of image Y]
12
Feature Contribution to Match
[Figure: flow between features of X and Y, with descriptor distances D(fi(X), fj(Y))]
A feature's contribution to the match is influenced by both the
flow (amount of mass transferred) and the distance between the
matching features: contribution = weight / distance
(a sketch of this computation follows below).
[Plot: contribution to match vs. feature index]
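A minimal sketch of the contribution computation described on this slide, assuming the flow and distance matrices produced by the matching step above: each feature's contribution is its transferred weight divided by the distance to its match, summed over all of its matches, so large flow at small distance gives a strong contribution. The helper name is hypothetical.

```python
import numpy as np

def feature_contributions(flow, D, eps=1e-8):
    """Contribution of each feature of X to the match with Y.

    flow: (n, m) weight mapped from features of X to features of Y
    D   : (n, m) distances between the matched descriptors
    Contribution = weight / distance, summed over each feature's matches.
    """
    return (flow / (D + eps)).sum(axis=1)
```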
17
Mutual Relationship between Foreground Features
and Clusters
  • If we have good clusters, we can detect the
    foreground
  • If we have only foreground features, we can form
    good clusters

18
Computing Feature Weights
Each feature's weight is refined according to its contribution to the
current matches (see the sketch below)
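The slides show only the phrase above; the sketch below is one plausible reading, consistent with slides 8 and 52: a feature's new weight is its average contribution over the matches between its image and the other images currently in the same cluster, rescaled so the image keeps its original total weight. The exact update rule here is an assumption.

```python
import numpy as np

def update_feature_weights(w_old, contributions):
    """Refine one image's feature weights from its intra-cluster matches.

    w_old        : (n,) current weights of the image's features
    contributions: list of (n,) contribution vectors, one per match against
                   another image in the same cluster (see feature_contributions)
    """
    avg = np.mean(np.stack(contributions), axis=0)
    # Rescale so the image's total weight is preserved (cf. slide 52).
    return avg * (w_old.sum() / (avg.sum() + 1e-8))
```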
24
Mutual Relationship between Foreground Features
and Clusters
  • If we have good clusters, we can detect the
    foreground
  • If we have only foreground features, we can form
    good clusters

25
Computing Image Similarity
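The slides do not spell out the similarity function, so the following is only an assumed, common choice: a Gaussian-style kernel over the weighted partial-match distance, turning pairwise distances into the affinity matrix that the clustering step consumes.

```python
import numpy as np

def affinity_from_distances(dist, gamma=None):
    """Convert pairwise partial-match distances into image affinities.

    dist : (N, N) symmetric matrix of EMD-style distances between images
    gamma: kernel bandwidth; defaults to the mean off-diagonal distance
    """
    if gamma is None:
        gamma = dist[~np.eye(len(dist), dtype=bool)].mean() + 1e-8
    return np.exp(-dist / gamma)
```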
28
Forming Clusters
30
Forming Clusters
Compute Pair-wise Partial Matching Image
Similarities
31
Forming Clusters
Normalized Cuts Clustering
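A minimal sketch of the grouping step using scikit-learn's spectral clustering on the precomputed affinity matrix; it optimizes a normalized-graph-cut-style objective, though the original work may use a different normalized cuts implementation. The function name is hypothetical.

```python
from sklearn.cluster import SpectralClustering

def cluster_images(affinity, k):
    """Group images by spectral clustering on a precomputed affinity matrix.

    affinity: (N, N) symmetric image-similarity matrix
    k       : number of clusters (set to the number of classes in the slides)
    """
    sc = SpectralClustering(n_clusters=k, affinity="precomputed", random_state=0)
    return sc.fit_predict(affinity)  # one cluster label per image
```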
32
Mutual Relationship between Foreground Features
and Clusters
  • If we have good clusters, we can detect the
    foreground
  • If we have only foreground features, we can form
    good clusters
  • Now we have the pieces to do both

33
Cluster and Feature Weight Refinement: Iteration 1
Images as Local Feature Sets
Pair-wise Partial Matching
Normalized Cuts Clustering
Initial Set of Clusters
34
Cluster and Feature Weight Refinement: Iteration 1
Compute Feature Weights
New Feature Weights
35
Cluster and Feature Weight Refinement: Iteration 2
Images as Local Feature Sets w/ New Weights
Pair-wise Partial Matching
Noticeable Change in Matching
Normalized Cuts Clustering
36
Cluster and Feature Weight Refinement: Iteration 2
New Set of Clusters
Compute Feature Weights
New Feature Weights
37
Cluster and Feature Weight Refinement: Iteration 3
Pair-wise Partial Matching
Normalized Cuts Clustering
Final Set of Clusters
New Feature Weights
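Putting the pieces together, here is a high-level sketch of the refinement loop shown on slides 33-37. It reuses the hypothetical helpers sketched earlier (greedy_partial_match, feature_contributions, update_feature_weights, affinity_from_distances, cluster_images); the fixed iteration count and the bookkeeping details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def foreground_focus(images, k, n_iters=3):
    """Iteratively refine feature weights and image clusters (sketch).

    images: list of (features, weights) pairs, one per unlabeled image,
            where features is (n_i, d) and weights is (n_i,)
    k     : number of clusters
    """
    N = len(images)
    feats = [f for f, _ in images]
    weights = [w.astype(float).copy() for _, w in images]

    for _ in range(n_iters):
        # 1. Pairwise weighted partial matching between all images.
        dist = np.zeros((N, N))
        flows = {}
        for i in range(N):
            for j in range(i + 1, N):
                flow, cost = greedy_partial_match(feats[i], weights[i],
                                                  feats[j], weights[j])
                dist[i, j] = dist[j, i] = cost
                flows[(i, j)] = flow

        # 2. Normalized-cuts-style clustering on the affinity matrix.
        labels = cluster_images(affinity_from_distances(dist), k)

        # 3. Refine each image's feature weights from its intra-cluster matches.
        for i in range(N):
            contribs = []
            for j in range(N):
                if j == i or labels[j] != labels[i]:
                    continue
                a, b = min(i, j), max(i, j)
                flow = flows[(a, b)] if i < j else flows[(a, b)].T
                D = np.linalg.norm(feats[i][:, None, :] - feats[j][None, :, :],
                                   axis=-1)
                contribs.append(feature_contributions(flow, D))
            if contribs:
                weights[i] = update_feature_weights(weights[i], contribs)

    return labels, weights
```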
38
Local features may not produce good matches
Local features
Lazebnik et al., BMVC 2004; Sivic & Zisserman,
CVPR 2004; Agarwal & Triggs, ECCV 2006;
Pantofaru et al., Beyond Patches Wkshp 2006;
Quack et al., ICCV 2007
39
Experiments
  • Goals
  • Unsupervised Foreground Discovery
  • Unsupervised Category Discovery
  • Comparison with Related Methods
  • Datasets
  • Caltech-101, Microsoft Research Cambridge,
    Caltech-4
  • Semi-local Features
  • Densely sampled SIFT, DoG SIFT, Hessian-Affine
    SIFT
  • Number of clusters = number of classes

40
Quality of Foreground Detection
  • Object categories with the highest clutter were
    chosen
  • 2 supervised classifiers were built: 1) trained on
    all features, 2) trained on foreground features only
  • Categories were ranked by how much segmentation
    helped supervised classification

41
Quality of Foreground Detection
[Figure: highly weighted features for the 10-class subset]
42
Quality of Clusters Formed
  • Cluster quality for the 4-class and 10-class
    subsets of Caltech-101
  • Quality measure: F-measure (a sketch of the
    pairwise variant follows below)
  • Black dotted lines indicate the best possible
    quality that could be obtained if the ground-truth
    segmentation were known
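The slides do not state which F-measure variant is used, so the sketch below assumes the common pairwise definition: precision and recall are measured over pairs of images, where a pair is correct if it shares both a cluster and a ground-truth class. The function name is hypothetical.

```python
from itertools import combinations

def pairwise_f_measure(labels_pred, labels_true):
    """Pairwise F-measure of cluster quality (assumed variant)."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_cluster = labels_pred[i] == labels_pred[j]
        same_class = labels_true[i] == labels_true[j]
        tp += same_cluster and same_class
        fp += same_cluster and not same_class
        fn += same_class and not same_cluster
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```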

43
Comparison with Clustering Methods
  • Affinity Propagation: a message-passing algorithm
    which identifies good exemplars by propagating
    non-metric affinities (Dueck & Frey, ICCV 2007)
  • Partial Match Clusters: forms groups with
    partial-match spectral clustering but does not
    iteratively improve foreground feature weights
    and cluster assignments (Grauman & Darrell, CVPR
    2006)

Caltech-101 subsets: 7-class (N = 441) and 20-class
(N = 1230)
Caltech-4 dataset (N = 3188), 10 runs with 400
randomly selected images
44
Comparison with Topic Models
[1] correspondence-based pLSA variant (Liu & Chen,
ICCV 2007); [2] pLSA with spatial information
(Liu & Chen, CVPR workshop 2006)
  • Comparison of accuracy of foreground discovery
  • Positive class: Caltech motorcycle class (826
    images)
  • Negative class: Caltech background class (900
    images)
  • Foreground detection rate: threshold varied among
    the top 20 most confident features

45
Assumptions and Limitations
  • Requires support for the pattern among multiple
    examples in the dataset
  • Some support must be detected in the initial
    iteration
  • The background can be consistently recurring; a
    possible remedy is to introduce semi-supervision

46
Contributions
  • Unsupervised foreground feature selection from
    unlabeled images
  • Automatic object category learning
  • Mutual reinforcement of foreground and category
    discovery benefits both
  • Novel semi-local descriptor

47
Future Work
  • Incremental updates to unlabeled dataset
  • Extension to multi-label cluster assignments
  • Automatic model selection (number of clusters k)
  • Automatically construct summaries of unstructured
    image collections

48
Questions?
49
Quality of Foreground Detection and Clusters
Formed
  • Microsoft Research Cambridge (MSRC) v1 dataset

50
Proximity Distribution Descriptor
  • p = base feature
  • Ellipses denote features, their patterns indicate
    the visual word types, and numbers indicate the
    rank order of spatial proximity to the base feature
  • Motivated by Proximity Distribution Kernels (Ling
    & Soatto, ICCV 2007); a sketch follows below
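A rough sketch of a proximity-distribution-style semi-local descriptor, following the general idea on this slide rather than the authors' exact construction: for a base feature p, the descriptor accumulates, rank by rank, the visual word counts of its R spatially nearest features. The function name and the (R, vocab_size) layout are assumptions for illustration.

```python
import numpy as np

def proximity_distribution_descriptor(positions, words, base_idx, vocab_size, R=16):
    """Semi-local descriptor for one base feature (sketch).

    positions: (n, 2) image coordinates of all local features
    words    : (n,) visual word index of each feature
    base_idx : index of the base feature p
    Returns an (R, vocab_size) cumulative histogram: row r counts the visual
    words among the r+1 spatially nearest features to the base feature.
    """
    d = np.linalg.norm(positions - positions[base_idx], axis=1)
    d[base_idx] = np.inf                      # exclude the base feature itself
    nearest = np.argsort(d)[:R]               # rank order of spatial proximity
    desc = np.zeros((R, vocab_size))
    for r, idx in enumerate(nearest):
        desc[r:, words[idx]] += 1             # cumulative over ranks
    return desc
```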

52
Computing Feature Weights
Normalization to keep original weight
53
[Figure: highly weighted features for Face, Dalmatian, Hedgehog, Okapi]
54
Affinity Propagation: Dueck & Frey, ICCV 2007
FF-Dense: Foreground Focus with semi-local
descriptors (dense SIFT base features)
FF-Sparse: Foreground Focus with semi-local
descriptors (DoG SIFT base features)
FF-SIFT: Foreground Focus with DoG SIFT features