Title: Pose, Illumination and Expression Invariant Pairwise Face-Similarity Measure via Doppelganger List Comparison
1 Pose, Illumination and Expression Invariant Pairwise Face-Similarity Measure via Doppelganger List Comparison
- Authors: Florian Schroff, Tali Treibitz, David Kriegman, Serge Belongie
- Speaker
2 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
4 Authors (1/4)
5 Authors (2/4)
- Tali Treibitz
- Background
- Ph.D. student in the Dept. of Electrical Engineering, Technion
- Publications: papers in CVPR and PAMI
6 Authors (3/4)
- David J. Kriegman
- Background
- UCSD Professor of Computer Science and Engineering
- UIUC Adjunct Professor of Computer Science and Beckman Institute
- IEEE Transactions on Pattern Analysis and Machine Intelligence, Editor-in-Chief, 2005-2009
7 Authors (4/4)
- Serge J. Belongie
- Background
- Professor
- Computer Science and Engineering
- University of California, San Diego
8 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
9 Paper Information
- Venue
- ICCV 2011
- Related work
- Chunhui Zhu, Fang Wen, Jian Sun. A Rank-Order Distance Based Clustering Algorithm for Face Tagging. CVPR 2011
- Lior Wolf, Tal Hassner, Yaniv Taigman. The One-Shot Similarity Kernel. ICCV 2009
- N. Kumar, A. C. Berg, P. N. Belhumeur, S. K. Nayar. Attribute and Simile Classifiers for Face Verification. CVPR 2009
10 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
11 Abstract (1/2)
- Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression.
- To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image; similarity between face images is determined via the similarity of the signatures.
12 Abstract (2/2)
- Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, illumination and expression combinations, serves as the Library.
- We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).
13 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
14 Motivation
Learn a new distance metric D
15 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
16 Methods: Overview
17 Methods: Assumption
- This approach stems from the observation that ranked Doppelganger lists are similar for similar people (even under different imaging conditions)
18 Methods: Setting Up the Face Library
- Using Multi-PIE as the Face Library
19 Methods: Finding Look-alikes
20 Methods: Comparing Lists
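The method slides above outline the pipeline: rank all Library identities by similarity to each probe, then compare the two resulting Doppelganger lists. A minimal Python sketch, assuming generic feature vectors and a simple top-weighted list-agreement score; the paper defines its own list-comparison measure, so `doppelganger_list`, `list_similarity` and `top_k` are illustrative names, not the authors' implementation:

```python
import numpy as np

def doppelganger_list(probe_feat, library_feats):
    """Rank Library identities by similarity to the probe.

    probe_feat: (d,) feature vector of the probe face.
    library_feats: (n, d) one feature vector per Library identity.
    Returns identity indices ordered from most to least similar.
    """
    dists = np.linalg.norm(library_feats - probe_feat, axis=1)
    return np.argsort(dists)  # the probe's Doppelganger list

def list_similarity(list_a, list_b, top_k=50):
    """Compare two Doppelganger lists.

    Stand-in score: agreement of the top-k entries, weighting early
    ranks more heavily (identities near the top matter most).
    """
    rank_b = {ident: r for r, ident in enumerate(list_b)}
    score = 0.0
    for r, ident in enumerate(list_a[:top_k]):
        score += 1.0 / (1 + r + rank_b[ident])
    return score
```

Two images of the same person, even under different imaging conditions, should produce lists that agree near the top and therefore a higher score than lists from different people.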
21 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
22 Experiment on FacePix (across pose)
23 Experiment: Verification Across Large Variations of Pose
24 Experiment on Multi-PIE
- The classification performance using ten-fold cross-validation is 76.6% ± 2.0 (both FPLBP and SSIM on direct image comparison perform near chance). To the best of our knowledge these are the first results reported across all pose, illumination and expression conditions on Multi-PIE.
25 Experiment on LFW (1/2)
26 Experiment on LFW (2/2)
27 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
28 Conclusion (1/2)
- To the best of our knowledge, we have shown the first verification results for face-similarity measures under truly unconstrained expression, illumination and pose, including full profile, on both Multi-PIE and FacePix.
- The advantages of the ranked Doppelganger lists become apparent when the two probe images depict faces in very different poses. Our method does not require explicit training and is able to cope with large pose ranges.
- It is straightforward to generalize our method to an even larger variety of imaging conditions, by adding further examples to the Library. No change in our algorithm is required, as its only assumption is that the imaging conditions are represented in the Library.
29 Conclusion (2/2)
- We expect that a great deal of improvement can be achieved by using this powerful comparison method as an additional feature in a complete verification or recognition pipeline, where it can add the robustness that is required for face recognition across large pose ranges. Furthermore, we are currently exploring the use of ranked lists of identities in other classification domains.
30 Thanks for listening
31 Relative Attributes
- Authors: Devi Parikh, Kristen Grauman
- Speaker
32 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
34 Authors (1/2)
- Devi Parikh (http://ttic.uchicago.edu/dparikh/)
- Background
- Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC)
- Publications:
- L. Zitnick and D. Parikh. The Role of Image Understanding in Segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012 (to appear)
- D. Parikh and L. Zitnick. Exploring Tiny Images: The Roles of Appearance and Contextual Information for Machine and Human Object Recognition. Pattern Analysis and Machine Intelligence (PAMI), 2012 (to appear)
35 Authors (2/2)
- Kristen Grauman (http://www.cs.utexas.edu/grauman/)
- Background
- Clare Boothe Luce Assistant Professor
- Microsoft Research New Faculty Fellow
- Department of Computer Science, University of Texas at Austin
- Publications: papers in CVPR and ICCV
36 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
37 Paper Information
- Venue
- ICCV 2011 Oral
- Honors
- Marr Prize!
38 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
39 Abstract (1/2)
- Human-nameable visual attributes can benefit various recognition tasks. However, existing techniques restrict these properties to categorical labels (for example, a person is smiling or not, a scene is dry or not), and thus fail to capture more general semantic relationships.
- We propose to model relative attributes. Given training data stating how object/scene categories relate according to different attributes, we learn a ranking function per attribute. The learned ranking functions predict the relative strength of each property in novel images.
40 Abstract (2/2)
- We then build a generative model over the joint space of attribute ranking outputs, and propose a novel form of zero-shot learning in which the supervisor relates the unseen object category to previously seen objects via attributes (for example, bears are furrier than giraffes).
- We further show how the proposed relative attributes enable richer textual descriptions for new images, which in practice are more precise for human interpretation. We demonstrate the approach on datasets of faces and natural scenes, and show its clear advantages over traditional binary attribute prediction for these new tasks.
41 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
42 Motivation
However, for a large variety of attributes, not
only is this binary setting restrictive, but it
is also unnatural.
Why model relative attributes?
43 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
44 Methods: Formulation (1/3)
45 Methods: Formulation (2/3)
- Objective Function
- Compared to SVM
46 Methods: Formulation (3/3)
- Margin and support vectors
- Geometric margin
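The formulation slides present a ranking objective with an SVM-like margin. A minimal sketch, assuming ordered pairs (i, j) where image i shows more of the attribute than image j; the paper additionally handles similar pairs and solves the objective with a Newton-method solver, while this illustrative stand-in (`learn_ranker` and all parameter values are assumptions) uses plain subgradient descent on the hinge loss:

```python
import numpy as np

def learn_ranker(X, ordered_pairs, lam=0.01, lr=0.1, epochs=200):
    """Learn a linear ranking function w for one attribute.

    X: (n, d) image features.
    ordered_pairs: list of (i, j) meaning image i has MORE of the
    attribute than image j. We minimize a RankSVM-style objective
        0.5 * lam * ||w||^2 + sum_ij max(0, 1 - w . (x_i - x_j))
    so that w . x_i exceeds w . x_j by a margin where possible.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        grad = lam * w
        for i, j in ordered_pairs:
            diff = X[i] - X[j]
            if 1.0 - w @ diff > 0:   # hinge is active for this pair
                grad -= diff
        w -= lr * grad / max(1, len(ordered_pairs))
    return w
```

The learned w scores novel images by `X @ w`, giving each image a real-valued attribute strength rather than a binary label.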
47 Methods: Zero-Shot Learning From Relationships (1/3)
48 Methods: Zero-Shot Learning From Relationships (2/3)
49 Methods: Zero-Shot Learning From Relationships (3/3)
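The zero-shot slides build a generative model (one Gaussian per seen class) over the attribute ranking scores and place an unseen class from relative statements alone. A simplified sketch, assuming diagonal Gaussians and a single "lies between class a and class b" constraint applied to every attribute; the paper supports per-attribute constraints, so these function names and the midpoint rule are illustrative:

```python
import numpy as np

def fit_class_gaussians(scores, labels):
    """Fit a diagonal Gaussian per seen class in attribute-score space.

    scores: (n, m) attribute ranking scores; labels: (n,) class ids.
    """
    models = {}
    for c in set(labels):
        S = scores[labels == c]
        models[c] = (S.mean(axis=0), S.var(axis=0) + 1e-6)
    return models

def add_unseen_class(models, between):
    """Place an unseen class from the relation 'between a and b':
    mean at the midpoint of the two seen classes, variance averaged.
    No training images of the unseen class are used."""
    a, b = between
    mu = (models[a][0] + models[b][0]) / 2
    var = (models[a][1] + models[b][1]) / 2
    return mu, var

def log_likelihood(x, model):
    """Diagonal-Gaussian log-likelihood of one score vector x."""
    mu, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
```

A test image is then assigned to whichever class (seen or unseen) gives it the highest likelihood.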
50 Methods: Describing Images in Relative Terms (1/2)
51 Methods: Describing Images in Relative Terms (2/2)
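The description slides phrase an image's attributes relative to reference images (e.g. "more natural than X, less natural than Y"). A small sketch of that idea for one attribute: report the closest reference below and above the target's predicted strength (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def relative_description(target_score, ref_scores, ref_names):
    """Describe one attribute of a new image relative to references.

    target_score: predicted attribute strength of the new image.
    ref_scores: (k,) predicted strengths of k reference images.
    ref_names: names of those references, used in the description.
    """
    order = np.argsort(ref_scores)
    below = [i for i in order if ref_scores[i] < target_score]
    above = [i for i in order if ref_scores[i] > target_score]
    parts = []
    if below:  # closest reference weaker in this attribute
        parts.append(f"more than {ref_names[below[-1]]}")
    if above:  # closest reference stronger in this attribute
        parts.append(f"less than {ref_names[above[0]]}")
    return ", ".join(parts) or "about the same as all references"
```

Bracketing the target between its two nearest references is what makes the description more precise for human interpretation than a bare binary label.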
52 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
53 Experiment: Overview (1/2)
54 Experiment: Overview (2/2)
55 Experiment: Relative Zero-Shot Learning (1/4)
How does performance vary with more unseen categories?
56 Experiment: Relative Zero-Shot Learning (2/4)
Far less (<<) supervision than the baseline can give a unique ordering on all classes
57 Experiment: Relative Zero-Shot Learning (3/4)
58 Experiment: Relative Zero-Shot Learning (4/4)
Relative attributes jointly carve out space for the unseen category
59 Experiment: Human Study (2/2)
- 18 subjects
- Test cases
- OSR, 20 PubFig
60 Outline
- Authors
- Paper Information
- Abstract
- Motivation
- Methods
- Experiment
- Conclusion
61 Conclusion
- We introduced relative attributes, which allow for a richer language of supervision and description than the commonly used categorical (binary) attributes. We presented two novel applications: zero-shot learning based on relationships, and describing images relative to other images or categories. Through extensive experiments as well as a human subject study, we clearly demonstrated the advantages of our idea. Future work includes exploring more novel applications of relative attributes, such as guided search or interactive learning, and automatic discovery of relative attributes.
62 Thanks for listening