1
Learning to Combine Bottom-Up and Top-Down
Segmentation
  • Anat Levin and Yair Weiss
  • School of Computer Science and Engineering,
  • The Hebrew University of Jerusalem, Israel

2
Bottom-up segmentation
Bottom-up approaches use low-level cues to
group similar pixels:
  • Malik et al., 2000
  • Sharon et al., 2001
  • Comaniciu and Meer, 2002

3
Bottom-up segmentation is ill-posed
Many possible segmentations are equally good based
on low-level cues alone.
[Segmentation examples; images from Borenstein and
Ullman, ECCV 2002]
4
Top-down segmentation
  • Class-specific, top-down segmentation
    (Borenstein and Ullman, ECCV 2002)
  • Winn and Jojic, 2005
  • Leibe et al., 2004
  • Yuille and Hallinan, 2002
  • Liu and Sclaroff, 2001
  • Yu and Shi, 2003

5
Combining top-down and bottom-up segmentation
  • Find a segmentation that is similar to the
    top-down model and aligns with image edges

6
Previous approaches
  • Borenstein et al., 2004: Combining top-down and
    bottom-up segmentation.
  • Tu et al., ICCV 2003: Image parsing:
    segmentation, detection, and recognition.
  • Kumar et al., CVPR 2005: OBJ CUT.
  • Shotton et al., ECCV 2006: TextonBoost.

These previous approaches train the top-down and
bottom-up models independently.
7
Why learn top-down and bottom-up models
simultaneously?
  • For an object like an octopus, the large number
    of degrees of freedom in the tentacles'
    configuration requires a complex deformable
    top-down model
  • On the other hand, the colors are rather
    uniform, so low-level segmentation is easy

8
Our approach
  • Learn top-down and bottom-up models
    simultaneously
  • At run time, reduces to energy minimization over
    binary labels (graph min-cut)
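As an illustration of the run-time step, here is a
minimal sketch of binary-label energy minimization
via graph min-cut. It uses networkx max-flow rather
than the specialized solver the authors would have
used, and the function and variable names are ours,
not from the paper.

    import networkx as nx

    def minimize_binary_energy(unary, pairwise):
        # unary: {pixel: (cost_if_0, cost_if_1)}
        # pairwise: {(i, j): w} with w >= 0 (submodular, so min-cut is exact)
        G = nx.DiGraph()
        for i, (c0, c1) in unary.items():
            G.add_edge("s", i, capacity=c1)  # cut iff pixel i gets label 1
            G.add_edge(i, "t", capacity=c0)  # cut iff pixel i gets label 0
        for (i, j), w in pairwise.items():
            G.add_edge(i, j, capacity=w)     # cut iff labels of i and j differ
            G.add_edge(j, i, capacity=w)
        cost, (source_side, _) = nx.minimum_cut(G, "s", "t")
        return {i: 0 if i in source_side else 1 for i in unary}, cost

    # toy usage: two pixels with opposing preferences and a smoothness link
    labels, cost = minimize_binary_energy(
        {"a": (0.0, 2.0), "b": (2.0, 0.0)}, {("a", "b"): 0.5})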

9
Energy model
  • Consistency with the fragments' segmentation
  • Alignment of the segmentation with image edges
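The equation itself did not survive the transcript;
a plausible reconstruction of an energy with exactly
these two terms (our notation, not copied from the
slide) is

    E(x; w) = \sum_k w_k \sum_{i \in S_k} | x_i - F_k(i) |
              + \sum_{(i,j)} \nu_{ij} | x_i - x_j |

where x is the binary foreground/background
labeling, F_k is the k-th fragment's mask with image
support S_k, and the pairwise weights \nu_{ij} are
small across strong image edges, so label
discontinuities are cheap exactly where the image
has edges.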
12
Learning from segmented class images
[Training data: class images with their
figure-ground segmentations]
Goal: learn fragments for the energy function
13
Learning energy functions using conditional
random fields
  • Theory of CRFs:
  • Lafferty et al., 2001
  • LeCun and Huang, 2005
  • CRFs for vision:
  • Kumar and Hebert, 2003
  • Ren et al., 2006
  • He et al., 2004, 2006
  • Quattoni et al., 2005
  • Torralba et al., 2004

14
Learning energy functions using conditional
random fields
"It's not enough to succeed. Others must fail."
(Gore Vidal)
In CRF terms: the observed segmentation must not
merely score well, it must score better than every
competing segmentation.
16
Differentiating the CRF log-likelihood
The log-likelihood is convex with respect to the
weights w.
The gradient of the log-likelihood with respect to
each weight is the expected feature response minus
the observed feature response.
The expectation of each feature can equivalently be
written as a sum of single-pixel marginal
probabilities, which is what approximate inference
must supply.
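Spelled out (standard CRF algebra; the notation is
ours, not the slide's): with

    P(x | I; w) = exp( -\sum_k w_k f_k(x; I) ) / Z(w)

the log-likelihood of an observed segmentation
x^{obs} and its gradient are

    \ell(w) = -\sum_k w_k f_k(x^{obs}) - \log Z(w)
    \partial \ell / \partial w_k
        = E_{P(x|I;w)}[ f_k(x) ] - f_k(x^{obs})

and for features that decompose over pixels,
f_k(x) = \sum_i f_{k,i}(x_i), the expectation is a
sum of single-pixel marginals:

    E[ f_k(x) ] = \sum_i \sum_{x_i} P(x_i | I; w) f_{k,i}(x_i)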
17
Conditional random fields: computational
challenges
  • Evaluating the CRF cost requires the partition
    function
  • Evaluating the derivatives requires marginal
    probabilities
  • Both must be estimated approximately:
  • Sampling
  • Belief propagation and the Bethe free energy
  • Used in this work: tree-reweighted belief
    propagation and the tree-reweighted upper bound
    (Wainwright et al., 2003)
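To make the cost concrete, here is a toy sketch
(ours, for illustration only) that computes Z and
the marginals exactly by enumeration; the O(2^n)
loop shows why approximations such as TRBP are
needed at image scale.

    from itertools import product
    import math

    def exact_Z_and_marginals(energy, n):
        # enumerate all 2**n binary labelings: exact but exponential
        Z, marg = 0.0, [0.0] * n
        for x in product([0, 1], repeat=n):
            p = math.exp(-energy(x))
            Z += p
            for i, xi in enumerate(x):
                marg[i] += p * xi          # accumulate mass for P(x_i = 1)
        return Z, [m / Z for m in marg]

    def toy_energy(x):
        # unary bias toward label 0, plus a chain smoothness term
        return 0.5 * sum(x) + sum(a != b for a, b in zip(x, x[1:]))

    Z, marginals = exact_Z_and_marginals(toy_energy, n=10)  # 1024 states here;
                                                            # 2**(320*240) for an image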

18
Fragment selection
Candidate fragment pool
19
Fragment selection: challenges
Straightforward computation of the likelihood
improvement is impractical: 2,000 fragments x 50
training images x 10 selection iterations =
1,000,000 inference operations!
20
Fragment selection
Use a first-order approximation to the
log-likelihood gain (spelled out after the citations
below): prefer the fragment least accounted for by
the existing model.
  • A similar idea appears in different contexts:
  • Zhu et al., 1997
  • Lafferty et al., 2004
  • McCallum, 2003
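Given the gradient formula above, a natural reading
of this approximation (our reconstruction, not
copied from the slide): adding a candidate feature
f_new with weight \lambda changes the log-likelihood,
to first order in \lambda, by

    \Delta\ell(\lambda) \approx
        \lambda ( E_{P(x|I;w)}[ f_new(x) ] - f_new(x^{obs}) )

so the best candidate is the fragment whose feature
response the current model mispredicts most, i.e.
the fragment least accounted for by the existing
energy.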

21
Fragment selection
First-order approximation to the log-likelihood gain:
  • Requires only a single inference pass, on the
    previous iteration's energy, to evaluate the
    approximation for all candidate fragments
  • Evaluating the first-order approximation is
    linear in the fragment size

22
Fragment selection: summary
  • Initialization: the low-level term only
  • For k = 1, ..., K:
  • Run TRBP inference using the previous
    iteration's energy
  • Approximate the likelihood gain of each
    candidate fragment
  • Add to the energy the fragment with maximal gain
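In code form, the loop above might look like the
following sketch; every helper here
(build_low_level_energy, run_trbp, first_order_gain,
add_fragment_term) is a hypothetical placeholder for
a step named on the slide, not a function from the
authors' implementation.

    def select_fragments(candidates, train_images, K):
        energy = build_low_level_energy(train_images)    # init: low-level term only
        chosen = []
        for _ in range(K):
            marginals = run_trbp(energy, train_images)   # one inference pass per iteration
            gains = {f: first_order_gain(f, marginals, train_images)
                     for f in candidates if f not in chosen}  # approximate gain of each candidate
            best = max(gains, key=gains.get)             # fragment with maximal gain
            chosen.append(best)
            energy = add_fragment_term(energy, best)     # extend the energy with the new term
        return chosen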

23
Training the horse model
24
Training the horse model: one fragment
25
Training the horse model: two fragments
26
Training the horse model: three fragments
27
Results: horse dataset
28
Results: horse dataset
[Plot: percent of mislabeled pixels vs. number of
fragments]
Comparable to previous results (Kumar et al.,
Borenstein et al.) but with far fewer fragments
29
Results: artificial octopi
30
Results: cow dataset
From the TU Darmstadt database
31
Results: cow dataset
[Plot: percent of mislabeled pixels vs. number of
fragments]
32
Conclusions
  • Top-down and bottom-up segmentation cues are
    learned simultaneously
  • Learning is formulated as parameter estimation
    in a conditional random field
  • A novel, efficient fragment selection algorithm
  • The algorithm achieves state-of-the-art
    performance with significantly fewer fragments