Segmentation by Clustering
1
Segmentation by Clustering
2
First, clean up from last time: Graph-Based Image
Segmentation
Pipeline: Image (I) → Graph Affinities (W) (from intensity,
color, edges, texture) → Eigenvector X(W) → Discretization
Slide from Timothee Cour (http://www.seas.upenn.edu/timothee)
3
Affinity matrix
Similarity of image pixels to a selected pixel;
brighter means more similar
Warning: the size of W is quadratic in the number
of pixels! An N x M image reshapes to a vector of
NM pixels, so W is NM x NM.
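As a concrete illustration of the warning above, here is a minimal NumPy sketch (not from the slides; the function name and the Gaussian intensity affinity are illustrative assumptions) of building W for a small grayscale image:

```python
import numpy as np

def affinity_matrix(image, sigma=0.1):
    """Gaussian intensity affinity between every pair of pixels.

    An N x M image yields an NM x NM matrix W, so the size of W
    grows quadratically with the number of pixels.
    """
    flat = image.reshape(-1, 1).astype(float)   # NM x 1 intensity vector
    diff = flat - flat.T                        # all pairwise intensity differences
    return np.exp(-(diff ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
img = rng.random((4, 4))      # a tiny 4 x 4 grayscale image
W = affinity_matrix(img)      # already a 16 x 16 matrix
```

Even this 4 x 4 toy image produces a 16 x 16 matrix; a 512 x 512 image would need a 262144 x 262144 one, which is why practical spectral methods keep W sparse.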
4
Eigenvector clustering
5
These are all relatively easy to compare in
MATLAB
6
Segmentation as clustering
  • Idea: treat the image as a set of points in an
    n-dimensional space
  • Gray-level images: p = (x, y, I(x,y)) in R^3
  • Color images: p = (x, y, R(x,y), G(x,y), B(x,y)) in R^5
  • Texture: p = (x, y, vector_of_features)
  • Color histograms: p = (R(x,y), G(x,y), B(x,y)) in R^3;
    here we ignore the spatial information.
  • From this stage on, we forget the meaning of each
    coordinate and deal with an arbitrary set of points,
    so we can use all the usual clustering tools
  • (Need to normalize the features first)
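A hedged NumPy sketch of the feature construction above (the helper name and the zero-mean/unit-variance normalization are illustrative choices; the slide only says the features need normalizing):

```python
import numpy as np

def color_features(rgb):
    """Map an H x W x 3 color image to points (x, y, R, G, B) in R^5,
    normalizing each feature so no single coordinate dominates the
    clustering distance."""
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]                     # pixel coordinates
    pts = np.column_stack([xs.ravel(), ys.ravel(),  # (x, y, R, G, B) per pixel
                           rgb.reshape(-1, 3)]).astype(float)
    pts -= pts.mean(axis=0)                         # normalize each feature:
    pts /= pts.std(axis=0) + 1e-12                  # zero mean, unit variance
    return pts

rng = np.random.default_rng(0)
img = rng.random((8, 10, 3))
P = color_features(img)       # 80 points in R^5, ready for any clusterer
```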

7
Hierarchical splitting / merging
Here, at each step we merge the closest pair of neighbors.
8
K-means
  • Idea:
  • Determine the number of clusters
  • Find the cluster centers and point-cluster
    correspondences that minimize the error
  • Problem: exhaustive search is too expensive.
  • Solution: use an iterative search instead.
    Recall the ideal quantization procedure.

Algorithm: fix the cluster centers and allocate each
point to its closest cluster; then fix the allocation
and compute the best cluster centers
Error function: E = Σⱼ Σ over points i in cluster j of ‖xᵢ − μⱼ‖²
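The alternating scheme above can be sketched as follows (illustrative NumPy, not from the slides; random initialization from the data points is one common choice):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Alternating minimization: fix centers and assign each point to
    its closest cluster, then fix the assignment and recompute each
    center as the mean of its points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Fix centers: allocate points to the closest cluster.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Fix allocation: recompute the best cluster centers.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
centers, labels = kmeans(pts, k=2)
```

Each iteration can only decrease the error, which is why this converges without the exhaustive search the slide warns about.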
9
Example: clustering with K-means using
gray-level and color histograms (from slides by
D.A. Forsyth)
10
Mean Shift
  • K-means is a powerful and popular method for
    clustering. However:
  • It assumes a pre-determined number of clusters
  • It likes compact clusters. Sometimes we are
    looking for long but continuous clusters.
  • Mean Shift:
  • Determine a window size (usually small)
  • For each point p:
  • Compute a weighted mean m of the points in the
    window around p
  • Set p ← m
  • Continue until convergence.
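A minimal sketch of this procedure for a single point, assuming a uniform kernel so the weighted mean is just the average of the points inside the window (function name, tolerance, and data are illustrative):

```python
import numpy as np

def mean_shift_point(p, data, radius, max_iter=100, tol=1e-5):
    """Repeatedly replace p with the mean of the data points inside
    the window around p, until the window stops moving."""
    for _ in range(max_iter):
        in_window = data[np.linalg.norm(data - p, axis=1) <= radius]
        if len(in_window) == 0:          # empty window: nothing to average
            break
        m = in_window.mean(axis=0)       # uniform-kernel weighted mean
        if np.linalg.norm(m - p) < tol:  # window stopped moving: a mode
            return m
        p = m
    return p

rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.3, size=(200, 2))   # one dense mode near the origin
mode = mean_shift_point(np.array([1.0, 1.0]), data, radius=1.5)
```

Starting well away from the data, the point still drifts into the dense region, which is the behavior the next slides illustrate.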

11
Mean Shift Theory
12
Intuitive Description
Region of interest
Center of mass
Mean Shift vector
Objective: find the densest region of a
distribution of identical billiard balls
13-18
Intuitive Description
(Animation: the region of interest repeatedly moves along the
mean shift vector toward its center of mass, stopping when the
window settles on the densest region of the billiard-ball
distribution.)
19
What is Mean Shift?
A tool for finding modes in a set of data
samples, manifesting an underlying probability
density function (PDF) in R^N
  • PDF in feature space:
  • Color space
  • Scale space
  • Actually, any feature space you can conceive

Non-parametric Density Estimation
Discrete PDF Representation
Non-parametric Density GRADIENT Estimation
(Mean Shift)
PDF Analysis
20
Non-Parametric Density Estimation
Assumption: the data points are sampled from an
underlying PDF
Data point density implies PDF value!
Assumed Underlying PDF
Real Data Samples
21
Non-Parametric Density Estimation
Assumed Underlying PDF
Real Data Samples
22
Non-Parametric Density Estimation
?
Assumed Underlying PDF
Real Data Samples
23
Parametric Density Estimation
Assumption: the data points are sampled from an
underlying PDF
Estimate
Assumed Underlying PDF
Real Data Samples
24
Kernel Density Estimation: Parzen Windows -
General Framework
A function of some finite number of data
points x1…xn:
P(x) = (1/n) Σᵢ K(x − xᵢ)
  • Kernel Properties:
  • Normalized: ∫ K(x) dx = 1
  • Symmetric: K(−x) = K(x)
  • Exponential weight decay
  • ???

25
Kernel Density Estimation: Parzen Windows -
Function Forms
A function of some finite number of data
points x1…xn
In practice one uses the forms
K(x) = c Πⱼ k(xⱼ)   (same function on each dimension)
or
K(x) = c k(‖x‖²)   (function of the vector length only)
26
Kernel Density Estimation: Various Kernels
A function of some finite number of data
points x1…xn
  • Examples:
  • Epanechnikov Kernel: K(x) = c (1 − ‖x‖²) for ‖x‖ ≤ 1, else 0
  • Uniform Kernel: K(x) = c for ‖x‖ ≤ 1, else 0
  • Normal Kernel: K(x) = c exp(−‖x‖²/2)
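A small sketch of a Parzen-window density estimate with the Epanechnikov kernel from the list above (illustrative; the bandwidth h, the 1-D normalization constant 0.75, and the sample sizes are my choices, not the slides'):

```python
import numpy as np

def parzen_kde(x, samples, h=0.5):
    """Parzen-window estimate P(x) = (1/(n h^d)) * sum_i k(||(x - x_i)/h||^2),
    using the Epanechnikov profile (normalized for d = 1)."""
    d = samples.shape[1]
    u2 = np.sum(((x - samples) / h) ** 2, axis=1)    # squared scaled distances
    k = np.where(u2 <= 1.0, 0.75 * (1.0 - u2), 0.0)  # Epanechnikov profile
    return k.mean() / h ** d

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=(5000, 1))
p0 = parzen_kde(np.array([0.0]), samples)   # near the true mode
p3 = parzen_kde(np.array([3.0]), samples)   # far out in the tail
```

With enough samples the estimate at the mode approaches the true Gaussian density there (about 0.40), while the tail value stays small.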

27
Kernel Density Estimation
Gradient
Give up estimating the PDF! Estimate ONLY the
gradient:
Using the kernel form P(x) = (c/n) Σᵢ k(‖(x − xᵢ)/h‖²),
where h is the size of the window, we get
∇P(x) = (c/n) Σᵢ k′(‖(x − xᵢ)/h‖²) · 2(x − xᵢ)/h²
28
Kernel Density Estimation
Computing The Mean Shift
Writing g(u) = −k′(u), the gradient rearranges to
∇P(x) = (2c/(n h²)) [Σᵢ g(‖(x − xᵢ)/h‖²)] · m(x),
where m(x) = (Σᵢ xᵢ gᵢ / Σᵢ gᵢ) − x is the mean shift vector.
29
Computing The Mean Shift
Yet another kernel density estimation!
  • Simple Mean Shift procedure:
  • Compute the mean shift vector m(x)
  • Translate the kernel window by m(x)
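One step of this procedure with a Normal kernel might look like the following (illustrative sketch; the function name, bandwidth, and toy data are assumptions, not from the slides):

```python
import numpy as np

def mean_shift_vector(x, data, h=1.0):
    """m(x) = (sum_i x_i g_i / sum_i g_i) - x, with Normal-kernel
    weights g_i = exp(-||(x_i - x)/h||^2 / 2); translating x by m(x)
    is one step of the mean shift procedure."""
    w = np.exp(-np.sum(((data - x) / h) ** 2, axis=1) / 2.0)
    return (w[:, None] * data).sum(axis=0) / w.sum() - x

# Three points cluster near the origin; one outlier sits far away.
data = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2], [4.0, 4.0]])
m = mean_shift_vector(np.array([0.5, 0.5]), data, h=0.5)
# m points from (0.5, 0.5) toward the dense group near the origin.
```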

30
Mean Shift Mode Detection
What happens if we reach a saddle point?
Perturb the mode position and check whether we
return to it
  • Updated Mean Shift Procedure:
  • Find all modes using the Simple Mean Shift
    Procedure
  • Prune modes by perturbing them (find saddle
    points and plateaus)
  • Prune nearby modes: take the highest mode in the window

31
Mean Shift Properties
  • Automatic convergence speed: the mean shift
    vector size depends on the gradient itself
  • Near maxima, the steps are small and refined
  • Convergence is guaranteed only for infinitesimal
    steps, i.e. the procedure is infinitely convergent
    (therefore set a lower bound on the step size)
  • For the Uniform Kernel, convergence is
    achieved in a finite number of steps
  • The Normal Kernel exhibits a smooth
    trajectory, but is slower than the Uniform Kernel

Adaptive Gradient Ascent
32
Real Modality Analysis
Tessellate the space with windows
Run the procedure in parallel
33
Real Modality Analysis
The blue data points were traversed by the
windows towards the mode
34
Mean Shift Segmentation
  • Perhaps the best technique to date

http://www.caip.rutgers.edu/~comanici/MSPAMI/msPamiResults.html
35
Mean Shift Segmentation
  • Mean Shift Segmentation Algorithm:
  • Convert the image into tokens (via color,
    gradients, texture measures, etc.)
  • Choose initial search window locations uniformly
    in the data.
  • Compute the mean shift window location for each
    initial position.
  • Merge windows that end up on the same peak or
    mode.
  • The data these merged windows traversed are
    clustered together.

Image from Dorin Comaniciu and Peter Meer,
Distribution Free Decomposition of Multivariate
Data, Pattern Analysis and Applications,
2:22-30 (1999)
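The algorithm above (run a window from each start, merge windows that reach the same peak, cluster the data they traversed) can be sketched on plain 2-D points instead of image tokens (illustrative; the uniform kernel, window radius, and merge tolerance are assumed parameters):

```python
import numpy as np

def mean_shift_modes(points, radius, merge_tol=0.5):
    """Run uniform-kernel mean shift from every point, merge windows
    that end on (numerically) the same peak, and label each point by
    the mode its window reached."""
    modes, labels = [], []
    for p in points:
        for _ in range(100):                  # shift this window to its peak
            win = points[np.linalg.norm(points - p, axis=1) <= radius]
            m = win.mean(axis=0)
            if np.linalg.norm(m - p) < 1e-6:
                break
            p = m
        for j, q in enumerate(modes):         # merge windows on the same peak
            if np.linalg.norm(p - q) < merge_tol:
                labels.append(j)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return np.array(modes), np.array(labels)

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.2, (40, 2)), rng.normal(5, 0.2, (40, 2))])
modes, labels = mean_shift_modes(pts, radius=1.0)
```

Two well-separated blobs yield two modes, and every point is labeled by the mode its window converged to.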
36
Mean Shift Segmentation Extension
Mean shift is sensitive to scale (the search window size).
Solution: use all scales
  • Gary Bradski's internally published agglomerative
    clustering extension
  • Mean shift dendrograms:
  • Place a tiny mean shift window over each data
    point
  • Grow the window and mean shift it
  • Track windows that merge, along with the data they
    traversed
  • Repeat, growing the windows, until everything is
    merged into one cluster

Best 4 clusters
Best 2 clusters
Advantage over agglomerative clustering: highly
parallelizable
37
Mean Shift Segmentation: Results
http://www.caip.rutgers.edu/~comanici/MSPAMI/msPamiResults.html