Transformation-invariant clustering using the EM algorithm - PowerPoint PPT Presentation



1
Transformation-invariant clustering using the EM
algorithm
Brendan Frey and Nebojsa Jojic
IEEE Trans on PAMI, 25(1) 2003
2
Goal
  • unsupervised learning of image structure
    regardless of transformation
  • probabilistic description of the data
  • clustering as density modeling: grouping
    "similar" images together

Invariance
  • manifold in data space
  • all points on manifold equivalent
  • complex even for basic transformations
  • how to approximate?

3
Approximating the Invariance Manifold
  • discrete set of points
  • sparse matrices Ti map the canonical feature z
    into the transformed feature x (observed)
  • as a Gaussian probability model,
    p(x | z, Ti) = N(x; Ti z, Ψ)
  • all possible transformations T are enumerated

4
This is what it would look like for...
  • a 2x3 image with pixel-shift translations
    (wrap-around)
[Figure: the canonical feature z is mapped by each of T1...T6 to a shifted observed image x]
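To make this concrete, here is a minimal NumPy sketch (not from the paper; it uses dense matrices for clarity, whereas the paper relies on sparsity) that builds the six permutation matrices T1...T6 for a 2x3 image under wrap-around pixel shifts:

```python
import numpy as np

H, W = 2, 3  # the slide's 2x3 image

def shift_T(dy, dx):
    """Permutation matrix T that shifts the flattened HxW image by
    (dy, dx) pixels with wrap-around, mapping canonical z to observed x."""
    T = np.zeros((H * W, H * W))
    for y in range(H):
        for x in range(W):
            T[((y + dy) % H) * W + (x + dx) % W, y * W + x] = 1.0
    return T

# all six pixel-shift translations T1...T6 of a 2x3 image
Ts = [shift_T(dy, dx) for dy in range(H) for dx in range(W)]

z = np.arange(6.0)   # canonical feature: [[0,1,2],[3,4,5]] flattened
x = Ts[4] @ z        # observation after a (dy=1, dx=1) wrap-around shift
```

Each T is a 0/1 permutation matrix with one nonzero entry per row and column, which is what makes the sparse-matrix tricks in the paper possible.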
5
The full statistical model
  • for one feature (one cluster)
  • likelihood of the data x given the latent
    representation z and transformation T
  • joint distribution of all the variables
  • Gaussian post-transformation noise Ψ
  • Gaussian pre-transformation noise Φ
  • for multiple features (clusters), a mixture model
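The bullets above refer to equations that did not survive the slide export. A reconstruction following the paper's transformed mixture of Gaussians, with Φ the pre-transformation and Ψ the post-transformation noise covariance:

```latex
p(z \mid c) = \mathcal{N}(z;\, \mu_c, \Phi_c), \qquad
p(x \mid z, T) = \mathcal{N}(x;\, T z, \Psi), \qquad
p(x, z, T, c) = p(x \mid z, T)\, P(T)\, P(c)\, p(z \mid c).
```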

6
The full statistical model
  • the generative equation
  • each feature has a canonical mean and a
    canonical variance
  • an image contains one of the canonical features
    (mixture model) that has undergone one
    transformation
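Written out, the generative equation the slide describes is (a reconstruction consistent with the Gaussian model above):

```latex
x = T(\mu_c + \varepsilon) + \eta, \qquad
\varepsilon \sim \mathcal{N}(0, \Phi_c), \quad
\eta \sim \mathcal{N}(0, \Psi),
```

i.e. a canonical feature is drawn around its canonical mean, transformed by T, and corrupted by post-transformation noise.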

7
Inference
  • the posterior over z given x, T, and c is Gaussian
  • marginals for inferring T, c, and z
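Because everything is Gaussian, z can be integrated out in closed form, giving the marginals needed for inference (a reconstruction consistent with the model above):

```latex
p(x \mid T, c) = \mathcal{N}\!\left(x;\, T\mu_c,\; T \Phi_c T^\top + \Psi\right),
\qquad
P(T, c \mid x) \propto P(T)\, P(c)\, p(x \mid T, c).
```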

8
Adapting the rest of the parameters
  • pre-transformation noise Φ
  • post-transformation noise Ψ
  • all learned with EM
  • E-step: assuming the parameters are known, infer
    P(z, T, c | x)
  • M-step: update the parameters
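The E- and M-steps above can be sketched end to end on toy 1-D data. This is a hypothetical, simplified implementation (fixed isotropic noise in place of learned Φ and Ψ, a uniform prior over transformations, and means seeded from one example per class), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, N = 6, 2, 200  # pixels per "image", clusters, data points

def shift_matrix(n, s):
    # permutation matrix for a cyclic shift by s pixels (a sparse T_i)
    T = np.zeros((n, n))
    T[np.arange(n), (np.arange(n) - s) % n] = 1.0
    return T

Ts = [shift_matrix(n, s) for s in range(n)]

# synthetic 1-D "images": two canonical patterns under random shifts + noise
patterns = np.array([[1, 1, 0, 0, 0, 0], [1, 0, 1, 0, 1, 0]], float)
labels = rng.integers(K, size=N)
X = np.array([Ts[rng.integers(n)] @ patterns[labels[i]]
              + 0.05 * rng.standard_normal(n) for i in range(N)])

# EM for a shift-invariant mixture (simplification of the paper's model)
mu = np.array([X[np.argmax(labels == c)] for c in range(K)])  # one seed/class
pi, sigma2 = np.full(K, 1.0 / K), 0.1
for _ in range(30):
    # E-step: responsibilities gamma(c, T | x) for every cluster and shift
    d = np.stack([[((X - T @ mu[c]) ** 2).sum(axis=1) for T in Ts]
                  for c in range(K)])                     # (K, n_shifts, N)
    logp = -0.5 * d / sigma2 + np.log(pi)[:, None, None] - np.log(n)
    logp -= logp.max(axis=(0, 1))                         # stabilise exp
    r = np.exp(logp)
    r /= r.sum(axis=(0, 1))
    # M-step: un-transform each image (T^{-1} = T^T for permutations),
    # then take the responsibility-weighted average
    for c in range(K):
        mu[c] = sum(r[c, s] @ (X @ Ts[s]) for s in range(n)) / r[c].sum()
    pi = r.sum(axis=(1, 2)) / N
```

After a few iterations each learned mean mu[c] matches its canonical pattern up to a cyclic shift, which is exactly the invariance the model is built for.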

9
Experiments
[Figure: recovering 4 clusters, compared with 4 clusters learned without transformations]
10
Pre/post transformation noise
11
Pre/post transformation noise
[Figure: learned mean µ and variance Φ for (a) a single Gaussian model of the image, (b) the transformation-invariant model without post-transformation noise, and (c) the transformation-invariant model with post-transformation noise Ψ]
12
Conclusions
  • fast (uses sparse matrices, FFT)
  • incorporates pre- and post-transformation noise
  • works on artificial data, clustering simple image
    sets, cleaning up somewhat contrived examples
  • can be extended to make use of time-series data
    and to account for more transformations
  Limitations
  • poor transformation model: fixed, pre-specified
    transformations, which must be sparse
  • poor feature model: a Gaussian representation of
    structure
