1
Using Spin Images for Efficient Object
Recognition in Cluttered 3D Scenes
  • Andrew Johnson
  • Martial Hebert

2
Outline
  • Introduction
  • Surface Matching
  • Spin Images
  • Spin Image Parameters
  • Surface Matching Engine
  • Object Recognition
  • Analysis of Recognition
  • Conclusion

3
Introduction
  • Surface matching is used to recognize an object
    in a scene and to obtain information about it
  • Surface alignment is used to determine the
    transformation between coordinate systems
  • Two possible coordinate systems for surface
    representation: viewer-centered and
    object-centered

4
Viewer-centered
  • Dependent on surface view
  • Easy to construct
  • Surface description changes with the viewpoint
  • Surfaces have to be aligned before comparison
  • Separate representations must be stored for each
    viewpoint

5
Object-centered
  • The object is described in a coordinate system
    fixed to the object
  • View-independent
  • No alignment required
  • A single, more compact representation
  • Finding a suitable coordinate system is difficult

6
Clutter
  • Real-world surface data contains multiple
    objects, which causes clutter
  • One option: segment the scene into components
    (object / non-object)
  • Another: build object-centered coordinate systems
    from local features

7
Occlusion
  • Surface data can have missing components
  • Missing data alters the global properties of
    surfaces
  • This complicates the construction of
    object-centered coordinate systems
  • A practical object-centered representation must
    be robust to clutter and occlusion

8
Outline
  • Introduction
  • Surface Matching
  • Spin Images
  • Spin Image Parameters
  • Surface Matching Engine
  • Object Recognition
  • Analysis of Recognition
  • Conclusion

9
Surface matching representation
  • Dense collection of 3D points and surface normals
  • Object-centered coordinate system
  • Descriptive image that encodes global properties
  • Independent of transformation between surfaces

10
Surface representation components
11
Surface matching
  • Match individual points in order to match
    complete surfaces
  • This localizes the problem
  • Clutter and occlusion can be handled without
    segmenting the scene
  • Cluttered regions simply produce no matching
    points
  • Occluded regions are never searched for

12
Surface Matching
  • A 2D image is associated with each point
  • A local basis is defined at an oriented point on
    the object surface
  • The positions of other surface points can be
    described by two parameters
  • A descriptive image is created by accumulating
    these parameters in a 2D histogram
  • 2D template and pattern matching techniques can
    then be applied

13
Previous work
  • Splash representation
  • Point signatures
  • 1D representations that accumulate surface
    information along a 3D curve
  • 2D images are more descriptive
  • Oriented-point basis is robust
  • Techniques from image processing can be used

14
Ideas in this paper
  • Description and analysis of the use of spin
    images
  • Multi-model object recognition in scenes that
    contain clutter and occlusion
  • Localization of spin images by reducing the spin
    image generation parameters
  • Statistical eigen-analysis to reduce image
    dimensionality

15
Outline
  • Introduction
  • Surface Matching
  • Spin Images
  • Spin Image Parameters
  • Surface Matching Engine
  • Object Recognition
  • Analysis of Recognition
  • Conclusion

16
Spin Images
  • An oriented point defines a partial
    object-centered coordinate system
  • Radial coordinate: distance from the line through
    the surface normal
  • Elevation coordinate: signed distance along the
    surface normal
  • Both coordinates are computed for every vertex in
    the surface mesh
  • The bin indexed by the two coordinates is
    incremented (a minimal sketch follows below)
  • In the accumulator, dark areas correspond to bins
    with many projected points
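To make the accumulation concrete, here is a minimal
Python / NumPy sketch of spin image generation for a
single oriented point. The function name spin_image
and the simple nearest-bin counting are illustrative;
the original method also spreads each contribution
over neighbouring bins by bilinear interpolation and
applies the support limits described later.

  import numpy as np

  def spin_image(p, n, points, bin_size, image_width):
      # p: (3,) basis point, n: (3,) unit surface normal at p,
      # points: (N, 3) mesh vertices, bin_size / image_width: generation parameters.
      d = points - p                                   # vectors from the basis point
      beta = d @ n                                     # elevation: signed distance along n
      alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta ** 2, 0.0))  # radial distance

      # Map (alpha, beta) to bins; elevation spans +/- (image_width / 2) * bin_size.
      row = np.floor((image_width / 2.0 * bin_size - beta) / bin_size).astype(int)
      col = np.floor(alpha / bin_size).astype(int)

      img = np.zeros((image_width, image_width))
      keep = (row >= 0) & (row < image_width) & (col >= 0) & (col < image_width)
      np.add.at(img, (row[keep], col[keep]), 1)        # increment the indexed bins
      return img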

17
Spin images for three oriented points
18
Spin images from different surfaces
  • Spin images from two different surfaces of the
    same object will be similar
  • Allowances must be made for noise and sampling
    variations
  • With uniform sampling, the spin images are
    linearly related
  • Use the linear correlation coefficient to compare
    them (a sketch follows below)
  • Similarity measure: L2 distance
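A minimal sketch of the comparison step, assuming two
equally sized spin images; the paper further restricts
the comparison to bins where both images contain data
and folds the correlation into a confidence-weighted
similarity, which is omitted here.

  import numpy as np

  def correlation(P, Q):
      # Linear (Pearson) correlation coefficient between two spin images.
      p, q = P.ravel().astype(float), Q.ravel().astype(float)
      n = p.size
      num = n * (p @ q) - p.sum() * q.sum()
      den = np.sqrt((n * (p @ p) - p.sum() ** 2) * (n * (q @ q) - q.sum() ** 2))
      return num / den if den > 0 else 0.0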

19
Outline
  • Introduction
  • Surface Matching
  • Spin Images
  • Spin Image Parameters
  • Bin Size
  • Image Width
  • Support Angle
  • Surface Matching Engine
  • Object Recognition
  • Analysis of Recognition
  • Conclusion

20
Spin Image Generation Parameters
  • Bin size determines the storage size of the spin
    image, the amount of averaging, and its
    descriptiveness
  • It is set as a multiple of the mesh resolution
    (see the sketch below)
  • Four times the mesh resolution: not descriptive
    enough
  • One-fourth of the mesh resolution: not enough
    averaging
  • Ideal bin size: the mesh resolution
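In practice the mesh resolution is usually taken to
be the median edge length of the mesh, and the bin
size is set to a multiple of it. A small sketch,
where the vertices and edges arrays are assumed
inputs:

  import numpy as np

  def mesh_resolution(vertices, edges):
      # vertices: (V, 3) array of positions, edges: (E, 2) array of vertex index pairs.
      lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
      return np.median(lengths)                        # median edge length

  # bin_size = 1.0 * mesh_resolution(vertices, edges)  # ideal bin size per this slide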

21
Bin size
22
Effect of Bin Size
Match distance: median Euclidean distance. A lower
match distance indicates better matches.
23
Image Width
  • Spin images have an equal number of rows and
    columns: the image width
  • Image width × bin size = support distance
  • Support distance: the amount of space swept out
    by a spin image
  • A lower image width gives lower descriptiveness
  • A lower image width also reduces the chance of
    including clutter

24
Image width
As image width decreases, match distance decreases.
Here, image width = 15.
25
Support Angle
  • The support angle is the maximum angle between
    the direction of the oriented-point basis and the
    surface normals of contributing points
  • A has position and normal (pA, nA)
  • B has position and normal (pB, nB)
  • B is included in the spin image of A if
    acos(nA · nB) < support angle
  • The support angle is used to minimize the effects
    of self-occlusion and clutter (see the sketch
    below)
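A minimal sketch of the support-angle test, assuming
unit-length normals stored as NumPy arrays; the
function name supported and the 60-degree default
are illustrative:

  import numpy as np

  def supported(n_a, normals, support_angle_deg=60.0):
      # n_a: (3,) unit normal of the oriented point A,
      # normals: (N, 3) unit normals of candidate contributing points.
      cos_angle = normals @ n_a                        # nA . nB for every candidate
      angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
      return angle < np.radians(support_angle_deg)     # True where B may contribute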

26
Support Angle
A lower support angle gives lower descriptiveness and
a higher match distance. A support angle is necessary
for robustness to clutter and occlusion.
Here, support angle = 60 degrees.
27
Spin Image localization
28
Outline
  • Introduction
  • Surface Matching
  • Spin Images
  • Spin Image Parameters
  • Surface Matching Engine
  • Object Recognition
  • Analysis of Recognition
  • Conclusion

29
Surface Matching Engine
  • Compare spin images from points on the two
    surfaces
  • Establish point correspondences
  • All spin images from the model are stored in a
    stack
  • The spin image of a scene vertex is compared
    against the stack to find its best match (see the
    sketch below)
  • A point correspondence is established between the
    two points
  • Point correspondences are grouped and outliers
    are eliminated
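A minimal sketch of the best-match search against the
model stack, reusing the correlation helper sketched
earlier; correspondence grouping and outlier
elimination are separate steps and are not shown.

  import numpy as np

  def best_match(scene_image, model_stack):
      # model_stack: sequence of model spin images (the stack of this slide).
      # Uses correlation() from the comparison sketch above.
      scores = [correlation(scene_image, m) for m in model_stack]
      best = int(np.argmax(scores))
      return best, scores[best]                        # best model point index and its score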

30
Block Diagram
31
Outline
  • Introduction
  • Surface Matching
  • Spin Images
  • Spin Image Parameters
  • Surface Matching Engine
  • Object Recognition
  • Spin Image Compression
  • Matching compressed images
  • Results
  • Analysis of Recognition
  • Conclusion

32
Object Recognition
  • Every model can be represented as a polygonal
    mesh
  • Spin images for all vertices are created and
    stored for every model
  • Finding the best match at recognition time yields
    simultaneous recognition and localization
  • Inefficient: too many bins and a linear growth
    rate during matching

33
Spin Image Compression
Spin images of points close to each other will be
correlated, as will equal spin images from points on
opposite sides of the object.
34
PCA
  • The eigenspace is computed using PCA
  • The L2 distance in spin image space is well
    approximated by the L2 distance in the eigenspace
  • There are N spin images x_i, each of size D
  • Their mean is x_m
  • The mean is subtracted from each image to make
    the analysis more effective:
  • x̂_i = x_i - x_m

35
Spin Image Compression
  • Stack the centered images: S_m = [x̂_1, x̂_2, ..., x̂_N]
  • Covariance matrix: C_m = S_m (S_m)^T
  • The eigenvectors e_i are obtained from
    C_m e_i = λ_i e_i
  • The eigenvectors can be considered spin images
    (eigen-spin images)
  • The model projection dimension s is determined
  • Each spin image is represented by its s-tuple of
    projections (sketched below):
  • p_j = (x̂_j · e_1, x̂_j · e_2, ..., x̂_j · e_s)
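A minimal NumPy sketch of the compression step,
assuming the model's N flattened spin images are the
rows of an array X; the function name and return
values are illustrative:

  import numpy as np

  def compress_spin_images(X, s):
      # X: (N, D) array, one flattened spin image per row; s: projection dimension.
      x_m = X.mean(axis=0)                             # mean spin image x_m
      X_hat = X - x_m                                  # x^_i = x_i - x_m
      C = X_hat.T @ X_hat                              # D x D covariance matrix C = S S^T
      eigvals, eigvecs = np.linalg.eigh(C)             # eigenvectors are the eigen-spin images
      E = eigvecs[:, np.argsort(eigvals)[::-1][:s]]    # keep the s largest, shape (D, s)
      tuples = X_hat @ E                               # p_j = (x^_j . e_1, ..., x^_j . e_s)
      return x_m, E, tuples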

36
Matching Compressed Spin Images
  • Scene spin images y_j must be projected into the
    model eigenspace for matching
  • To generate a scene s-tuple:
  • ŷ_j = y_j - x_m
  • q_j = (ŷ_j · e_1, ŷ_j · e_2, ..., ŷ_j · e_s)
  • The L2 distance between model and scene tuples is
    used
  • The closest model tuples are found by storing the
    s-tuples in an efficient closest-point data
    structure (see the sketch below)
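A sketch of this matching step, assuming the outputs
of the compression sketch above and using SciPy's
k-d tree as one possible efficient closest-point
structure (the paper's exact search structure may
differ):

  from scipy.spatial import cKDTree

  def match_scene_tuples(Y, x_m, E, model_tuples):
      # Y: (M, D) flattened scene spin images; x_m, E, model_tuples come from
      # compress_spin_images above.
      q = (Y - x_m) @ E                                # q_j = (y^_j . e_1, ..., y^_j . e_s)
      tree = cKDTree(model_tuples)                     # s-dimensional closest-point search
      dists, idx = tree.query(q)                       # L2 distance between scene and model tuples
      return idx, dists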

37
Results
  • Model library of 20 objects

Cluttered scenes were created, followed by mesh
smoothing and resampling
38
Recognition
Seven models were simultaneously recognized. The
models were closely packed, and the scenes include
clutter and occlusion.
39
Another Result
40
Analysis in Complex Scenes
  • Experimental verification of performance in
    scenes with clutter and occlusion
  • Experiment
  • Consider several scene data sets
  • Run the recognition algorithm
  • Interactively measure clutter and occlusion along
    with success in recognition

41
Recognition success states
  • Model exists and is recognized: true positive
  • Model does not exist, but the algorithm
    recognizes it: false positive
  • Model exists, but the algorithm cannot recognize
    it: false negative
  • There is no true-negative state in their
    experiment

42
Recognition Trial
  • The model is placed in the scene with other
    objects
  • The scene, with occlusion and clutter, is imaged
  • The algorithm matches the object to the data
  • The user then separates the surface patch of the
    model from the rest of the surface
  • The amounts of occlusion and clutter are
    calculated

43
Analysis
  • Occlusion = 1 - (model surface patch area / total
    model surface area)
  • Clutter = clutter points in relevant volume /
    total points in relevant volume
  • 100 scenes were created using 4 models: bunny,
    faucet, Mr. Potato Head, and y-split
  • The models were placed randomly

44
Analysis
45
Results
  • At low occlusion, recognition failures are
    nonexistent
  • At high occlusion, failures are frequent
  • Above about 70% occlusion, spin image matching no
    longer works well
  • The effect of clutter is uniform until a high
    level of clutter is reached
  • Spin image matching is fairly independent of
    clutter

46
Results for occlusion
The recognition rate is high until occlusion reaches
70%, after which recognition is no longer successful.
47
Results for clutter
Spin image matching is independent of clutter
48
Recognized models
True-positive matches: matching without compression
recognizes more models than matching with
compression.
49
Matching Time
  • Images match faster with compression than without
  • The number of points in the scene influences the
    matching time

50
Conclusion
  • The algorithm handles general shapes
  • Through compression using PCA, the spin image
    representation is made efficient for large model
    libraries
  • Robust to occlusion and clutter
  • Spin images are applicable to 3D computer vision
    problems
  • Can be made more efficient and robust with
    extensions like automated learning and better
    parameterizations
