So far: Face Detection; Cameras; 3D stereo; Object Recognition; Segmentation/Tracking. Last three lectures: Extras (lecture slide transcript)

1
So far: Face Detection, Cameras, 3D
stereo, Object Recognition, Segmentation/Tracking.
Last three lectures: Extras. A few things
everyone from a vision class should know (but
didn't fit into the above categories). Today's
topics may appear on the final exam. Thursday's
and next Tuesday's will not.
2
What have we ignored?
  • Face Detection: We took the input images as a
    given. Tried to make sure that we were invariant
    to lighting changes, but didn't reason explicitly
    about lights.
  • Cameras: Our definition of a camera was strictly
    geometric. We saw where a 3D point was projected,
    but didn't think about how bright it should be.
  • 3D stereo: We hoped that a point in one image
    would look the same from another viewpoint.
  • Object Recognition: We hoped that a point on one
    object would look the same in another example
    image.
  • Segmentation/Tracking: We kept track of objects
    that looked the same; grouped together things
    that look the same.

3
Getting 3D information from appearance
4
BRDF: "Bidirectional Reflectance Distribution
Function"
(MERL reflectance data set)
  • A description of how light energy with intensity S,
    incident on an object, is transferred from the
    object to the camera sensor with intensity I.
    Often smooth (but not always).

5
Simplified Reflectance Models
  • The simplified model assumes no special orientation
    of the surface. Often, but not always, true.

(Figure: surface patch and direction to the light. When is the reflected intensity zero? When is it large?)
6
Simplified Reflectance Models.
LAMBERTIAN MODEL: I = S ρ cos θ
(S: light source intensity; ρ: albedo; θ: angle between the surface normal and the direction to the light)
7
Why is the Lambertian model so good for vision
(and so bland for graphics)?
(Figure: surface normal, light source.)
Under the Lambertian model, a point has the same
color no matter where the camera is.
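A minimal sketch of the model above (my own helper names, not from the slides), taking the normal and light direction as 3-vectors; clamping cos θ at zero for surfaces facing away from the light is a standard assumption the slide does not state:

```python
import math

def lambertian_intensity(normal, light_dir, albedo=1.0, source=1.0):
    """I = S * rho * cos(theta): theta is the angle between the surface
    normal and the direction to the light. Clamped at zero for surfaces
    facing away from the light (an assumption, not stated on the slide)."""
    n_len = math.sqrt(sum(c * c for c in normal))
    l_len = math.sqrt(sum(c * c for c in light_dir))
    cos_theta = sum(a * b for a, b in zip(normal, light_dir)) / (n_len * l_len)
    return source * albedo * max(0.0, cos_theta)

# The camera position never appears in the computation: the predicted
# brightness is viewpoint independent, which is the point of this slide.
print(lambertian_intensity((0, 0, 1), (0, 0, 1)))  # 1.0
print(lambertian_intensity((0, 0, 1), (1, 0, 1)))  # cos 45 degrees, about 0.707
```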
8
LAMBERTIAN REFLECTANCE MAP, in the camera
coordinate system
Image Plane
Y
(p,q,-1)
Z
X
(ps,qs,-1)
Optic Axis
So, (p,q) = (0,0) is consistent with a surface
exactly facing the camera.
9
(p,q,-1)
(p,q) representation for surface normal
vectors: for normal (nx, ny, nz), (p,q) = (-nx/nz,
-ny/nz).
Cool fact: under gnomonic projection, straight
lines on the plane map to great circles on the sphere.
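A sketch of the conversion in both directions (function names are mine); note it only works when nz ≠ 0:

```python
import math

def normal_to_pq(nx, ny, nz):
    # (p, q) = (-nx/nz, -ny/nz); requires nz != 0, i.e. the normal is
    # not perpendicular to the optic axis.
    return (-nx / nz, -ny / nz)

def pq_to_normal(p, q):
    # The unnormalized normal is (p, q, -1); return it unit length.
    length = math.sqrt(p * p + q * q + 1.0)
    return (p / length, q / length, -1.0 / length)

print(normal_to_pq(0.0, 0.0, -1.0))  # (0.0, 0.0): a surface facing the camera
print(normal_to_pq(1.0, 2.0, -2.0))  # (0.5, 1.0)
```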
10
LAMBERTIAN REFLECTANCE MAP
If we know the shape (and therefore the surface
normal), and the lighting direction, we can color
the sphere.
Alternatively, if we have the image and the
lighting direction, we can get a constraint on
the surface normal (p,q).
11
LAMBERTIAN REFLECTANCE MAP
(Example reflectance maps for light directions
ps = 0.7, qs = 0.3 and ps = -2, qs = -1.)
The only unknowns are p and q; iso-brightness contours
are quadratic in p, q. So, if you measure
brightness, you have one constraint on the normal.
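In gradient space the Lambertian reflectance map is the cosine of the angle between the normal (p, q, -1) and the light (ps, qs, -1). A sketch (my own helper, not from the slides):

```python
import math

def reflectance_map(p, q, ps, qs):
    """R(p, q): cosine of the angle between the surface normal
    (p, q, -1) and the light direction (ps, qs, -1), clamped at zero."""
    num = 1.0 + p * ps + q * qs
    den = math.sqrt(1.0 + p * p + q * q) * math.sqrt(1.0 + ps * ps + qs * qs)
    return max(0.0, num / den)

ps, qs = 0.7, 0.3
# Brightest where the normal points straight at the light:
print(reflectance_map(ps, qs, ps, qs))  # 1.0 (up to rounding)
# Many different (p, q) share one brightness value, so a single
# measurement only pins the normal to an iso-brightness contour.
```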
12
If you get 3 pictures under different lighting
conditions, you can solve for the surface
normals. This is called Photometric Stereo.
Deriving the local surface normal at each pixel
creates the derived normal map: a needle plot
showing the (p,q) vector of the normal estimates.
13
More formally:
Use THREE sources in directions s1, s2, s3. The
image intensity measured at point (x,y) under
source i is

  Ii(x,y) = ρ(x,y) n(x,y) · si,   i = 1, 2, 3,

a linear system in g = ρ n whose solution gives the
albedo ρ = |g| and the orientation n = g / |g|.
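A sketch of the three-light solve at a single pixel, using Cramer's rule to stay dependency-free; the light directions and intensities in the example are made up for the check, not from the slides:

```python
import math

def det3(m):
    # Determinant of a 3x3 matrix given as a list of rows.
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def photometric_stereo(lights, intensities):
    """Solve S g = I by Cramer's rule, where the rows of S are the three
    light directions and g = albedo * normal; then split g into its
    magnitude (albedo) and direction (orientation)."""
    base = det3(lights)
    g = []
    for col in range(3):
        m = [row[:] for row in lights]
        for r in range(3):
            m[r][col] = intensities[r]
        g.append(det3(m) / base)
    albedo = math.sqrt(sum(x * x for x in g))
    return albedo, tuple(x / albedo for x in g)

# Made-up check: a pixel with albedo 0.5 and normal (0, 0, 1); each
# light's z-component is 1, so every measured intensity is 0.5 * 1.
lights = [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
albedo, normal = photometric_stereo(lights, [0.5, 0.5, 0.5])
print(albedo, normal)  # 0.5 (0.0, 0.0, 1.0)
```

The three light directions must be linearly independent, otherwise the determinant is zero and the system has no unique solution.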
14
Photometric Stereo RESULT
(Figure: input images; recovered albedo; recovered orientation.)
15
From Surface Orientations to Shape
Integrate needle map
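One naive way to do the integration, assuming the gradient-space convention p = dz/dx, q = dz/dy (real systems average many paths or solve a least-squares problem, since noisy needle maps are not exactly integrable):

```python
def integrate_needle_map(p, q):
    """Naive path integration of gradients p = dz/dx, q = dz/dy into a
    height map: sum p along the first row, then q down each column."""
    rows, cols = len(p), len(p[0])
    z = [[0.0] * cols for _ in range(rows)]
    for x in range(1, cols):          # first row: accumulate p
        z[0][x] = z[0][x - 1] + p[0][x]
    for y in range(1, rows):          # each column: accumulate q
        for x in range(cols):
            z[y][x] = z[y - 1][x] + q[y][x]
    return z

# A constant gradient field (p = 1, q = 0) integrates to a planar ramp.
p = [[1.0] * 4 for _ in range(3)]
q = [[0.0] * 4 for _ in range(3)]
print(integrate_needle_map(p, q)[0])  # [0.0, 1.0, 2.0, 3.0]
```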
17
Shape from shading.
18
SHAPE FROM SHADING
  • First attempt: minimize the error in agreement with
    the image irradiance equation over the region of
    interest.

Can we solve for p(x,y), q(x,y) and get surface
normals over the entire image?
19
SHAPE FROM SHADING
  • Regularization: find a smooth (p, q) map.
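The objective on these two slides was lost in transcription; a standard regularized form (in the style of Horn and Brooks, with λ as an assumed smoothness weight) is:

```latex
\min_{p,\,q}\;
\iint \big(I(x,y) - R\big(p(x,y),\,q(x,y)\big)\big)^2 \,dx\,dy
\;+\;
\lambda \iint \big(p_x^2 + p_y^2 + q_x^2 + q_y^2\big)\,dx\,dy
```

The first term penalizes disagreement with the image irradiance equation I = R(p, q); the second penalizes rough (p, q) maps, which makes the otherwise underconstrained problem solvable.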

20
Solve for light and shape
If we know the shape (and therefore the surface
normal), and the lighting direction, we can color
the sphere.
If we know the shape (and therefore the surface
normal), and the image, we can solve for the
lighting direction. (easy, why?)
If we know the image and the lighting direction,
we can solve for the shape.
If we just have the image and the assumption
that there is only one light source, we can guess
the lighting direction and iterate the above two
steps.
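The "easy" middle step is easy because it is linear: with ρn known at three non-degenerate pixels, the light direction s solves a 3×3 system, the mirror image of photometric stereo. A sketch with hypothetical values:

```python
def det3(m):
    # Determinant of a 3x3 matrix given as a list of rows.
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def estimate_light(scaled_normals, intensities):
    """Solve N s = I for the light direction s, where each row of N is
    albedo * normal at one pixel. The system is linear in s, which is
    why this step is easy once the shape is known."""
    base = det3(scaled_normals)
    s = []
    for col in range(3):
        m = [row[:] for row in scaled_normals]
        for r in range(3):
            m[r][col] = intensities[r]
        s.append(det3(m) / base)
    return tuple(s)

# Hypothetical pixels with known albedo-scaled normals, lit by s = (0, 0, 1):
N = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [0.0, 0.0, 1.0]]
print(estimate_light(N, [0.5, 0.5, 1.0]))  # (0.0, 0.0, 1.0)
```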
21
Shape from Shading
We know the normal at the contour. This provides
boundary conditions.
OCCLUDING BOUNDARY
22
Humans are often faced with this last task.
  • Assumption: light is up/left.