Title: So far: Face Detection, Cameras, 3D stereo, Object Recognition, Segmentation/Tracking; Last three lectures
1. So far: Face Detection, Cameras, 3D stereo, Object Recognition, Segmentation/Tracking
Last three lectures: Extras. A few things everyone from a vision class should know (but that didn't fit into the above categories). Today's topics may appear on the final exam. Thursday's and next Tuesday's will not.
2. What have we ignored?
- Face Detection: We took the input images as a given. We tried to make sure that we were invariant to lighting changes, but didn't reason explicitly about lights.
- Cameras: Our definition of a camera was strictly geometric. We saw where a 3D point was projected, but didn't think about how bright it should be.
- 3D stereo: We hoped that a point in one image would look the same from another viewpoint.
- Object Recognition: We hoped that a point on one object would look the same in another example image.
- Segmentation/Tracking: We kept track of objects that looked the same, and grouped together things that look the same.
3. Getting 3D information from appearance
4. BRDF: "Bidirectional Reflectance Distribution Function"
(MERL reflectance data set)
- A description of how light energy with intensity S, incident on an object, is transferred from the object to the camera sensor with intensity I. Often smooth (but not always).
5. Simplified Reflectance Models
- A simplified model assumes no special orientation of the surface. Often, but not always, true.
[Figure: surface patch with the direction to the light]
When is the reflected intensity zero? When is it large?
6. Simplified Reflectance Models
LAMBERTIAN MODEL: I = S ρ cos θ
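The Lambertian model above can be sketched in a few lines. A minimal illustration; the function name and the clamping at zero (for surfaces in self-shadow) are my additions:

```python
import numpy as np

def lambertian_intensity(S, rho, normal, light_dir):
    """I = S * rho * cos(theta), where theta is the angle between the
    surface normal and the direction to the light. Intensity is clamped
    at zero: a surface facing away from the light is in shadow."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    cos_theta = np.dot(n, l) / (np.linalg.norm(n) * np.linalg.norm(l))
    return S * rho * max(cos_theta, 0.0)

# A patch facing the light head-on (theta = 0) reflects S * rho:
print(lambertian_intensity(1.0, 0.5, [0, 0, 1], [0, 0, 1]))  # 0.5
```

Note that the camera position appears nowhere in the formula: only the light and normal directions matter.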
7. Why is the Lambertian model so good for vision (and so bland for graphics)?
[Figure: surface normal and light source]
Under the Lambertian model, a point has the same color no matter where the camera is.
8. LAMBERTIAN REFLECTANCE MAP, in the camera coordinate system
[Figure: camera coordinate system with axes X, Y, Z, the image plane, and the optic axis; the surface normal direction is (p, q, -1) and the light direction is (ps, qs, -1)]
So (p, q) = (0, 0) is consistent with a surface exactly facing the camera.
9. (p, q) representation for surface normal vectors
For a normal (nx, ny, nz): (p, q) = (-nx/nz, -ny/nz), i.e., the vector (p, q, -1) points along the normal.
Cool fact: under gnomonic projection, straight lines on the plane correspond to great circles on the sphere.
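The (p, q) conversion is a one-liner; a quick sketch (the function name is mine):

```python
def pq_from_normal(n):
    """Gradient-space representation of a surface normal (nx, ny, nz):
    (p, q) = (-nx/nz, -ny/nz), so (p, q, -1) points along the normal."""
    nx, ny, nz = n
    return (-nx / nz, -ny / nz)

# A surface exactly facing the camera (normal along -Z) gives (p, q) = (0, 0):
print(pq_from_normal((0.0, 0.0, -1.0)))  # (0.0, 0.0)
```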
10. LAMBERTIAN REFLECTANCE MAP
If we know the shape (and therefore the surface normal) and the lighting direction, we can color the sphere. Alternatively, if we have the image and the lighting direction, we get a constraint on the surface normal (p, q).
11. LAMBERTIAN REFLECTANCE MAP
[Figure: iso-brightness contours in (p, q) space for ps = 0.7, qs = 0.3 and for ps = -2, qs = -1; the only unknowns, (p, q), are shown in yellow]
The iso-brightness contours are quadratic in p, q. So if you measure brightness, you have one constraint on the normal.
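In gradient space, the Lambertian reflectance map is the cosine of the angle between the normal direction (p, q, -1) and the light direction (ps, qs, -1). A small sketch; the closed form is the standard one, the function name is mine:

```python
import numpy as np

def reflectance_map(p, q, ps, qs):
    """R(p, q) = (1 + p*ps + q*qs) /
                 (sqrt(1 + p^2 + q^2) * sqrt(1 + ps^2 + qs^2)).
    Setting R(p, q) equal to a measured brightness gives one
    quadratic constraint (an iso-brightness contour) on (p, q)."""
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
    return num / den

# Brightness peaks when the normal points straight at the light:
print(reflectance_map(0.7, 0.3, 0.7, 0.3))  # ~1.0
```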
12. If you get 3 pictures under different lighting conditions, you can solve for the surface normals. This is called Photometric Stereo.
Deriving the local surface normal at each pixel creates a normal map: a needle plot showing the (p, q) vector of the normal estimate at each pixel.
13. More formally
Use THREE sources in directions s1, s2, s3. The image intensities measured at point (x, y) are Ii(x, y) = ρ(x, y) (si · n(x, y)). Stacking the three equations gives I = ρ S n, where the rows of S are the source directions. Solving, ρ n = S^-1 I: the albedo ρ is the magnitude of S^-1 I, and the orientation n is that vector normalized.
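The three-source solve can be sketched with a single synthetic pixel. The light directions and the ground-truth normal and albedo below are made-up values for illustration:

```python
import numpy as np

# Rows of S are the three (known) source directions.
S = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])

n_true = np.array([0.0, 0.0, 1.0])   # pixel's true unit normal
rho_true = 0.5                        # pixel's true albedo
I = rho_true * (S @ n_true)           # the three measured intensities

g = np.linalg.solve(S, I)             # g = rho * n
rho = np.linalg.norm(g)               # albedo = |g|
n = g / rho                           # orientation = g normalized

print(round(rho, 6), np.round(n, 6))
```

In practice this 3x3 solve is repeated independently at every pixel, which is what makes photometric stereo fast.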
14. Photometric Stereo RESULT
[Figure: INPUT images, recovered albedo, and recovered orientation]
15. From Surface Orientations to Shape
Integrate the needle map.
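"Integrate the needle map" can be sketched with naive path integration. This is a toy version assuming a noise-free, integrable gradient field; real needle maps are noisy and need a least-squares integrator:

```python
import numpy as np

def integrate_pq(p, q):
    """Recover depth z(x, y) from gradients p = dz/dx, q = dz/dy by
    naive path integration: cumulate q down the first column, then
    cumulate p along each row (z is fixed to 0 at the top-left corner)."""
    h, w = p.shape
    z = np.zeros((h, w))
    z[1:, 0] = np.cumsum(q[1:, 0])                       # down column 0
    z[:, 1:] = z[:, [0]] + np.cumsum(p[:, 1:], axis=1)   # along each row
    return z

# A tilted plane z = 2x + 3y has constant gradients p = 2, q = 3:
p = np.full((4, 4), 2.0)
q = np.full((4, 4), 3.0)
z = integrate_pq(p, q)
print(z[2, 3])  # 12.0  (= 2*3 + 3*2)
```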
17. Shape from shading
18. SHAPE FROM SHADING
- First attempt: minimize the error in agreement with the image irradiance equation over the region of interest, i.e., minimize ∫∫ (I(x, y) - R(p, q))^2 dx dy.
Can we solve for p(x, y), q(x, y) and get surface normals over the entire image?
19. SHAPE FROM SHADING
- Regularization: find a smooth (p, q) map, e.g., minimize ∫∫ (I - R(p, q))^2 + λ (px^2 + py^2 + qx^2 + qy^2) dx dy, where the subscripts denote partial derivatives.
20. Solve for light and shape
If we know the shape (and therefore the surface normal) and the lighting direction, we can color the sphere.
If we know the shape (and therefore the surface normal) and the image, we can solve for the lighting direction. (Easy: why?)
If we know the image and the lighting direction, we can solve for the shape.
If we just have the image, and the assumption that there is only one light source, we can guess the lighting direction and iterate the two steps above.
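Why is solving for the light "easy" given the shape and the image? Because each pixel contributes one linear equation, Ii = ni · (ρ s), so the light is a linear least-squares fit. A synthetic sketch (the normals are random made-up data; shadowed pixels with negative cos θ are ignored here for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.normal(size=(100, 3))
N /= np.linalg.norm(N, axis=1, keepdims=True)   # known unit normals (shape)

L_true = np.array([0.7, 0.3, -1.0])             # rho * light direction
I = N @ L_true                                   # observed intensities

# One linear equation per pixel: solve N @ L = I in least squares.
L_est, *_ = np.linalg.lstsq(N, I, rcond=None)
print(np.allclose(L_est, L_true))  # True
```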
21. Shape from Shading
We know the normal at the occluding contour. This provides boundary conditions.
[Figure: OCCLUDING BOUNDARY]
22. Humans are often faced with this last task
- Assumption: the light is up/left.