1
Face tracking for interaction: review and work
  • Changbo Hu
  • Advisor: Matthew Turk
  • Department of Computer Science, University of
    California, Santa Barbara

2
Outline
  • Review
  • What is the aim of face tracking?
  • How did people do it?
  • What are we going to do?
  • Current work
  • Mean-shift skin tracking
  • Mean-shift elliptical head tracking
  • Face tracking and imitation

3
Face in interaction
  • Where?
  • Who?
  • What?
  • What do we expect from the computer?
  • To perceive the above information
  • To respond properly

4
Applications
  • Authentication
  • Human recognition
  • Internet
  • Human-computer interface
  • Facial animation
  • Talking agent
  • Model-based video coding

5
The role of tracking
  • Two meanings
  • Once a face is detected, keep up with its motion
  • Tracking is easier in some sense
  • Some tasks require you
  • To know its pose
  • To improve performance for recognition of face
    and expression
  • Synthesis and animation

6
What facts cause face variation?
  1. Pose (models the view relative to the camera)
  2. Deformation (models facial expression and talking)
  3. Intensity change (models illumination and the sensor)
7
What is face tracking?
  • To find all the variation factors
  • Problem formulation

Factors: translation, rotation, projection, deformation, intensity (sensor)
8
How did people do it?
9
continued
10
To look into some details
Gang Xu, ICPR98
Black, CVPR 95
11
To look into some details
Blake, ICCV98
Bilinear combination of motion and expression
Cassia CVPR99
12
To look into some details
Pentland, Computer Graphics, 96
DT, PAMI 93
13
To look into some details
Pentland ICCV workshop 99
14
To look into some details
Gokturk ICCV01
15
What will we do?
  • Task
  • Personalized full tracking and animation of
    the face
  • Starting point: 2-D face location
  • Selecting a face model
  • Modeling expression
  • Modeling illumination
  • Animation

16
What conditions do we have?
  • The personalized face is specific
  • to model shape
  • to model expression
  • to have stable feature points
  • to sample lighting effects
  • Statistical learning
  • PCA, ASM, AAM
  • muscle vectors, human metrics for expression
  • Learn feature point locations

17
Starting point: current work
  • Mean-shift tracking of skin color
  • Mean-shift tracking of an elliptical head
  • Two-step face tracking and expression imitation

18
Selecting face model
Face modeling is itself a large topic, related to
graphics, talking faces, etc. In choosing a model
we must consider: 1. the model can account for
3-D motion; 2. the model is easy to adjust to an
individual.
From Reference 29
19
Face model data capture
  • to determine head geometry
  • method
  • two calibrated frontal and profile images
  • 10 feature points: four eye corners, two
    nostrils, the bottom of the upper front teeth,
    the chin, and the base of the ears

20
Face model: locating features
  • to locate the facial features with high precision,
    in three steps
  • find a coarse outline of the head and an estimate
    of the main features
  • analyze the important areas in more detail
  • zoom in on specific points and measure them with
    high accuracy

21
Face model: locating features
22
Face model: location of main features
  • texture segmentation
  • using the luminance image
  • bandpass filter and adaptive threshold
  • morphological operations
  • connected-component analysis
  • extracting the center of mass, width, and height
    of each blob

23
Face model: location of main features
  • color segmentation
  • background color / skin and hair color
  • extracting the same features as in the texture step
  • evaluating combinations of features
  • train a 2-D head model (size)
  • score blobs to select candidates
  • check each eye candidate for a good combination
  • evaluate the whole head

24
Face model: measuring facial features
  • to find the exact dimensions
  • areas around the mouth and the eyes
  • using the HSI color space
  • a threshold for each (predefined) color cluster
  • recalibrating the color thresholds dynamically
  • remarkably accurate, but not robust enough
  • 2-pixel standard deviation

25
Face model: measuring facial features
The colors of the teeth, lips and the inner, dark
part of the mouth are learned in advance.
26
Face model: high-accuracy feature points
  • correlation analysis
  • a group of kernels
  • kernels chosen by width and height
  • scan the image for the best correlation
  • a 20x20 kernel in a 100x100 window, using a
    conjugate gradient descent approach
  • 0.5-pixel standard deviation
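The kernel search above (a small template scanned inside a larger window for the best match) can be sketched with normalized cross-correlation; the exhaustive scan below stands in for the conjugate-gradient refinement, and the synthetic data and function name are illustrative, not from the paper.

```python
import numpy as np

def best_correlation(window, kernel):
    """Scan `kernel` over `window`; return the (row, col) offset with
    the highest normalized cross-correlation score."""
    wh, ww = window.shape
    kh, kw = kernel.shape
    k = (kernel - kernel.mean()) / (kernel.std() + 1e-9)  # zero-mean, unit-std
    best, best_pos = -np.inf, (0, 0)
    for r in range(wh - kh + 1):
        for c in range(ww - kw + 1):
            patch = window[r:r + kh, c:c + kw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = (p * k).mean()        # NCC in [-1, 1]
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic check: embed a 20x20 kernel in a 100x100 window, as on the slide.
rng = np.random.default_rng(0)
window = rng.random((100, 100))
kernel = window[40:60, 30:50].copy()
print(best_correlation(window, kernel))   # -> (40, 30)
```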

27
Face model: high accuracy from correlation
28
Face model: pose estimation
  • using 6 corners with 3-D positions known from the model
  • iterated equations (to find i, j and Z0)
  • low-pass filtering of their trajectories

29
Modeling expression
  • Like AAM, create pose-free appearance patches

30
Modeling illumination
  • 3-D linear space, assuming a Lambertian surface
    without shadowing
  • Considering shadowing and distortion, the basis
    can be increased to around 10
  • Using only one subject, we can learn the linear
    space by experiment
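The low-dimensional lighting space can be illustrated with a small experiment: render synthetic Lambertian images of one subject under varying light directions and recover a linear basis with PCA. This is a sketch under assumed synthetic data, not the authors' procedure; attached shadows (the max with zero) push the basis beyond the ideal 3-D case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Lambertian images of one subject: each pixel has a fixed
# surface normal; each image is lit from a different direction, with
# attached shadows via max(0, n . l).
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
lights = rng.normal(size=(40, 3))
images = np.maximum(normals @ lights.T, 0.0)    # 500 pixels x 40 images

# PCA via SVD: the leading singular vectors span the lighting space.
U, S, _ = np.linalg.svd(images - images.mean(axis=1, keepdims=True),
                        full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(energy, 0.99)) + 1      # components for 99% energy
print(k)    # a small basis captures nearly all lighting variation
```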

31
Animation
  • Synthesized animation
  • Performance-driven sketch animation

32
End
Questions and comments?
33
Mean shift color tracking
  • An implementation showing the power of skin color
  • The feature is the probability of skin hue
  • Mean-shift search:
  • Choose a search window size.
  • Choose the initial location of the search window.
  • Compute the mean location in the search window.
  • Center the search window at the mean location
    computed in Step 3.
  • Repeat Steps 3 and 4 until convergence.

34
continued
  • Find the zeroth moment M00
  • Find the first moments for x and y: M10, M01
  • Then the mean search-window location (the
    centroid) is (xc, yc)
  • (xc = M10/M00, yc = M01/M00)
  • Get features from the blob
  • Length, width, rotation
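The moment computations above fold into a minimal mean-shift loop. This is a sketch with an assumed rectangular window and a synthetic skin-probability map; the function name and data are illustrative.

```python
import numpy as np

def mean_shift(prob, win, n_iter=20):
    """Shift a rectangular window (x, y, w, h) to the centroid of the
    probability map `prob` until it stops moving (Steps 3-5)."""
    x, y, w, h = win
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()                    # zeroth moment M00
        if m00 == 0:
            break                          # no probability mass in window
        ys, xs = np.mgrid[0:h, 0:w]
        xc = (xs * roi).sum() / m00        # xc = M10 / M00
        yc = (ys * roi).sum() / m00        # yc = M01 / M00
        nx = int(round(x + xc - w / 2))    # recenter window on centroid
        ny = int(round(y + yc - h / 2))
        if (nx, ny) == (x, y):             # converged
            break
        x, y = nx, ny
    return x, y, w, h

# A synthetic skin-probability blob centred at (60, 40):
prob = np.zeros((100, 100))
prob[35:46, 55:66] = 1.0
x, y, w, h = mean_shift(prob, (40, 20, 30, 30))
print(x + w // 2, y + h // 2)   # -> 60 40 (window centre on the blob)
```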

35
continued
36
Meanshift elliptical head tracking
  • Based on shape and adaptive color: the head is
    modeled as an ellipse and its appearance is
    represented by an adaptive color model
  • First, mean shift tracks the color blob
  • Second, maximize the normalized gradient around
    the boundary of the elliptical head

37
Why adaptive color
The head's hue varies during tracking, especially
across views or under large rotations. To handle
this, we modify the head's color model continuously
during tracking, using the tracking result.
hT: the initial color representation; hR: the
tracked color in the current frame; hN: the head's
color used for tracking in the next frame
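The hT/hR/hN update above can be sketched as a convex blend of hue histograms. The slides do not fix the update rule, so the blending weight `alpha` and the function name are assumptions.

```python
import numpy as np

def update_color_model(h_init, h_result, alpha=0.3):
    """Blend the tracked hue histogram into the model so the color
    representation follows gradual appearance change.
    `alpha` is an assumed smoothing weight, not from the slides."""
    h_new = (1 - alpha) * h_init + alpha * h_result
    return h_new / h_new.sum()            # keep it a distribution

h_T = np.array([0.7, 0.2, 0.1])           # initial hue histogram
h_R = np.array([0.2, 0.6, 0.2])           # histogram from current result
h_N = update_color_model(h_T, h_R)        # model for the next frame
print(h_N)                                # -> [0.55 0.32 0.13]
```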
38
Relocate elliptical head
  • Maximizing the normalized gradient
  • Assuming the elliptical head's state
  • gi is the intensity gradient at perimeter pixel i
    of the ellipse
  • Nh is the number of pixels on the perimeter of
    the ellipse
  • Then update color
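The normalized-gradient objective, mean |gi| over the Nh perimeter pixels, can be sketched as below. The perimeter sampling density and the synthetic test image are assumptions for illustration.

```python
import numpy as np

def ellipse_gradient_score(grad_mag, cx, cy, a, b, n_points=72):
    """Mean gradient magnitude |gi| sampled on the perimeter of the
    ellipse (cx, cy, a, b): the slide's (1/Nh) * sum_i |gi|."""
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    xs = np.clip(np.round(cx + a * np.cos(t)).astype(int),
                 0, grad_mag.shape[1] - 1)
    ys = np.clip(np.round(cy + b * np.sin(t)).astype(int),
                 0, grad_mag.shape[0] - 1)
    return grad_mag[ys, xs].mean()

# A bright elliptical disc: gradient is large only on its boundary.
yy, xx = np.mgrid[0:100, 0:100]
img = ((xx - 50)**2 / 20**2 + (yy - 50)**2 / 25**2 <= 1).astype(float)
gy, gx = np.gradient(img)
gmag = np.hypot(gx, gy)

on = ellipse_gradient_score(gmag, 50, 50, 20, 25)    # matching ellipse
off = ellipse_gradient_score(gmag, 30, 30, 20, 25)   # misplaced ellipse
print(on > off)   # -> True: the aligned ellipse scores higher
```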

39
Benefits
  • Compared with Bradski's paper and the Stanford
    elliptical-head paper, our approach has these
    benefits
  • Robust (fusion of color and gradient cues,
    adaptive to color change)
  • Fast (no exhaustive search; mean shift iterates
    quickly)

40
Demo
41
Real-time face pose tracking and expression
imitation (in progress)
  • A modification of the Active Appearance Model
  • The most obvious drawback of AAM?
  • slow, because it cannot apply the PCA projection
    directly
  • Explicitly compute the rigid motion from a set of
    rigid feature points
  • Learn the PCA space for non-rigid shape and
    appearance

42
Two step face tracking
Formulation: rigid features x1, non-rigid
features x2. Ta(x1) -> z1; the same Ta(x2) -> z2.
Deal with the imprecision of the rigid points by
synthesized feedback: in the synthesized z2,
relocate the rigid features x1 and compute a new
Ta. Iterate until convergence.
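The slides do not specify how the rigid motion Ta is computed from the rigid features x1, so as one plausible choice, the least-squares 2-D similarity (Procrustes) fit below recovers scale, rotation and translation from point correspondences; all names and data here are illustrative.

```python
import numpy as np

def fit_similarity(x, z):
    """Least-squares 2-D similarity z = s*R @ x + t from point
    correspondences (one point per row)."""
    mx, mz = x.mean(axis=0), z.mean(axis=0)
    xc, zc = x - mx, z - mz
    # Complex form: each 2-D point is px + i*py; a similarity transform
    # of centered points is one complex gain g = s * e^{i*theta}.
    a = xc[:, 0] + 1j * xc[:, 1]
    b = zc[:, 0] + 1j * zc[:, 1]
    g = (a.conj() @ b) / (a.conj() @ a)
    s, theta = abs(g), np.angle(g)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = mz - s * R @ mx
    return s, R, t

# Recover a known transform from 6 noiseless rigid feature points.
rng = np.random.default_rng(2)
x1 = rng.random((6, 2))
th = 0.4
R0 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
z1 = 1.5 * x1 @ R0.T + np.array([3.0, -1.0])
s, R, t = fit_similarity(x1, z1)
print(round(s, 3))   # -> 1.5
```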
43
Pose-free expression + pose T -> new face with pose and expression
44
Animation
One implementation uses hand-drawn corresponding
modes, for example
45
Reference
  • H. Li, PAMI93: H. Li, P. Roivainen, and R.
    Forchheimer, 3-D motion estimation in model-based
    facial image coding, PAMI, 6, 1993
  • DT, PAMI93: D. Terzopoulos and K. Waters,
    Analysis and synthesis of facial image sequences
    using physical and anatomical models, PAMI, 6,
    1993
  • Black, CVPR95: M. Black and Y. Yacoob, Tracking
    and recognizing rigid and non-rigid facial motion
    using local parametric models of image motion,
    CVPR95
  • Essa, ICCV95: I. Essa and A. Pentland, Facial
    expression recognition using a dynamic model and
    motion energy, Proc. 5th Int. Conf. on Computer
    Vision, pages 360-367, 1995
  • Darrell, CVPR96: Trevor Darrell, Baback Moghaddam,
    and Alex Pentland, Active face tracking and pose
    estimation in an interactive room, CVPR96
  • Pentland, Computer Graphics, 96: Irfan Essa,
    Sumit Basu, T. Darrell, and A. Pentland, Modeling,
    tracking and interactive animation of faces and
    heads using input from video, Proceedings of
    Computer Graphics, 1996
  • L. Davis, FG96: T. Horprasert, Y. Yacoob, and
    L. S. Davis, Computing 3-D head orientation from a
    monocular image sequence, FG96
  • Yacoob, PAMI96: Y. Yacoob and L. S. Davis,
    Computing spatio-temporal representations of
    human faces, PAMI, 6, 1996
  • DeCarlo, CVPR96: D. DeCarlo and D. Metaxas,
    The integration of optical flow and deformable
    models with applications to human face shape and
    motion estimation, CVPR96

46
  • Nesi, RTI96: P. Nesi and R. Magnolfi, Tracking and
    synthesizing facial motions with dynamic
    contours, Real-Time Imaging, 267-79, 1996
  • Oliver, CVPR97: Nuria Oliver and Alex Pentland,
    LAFTER: Lips and face real-time tracker, CVPR97
  • DT, CVPR97: P. Fieguth and D. Terzopoulos,
    Color-based tracking of heads and other mobile
    objects at video frame rates, CVPR97
  • Pentland, CVPR97: T. Jebara and A. Pentland,
    Parameterized structure from motion for 3D
    adaptive feedback tracking of faces, CVPR97
  • Cootes, ECCV98: T. Cootes and G. Edwards, Active
    appearance models, ECCV98
  • Gang Xu, ICPR98: Gang Xu and Takeo Sugimoto,
    "Rits Eye: A software-based system for realtime
    face detection and tracking using a pan-tilt-zoom
    controllable camera", Proc. of 14th International
    Conference on Pattern Recognition, pp. 1194-1197,
    1998
  • Birchfield, CVPR98: Stan Birchfield, Elliptical
    head tracking using intensity gradients and color
    histograms, CVPR98
  • Hager, PAMI98: G. Hager and P. Belhumeur,
    Efficient region tracking with parametric models
    of geometry and illumination, IEEE Transactions on
    Pattern Analysis and Machine Intelligence, 20(10),
    pp. 1125-1139, 1998
  • Schödl, PUI98: Schödl, Haro, and Essa, Head
    tracking using a textured polygonal model, PUI98
  • Blake, ICCV98: B. Bascle and A. Blake, Separability
    of pose and expression in facial tracking and
    animation, ICCV98

47
continued
  • Cassia, CVPR99: M. La Cascia and S. Sclaroff, Fast,
    reliable head tracking under varying illumination,
    CVPR99
  • Pentland, ICCV workshop 99: J. Strom, T. Jebara,
    S. Basu, and A. Pentland, Real-time tracking
    and modeling of faces: an EKF-based analysis by
    synthesis approach, International Conference
    on Computer Vision Workshop on Modelling
    People, Corfu, Greece, September 1999
  • Gokturk, ICCV01: Salih Burak Gokturk, Jean-Yves
    Bouguet, et al., A data-driven model for
    monocular face tracking, ICCV 2001
  • Y. Li, ICCV01: Yongmin Li, Shaogang Gong, and
    Heather Liddell, Modelling faces dynamically across
    views and over time, ICCV 2001
  • Feris, ICCV workshop 01: Rogerio S. Feris and
    Roberto M. Cesar Jr., Efficient real-time face
    tracking in wavelet subspace, ICCV Workshop, 2001
  • Ahlberg, RATFFG-RTS01: Jorgen Ahlberg, Using the
    active appearance algorithm for face and facial
    feature tracking, 2nd International Workshop on
    Recognition, Analysis and Tracking of Faces and
    Gestures in Real-Time Systems (RATFFG-RTS),
    pp. 68-72, Vancouver, Canada, July 2001
  • C. C. Chang, IJCV02: Chin-Chun Chang and
    Wen-Hsiang Tsai, Determination of head pose and
    facial expression from a single perspective view by
    successive scaled orthographic approximations,
    IJCV, 3, 2002
  • Dorin Comaniciu and Peter Meer, Real-time
    tracking of non-rigid objects using mean shift,
    Proc. of IEEE CVPR, 2000, pp. 142-149
  • G. R. Bradski, Real-time face and object tracking
    as a component of a perceptual user interface,
    IEEE Workshop on Applications of Computer Vision,
    1998, pp. 214-219
  • Eric Cosatto and Hans Peter Graf, Photo-realistic
    talking heads from image samples, IEEE Trans. on
    Multimedia, vol. 2, no. 3, September 2000