6.088 Digital and Computational Photography 6.882 Advanced Computational Photography Gradient image processing - PowerPoint PPT Presentation


1
6.088 Digital and Computational Photography
6.882 Advanced Computational Photography
Gradient image processing
Warning: French mathematicians inside
Frédo Durand, MIT EECS
2
How was pset 1?
3
What did we learn last time?
  • Log is good
  • Luminance is different from chrominance
  • Separate components
  • Low and high frequencies
  • Strong edges are important

4
Homomorphic filtering
  • Oppenheim, in the sixties
  • Images are the product of illumination and albedo
  • Similarly, many sounds are the product of an envelope and a modulation
  • Illumination is usually slowly varying
  • Separate albedo from illumination using low-pass filtering of the log image
  • http://www.cs.sfu.ca/stella/papers/blairthesis/main/node33.html
  • See also Koenderink, "Image processing done right": http://www.springerlink.com/(l1bpumaapconcbjngteojwqv)/app/home/contribution.asp?referrerparentbacktoissue,11,53journal,1538,3333linkingpublicationresults,1105633,1

5
What's great about the bilateral filter
  • Separate image into two components
  • Preserve strong edges
  • Non-iterative
  • More controllable, stable
  • Can be accelerated
  • Lots of other applications

6
Bilateral filtering on meshes
  • http://www.cs.tau.ac.il/dcor/online_papers/papers/shachar03.pdf
  • http://people.csail.mit.edu/thouis/JDD03.pdf

7
Questions?
8
Today: Gradient manipulation
  • Idea:
  • The human visual system is very sensitive to gradients
  • Gradients encode edges and local contrast quite well
  • Do your editing in the gradient domain
  • Reconstruct the image from the gradient
  • Various instances of this idea; I'll mostly follow Pérez et al., SIGGRAPH 2003
  • http://research.microsoft.com/vision/cambridge/papers/perez_siggraph03.pdf

9
Problems with direct cloning
From Perez et al. 2003
10
Solution: clone gradient
11
Gradients and grayscale images
  • Grayscale image: n×n scalars
  • Gradient: n×n 2D vectors
  • Overcomplete!
  • What's up with this?
  • Not all vector fields are the gradient of an image!
  • Only if they are curl-free (a.k.a. conservative)
  • But it does not matter for us

12
Today's message I
  • Manipulating the gradient is powerful

13
Today's message II
  • Optimization is powerful
  • In particular, least squares
  • Good news: least-squares optimization reduces to a big linear system
  • We are going to spend our time going back and forth between minimization and setting derivatives to zero.
  • Your head will spin.
  • Linear algebra is your friend
  • Big sparse linear systems can be solved efficiently

14
Today's message III
  • Toy examples are good for building understanding
  • 1D can however be overly simplistic; n-D is much more complicated

15
Questions?
16
Seamless Poisson cloning
  • Given vector field v (pasted gradient), find the value of f in the unknown region Ω that optimizes
    min_f ∫∫_Ω |∇f − v|²  with f|∂Ω = f*|∂Ω

Poisson equation with Dirichlet conditions: Δf = div v over Ω, f|∂Ω = f*|∂Ω
(Figure: pasted gradient v, mask, unknown region Ω, background f*)
17
Discrete 1D example: minimization
  • Copy to
  • Min ((f2 − f1) − 1)²
  • Min ((f3 − f2) − (−1))²
  • Min ((f4 − f3) − 2)²
  • Min ((f5 − f4) − (−1))²
  • Min ((f6 − f5) − (−1))²

With f1 = 6, f6 = 1
18
1D example: minimization
  • Copy to
  • Min ((f2 − 6) − 1)²  ⇒  f2² + 49 − 14 f2
  • Min ((f3 − f2) − (−1))²  ⇒  f3² + f2² + 1 − 2 f3 f2 + 2 f3 − 2 f2
  • Min ((f4 − f3) − 2)²  ⇒  f4² + f3² + 4 − 2 f3 f4 − 4 f4 + 4 f3
  • Min ((f5 − f4) − (−1))²  ⇒  f5² + f4² + 1 − 2 f5 f4 + 2 f5 − 2 f4
  • Min ((1 − f5) − (−1))²  ⇒  f5² + 4 − 4 f5

19
1D example: big quadratic
  • Copy to
  • Min ( f2² + 49 − 14 f2
    + f3² + f2² + 1 − 2 f3 f2 + 2 f3 − 2 f2
    + f4² + f3² + 4 − 2 f3 f4 − 4 f4 + 4 f3
    + f5² + f4² + 1 − 2 f5 f4 + 2 f5 − 2 f4
    + f5² + 4 − 4 f5 )   Denote it Q

20
1D example: derivatives
  • Copy to

Min ( f2² + 49 − 14 f2 + f3² + f2² + 1 − 2 f3 f2 + 2 f3 − 2 f2 + f4² + f3² + 4 − 2 f3 f4 − 4 f4 + 4 f3 + f5² + f4² + 1 − 2 f5 f4 + 2 f5 − 2 f4 + f5² + 4 − 4 f5 )   Denote it Q
21
1D example: set derivatives to zero
  • Copy to
  • ∂Q/∂f2 = 4 f2 − 2 f3 − 16 = 0
  • ∂Q/∂f3 = −2 f2 + 4 f3 − 2 f4 + 6 = 0
  • ∂Q/∂f4 = −2 f3 + 4 f4 − 2 f5 − 6 = 0
  • ∂Q/∂f5 = −2 f4 + 4 f5 − 2 = 0

⇒  [ 4 −2 0 0 ; −2 4 −2 0 ; 0 −2 4 −2 ; 0 0 −2 4 ] (f2 f3 f4 f5)ᵀ = (16 −6 6 2)ᵀ
22
1D example
  • Copy to

23
Questions?
24
1D example: remarks
  • Copy to
  • Matrix is sparse
  • Matrix is symmetric
  • Everything is a multiple of 2
  • because we square, then take the derivative of the square
  • Matrix is a convolution (kernel [−2 4 −2])
  • Matrix is independent of the gradient field; only the RHS is
  • Matrix is a second derivative

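The remarks above are easy to check numerically. A minimal sketch (assuming NumPy; the numbers are the slides' example, f1 = 6, f6 = 1, pasted gradient v = (1, −1, 2, −1, −1)):

```python
import numpy as np

# 1D Poisson cloning example from the slides:
# minimize sum_i ((f[i+1] - f[i]) - v[i])^2 with f1 = 6 and f6 = 1 fixed.
f1, f6 = 6.0, 1.0
v = np.array([1.0, -1.0, 2.0, -1.0, -1.0])   # pasted gradient

# Setting derivatives of Q to zero gives the (-2 4 -2) convolution matrix;
# the gradient field and boundary values appear only on the right-hand side.
A = np.array([[ 4., -2.,  0.,  0.],
              [-2.,  4., -2.,  0.],
              [ 0., -2.,  4., -2.],
              [ 0.,  0., -2.,  4.]])
b = 2.0 * np.array([v[0] - v[1] + f1,
                    v[1] - v[2],
                    v[2] - v[3],
                    v[3] - v[4] + f6])
f = np.concatenate([[f1], np.linalg.solve(A, b), [f6]])
print(f)  # -> [6. 6. 4. 5. 3. 1.]
```

Note how A is sparse, symmetric, a multiple of 2, and independent of v, exactly as the remarks say.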
25
Questions?
26
Let's try to analyze further
  • What is a simple case?

27
Membrane interpolation
  • What if v is null?
  • Laplace equation (a.k.a. membrane equation): Δf = 0

28
1D example: minimization
  • Minimize derivatives to interpolate
  • Min (f2 − f1)²
  • Min (f3 − f2)²
  • Min (f4 − f3)²
  • Min (f5 − f4)²
  • Min (f6 − f5)²

With f1 = 6, f6 = 1
29
1D example: derivatives
  • Minimize derivatives to interpolate

Min ( f2² + 36 − 12 f2 + f3² + f2² − 2 f3 f2 + f4² + f3² − 2 f3 f4 + f5² + f4² − 2 f5 f4 + f5² + 1 − 2 f5 )   Denote it Q
30
1D example: set derivatives to zero
  • Minimize derivatives to interpolate
  • ∂Q/∂f2 = 4 f2 − 2 f3 − 12 = 0
  • ∂Q/∂f3 = −2 f2 + 4 f3 − 2 f4 = 0
  • ∂Q/∂f4 = −2 f3 + 4 f4 − 2 f5 = 0
  • ∂Q/∂f5 = −2 f4 + 4 f5 − 2 = 0
31
1D example
  • Minimize derivatives to interpolate
  • Pretty much says that the second derivative should be zero
  • [−1 2 −1] is a second-derivative filter

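The same machinery with a null gradient field confirms the linear-interpolation claim (a sketch assuming NumPy):

```python
import numpy as np

# Membrane interpolation in 1D: v = 0, boundary f1 = 6, f6 = 1.
# Same (-2 4 -2) system; only the boundary values reach the RHS.
A = np.array([[ 4., -2.,  0.,  0.],
              [-2.,  4., -2.,  0.],
              [ 0., -2.,  4., -2.],
              [ 0.,  0., -2.,  4.]])
b = np.array([2.0 * 6.0, 0.0, 0.0, 2.0 * 1.0])
f = np.concatenate([[6.0], np.linalg.solve(A, b), [1.0]])
print(f)  # -> [6. 5. 4. 3. 2. 1.], exactly linear interpolation
```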
32
Intuition
  • In 1D: just linear interpolation!
  • ∫ f′ over the interval is fixed by the boundary values, so minimizing ∫ f′² spreads the slope evenly
  • Locally, if the second derivative were not zero, the first derivative would be varying, which is bad since we want ∫ f′² to be minimized
  • Note that, in 1D, by setting f″ = 0 we leave two degrees of freedom. This is exactly what we need to control the boundary conditions at x1 and x2
33
In 2D: membrane interpolation
34
Membrane interpolation
  • What if v is null?
  • Laplace equation (a.k.a. membrane equation): Δf = 0
  • Mathematicians will tell you there is an associated Euler-Lagrange equation
  • where the Laplacian Δ is similar to [−1 2 −1] in 1D
  • Kind of the idea that we want a minimum, so we differentiate and get a simpler equation
35
Questions?
36
What if v is not null?
37
What if v is not null?
  • 1D case

Seamlessly paste
onto
Just add a linear function so that the boundary
condition is respected
38
(Review) Seamless Poisson cloning
  • Given vector field v (pasted gradient), find the value of f in the unknown region Ω that optimizes
    min_f ∫∫_Ω |∇f − v|²  with f|∂Ω = f*|∂Ω

Poisson equation with Dirichlet conditions: Δf = div v over Ω, f|∂Ω = f*|∂Ω
(Figure: pasted gradient v, mask, unknown region Ω, background f*)
39
What if v is not null? 2D
  • Variational minimization (integral of a functional) with boundary condition:
    min_f ∫∫_Ω |∇f − v|²,  f|∂Ω = f*|∂Ω
  • Euler-Lagrange equation: Δf = div v
  • (Compared to Laplace, we have replaced Δf = 0 by Δf = div v)

40
In 2D, if v is conservative
  • If v is the gradient of an image g
  • Correction function f̃ so that f = g + f̃
  • f̃ performs membrane interpolation over Ω

41
1D example
  • Copy to

Add
Result
Difference
Solve Laplace
42
In 2D, if v is NOT conservative
  • Also need to project the vector field v onto a conservative field
  • And do the membrane thing
  • Of course, we do not need to worry about it; it's all handled naturally by the least-squares approach

43
Questions?
44
Recap
  • Find image whose gradient best approximates the input gradient
  • Least-squares minimization
  • Discrete case turns into a linear system
  • Set derivatives to zero
  • Derivatives of quadratic ⇒ linear
  • Continuous case turns into Euler-Lagrange form
  • Δf = div v
  • When gradient is null: membrane interpolation
  • Linear interpolation in 1D

45
Fourier perspective
  • Gradient in Fourier?
  • Multiply coefficients by iω
  • Parseval theorem?
  • Integral of square is the same in space and frequency: ∫ |f(x)|² dx = ∫ |F(ω)|² dω
  • Least squares on gradient ⇒ least squares in Fourier with weight ω²
  • Tries to respect high frequencies at the potential cost of low frequencies

46
Fourier interpretation
  • Least squares on gradient
  • Parseval, anybody?
  • Integral of squared stuff is the same in Fourier and primal
  • What is the gradient/derivative in Fourier?
  • Multiply coefficients by frequency and i
  • Seen in Fourier, Poisson editing does a weighted least squares of the image where low frequencies have a small weight and high frequencies a big weight

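Both Fourier facts used above can be verified numerically (a sketch assuming NumPy's unnormalized FFT convention):

```python
import numpy as np

# Two Fourier facts, checked numerically:
# (1) Parseval: energy is the same in space and frequency
#     (with numpy's unnormalized FFT, divide the spectrum's energy by N)
# (2) differentiation multiplies Fourier coefficients by i*omega
N = 256
x = np.arange(N)
g = np.sin(2 * np.pi * 3 * x / N)            # smooth periodic test signal
G = np.fft.fft(g)

# (1) Parseval
assert np.isclose(np.sum(g**2), np.sum(np.abs(G)**2) / N)

# (2) spectral derivative equals the analytic derivative
omega = 2 * np.pi * np.fft.fftfreq(N)        # frequencies for unit spacing
dg_spectral = np.fft.ifft(1j * omega * G).real
dg_exact = (2 * np.pi * 3 / N) * np.cos(2 * np.pi * 3 * x / N)
assert np.allclose(dg_spectral, dg_exact)
print("Parseval and i*omega derivative verified")
```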
47
Questions?
48
Warning
  • What follows is not strictly necessary to
    implement Poisson image editing
  • But
  • It helps understand the properties of the
    equation
  • It helps to read the literature
  • It's cool math

49
Calculus
  • Simplified version
  • Want to minimize g(x) over the space of real values x
  • Differentiate and set g′(x) = 0
  • Now we have a more complex problem: we want to minimize a variational equation over the space of functions f
  • It's a complex business to differentiate with respect to functions
  • In general, derivatives are well defined only for functions over 1D domains

50
Derivative definition
  • 1D derivative: f′(x) = lim_{ε→0} (f(x+ε) − f(x)) / ε
  • Multidimensional derivative
  • For a direction v, the directional derivative is lim_{ε→0} (f(x+εv) − f(x)) / ε
  • For functionals?
  • Do something similar: replace the vector by a function

51
Calculus of variation, 1D
  • We want to minimize ∫ f′(x)² dx with f(x1) = a, f(x2) = b
  • Assume we have a solution f
  • Try to define some notion of 1D derivative with respect to a 1D parameter ε in a given direction of functional space
  • For a perturbation function η(x) that also respects the boundary condition (i.e. η(x1) = η(x2) = 0) and a scalar ε, the integral ∫ (f′(x) + ε η′(x))² dx should be bigger than for f alone
52
Calculus of variation, 1D
  • ∫ (f′(x) + ε η′(x))² dx should be bigger than for f alone
  • = ∫ f′(x)² + 2 ε η′(x) f′(x) + ε² η′(x)² dx
  • The third term is always positive and is negligible when ε goes to zero
  • Differentiate with respect to ε and set to zero:
  • ∫ 2 η′(x) f′(x) dx = 0

53
Calculus of variation, 1D
  • How do we get rid of η? And still include the knowledge that η(x1) = η(x2) = 0?
  • When we have an integral of a product and we are playing with derivatives, look into integration by parts
  • Now how do you remember integration by parts?
  • Integrate one, differentiate the other
  • It's about the derivative of a product inside an integral

54
Calculus of variation, 1D
  • Integrate by parts: ∫ η′ f′ dx = [η f′] − ∫ η f″ dx
  • We know that η(x1) = η(x2) = 0, so the bracket vanishes
  • We get ∫ η(x) f″(x) dx = 0
  • Must be true for any η
  • Therefore, f″(x) must be zero everywhere

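The derivation above, written out compactly in the same notation (perturbation η, scalar ε):

```latex
\[
\min_f \int_{x_1}^{x_2} f'(x)^2 \, dx,
\qquad f(x_1) = a,\; f(x_2) = b
\]
\[
\left.\frac{d}{d\varepsilon}\right|_{\varepsilon = 0}
\int \bigl(f' + \varepsilon \eta'\bigr)^2 dx
= \int 2\, \eta' f' \, dx
= \bigl[\, 2\, \eta f' \,\bigr]_{x_1}^{x_2} - \int 2\, \eta f'' \, dx
= -\int 2\, \eta f'' \, dx = 0
\]
\[
\text{for all } \eta \text{ with } \eta(x_1) = \eta(x_2) = 0
\quad\Longrightarrow\quad f''(x) \equiv 0
\]
```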
55
Summary
  • Variational minimization (integral of a functional) with boundary condition
  • Derive Euler-Lagrange equation
  • Use perturbation function
  • Calculus of variation. Set to zero. Integrate by
    parts.
  • Check out the hidden slides for detail

56
Questions?
57
Discrete solver: recall 1D
  • Copy to
  • Setting derivatives to zero ⇒ sparse linear system (kernel [−2 4 −2])
58
Discrete Poisson solver
  • Two approaches:
  • Minimize the variational problem
  • Solve the Euler-Lagrange equation
  • In practice, variational is best
  • In both cases, need to discretize derivatives
  • Finite differences over 4-pixel neighborhoods
  • We are going to work with pairs of pixels (p, q)
  • Partial derivatives are easy on pairs
  • Same for the discretization of v
59
Discrete Poisson solver
  • Minimize the variational problem:
    min Σ over pairs ⟨p,q⟩ of (f_p − f_q − v_pq)²
    with discretized gradient f_p − f_q, discretized v_pq = g(p) − g(q),
    and boundary condition f_p = f*_p (only for boundary pixels)
  • Rearrange and call Np the neighbors of p:
    |Np| f_p − Σ_{q∈Np} f_q = Σ_{q∈Np} v_pq, with known boundary values moved to the RHS
  • Big yet sparse linear system
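The discrete system can be assembled and solved directly. A minimal sketch (assuming NumPy/SciPy; `poisson_clone` and the requirement that the mask stays away from the image border are my own simplifications, not Pérez et al.'s code):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_clone(bg, src, mask):
    """Seamless cloning: paste the gradient of `src` into `bg` inside the
    boolean `mask`, solving |Np| f_p - sum f_q = sum v_pq with Dirichlet
    boundary values from `bg`. Assumes the mask avoids the image border."""
    h, w = bg.shape
    idx = -np.ones((h, w), dtype=int)           # pixel -> unknown index
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    A = lil_matrix((len(ys), len(ys)))
    b = np.zeros(len(ys))
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0                           # |Np| = 4 neighbors
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            qy, qx = y + dy, x + dx
            b[k] += src[y, x] - src[qy, qx]     # v_pq = g(p) - g(q)
            if mask[qy, qx]:
                A[k, idx[qy, qx]] = -1.0        # unknown neighbor
            else:
                b[k] += bg[qy, qx]              # known boundary value to RHS
    f = bg.astype(float).copy()
    f[ys, xs] = spsolve(A.tocsr(), b)
    return f
```

Sanity check of the formulation: if `src` differs from `bg` by a constant, the gradients already match and the solve returns `bg` unchanged.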
61
Result (eye candy)
62
Questions?
63
Recap
  • Find image whose gradient best approximates the input gradient
  • Least-squares minimization
  • Discrete case turns into a big sparse linear system
  • Set derivatives to zero
  • Derivatives of quadratic ⇒ linear

64
Solving big matrix systems
  • Ax = b
  • You can use Matlab's \ (Gaussian elimination)
  • But not very scalable

65
Iterative solvers
  • Important ideas:
  • Do not invert the matrix
  • Maintain a vector x that progresses towards the solution
  • Updates mostly require applying the matrix.
  • In many cases, this means you do not even need to store the matrix (e.g. for a convolution matrix you only need the kernel)
  • Usually, you don't even wait until convergence
  • Big question: in which direction do you walk?
  • Yes, very similar to gradient descent

66
Solving big matrix systems
  • Ax = b, where A is sparse (many zero entries)
  • In Pset 3, we ask you to use conjugate gradient
  • http://www.cs.cmu.edu/quake-papers/painless-conjugate-gradient.pdf
  • http://www.library.cornell.edu/nr/bookcpdf/c10-6.pdf

67
Conjugate gradient
  • "The Conjugate Gradient Method is the most prominent iterative method for solving sparse systems of linear equations. Unfortunately, many textbook treatments of the topic are written with neither illustrations nor intuition, and their victims can be found to this day babbling senselessly in the corners of dusty libraries. For this reason, a deep, geometric understanding of the method has been reserved for the elite brilliant few who have painstakingly decoded the mumblings of their forebears. Nevertheless, the Conjugate Gradient Method is a composite of simple, elegant ideas that almost anyone can understand. Of course, a reader as intelligent as yourself will learn them almost effortlessly." (Shewchuk, from the paper linked above)

68
Ax = b
  • A is square, symmetric, and positive-definite
  • When A is dense, you're stuck; use backsubstitution
  • When A is sparse, iterative techniques (such as conjugate gradient) are faster and more memory-efficient
  • Simple example
  • (Yeah yeah, it's not sparse)

69
Turn Ax = b into a minimization problem
  • Minimization is more natural for analyzing iteration (gradient ascent/descent)
  • Quadratic form: f(x) = ½ xᵀA x − bᵀx + c
  • c can be ignored because we want to minimize
  • Intuition:
  • the solution of a linear system is always the intersection of n hyperplanes
  • Take the square distance to them
  • A needs to be positive-definite so that we have a nice parabola

70
Gradient of the quadratic form
  • Not our image gradient!
  • Multidimensional gradient (as many dimensions as rows in the matrix)
  • f′(x) = ½ Aᵀx + ½ Ax − b, and since A is symmetric, f′(x) = Ax − b
  • Not surprising: we turned Ax = b into the quadratic minimization (if A is not symmetric, conjugate gradient finds the solution of ½(Aᵀ + A)x = b)
71
Steepest descent/ascent
  • Pick the gradient direction
  • Find the optimum in this direction

(Figure: gradient direction; energy along the gradient)
72
Residual
  • At iteration i, we are at a point x(i)
  • Residual r(i) = b − Ax(i)
  • Cool property of the quadratic form: residual = −gradient

73
Behavior of gradient descent
  • Zigzags or goes straight, depending on whether we're lucky
  • Ends up taking multiple steps in the same direction

74
Conjugate gradient
  • Smarter choice of direction
  • Ideally, step directions should be orthogonal to one another (no redundancy)
  • But tough to achieve
  • Next best thing: make them A-orthogonal (conjugate), that is, orthogonal when transformed by A

75
Conjugate gradient
  • For each step:
  • Take the residual (gradient)
  • Make it A-orthogonal to the previous ones
  • Find the minimum along this direction
  • Plus, life is good:
  • In practice, you only need the previous one
  • You can show that the new residual r(i+1) is already A-orthogonal to all previous directions p except p(i)

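The steps above can be sketched in a few lines (assuming NumPy; `conjugate_gradient` is my own illustrative implementation, tested here on the [−2 4 −2] system from the 1D example, which is symmetric positive-definite):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive-definite A.
    Only needs matrix-vector products with A (never inverts it)."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual = -gradient of the quadratic form
    p = r.copy()                   # first direction: steepest descent
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # make new direction A-orthogonal to p
        rs = rs_new
    return x

# The 1D Poisson example: exact answer is f2..f5 = (6, 4, 5, 3)
A = np.array([[ 4., -2.,  0.,  0.],
              [-2.,  4., -2.,  0.],
              [ 0., -2.,  4., -2.],
              [ 0.,  0., -2.,  4.]])
b = np.array([16., -6., 6., 2.])
x = conjugate_gradient(A, b)
print(x)  # -> approximately [6. 4. 5. 3.]
```

In exact arithmetic, CG on an n×n system converges in at most n steps; in practice one stops much earlier.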
76
Recap
  • Poisson image cloning: paste gradient, enforce boundary condition
  • Variational formulation
  • Also Euler-Lagrange formulation
  • Discretize the variational version; leads to a big but sparse linear system
  • Conjugate gradient is a smart iterative technique to solve it

77
Questions?
78
(No Transcript)
79
(No Transcript)
80
Manipulate the gradient
  • Mix gradients of g and f: take the max

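A sketch of this mixing rule (assuming NumPy; Pérez et al. select using the full gradient magnitude, simplified here to per-component selection, and `mixed_gradient` is a hypothetical helper name):

```python
import numpy as np

def mixed_gradient(f, g):
    """Build a target gradient field v by keeping, at each pixel pair,
    whichever of the two gradients (background f or source g) has the
    larger magnitude. The result is generally NOT conservative; the
    least-squares Poisson solve takes care of that."""
    fy, fx = np.diff(f, axis=0), np.diff(f, axis=1)   # forward differences
    gy, gx = np.diff(g, axis=0), np.diff(g, axis=1)
    vy = np.where(np.abs(fy) >= np.abs(gy), fy, gy)
    vx = np.where(np.abs(fx) >= np.abs(gx), fx, gx)
    return vy, vx

# Tiny example: the stronger edge wins in each component
f = np.array([[0., 2.], [0., 0.]])
g = np.array([[0., 1.], [0., 3.]])
vy, vx = mixed_gradient(f, g)
print(vx)  # -> [[2.] [3.]]
```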
81
(No Transcript)
82
(No Transcript)
83
(No Transcript)
84
Reduce big gradients
  • Dynamic range compression
  • See Fattal et al. 2002

85
Questions?
86
Issues with Poisson cloning
  • Colors
  • Contrast
  • The backgrounds in f and g should be similar

87
Improvement: local contrast
  • Use the log
  • Or use covariant derivatives (next slides)

88
Covariant derivatives: Photoshop
  • Photoshop Healing Brush
  • Developed independently from Poisson editing by Todor Georgiev (Adobe)

From Todor Georgiev's slides: http://photo.csail.mit.edu/posters/todor_slides.pdf
89
Seamless Image Stitching in the Gradient Domain
  • Anat Levin, Assaf Zomet, Shmuel Peleg, and Yair Weiss
    http://www.cs.huji.ac.il/alevin/papers/eccv04-blending.pdf
    http://eprints.pascal-network.org/archive/00001062/01/tips05-blending.pdf
  • Various strategies (optimal cut, feathering)

90
Photomontage
  • http://grail.cs.washington.edu/projects/photomontage/photomontage.pdf

91
Elder's edge representation
  • http://elderlab.yorku.ca/elder/publications/journals/ElderPAMI01.pdf

92
Gradient tone mapping
  • Fattal et al. Siggraph 2002

Slide from Siggraph 2005 by Raskar (Graphs by
Fattal et al.)
93
Gradient attenuation
From Fattal et al.
94
Fattal et al. Gradient tone mapping
95
Gradient tone mapping
  • Socolinsky, D., "Dynamic Range Constraints in Image Fusion and Visualization", in Proceedings of Signal and Image Processing 2000, Las Vegas, November 2000.

96
Gradient tone mapping
  • Socolinsky, D., "Dynamic Range Constraints in Image Fusion and Visualization", in Proceedings of Signal and Image Processing 2000.

97
  • Socolinsky, D. and Wolff, L.B., "A new paradigm for multispectral image visualization and data fusion", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Fort Collins, June 1999.

98
Retinex
  • Land and McCann (Land: inventor/founder of Polaroid)
  • Theory of lightness perception (albedo vs. illumination)
  • Strong gradients come from albedo; illumination is smooth

99
Questions?
100
Color2gray
  • Use Lab gradient to create grayscale images

101
Poisson Matting
  • Sun et al., SIGGRAPH 2004
  • Assume the gradients of F and B are negligible
  • Plus various image-editing tools to refine the matte

102
Gradient camera?
  • Tumblin et al., CVPR 2005. http://www.cfar.umd.edu/aagrawal/gradcam/gradcam.html

103
Poisson-ish mesh editing
  • http://portal.acm.org/citation.cfm?id=1057432.1057456
  • http://www.cad.zju.edu.cn/home/xudong/Projects/mesh_editing/main.htm
  • http://people.csail.mit.edu/sumner/research/deftransfer/

104
Questions?
105
Alternative to membrane
  • Thin plate: minimize the second derivative

(Figure: data points; membrane interpolation vs. thin-plate interpolation)
106
Inpainting
  • More elaborate energy functionals/PDEs
  • http://www-mount.ee.umn.edu/guille/inpainting.htm

107
Key references
  • Socolinsky, D., Dynamic Range Constraints in Image Fusion and Visualization, 2000. http://www.equinoxsensors.com/news.html
  • Elder, Image editing in the contour domain, 2001. http://elderlab.yorku.ca/elder/publications/journals/ElderPAMI01.pdf
  • Fattal et al. 2002, Gradient Domain HDR Compression. http://www.cs.huji.ac.il/~danix/hdr/
  • Poisson Image Editing, Pérez et al. http://research.microsoft.com/vision/cambridge/papers/perez_siggraph03.pdf
  • Covariant Derivatives and Vision, Todor Georgiev (Adobe Systems), ECCV 2006

108
Poisson, Laplace, Lagrange, Fourier, Monge, Parseval
  • Fourier studied under Lagrange, Laplace, and Monge; Legendre and Poisson were around
  • They all raised serious objections about Fourier's work on trigonometric series
  • http://www.ece.umd.edu/taylor/frame2.htm
  • http://www.mathphysics.com/pde/history.html
  • http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Fourier.html
  • http://www.memagazine.org/contents/current/webonly/wex80905.html
  • http://www.shsu.edu/icc_cmf/bio/fourier.html
  • http://en.wikipedia.org/wiki/Simeon_Poisson
  • http://en.wikipedia.org/wiki/Pierre-Simon_Laplace
  • http://en.wikipedia.org/wiki/Jean_Baptiste_Joseph_Fourier
  • http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Parseval.html

109
Refs Laplace and Poisson
  • http://www.ifm.liu.se/boser/elma/Lect4.pdf
  • http://farside.ph.utexas.edu/teaching/329/lectures/node74.html
  • http://en.wikipedia.org/wiki/Poisson's_equation
  • http://www.colorado.edu/engineering/CAS/courses.d/AFEM.d/AFEM.Ch03.d/AFEM.Ch03.pdf

110
Gradient image editing refs
  • http://research.microsoft.com/vision/cambridge/papers/perez_siggraph03.pdf
  • http://www.cs.huji.ac.il/alevin/papers/eccv04-blending.pdf
  • http://www.eg.org/EG/DL/WS/COMPAESTH/COMPAESTH05/075-081.pdf.abstract.pdf
  • http://photo.csail.mit.edu/posters/Georgiev_Covariant.pdf
  • Covariant Derivatives and Vision, Todor Georgiev (Adobe Systems), ECCV 2006
  • http://www.mpi-sb.mpg.de/hitoshi/research/image_restoration/index.shtml
  • http://www.cs.tau.ac.il/tommer/vidoegrad/
  • http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1467600
  • http://grail.cs.washington.edu/projects/photomontage/
  • http://www.cfar.umd.edu/aagrawal/iccv05/surface_reconstruction.html
  • http://www.merl.com/people/raskar/Flash05/
  • http://research.microsoft.com/carrot/new_page_1.htm
  • http://www.idiom.com/zilla/Work/scatteredInterpolation.pdf

111
Links
  • How to Get Your SIGGRAPH Paper Rejected, Jim
    Kajiya, SIGGRAPH 1993 Papers Chair, (link)
  • Ted Adelson's Informal guidelines for writing a
    paper, 1991. (link)
  • Notes on technical writing, Don Knuth, 1989.
    (pdf)
  • What's wrong with these equations, David Mermin,
    Physics Today, Oct., 1989. (pdf)
  • Ten Simple Rules for Mathematical Writing,
    Dimitri P. Bertsekas (link)
  • Advice on Research and Writing (at CMU)
  • How (and How Not) to Write a Good Systems Paper
    by Roy Levin and David D. Redell
  • Things I Hope Not to See or Hear at SIGGRAPH by
    Jim Blinn
  • How to have your abstract rejected

112
Poisson image editing
  • Two aspects:
  • When the new gradient is conservative: just membrane interpolation to ensure the boundary condition
  • Otherwise: allows you to work with non-conservative vector fields
  • Why is it good?
  • More weight on high frequencies
  • Membrane tries to use low frequencies to match boundary conditions
  • Manipulation of the gradient can be cool (e.g. max of the two gradients)
  • Manipulate local features (edge/gradient) and worry about global consistency later
  • Smart thing to do: work in the log domain
  • Limitations
  • Color shift, contrast shift (depends strongly on the difference between the two respective backgrounds)

113
Other functionals
  • I lied: some people have used smarter energy functions, e.g. Todor Georgiev's initial implementation of the Photoshop healing brush.