Algorithms for a large sparse nonlinear eigenvalue problem
1
Algorithms for a large sparse nonlinear
eigenvalue problem
  • Yusaku Yamamoto
  • Dept. of Computational Science and Engineering
  • Nagoya University

2
Outline
  • Introduction
  • Algorithms
  • Multivariate Newton's method
  • Use of the linear eigenvalue of smallest modulus
  • Use of the signed smallest singular value
  • Numerical experiments
  • Conclusion

3
Introduction
  • Nonlinear eigenvalue problem
  • Let A(λ) be an n-by-n matrix whose elements depend on a scalar parameter λ. In the nonlinear eigenvalue problem, we seek a value of λ for which there exists a nonzero vector x such that
  • λ and x are called the (nonlinear) eigenvalue and eigenvector, respectively.
  • Examples
  • A(λ) = A - λI : Linear eigenvalue problem
  • A(λ) = λ^2 M + λC + K : Quadratic eigenvalue problem
  • A(λ) = (e^λ - 1)A1 + λ^2 A2 + A3 : General nonlinear eigenvalue problem

A(λ) x = 0.
4
Applications of the nonlinear eigenvalue problem
  • Structural mechanics
  • Damped system: λ^2 Mx + λCx + Kx = 0
  • M: mass matrix, C: damping matrix, K: stiffness matrix
  • Electronic structure calculation
  • Kohn-Sham equation: H(λ)φ = λφ
  • H(λ): Kohn-Sham Hamiltonian
  • Theoretical fluid dynamics
  • Computation of the scaling exponent in turbulent
    flow

5
Solution of a quadratic eigenproblem
  • Transformation to a linear generalized
    eigenproblem
  • A quadratic eigenproblem can be transformed into a linear generalized eigenproblem of twice the size.
  • In general, a polynomial eigenproblem of degree k can be transformed into a linear generalized eigenproblem of k times the size.
  • Efficient algorithms for linear eigenproblems (QR, Krylov subspace methods, etc.) can then be applied.

λ^2 Mx + λCx + Kx = 0
6
Our target problem
  • Computation of the scaling exponent of a passive
    scalar field in a turbulent flow
  • Governing equation of the passive scalar y
  • n-point correlation function
  • Scaling exponent
  • We are interested in computing λn in the case of n = 4.

(Figure: turbulent flow.)
Yn(sx1, sx2, ..., sxn) = s^(λn) Yn(x1, x2, ..., xn)
7
Our target problem (contd)
  • The PDE satisfied by Y4
  • By writing Y4 = s^(λ4) F4, we have
  • Non-linearity comes from folding the (unbounded)
    domain of (x1, x2, x3, x4) into a bounded one
    using the scaling law.

F Y4 = 0, where
(λ4(λ4 - 1)A + λ4 B + C) F4 = 0 : quadratic eigenproblem
Y4(s, sr1, sr2, ..., sr5) = r1^(λ4) Y4(s/r1, s, sr2/r1, ..., sr5/r1)
8
Our target problem (contd)
  • Problem characteristics
  • A(λ) is large (n ≈ 10^5), sparse and nonsymmetric.
  • Dependence of A(λ) on λ is fully nonlinear (includes both exponential and polynomial terms in λ), but is analytical.
  • Computation of A(λ) takes a long time.
  • The smallest positive eigenvalue is sought.

9
Solution of a general nonlinear eigenproblem
  • Necessary and sufficient condition for λ: det(A(λ)) = 0
  • A simple approach (Gat et al., 1997; the case of n = 3)
  • Find the solution λ of det(A(λ)) = 0 by Newton's method.
  • Find the eigenvector x as the null space of the constant matrix A(λ).
  • Difficulty
  • Computation of det(A(λ)) requires roughly the same cost as an LU decomposition of A(λ).
  • Not efficient when A(λ) is large and sparse.

10
Approach based on multivariate Newton's method
  • Basic idea
  • Regard A(λ)x = 0 as a system of nonlinear simultaneous equations in the n+1 variables λ and x, and solve it by Newton's method.
  • Since there are only n equations, we add a normalization condition v^T x = 1 using some vector v.
  • Equations to be solved: F(x, λ) = [A(λ)x ; v^T x - 1] = 0
  • Iteration formula: solve [[A(λk), A'(λk)xk], [v^T, 0]] [Δx ; Δλ] = -F(xk, λk) and update xk+1 = xk + Δx, λk+1 = λk + Δλ (Newton's method applied to F).

11
Multivariate Newton's method (contd)
  • Advantages
  • Each iteration consists of solving linear
    simultaneous equations.
  • Much cheaper than computing det(A(λ)).
  • Convergence is quadratic if the initial values λ0 and x0 are sufficiently close to the solution.
  • Disadvantages
  • The iterates may converge to unwanted eigenpairs or fail to converge unless both λ0 and x0 are sufficiently good.
  • It is in general difficult to find a good initial value x0 for the eigenvector.
  • A'(λ) is necessary in addition to A(λ) (see the sketch below).

12
Approaches based on the linear eigenvalue
/singular value of smallest modulus
  • Definition
  • For a fixed λ, we call μ a linear eigenvalue of A(λ) if there exists a nonzero vector y such that A(λ)y = μy.
  • For a fixed λ, we call σ > 0 a linear singular value of A(λ) if σ^2 is a linear eigenvalue of A(λ)^T A(λ).
  • A linear eigenvalue / singular value is simply an eigenvalue / singular value of A(λ) viewed as a constant matrix.
  • μ and σ are functions of λ.
  • Necessary and sufficient conditions for λ

There exists x ≠ 0 with A(λ)x = 0  ⇔  det(A(λ)) = 0
⇔  A(λ) has a zero linear eigenvalue
⇔  A(λ) has a zero linear singular value.
13
Approaches based on the linear eigenvalue
/singular value of smallest modulus (contd)
  • A possible approach
  • Let
  • μ(λ) = the linear eigenvalue of smallest modulus of A(λ),
  • σ(λ) = the smallest linear singular value of A(λ).
  • Find the solution λ = λ* of μ(λ) = 0 or σ(λ) = 0.
  • Find the eigenvector x as the null space of the constant matrix A(λ*).
  • Advantages
  • Only the initial value for λ is required.
  • μ(λ) and σ(λ) can be computed much more cheaply than det(A(λ))
  • by using the Lanczos, Arnoldi, or Jacobi-Davidson methods.
  • A'(λ) is not necessary if the secant method is used to find λ.

14
Approach based on the linear eigenvalue of
smallest modulus
  • Algorithm based on the secant method
  • Difficulty
  • When A(λ) is nonsymmetric, computing μ(λ) is expensive.
  • Though it is much less expensive than computing det(A(λ)).

Set two initial values λ0 and λ1. Repeat the secant update
    λ_{l+1} = λ_l - μ(λ_l)(λ_l - λ_{l-1}) / (μ(λ_l) - μ(λ_{l-1}))
until μ(λ_l) becomes sufficiently small. Then find the eigenvector x as the null space of the constant matrix A(λ_l).
(Figure: the curve μ(λ) with secant iterates λ_l, λ_{l+1}, λ_{l+2} approaching the root λ*.)
15
Approach based on the smallest linear singular
value
  • Possible advantages
  • For nonsymmetric matrices, singular values can be
    computed much more easily than eigenvalues.
  • Problems
  • The linear singular value σ(λ) of A(λ) is defined as the positive square root of a linear eigenvalue of A(λ)^T A(λ).
  • Hence, σ(λ) is not smooth at σ(λ) = 0.
  • The secant method cannot be applied.
  • Solution
  • Modify the definition of σ(λ) so that it is smooth near σ(λ) = 0.
  • Analytical singular value
  • Signed smallest singular value

(Figure: the curve σ(λ) with secant iterates λ_{l+1}, λ_{l+2}; σ(λ) has a kink at its zero λ*.)
16
Analytical singular value decomposition
  • Theorem 1 (Bunse-Gerstner et al., 1991)
  • Let the elements of A(λ) be analytical functions of λ. Then there exist orthogonal matrices U(λ) and V(λ) and a diagonal matrix Σ(λ) = diag(σ1(λ), σ2(λ), ..., σn(λ)) whose elements are analytical functions of λ and which satisfy A(λ) = U(λ) Σ(λ) V(λ)^T.
  • This is called the analytical singular value decomposition of A(λ).
  • Notes
  • Analytical singular values may be negative.
  • In general, σ1(λ) ≥ σ2(λ) ≥ ... ≥ σn(λ) does not hold.
  • Analytical singular values are expensive to
    compute.
  • Requires the solution of ODEs starting from some initial point λ0.

(Figure: analytical singular values σ1(λ), σ2(λ), σ3(λ) plotted as functions of λ, starting from λ0.)
17
Signed smallest singular value
  • Definition
  • Let vn and un be the right and left singular vectors of A(λ) corresponding to the smallest linear singular value σn. Then we call σ̂n = σn · sgn(vn^T un) the signed smallest singular value of A(λ).
  • Theorem 2
  • Assume that σn(λ) is simple and vn(λ)^T un(λ) ≠ 0 on an interval λ0 ≤ λ ≤ λ1. Then the signed smallest singular value
  • σ̂n(λ) = σn(λ) · sgn(vn(λ)^T un(λ)) is an analytic function of λ in this interval.
  • Proof
  • From the uniqueness of the SVD, ũn(λ) = ±un(λ) and ṽn(λ) = ±vn(λ), where ũn(λ), ṽn(λ) denote the analytical singular vectors of Theorem 1. Hence,

σ̂n(λ) = σn(λ) sgn(vn(λ)^T un(λ))
       = σn(λ) vn(λ)^T un(λ) / |vn(λ)^T un(λ)|
       = vn(λ)^T A(λ) vn(λ) / |vn(λ)^T un(λ)|          (since A(λ)vn(λ) = σn(λ)un(λ))
       = ṽn(λ)^T A(λ) ṽn(λ) / |ṽn(λ)^T ũn(λ)|.

  • The right-hand side is clearly analytical when vn(λ)^T un(λ) ≠ 0.
18
Approach based on the signed smallest singular
value
  • Characteristics of the signed smallest singular value σ̂n(λ)
  • σ̂n(λ) = 0  ⇔  σn(λ) = 0
  • σ̂n(λ) is an analytical function of λ under suitable assumptions.
  • Easy to compute (requires only σn(λ), vn(λ) and un(λ)).
  • Algorithm based on the secant method (sketched in code below)

(Figure: secant iterates λ_{l+1}, λ_{l+2} on the smooth curve σ̂(λ), converging to the root λ*.)
19
Numerical experiments
  • Test problem
  • Computation of the scaling exponent in turbulent
    flow
  • Matrix sizes are 35,000 and 100,000.
  • Seek the smallest positive (nonlinear) eigenvalue.
  • It is known that the eigenvalue lies in [0, 4], but no estimate of the (nonlinear) eigenvector is available.
  • Computational environment
  • Fujitsu PrimePower HPC2500 (16PU)

20
Algorithm I: Approach based on the linear eigenvalue of smallest modulus
  • Result for n = 35,000
  • Nonlinear eigenvalue: 2.926654
  • Computational time: 35,520 sec. for each value of λ
  • Secant iterations: 4

(Figure: μ(λ) as a function of λ.)
21
Algorithm II: Approach based on the signed smallest singular value
  • Result for n = 35,000
  • Nonlinear eigenvalue: 2.926654
  • Computational time: 2,005 sec. for each value of λ (1/18 of Algorithm I)
  • Secant iterations: 4
  • Result for n = 100,000
  • Computational time: 16,200 sec. for each value of λ
  • Could not be computed with Algorithm I because the computational time was too long.
(Figure: σ̂(λ) as a function of λ.)
22
Conclusion
  • Summary of this study
  • We proposed an algorithm for large sparse nonlinear eigenproblems based on the signed smallest singular value.
  • The algorithm proved much faster than the method
    based on the linear eigenvalue of smallest
    modulus.
  • Future work
  • Application of the algorithm to various nonlinear
    eigenproblems