1
Lecture 22 - Exam 2 Review
  • CVEN 302
  • July 29, 2002

2
Lecture Goals
  • Chapter 6 - LU Decomposition
  • Chapter 7 - Eigen-analysis
  • Chapter 8 - Interpolation
  • Chapter 9 - Approximation
  • Chapter 11 - Numerical Differentiation and
    Integration

3
Chapter 6
  • LU Decomposition of Matrices

4
LU Decomposition
  • A modification of the elimination method, called
    LU decomposition, rewrites the matrix as the
    product of two matrices:
  • A = LU

5
LU Decomposition
  • There are variations of the technique using
    different methods:
  • Crout's reduction (U has ones on the diagonal).
  • Doolittle's method (L has ones on the diagonal).
  • Cholesky's method (the diagonal terms are the
    same value for the L and U matrices).

6
LU Decomposition Solving
  • Using the LU decomposition
  • Ax = LUx = b
  • Solve
  • Ly = b
  • and then solve
  • Ux = y
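A minimal Python sketch of the two triangular
solves (not from the slides; NumPy is assumed and
lu_solve is an illustrative name):

import numpy as np

def lu_solve(L, U, b):
    """Solve Ax = b given A = LU: forward-solve Ly = b, then back-solve Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x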

7
LU Decomposition
  • The matrices are represented by

8
LU Decomposition (Crout's Reduction)
  • Matrix decomposition

9
LU Decomposition (Doolittle's Method)
  • Matrix decomposition

10
Cholesky's Method
  • Matrix is decomposed into
  • where l_ii = u_ii

11
Tridiagonal Matrix
  • For a banded matrix using Doolittle's method,
    i.e. a tridiagonal matrix.

12
Pivoting of the LU Decomposition
  • Pivoting is still needed in LU decomposition
  • Pivoting messes up the order of L
  • What to do?
  • Need to pivot both L and a permutation matrix P
  • Initialize P as the identity matrix; when A is
    pivoted, also pivot L and P

13
Pivoting of the LU Decomposition
  • Permutation matrix P
  •  - a permutation of the identity matrix I
  • The permutation matrix performs the bookkeeping
    associated with the row exchanges
  • Permuted matrix P A
  • LU factorization of the permuted matrix
  • P A = L U
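As a hedged illustration (not part of the slides),
SciPy's lu routine returns all three factors; note
that SciPy's convention is A = P L U, so its P is
the transpose of the permutation matrix described
above:

import numpy as np
from scipy.linalg import lu

A = np.array([[0., 2., 1.],
              [1., 1., 0.],
              [2., 1., 1.]])
P, L, U = lu(A)                     # SciPy convention: A = P @ L @ U
print(np.allclose(P.T @ A, L @ U))  # True: P.T plays the role of the text's P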

14
Chapter 7
  • Eigen-analysis

15
Eigen-Analysis
  • Matrix eigenvalues arise from discrete models of
    physical systems
  • Discrete models
  • A finite number of degrees of freedom results in
    a finite number of eigenvalues and eigenvectors.

16
Eigenvalues
  • Computing eigenvalues of a matrix is important in
    numerous applications.
  • In numerical analysis, the convergence of an
    iterative sequence involving matrices is
    determined by the size of the eigenvalues of the
    iterative matrix.
  • In dynamic systems, the eigenvalues indicate
    whether a system is oscillatory, stable (decaying
    oscillations) or unstable (growing oscillations).
  • In oscillatory systems, the eigenvalues of the
    differential equations or of the coefficient
    matrix of a finite element model are directly
    related to the natural frequencies of the system.
  • In regression analysis, the eigenvectors of the
    correlation matrix are used to select new
    predictor variables that are linear combinations
    of the original predictor variables.

17
General Form of the Equations
  • The general form of the equations: A x = λ x

18
Power Method
The basic computation of the power method is
summarized as
The equation can be written as
19
Power Method
The basic computation of the power method is
summarized as
The equation can be written as
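A minimal sketch of the iteration in Python (not
from the slides; normalizing by the
largest-magnitude component follows the usual
textbook formulation):

import numpy as np

def power_method(A, tol=1e-10, max_iter=500):
    """Dominant eigenvalue/eigenvector by repeated multiplication."""
    z = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = A @ z
        lam_new = w[np.argmax(np.abs(w))]  # largest-magnitude component (keeps sign)
        z = w / lam_new                    # normalize so the iterates stay bounded
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, z

lam, z = power_method(np.array([[2., 1.], [1., 3.]]))
print(lam)                                 # ~3.618, the dominant eigenvalue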
20
Shift Method
It is possible to obtain another eigenvalue from
the set of equations by using a technique known as
shifting the matrix.
Subtract a shift from each side of the equation,
thereby changing the maximum eigenvalue.
21
Shift Method
The shift s is the maximum eigenvalue of the
matrix A. The matrix is rewritten in the form
B = A - sI.
Use the Power Method to obtain the largest
eigenvalue of B.
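Continuing the power_method sketch above (the
example matrix is illustrative):

import numpy as np

A = np.array([[2., 1.], [1., 3.]])         # eigenvalues ~3.618 and ~1.382
s, _ = power_method(A)                     # dominant eigenvalue of A
mu, _ = power_method(A - s * np.eye(2))    # dominant eigenvalue of B = A - sI
print(mu + s)                              # shift back: ~1.382, the other eigenvalue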
22
Inverse Power Method
The inverse power method is similar to the power
method, except that it finds the smallest
eigenvalue, using the following technique.
23
Inverse Power Method
The algorithm is the same as the power method, but
the eigenvalue it produces is the largest
eigenvalue of the inverse matrix. Take its
reciprocal to obtain the smallest eigenvalue of A.
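A hedged sketch (not from the slides): the same
iteration applied to the inverse, implemented with
a linear solve instead of an explicit inverse:

import numpy as np

def inverse_power(A, tol=1e-10, max_iter=500):
    """Smallest-magnitude eigenvalue of A via the power method on A^-1."""
    z = np.ones(A.shape[0])
    mu = 0.0
    for _ in range(max_iter):
        w = np.linalg.solve(A, z)          # w = A^-1 z without forming the inverse
        mu_new = w[np.argmax(np.abs(w))]
        z = w / mu_new
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return 1.0 / mu_new, z                 # reciprocal gives the smallest eigenvalue of A

lam, _ = inverse_power(np.array([[2., 1.], [1., 3.]]))
print(lam)                                 # ~1.382, the smallest eigenvalue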
24
Accelerated Power Method
The power method can be accelerated by using the
Rayleigh quotient instead of the largest w_k
value. The Rayleigh quotient is defined as
λ = (z^T A z) / (z^T z).
25
Accelerated Power Method
The next z term is defined using this quotient,
and the power method is adapted to use the new
value.
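A one-line sketch of the quotient (an assumed
helper, not the slides' notation); for symmetric
matrices it converges roughly twice as fast as the
component ratio:

import numpy as np

def rayleigh_quotient(A, z):
    """Eigenvalue estimate (z^T A z) / (z^T z) for an approximate eigenvector z."""
    return (z @ A @ z) / (z @ z)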
26
QR Factorization
  • Another form of factorization
  • A = QR
  • Produces an orthogonal matrix (Q) and a right
    upper triangular matrix (R)
  • Orthogonal matrix - inverse is transpose


27
QR Factorization
Why do we care? We can use Q and R to find
eigenvalues:
1. Get Q and R (A = QR)
2. Let A = RQ
3. Diagonal elements of A are eigenvalue
approximations
4. Iterate until converged
Note the QR eigenvalue method gives all
eigenvalues simultaneously, not just the dominant
one.
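A minimal sketch of the iteration (not from the
slides; numpy.linalg.qr is assumed):

import numpy as np

def qr_eigenvalues(A, iters=100):
    """Unshifted QR iteration: factor A = QR, set A = RQ, repeat.
    Each step is a similarity transform, so eigenvalues are preserved."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)

print(qr_eigenvalues([[2., 1.], [1., 3.]]))   # approximately [3.618, 1.382]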
28
Householder Matrix
  • Householder matrix reduces z_(k+1), ..., z_n to zero

29
Householder Matrix
  • To achieve the above operation, v must be a
    linear combination of x and e_k
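A hedged sketch of building such a reflection (the
sign choice avoids cancellation; householder is an
illustrative name):

import numpy as np

def householder(x, k):
    """H = I - 2 v v^T / (v^T v) that zeroes x[k+1:] and leaves x[:k] unchanged."""
    v = np.zeros_like(x, dtype=float)
    sign = 1.0 if x[k] >= 0 else -1.0           # avoid cancellation in v[k]
    v[k] = x[k] + sign * np.linalg.norm(x[k:])
    v[k+1:] = x[k+1:]
    return np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)

x = np.array([3., 1., 4., 1.])
print(np.round(householder(x, 1) @ x, 10))      # [3., -4.2426..., 0., 0.]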

30
Chapter 8
  • Interpolation

31
Interpolation Methods
Interpolation uses the data to approximate a
function that fits all of the data points.
All of the data is used to approximate the values
of the function inside the bounds of the data.
We will look at polynomial and rational function
interpolation of the data and piecewise
interpolation of the data.
32
Polynomial Interpolation Methods
  • Lagrange Interpolation Polynomial - a
    straightforward, but computationally awkward, way
    to construct an interpolating polynomial.
  • Newton Interpolation Polynomial - there is no
    difference between the Newton and Lagrange
    results. The difference between the two is the
    approach to obtaining the coefficients.
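A short sketch of direct Lagrange evaluation (this
is the straightforward-but-awkward construction
the first bullet mentions; names are illustrative):

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        basis = 1.0                                      # the basis polynomial L_i(x)
        for j in range(len(xs)):
            if j != i:
                basis *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * basis
    return total

print(lagrange_eval([0., 1., 2.], [1., 3., 7.], 1.5))    # 4.75 for y = x^2 + x + 1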

33
Hermite Interpolation
  • Advantages
  • The segments of the piecewise Hermite polynomial
    have a continuous first derivative at the support
    points.
  • The shape of the function being interpolated is
    better matched, because the tangents of this
    function and of the Hermite polynomial agree at
    the support points.

34
Rational Function Interpolation
Polynomials are not always the best match for the
data. A rational function, the ratio of two
polynomials, can be used instead. This is useful
when you deal with fitting complex functions
z = x + iy. The Bulirsch-Stoer algorithm creates
a function where the numerator is of the same
order as the denominator, or one less.
35
Rational Function Interpolation
Rational function interpolation requires the
locations and function values to be known.
36
Cubic Spline Interpolation
Hermite polynomials produce a smooth
interpolation, but they have the disadvantage that
the slope of the input function must be specified
at each breakpoint. Cubic spline interpolation
uses only the data points to maintain the desired
smoothness of the function and is piecewise
continuous.
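A hedged usage sketch with SciPy's CubicSpline
(its default not-a-knot end condition is an
implementation detail, not something the slides
specify):

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0., 1., 2., 3.])
cs = CubicSpline(x, np.sin(x))        # slopes are chosen by the spline, not supplied
print(cs(1.5), np.sin(1.5))           # interpolant vs. the underlying function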
37
Chapter 9
  • Approximation

38
Approximation Methods
What is the difference between approximation and
interpolation?
  • Interpolation matches the data points exactly.
    In the case of experimental data, this assumption
    is often not true.
  • Approximation - we want to consider the curve
    that will fit the data with the smallest error.

39
Least Square Fit Approximations
The solution is the minimization of the sum of
squares, which gives a least-squares solution.
This is known as the Maximum Likelihood Principle.
40
Least Square Error
How do you minimize the error?
Take the derivatives with respect to the
coefficients and set them equal to zero.
41
Least Square Coefficients for Quadratic Fit
The normal equations can be written as
  Σ y_k       = a0 N       + a1 Σ x_k   + a2 Σ x_k^2
  Σ x_k y_k   = a0 Σ x_k   + a1 Σ x_k^2 + a2 Σ x_k^3
  Σ x_k^2 y_k = a0 Σ x_k^2 + a1 Σ x_k^3 + a2 Σ x_k^4
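A sketch of solving these normal equations with
NumPy (quad_fit is an illustrative name):

import numpy as np

def quad_fit(x, y):
    """Quadratic least squares via the normal equations (A^T A) c = A^T y."""
    A = np.vander(x, 3, increasing=True)        # columns: 1, x, x^2
    return np.linalg.solve(A.T @ A, A.T @ y)    # coefficients a0, a1, a2

x = np.array([0., 1., 2., 3., 4.])
y = np.array([1.0, 3.1, 9.2, 19.1, 32.8])       # roughly 2 x^2 + 1
print(quad_fit(x, y))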
42
Polynomial Least Square
The technique can be applied to polynomials of any
degree, of the form
43
Polynomial Least Square
Solving large sets of linear equations is not a
simple task. They can have the undesirable
property known as ill-conditioning. The result
is that round-off errors in solving for the
coefficients cause unusually large errors in the
curve fits.
44
Polynomial Least Square
A measure of the variance of the problem is given
below, where n is the degree of the polynomial, N
is the number of data points, and the Y_k are the
data values:
45
Nonlinear Least Squared Approximation Method
How would you handle a problem that is modeled
as
46
Nonlinear Least Squared Approximation Method
Take the natural log of the equations to
linearize them.
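A hedged sketch assuming an exponential model
y = a e^(bx) (the slide's own model is in an
equation image that did not survive the
transcript):

import numpy as np

x = np.array([1., 2., 3., 4.])
y = np.array([2.7, 7.4, 20.1, 54.6])        # roughly e^x
b, ln_a = np.polyfit(x, np.log(y), 1)       # straight-line fit to ln y = ln a + b x
print(np.exp(ln_a), b)                      # recovered a and b, here both ~1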
47
Continuous Least Square Functions
Instead of modeling a known complex function over
a region, we would like to model the values with
a simple polynomial. This technique uses least
squares over a continuous region. The
coefficients of the polynomial can be determined
using the same technique that was used in the
discrete method.
48
Continuous Least Square Functions
The technique minimizes the error of the function
using an integral,
where
49
Continuous Least Square Functions
Take the derivative of the error with respect to
the coefficients, set it equal to zero, and
compute the components of the coefficient matrix.
The right-hand side of the system will be the
function we are modeling times a power of x.
50
Continuous Least Square Function
  • There are other forms of equations which can be
    used to represent continuous functions. Examples
    of these functions are
  • Legendre polynomials
  • Chebyshev polynomials
  • Cosines and sines.

51
Legendre Polynomial
The Legendre polynomials are a set of orthogonal
functions which can be used to represent a
function in terms of its components.
52
Legendre Polynomial
These functions are orthogonal over the range
[-1, 1]. This range can be scaled to fit the
function. The orthogonal functions are defined
as
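A hedged sketch of projecting a function onto the
first few Legendre polynomials, with the
coefficient integrals c_n = (2n+1)/2 ∫ f P_n dx
approximated by Gauss-Legendre quadrature (f = exp
is just a demo):

import numpy as np
from numpy.polynomial import legendre as leg

f = np.exp
nodes, weights = leg.leggauss(20)                 # 20-point quadrature on [-1, 1]
coeffs = [(2*n + 1) / 2 * np.sum(weights * f(nodes) * leg.legval(nodes, [0]*n + [1]))
          for n in range(4)]
print(leg.legval(0.5, coeffs), f(0.5))            # 4-term series vs. exact e^0.5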
53
Continuous Functions
Other forms of orthogonal functions are sines and
cosines, which are used in Fourier approximation.
The advantage of sines and cosines is that they
can model large time scales. You will need to
clip the ends of the series so that it will have
zeros at the ends.
54
Chapter 11
  • Numerical Differentiation and Integration

55
Numerical Differentiation
A Taylor series or Lagrange interpolation of
points can be used to find the derivatives. The
Taylor series expansion is defined as
56
Numerical Differentiation
Assume that the data points are equally spaced
and the equations can be written as
57
Differential Error
Notice that the errors of the forward and
backward 1st-derivative equations are of order
O(Δx), while the central difference has an error
of order O(Δx^2). The central difference has
better accuracy and lower error than the others.
This can be improved by using more terms to model
the first derivative.
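A minimal sketch of the O(Δx^2) central
difference:

import numpy as np

def central_diff(f, x, h=1e-5):
    """Central difference approximation to f'(x), with O(h^2) error."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_diff(np.sin, 1.0), np.cos(1.0))   # estimate vs. exact derivative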
58
Higher Order Derivatives
To find higher derivatives, use the Taylor series
expansions of several points and eliminate terms
from the sum of the equations. To improve the
error in the problem, add additional terms.
59
Lagrange Differentiation
Another form of differentiation is to use the
Lagrange interpolation between three points. The
values can be determined for unevenly spaced
points. Given
60
Lagrange Differentiation
Differentiate the Lagrange interpolation
Assume a constant spacing
61
Richardson Extrapolation
This technique uses the concept of variable grid
sizes to reduce the error. The technique uses a
simple method for eliminating the error. Consider
a second order central difference technique.
Write the equation in the form
62
Richardson Extrapolation
The central difference can be defined as
Write the equation with different grid sizes
63
Richardson Extrapolation
The equation can be rewritten as
It can be rewritten in the form
64
Richardson Extrapolation
The technique can be extended to eliminate
higher-order error terms by using a finer
grid.
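A sketch reusing central_diff from the
differentiation sketch above: combining estimates
at h and h/2 cancels the leading O(h^2) term.

def richardson_derivative(f, x, h=0.1):
    """Richardson extrapolation: (4 D(h/2) - D(h)) / 3 is O(h^4) accurate."""
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

print(richardson_derivative(np.sin, 1.0))       # ~cos(1), accurate to about 1e-7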
65
Trapezoid Rule
  • Integrate the linear interpolation to obtain the
    rule: I ≈ (b - a) [f(a) + f(b)] / 2

66
Simpson's 1/3-Rule
Integrate the Lagrange interpolation through three
equally spaced points to obtain
I ≈ (h/3) [f(x0) + 4 f(x1) + f(x2)], h = (b - a)/2
67
Simpson's 3/8-Rule
I ≈ (3h/8) [f(x0) + 3 f(x1) + 3 f(x2) + f(x3)], h = (b - a)/3
68
Midpoint Rule
  • Newton-Cotes open formula: I ≈ (b - a) f(xm)

[Figure: f(x) over the interval [a, b] with midpoint xm]
69
Composite Trapezoid Rule
[Figure: composite trapezoid rule with equally spaced points x0, x1, x2, x3, x4 and spacing h]
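A minimal sketch of the composite rule over n
equal subintervals:

import numpy as np

def composite_trapezoid(f, a, b, n):
    """h * (f0/2 + f1 + ... + f_{n-1} + fn/2) with h = (b - a)/n."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

print(composite_trapezoid(np.sin, 0, np.pi, 100))   # close to the exact value 2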
70
Composite Simpson's Rule
  • Multiple applications of Simpson's rule
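A matching sketch for the composite 1/3 rule (n
must be even):

import numpy as np

def composite_simpson(f, a, b, n):
    """h/3 * [f0 + 4 (odd-index f) + 2 (interior even-index f) + fn]."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

print(composite_simpson(np.sin, 0, np.pi, 10))      # close to the exact value 2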

71
Richardson Extrapolation
  • Use trapezoidal rule as an example
  • subintervals n = 2^j = 1, 2, 4, 8, 16, ...

72
Richardson Extrapolation
  • For trapezoidal rule

73
Richardson Extrapolation
  • kth level of extrapolation

74
Romberg Integration
  • Accelerated Trapezoid Rule
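A sketch reusing composite_trapezoid from the
sketch above, building the Romberg table by
Richardson extrapolation of trapezoid estimates:

import numpy as np

def romberg(f, a, b, levels=5):
    """R[j][0] uses n = 2^j subintervals;
    R[j][k] = (4^k R[j][k-1] - R[j-1][k-1]) / (4^k - 1)."""
    R = [[composite_trapezoid(f, a, b, 2**j)] for j in range(levels)]
    for j in range(1, levels):
        for k in range(1, j + 1):
            R[j].append((4**k * R[j][k-1] - R[j-1][k-1]) / (4**k - 1))
    return R[-1][-1]                   # highest-order estimate

print(romberg(np.sin, 0, np.pi))       # converges rapidly to the exact value 2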

75
Gaussian Quadratures
  • Newton-Cotes Formulae
  • use evenly-spaced functional values
  • Gaussian Quadratures
  • select functional values at non-uniformly
    distributed points to achieve higher accuracy
  • change of variables so that the interval of
    integration is [-1, 1]
  • Gauss-Legendre formulae
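A minimal sketch using NumPy's Gauss-Legendre
nodes and weights, with the change of variables
mapping [a, b] onto [-1, 1]:

import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b]."""
    t, w = np.polynomial.legendre.leggauss(n)     # nodes and weights on [-1, 1]
    x = (b + a) / 2 + (b - a) / 2 * t             # map the nodes to [a, b]
    return (b - a) / 2 * np.sum(w * f(x))

print(gauss_legendre(np.sin, 0, np.pi, 4))        # 4 points already give ~2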

76
Gaussian Quadrature on [-1, 1]
  • Exact integral for f = x^0, x^1, x^2, x^3
  • Four equations for four unknowns

77
Gaussian Quadrature on [-1, 1]
  • Exact integral for f = x^0, x^1, x^2, x^3

78
Gaussian Quadrature on [-1, 1]
  • Exact integral for f = x^0, x^1, x^2, x^3, x^4, x^5

79
Summary
  • Open book and open notes.
  • The exam will be 5-8 problems.
  • For short-answer-type problems, use a table to
    differentiate between the techniques.
  • Problems are not going to be excessive.
  • Make a short summary of the material.
  • Use your notes only when you have forgotten
    something; do not depend on them.