1
PETE 301
  • Nonlinear Systems of Equations

2
Nonlinear Systems of Equations
  • f(x) = 0 (n variables, n equations)
  • Newton-Raphson method
  • Meaning is the same as in the one-variable case
  • Create a linear approximation locally
  • Force the local linear approximation to solve the
    system, then continue the iteration from the new
    point

3
Multivariable Taylor Series
  • Taylor series expansion about the current point
  • We will truncate this series to give the linear
    model written out below
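
The expansion itself is not reproduced in this transcript; the standard
first-order multivariate Taylor series it refers to is, for each equation
f_k:

  f_k(x + Δx) ≈ f_k(x) + Σ_j (∂f_k/∂x_j) Δx_j,   k = 1, ..., n

Truncating after the linear term gives the local linear model used on the
next slides.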

4
Multivariable Taylor Series
  • This can be represented using a matrix known as
    the Jacobian, written out below
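
The matrix itself is not in the transcript; assuming the standard
definition, the truncated expansion in vector form reads

  f(x + Δx) ≈ f(x) + J(x) Δx

where J is the n-by-n Jacobian with entries J_kj = ∂f_k/∂x_j:

  J(x) = [ ∂f1/∂x1 ... ∂f1/∂xn ]
         [   ...          ...  ]
         [ ∂fn/∂x1 ... ∂fn/∂xn ]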

5
Multiple Equations
  • We are interested in solving cases with several
    equations.

6
Multiple Equations
  • This case is a simple extension of the
    single-equation case

7
Newton-Raphson
  • If we want f(x1 + Δx1, x2 + Δx2) = 0, the Taylor
    series implies J(x) Δx = -f(x)

8
Newton-Raphson
  • Solve the linear system J(x) Δx = -f(x) for the
    step vector Δx
  • We solve it by LU decomposition and back
    substitution
  • (Equivalent to Δx = -J(x)^-1 f(x), without forming
    the inverse explicitly)
  • Once Δx is known we calculate the new x:
    x_new = x + Δx
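
A minimal sketch of one such step in Python, assuming user-supplied
callables f and J that return a NumPy vector and matrix (the names are
placeholders, not from the slides); scipy.linalg.lu_factor and lu_solve
perform the LU decomposition and back substitution mentioned above:

  import numpy as np
  from scipy.linalg import lu_factor, lu_solve

  def newton_step(f, J, x):
      """One Newton-Raphson step: solve J(x) dx = -f(x), return x + dx."""
      lu, piv = lu_factor(J(x))        # LU decomposition of the Jacobian
      dx = lu_solve((lu, piv), -f(x))  # back substitution for the step
      return x + dx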

9
Cautious Approach
  • We calculate the new approximant according to
    x_new = x + α Δx
  • where α is 1 if there is no problem with
    convergence, and α is taken smaller if we
    encounter convergence problems.
  • α can be adjusted during the iteration process.

10
Questions
  • How many subroutines does the user have to
    provide for a Newton-Raphson method?
  • Answer: two subroutines
  • What is the input to the f(x) subroutine?
  • What is the output?
  • What is the input to the J(x) subroutine?
  • What is the output?

11
Example
  • Two equations
  • The f function
  • The Jacobian matrix (a stand-in example is
    sketched below)
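
The slide's actual equations are not in the transcript; as a hypothetical
stand-in (my choice of system, not the course's), here are the two
user-provided subroutines from the Questions slide. f takes the current
vector x and returns the vector of function values; J takes x and returns
the matrix of partial derivatives:

  import numpy as np

  # Hypothetical 2-equation system (not the slide's):
  #   f1(x) = x1^2 + x2^2 - 4 = 0
  #   f2(x) = x1 * x2 - 1 = 0
  def f(x):
      """Input: the vector x. Output: the vector of function values."""
      return np.array([x[0]**2 + x[1]**2 - 4.0,
                       x[0]*x[1] - 1.0])

  def J(x):
      """Input: the vector x. Output: the 2x2 Jacobian matrix."""
      return np.array([[2.0*x[0], 2.0*x[1]],
                       [x[1],     x[0]]])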

12
Example - LTS101
  • The starting point (the initial guess x0)
  • The convergence criterion (e.g. on the size of the
    step)
  • Here α is 1
  • Another safeguard: max 20 iterations (all combined
    in the sketch below)
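
A minimal driver putting the pieces together; the starting point and
tolerance below are placeholders of my own choosing, while α = 1 and the
20-iteration cap follow the slide:

  import numpy as np

  def newton_solve(f, J, x0, eps=1e-8, alpha=1.0, max_iter=20):
      """Damped Newton-Raphson iteration with an iteration cap."""
      x = np.asarray(x0, dtype=float)
      for _ in range(max_iter):
          dx = np.linalg.solve(J(x), -f(x))  # LU-based solve for the step
          x = x + alpha * dx                 # cautious update (alpha = 1 here)
          if np.max(np.abs(dx)) < eps:       # converged on the step size
              return x
      raise RuntimeError("no convergence within max_iter iterations")

  x = newton_solve(f, J, x0=[2.0, 0.5])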

13
Newton-Raphson Pros and Cons
  • Second-order convergence
  • Needs an analytical Jacobian
  • Convergence behavior is the same as in the
    single-variable case
  • Good if you start from the vicinity of the
    solution
  • Might diverge
  • Basically, it appears in every sophisticated
    engineering code

14
Convergence Criterion
  • Sometimes it is formulated in terms of function
    values ("errors")
  • That is dangerous, because the functions are often
    mathematical constructions without a clear
    physical meaning (and their units are blurred)
  • In practice x always has a well-defined unit, and
    "eps" has to correspond to it
  • If the elements of x are of different nature, use
    normalization, as sketched below
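
A sketch of the normalized, step-based test the slide recommends; the
characteristic scales are hypothetical values the user would supply:

  import numpy as np

  def converged(dx, scale, eps=1e-6):
      """Compare each component of the step dx to a user-supplied
      characteristic scale for that variable."""
      return np.max(np.abs(dx) / np.asarray(scale)) < eps

  # e.g. x mixes a pressure in Pa and a dimensionless saturation:
  # converged(dx, scale=[1e5, 1.0])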

15
Newton-Raphson Variants
  • If the Jacobian is not available analytically, we
    can estimate it by finite differences (sketched
    below), or
  • Quasi-Newton (1): continuously improve the
    Jacobian estimate with every new function
    evaluation, or
  • Quasi-Newton (2): even better, continuously
    improve the estimate of the INVERSE OF THE
    JACOBIAN with every new function evaluation
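
A minimal sketch of the finite-difference option, approximating each
column of the Jacobian with a forward difference (the step size h is a
placeholder choice):

  import numpy as np

  def jacobian_fd(f, x, h=1e-7):
      """Forward-difference approximation of the Jacobian of f at x."""
      x = np.asarray(x, dtype=float)
      f0 = f(x)
      Jfd = np.empty((f0.size, x.size))
      for j in range(x.size):
          xp = x.copy()
          xp[j] += h                    # perturb one variable at a time
          Jfd[:, j] = (f(xp) - f0) / h  # column j approximates df/dx_j
      return Jfd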

16
Minimization Multivariate Case
  • In many engineering problems we wish to minimize
    a nonlinear function which depends on several
    variables.
  • One method for doing this is the polytope method.
    One of its chief advantages is that it does not
    require function derivative information.

17
Polytope Method
  • The polytope method was proposed by Nelder and
    Mead and involves moving a simplex of N+1
    points across an N-dimensional surface.
  • Easiest to think about in 2D!

18
Polytope Method
  • In a 2D example think of the 3 points forming a
    triangle.
  • The method proceeds by choosing a fourth point
    and checking if the function value at this point
    is less than the function values at the original
    3 points.

(Figure from Practical Optimization, Gill, Murray and
Wright)
19
Polytope Method
  • The method is trying to move the triangle of
    points in a direction which will minimize the
    function.
  • Rules define how a fourth point is calculated and
    how this point is used to replace one of the
    values defining the triangle.
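
The method is available in standard libraries; a minimal sketch using
SciPy's Nelder-Mead implementation on a hypothetical 2-D function (my
choice, not from the slides):

  import numpy as np
  from scipy.optimize import minimize

  def g(x):
      """Hypothetical function to minimize: a simple shifted bowl."""
      return (x[0] - 1.0)**2 + 2.0*(x[1] + 0.5)**2

  res = minimize(g, x0=np.array([3.0, 3.0]), method="Nelder-Mead")
  print(res.x)  # approaches [1.0, -0.5] without using any derivatives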

20
Steepest Descent Method
  • An alternative to the polytope method is the
    steepest descent method. This method uses both
    the values of the function and the values of the
    function derivative.
  • The steepest descent method looks for the
    direction in which the function decreases the
    most rapidly and changes the estimates of the
    independent variables (x) in that direction.
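
A minimal steepest-descent sketch; the backtracking step-size rule and
the stopping test are my own choices, and grad is a user-supplied
derivative routine:

  import numpy as np

  def steepest_descent(g, grad, x0, tol=1e-6, max_iter=500):
      """Move along -grad(x), halving the step until g decreases."""
      x = np.asarray(x0, dtype=float)
      for _ in range(max_iter):
          d = -grad(x)                  # direction of steepest descent
          if np.linalg.norm(d) < tol:   # gradient ~ 0: (local) minimum
              break
          t = 1.0
          while g(x + t*d) >= g(x) and t > 1e-12:
              t *= 0.5                  # backtrack until the value improves
          x = x + t*d
      return x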

21
Constrained Minimization
  • Linear objective function and linear constraints:
    linear programming (simplex algorithm)
  • Nonlinear objective function and/or constraints:
    for example, the Excel Solver
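
A minimal linear-programming sketch with SciPy; the problem data are
placeholders of my choosing:

  from scipy.optimize import linprog

  # maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x, y >= 0
  # (linprog minimizes, so the objective is negated)
  res = linprog(c=[-3.0, -2.0],
                A_ub=[[1.0, 1.0], [1.0, 0.0]],
                b_ub=[4.0, 2.0],
                bounds=[(0, None), (0, None)])
  print(res.x)  # optimum at x = 2, y = 2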

22
ODEs Systems of Equations
  • Original Runge-Kutta 4 (one equation)
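
The formulas are not reproduced in this transcript; the classical
fourth-order Runge-Kutta scheme for a single equation y' = f(x, y) with
step size h is:

  k1 = f(xi, yi)
  k2 = f(xi + h/2, yi + h k1 / 2)
  k3 = f(xi + h/2, yi + h k2 / 2)
  k4 = f(xi + h,   yi + h k3)
  y(i+1) = yi + (h/6)(k1 + 2 k2 + 2 k3 + k4)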

23
System of ODEs
  • Runge-Kutta 4 (applied to a system of equations)

Bold face symbols are vectors
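
A minimal sketch of the same scheme applied to a system, where y and the
k's are vectors (function and variable names are my own):

  import numpy as np

  def rk4_step(f, x, y, h):
      """One RK4 step for the system y' = f(x, y), with y a vector."""
      k1 = f(x, y)
      k2 = f(x + h/2, y + h*k1/2)
      k3 = f(x + h/2, y + h*k2/2)
      k4 = f(x + h,   y + h*k3)
      return y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
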
24
Nonlinear Least Squares
  • Consider using regression to fit the parameters
    a1 and a2 in a nonlinear model f(x; a1, a2) using
    measured data y(x)
  • This model cannot be linearized, so the regression
    techniques we learned previously will not work
  • Use nonlinear regression instead

25
Nonlinear Least Squares
  • First expand the model with a multivariate Taylor
    series in the parameters
  • We will iterate on the parameters (a1 and a2) from
    a starting guess. The Taylor series predicts the
    updated f(x_i) will match y(x_i) if
    f(x_i) + (∂f/∂a1) Δa1 + (∂f/∂a2) Δa2 = y(x_i)

26
Nonlinear Least Squares
  • To match every data point (y_i), a system of
    equations must be formed and solved: D = Z ΔA
  • where D is the vector of residuals y_i - f(x_i),
    Z is the matrix of partial derivatives of the
    model with respect to a1 and a2 at each x_i, and
    ΔA holds the parameter steps Δa1 and Δa2

27
Nonlinear Least Squares
  • Can D = Z ΔA be solved with LU factorization?
  • Is the system square?

28
Nonlinear Least Squares
  • The system cannot be solved in its current form
    because it is not square (more data points than
    parameters).
  • Multiply both sides by Z^T, the transpose of Z:
    Z^T D = Z^T Z ΔA
  • This can be solved for ΔA. It is equivalent to
    minimizing the sum of the squared differences
    between the data and the model.
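
A minimal Gauss-Newton sketch of the whole procedure. The model below is
a hypothetical stand-in (the slides' actual model is not in the
transcript) and the starting guess is arbitrary; multiplying by Z^T
produces the square 2-by-2 normal-equation system the last slide
describes:

  import numpy as np

  def model(x, a):
      """Hypothetical nonlinear model: f(x) = a1 * (1 - exp(-a2 * x))."""
      return a[0] * (1.0 - np.exp(-a[1] * x))

  def gauss_newton(x, y, a0, tol=1e-8, max_iter=50):
      a = np.asarray(a0, dtype=float)
      for _ in range(max_iter):
          r = y - model(x, a)                     # residual vector D
          # Z: partials of the model w.r.t. a1 and a2 at every x_i
          Z = np.column_stack([1.0 - np.exp(-a[1]*x),
                               a[0]*x*np.exp(-a[1]*x)])
          da = np.linalg.solve(Z.T @ Z, Z.T @ r)  # normal equations
          a = a + da
          if np.max(np.abs(da)) < tol:
              break
      return a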