Transcript and Presenter's Notes

Title: Computational Methods in Physics PHYS 3437


1
Computational Methods in Physics PHYS 3437
  • Dr Rob Thacker
  • Dept. of Astronomy & Physics (MM-301C)
  • thacker@ap.smu.ca

2
Today's Lecture
  • Notes on projects talks
  • Issues with adaptive step size selection
  • Second order ODEs
  • nth order ODEs
  • Algorithm outline for handling nth order ODEs

Advance warning: no class on March 31st
3
Project time-line
  • By now you should have approximately ½ of the
    project coding development done
  • There are only 4 weeks until the first talk
  • I would advise you to have at least some outline
    of how your write-up will look already
  • Project reports are due on the last day of term

4
Talk Format
  • Allowing for changing over presenters,
    connecting/disconnecting etc, we can allow up to
    60 minutes per lecture period
  • 3 people per lecture means 20 minutes per
    presenter
  • 15 minute presentation
  • 5 minutes for questions
  • Remember your presentation will be marked by your
    peers

5
Issues with the adaptive step size algorithm
  • Step (8): If err < 0.1e ⇒ h is too small.
    Increase h by a factor of 1.5 for the next step
  • If err > e ⇒ h is too big.
    Halve h and repeat the iteration
  • If 0.1e ≤ err ≤ e ⇒ h is OK.
  • Clearly, if there are problems with convergence
    this step will keep halving h forever. You need
    to set a limit here, either by not allowing h to
    get smaller than some preset value or by counting
    the number of times you halve h
  • Because of rounding error as you increment x it
    is very unlikely that you will hit xmax exactly
    with the final step
  • Therefore you should choose h = min(h, xmax - x),
    where x is the current position (see the sketch
    below)
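
A minimal sketch of this step-size logic in Fortran, with the safety limits
added; err, e (the tolerance), h, hmin, x, xmax and the counter nhalve are
assumed to be declared in the surrounding routine, and the halving limit of
50 is an arbitrary illustrative choice:

  if (err < 0.1d0*e) then
     h = 1.5d0*h               ! error very small: increase h for the next step
  else if (err > e) then
     h = 0.5d0*h               ! error too large: halve h and repeat this step
     nhalve = nhalve + 1
     if (h < hmin .or. nhalve > 50) stop 'adaptive step size failed to converge'
  end if
  h = min(h, xmax - x)         ! never let the final step overshoot xmax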

6
Issues with the adaptive step size algorithm
more gotchas
  • Step (7): Estimate the error
  • You should add a very small (tiny) number to the
    absolute value of the denominator to avoid a
    divide by zero (when the two estimates are very
    close to zero you might, in incredibly rare
    situations, get the two numbers adding to zero)
  • Since you store values of y and x in arrays it
    is possible that you may run out of storage
    (because you need too many points). You should
    check the number of positions stored against the
    maximum allowed in your declaration of the x and
    y arrays (see the snippet below). This will avoid
    the possibility of a segmentation fault.
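
A short sketch of both guards, assuming y1step and y2step are the single-step
and two-half-step estimates, small is the "tiny" constant (e.g. 1.0d-30) and
the if-block sits inside the integration loop:

  err = abs(y2step - y1step)/(abs(y2step) + small)   ! small prevents 0/0
  if (i >= imax) then
     print *, 'warning: out of storage after', imax, 'points'
     exit                                            ! leave the integration loop
  end if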

7
Second order ODEs
  • Consider the general second order ODE
    y'' = g(x, y, y')
  • We now require that two initial values be
    provided, namely y(x0) = y0 and the derivative
    y'(x0) = y'0
  • These are called the Cauchy conditions
  • If we let z = y', then z' = y'' and we have

    y' = z,   z' = g(x, y, z)          (1)
8
Second order ODEs cont
  • We have thus turned a second order ODE into a
    first order ODE for a vector
  • Can apply the R-K solver to the system, but you
    now have two components to integrate
  • At each step we must update all of the x, y and z
    values (see the sketch below)
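
As a concrete (illustrative, not from the slides) example, the second order
ODE y'' = -y with y(0) = 1, y'(0) = 0 becomes the pair y' = z, z' = -y, which
a fixed-step classical R-K (RK4) integrator can advance directly; the exact
solution is y = cos(x):

  program second_order_demo
     implicit none
     integer :: i
     double precision :: x, dx, w(2), k1(2), k2(2), k3(2), k4(2)

     x  = 0.0d0
     dx = 0.1d0
     w  = (/ 1.0d0, 0.0d0 /)            ! w(1) = y, w(2) = z = y'
     do i = 1, 100
        k1 = dx*f(x,            w)
        k2 = dx*f(x + 0.5d0*dx, w + 0.5d0*k1)
        k3 = dx*f(x + 0.5d0*dx, w + 0.5d0*k2)
        k4 = dx*f(x + dx,       w + k3)
        w  = w + (k1 + 2.0d0*k2 + 2.0d0*k3 + k4)/6.0d0  ! update both components
        x  = x + dx
     end do
     print *, 'x =', x, ' y =', w(1), ' exact =', cos(x)

  contains
     function f(x, w) result(dw)        ! returns the vector (y', z') = (z, -y)
        double precision, intent(in) :: x, w(2)
        double precision :: dw(2)
        dw(1) = w(2)
        dw(2) = -w(1)
     end function f
  end program second_order_demo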

9
Diagrammatically
Remember y' = z and z' = g(x, y, y')
[Figure: side-by-side plots of y(x) and z(x) over the step from x0 to x1,
showing how the starting values (y0, z0 = y'0) are advanced to (y1, z1)]
10
nth order ODEs
  • Systems with higher than second order derivatives
    are actually quite rare in physics
  • Nonetheless we can adapt the idea for 2nd order
    systems to nth order
  • Suppose we have a system specified by
    y^(n) = g(x, y, y', ..., y^(n-1))
  • Such an equation requires n initial values for y
    and its derivatives; suppose again that we have
    the Cauchy conditions y(x0), y'(x0), ...,
    y^(n-1)(x0)

11
Definitions for the nth order system
  • Define y1 = y, y2 = y', y3 = y'', ..., yn = y^(n-1)
  • Then y1' = y2, y2' = y3, ..., y(n-1)' = yn, and
    yn' = g(x, y1, y2, ..., yn)
12
Definitions for the nth order system
So we again have a system of coupled first order
equations that can be solved via R-K.
13
Algorithm for solving such a system
  • Useful parameters to set
  • imax = number of points at which y(x) is to be
    evaluated (1000 is reasonable)
  • nmax = highest order of ODE to be solved (say 9)
  • errmax = highest tolerated error (say 1.0x10^-9)
  • Declare x(imax), y(imax,nmax), y0(nmax), yl(nmax),
    ym(nmax), yr(nmax), ytilde(nmax), ystar(nmax)
    (a possible Fortran rendering follows the note
    below)

Note that the definitions used here are not quite
consistent with the variable definitions used in
the discussion of the single function case.
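
One possible Fortran rendering of these declarations (the double precision
kind is an assumption; any real kind would do):

  integer, parameter :: imax = 1000      ! points at which y(x) is evaluated
  integer, parameter :: nmax = 9         ! highest order of ODE handled
  double precision, parameter :: errmax = 1.0d-9   ! highest tolerated error
  double precision :: x(imax), y(imax,nmax)
  double precision :: y0(nmax), yl(nmax), ym(nmax), yr(nmax)
  double precision :: ytilde(nmax), ystar(nmax)
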
14
User inputs
  • Need to get from the user (not necessarily at run
    time)
  • The g(x, y, y', ..., y^(n-1)) function
  • Domain of x, i.e. what are xmin and xmax
  • What is the order of the ODE to be solved, stored
    in variable nord
  • What are the initial values for y and its
    derivatives (the Cauchy conditions)? These are
    stored in y0(nmax)

15
Correspondence of arrays to variables
  • The arrays y(imax,nmax) and y0(nmax) correspond
    to y and its derivatives as follows
  • y(1:imax,1) = y values,
    with y(1,1) = y0 = y0(1)
  • y(1:imax,2) = y' values,
    with y(1,2) = y'0 = y0(2)
  • y(1:imax,3) = y'' values,
    with y(1,3) = y''0 = y0(3)
  • ...
  • y(1:imax,nord) = y^(nord-1) values, with
    y(1,nord) = y^(nord-1)0 = y0(nord)

16
Choose initial step size and initialize yl values
  • Apply the same criterion as standard Runge-Kutta
  • dx = 0.1*errmax*(xmax - xmin)
  • dxmin = 1.0d-3*dx
  • We can use this value to ensure that the adaptive
    step size is never less than 1/1000th of the
    initial guess
  • x(1) = xl = xmin
  • Set the initial position for the solver
  • Initialize yl (left y-values for the first
    interval)
  • do n = 1, nord : yl(n) = y0(n), y(1,n) = y0(n)

17
Start adaptive loop
  • Set i = 1; this will count the number of x
    positions evaluated
  • dxh = dx/2 (half width of the zone)
  • xr = xl + dx (right-hand boundary)
  • xm = xl + dxh (mid point of the zone)
  • Perform the R-K calculations on this zone for all
    of the yn
  • Need to calculate all y values on the right
    boundary using a single R-K step (stored in
    ytilde(nmax))
  • Need to calculate all y values on the right
    boundary using two half R-K steps (stored in
    ystar(nmax)), for example

call rk(xl, yl, ytilde, nord, dx)
call rk(xl, yl, ym,     nord, dxh)
call rk(xm, ym, ystar,  nord, dxh)
18
Now evaluate the Richardson-extrapolation-style value yr(n)
  • err = 0.
  • do n = 1, nord (accumulate the error estimate
    from ytilde(n) and ystar(n) and form the
    extrapolated value yr(n))
  • If err < 0.1*errmax then increase dx: dx = 1.5*dx
  • If err > errmax ⇒ dx is too big:
    dx = max(dxh, dxmin) and repeat the evaluation of
    this zone
  • If 0.1*errmax ≤ err ≤ errmax ⇒ dx is OK; we
    need to store all the results

The error is now taken over all of the functions
being integrated
19
Update values and prepare for next step
  • Increment i: i = i + 1 (check that it doesn't
    exceed imax)
  • x(i) = xr
  • xl = xr (note: you should check that xl hasn't
    gone past xmax, i.e. that dx is clamped
    appropriately)
  • do n = 1, nord
  • y(i,n) = yr(n)
  • yl(n) = yr(n)
  • Then return to the top of the loop and do the
    next step (a sketch of the whole adaptive loop
    follows below)
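
The loop on slides 17-19 could be assembled as below. This is a sketch only:
it assumes the rk(xl, yl, yr, nord, dx) routine described on the next slides,
and the 1/15 Richardson-extrapolation factor and the relative error formula
are assumptions (they correspond to a 4th-order step; the slide's own
formulae were not reproduced in the transcript):

  subroutine solve_nth_order(xmin, xmax, nord, y0, x, y, npts)
     implicit none
     integer, parameter :: imax = 1000, nmax = 9
     double precision, parameter :: errmax = 1.0d-9, small = 1.0d-30
     integer,          intent(in)  :: nord
     double precision, intent(in)  :: xmin, xmax, y0(nmax)
     double precision, intent(out) :: x(imax), y(imax,nmax)
     integer,          intent(out) :: npts
     double precision :: yl(nmax), ym(nmax), yr(nmax), ytilde(nmax), ystar(nmax)
     double precision :: dx, dxmin, dxh, xl, xm, xr, err
     integer :: i, n
     external rk

     dx    = 0.1d0*errmax*(xmax - xmin)     ! initial guess (slide 16)
     dxmin = 1.0d-3*dx
     x(1)  = xmin
     xl    = xmin
     do n = 1, nord
        yl(n)  = y0(n)                      ! left-hand values for first interval
        y(1,n) = y0(n)
     end do

     i = 1
     do while (xl < xmax .and. i < imax)
        dx  = min(dx, xmax - xl)            ! never step past xmax
        dxh = 0.5d0*dx                      ! half width of the zone
        xr  = xl + dx                       ! right-hand boundary
        xm  = xl + dxh                      ! mid point of the zone

        call rk(xl, yl, ytilde, nord, dx)   ! one full step
        call rk(xl, yl, ym,     nord, dxh)  ! two half steps
        call rk(xm, ym, ystar,  nord, dxh)

        err = 0.0d0
        do n = 1, nord
           err   = max(err, abs(ystar(n) - ytilde(n))/(abs(ystar(n)) + small))
           yr(n) = ystar(n) + (ystar(n) - ytilde(n))/15.0d0   ! assumed R.E. form
        end do

        if (err > errmax .and. dxh > dxmin) then
           dx = max(dxh, dxmin)             ! too big: shrink dx and redo the zone
           cycle
        end if
        if (err < 0.1d0*errmax) dx = 1.5d0*dx   ! very accurate: grow dx

        i    = i + 1                        ! accept the zone and store results
        x(i) = xr
        xl   = xr
        do n = 1, nord
           y(i,n) = yr(n)
           yl(n)  = yr(n)
        end do
     end do
     npts = i
  end subroutine solve_nth_order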

20
Runge-Kutta routine
  • subroutine rk(xl, yl, yr, nord, dx)
  • yl and yr will be arrays of size nord
  • Set useful variables
  • dxh = dx/2.
  • xr = xl + dx
  • xm = xl + dxh
  • Recall, given y' = f(x,y), (xl, yl) and dx, the
    standard R-K formulae then give yr

21
Steps in algorithm
  • Complications: we now have a vector of functions
    and a vector of yl values
  • call derivs(xl, yl, f0, nord) (sets all the f0
    function values)
  • do n = 1, nord (form the first set of
    intermediate y values)
  • call derivs(xm, ..., nord) (sets function values)
  • do n = 1, nord (form the next set of intermediate
    y values)
  • call derivs(xm, ..., nord) (sets function values)
  • do n = 1, nord (form the final set of
    intermediate y values)
  • call derivs(xr, ..., nord) (sets function values)
    (a sketch of the full routine follows below)
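
A sketch of the full routine, assuming the classical 4th-order formula (the
"standard Runge-Kutta" referred to earlier); the elided derivs arguments are
taken to be the intermediate y estimates, which is an assumption since the
slide's own equations were not reproduced in the transcript:

  subroutine rk(xl, yl, yr, nord, dx)
     implicit none
     integer, parameter :: nmax = 9
     integer,          intent(in)  :: nord
     double precision, intent(in)  :: xl, yl(nmax), dx
     double precision, intent(out) :: yr(nmax)
     double precision :: dxh, xm, xr
     double precision :: f0(nmax), f1(nmax), f2(nmax), f3(nmax), ytmp(nmax)
     integer :: n
     external derivs

     dxh = 0.5d0*dx
     xr  = xl + dx
     xm  = xl + dxh

     call derivs(xl, yl, f0, nord)          ! derivatives at the left edge
     do n = 1, nord
        ytmp(n) = yl(n) + dxh*f0(n)
     end do
     call derivs(xm, ytmp, f1, nord)        ! first mid-point estimate
     do n = 1, nord
        ytmp(n) = yl(n) + dxh*f1(n)
     end do
     call derivs(xm, ytmp, f2, nord)        ! second mid-point estimate
     do n = 1, nord
        ytmp(n) = yl(n) + dx*f2(n)
     end do
     call derivs(xr, ytmp, f3, nord)        ! derivatives at the right edge
     do n = 1, nord                         ! R-K combination for each component
        yr(n) = yl(n) + dx*(f0(n) + 2.0d0*f1(n) + 2.0d0*f2(n) + f3(n))/6.0d0
     end do
  end subroutine rk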

22
Calculate the R-K formula; the derivs subroutine
  • do i = 1, nord (apply the R-K combination to each
    component to obtain yr(i))
  • That's the end of the R-K routine
  • subroutine derivs(x, y, yp, nord)
  • do n = 1, nord-1 : yp(n) = y(n+1)
  • Lastly, set yp(nord) = g(x, y, y', ..., y^(nord-1))
    (an example derivs routine follows below)
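
An example derivs routine for a (hypothetical, purely illustrative) 3rd-order
ODE y''' = g(x, y, y', y'') = x - y, i.e. nord = 3:

  subroutine derivs(x, y, yp, nord)
     implicit none
     integer, parameter :: nmax = 9
     integer,          intent(in)  :: nord
     double precision, intent(in)  :: x, y(nmax)
     double precision, intent(out) :: yp(nmax)
     integer :: n

     do n = 1, nord - 1
        yp(n) = y(n+1)            ! the derivative of each component is the next one
     end do
     yp(nord) = x - y(1)          ! the user-supplied g(x, y, y', ...)
  end subroutine derivs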

23
Summary
  • There are a few issues with the adaptive step
    size algorithm you need to be concerned about
    (avoiding divide by zero etc.)
  • Second order systems can be turned into coupled
    first order systems
  • Nth order systems can be turned into n coupled
    first order systems
  • The adaptive R-K algorithm for vectors is in
    principle similar to that for a single function
  • However, must loop over vectors

24
Implicit Methods Backward Euler method
NON-EXAMINABLE BUT USEFUL TO KNOW
  • Recall that in the Forward Euler method
    y_n+1 = y_n + h f(t_n, y_n)
  • Rather than predicting forward, we can predict
    backward using the value of f(t_n+1, y_n+1):
    y_n = y_n+1 - h f(t_n+1, y_n+1)
  • Rewriting in terms of n+1 and n gives
    y_n+1 = y_n + h f(t_n+1, y_n+1)
  • Replacing y_n+1 by r, we need to use a
    root-finding method to solve
    r - y_n - h f(t_n+1, r) = 0
    (a minimal sketch follows below)
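
A minimal backward Euler sketch: each step solves r - y_n - h f(t_n+1, r) = 0
with a few Newton iterations. The test problem f(t,y) = -15y, the step size
and the iteration limits are illustrative choices, not from the slide:

  program backward_euler_demo
     implicit none
     double precision :: t, h, y, r, resid
     integer :: n, k

     t = 0.0d0
     h = 0.25d0
     y = 1.0d0                              ! y(0) = 1
     do n = 1, 20
        r = y                               ! initial guess: the old value
        do k = 1, 20                        ! Newton iteration for the root
           resid = r - y - h*f(t + h, r)
           r     = r - resid/(1.0d0 - h*dfdy(t + h, r))
           if (abs(resid) < 1.0d-12) exit
        end do
        y = r                               ! accept the implicit solution
        t = t + h
     end do
     print *, 't =', t, ' y =', y, ' exact =', exp(-15.0d0*t)

  contains
     double precision function f(t, y)      ! right-hand side of y' = f(t,y)
        double precision, intent(in) :: t, y
        f = -15.0d0*y
     end function f
     double precision function dfdy(t, y)   ! df/dy, needed by Newton's method
        double precision, intent(in) :: t, y
        dfdy = -15.0d0
     end function dfdy
  end program backward_euler_demo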

25
Notes on implicit Methods
  • Implicit methods tend to be more stable, and for
    the same step size more accurate
  • The trade-off is the increased expense of using
    an iterative procedure to find the roots
  • This can become very expensive in coupled systems
  • Richardson Extrapolation can also be applied to
    implicit ODE solvers, which results in higher
    order schemes with good convergence properties
  • Use exactly the same procedure as outlined in the
    previous lecture: compare expansions at h and h/2
  • Crank-Nicolson is another popular implicit method
  • It relies upon the derivative at both the start
    and end points of the interval:
    y_n+1 = y_n + (h/2)[f(t_n, y_n) + f(t_n+1, y_n+1)]
  • It gives a second order accurate solution

26
Multistep methods
  • Thus far we've considered self-starting methods
    that use only values from x_n and x_n+1
  • Alternatively, accuracy can be improved by using
    a linear combination of additional points
  • Utilize y_n-s+1, y_n-s+2, ..., y_n to construct
    approximations to derivatives of order up to s
    at t_n
  • Example for s = 2

27
Comparison of single and multistep methods
28
Second order method
  • We can utilize this relationship to describe a
    multistep second order method
  • Generalized to higher orders, these methods are
    known as Adams-Bashforth (predictor) methods
    (a sketch of the two-step variant follows below)
  • s = 1 recovers the Euler method
  • Implicit methodologies are possible as well
  • Adams-Moulton (predictor-corrector) methods
  • Since these methods rely on multiple previous
    steps, the first few values of y must be
    calculated by another method, e.g. R-K
  • The starting method needs to be as accurate as
    the multistep method
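
A sketch of the two-step Adams-Bashforth method,
y_n+1 = y_n + h (3 f_n - f_n-1)/2, on the illustrative problem
y' = -y, y(0) = 1; the first step is started with a second-order
Runge-Kutta (Heun) step, since the formula needs two previous values:

  program ab2_demo
     implicit none
     double precision :: h, t, yold, ycur, ynew, fold, fcur, k1, k2
     integer :: n

     h    = 0.1d0
     yold = 1.0d0                        ! y_0
     k1   = f(0.0d0, yold)
     k2   = f(h, yold + h*k1)
     ycur = yold + 0.5d0*h*(k1 + k2)     ! y_1 from one Heun (RK2) step
     t    = h
     do n = 1, 49                        ! advance from y_1 up to t = 5
        fold = f(t - h, yold)
        fcur = f(t,     ycur)
        ynew = ycur + h*(3.0d0*fcur - fold)/2.0d0   ! Adams-Bashforth update
        yold = ycur
        ycur = ynew
        t    = t + h
     end do
     print *, 't =', t, ' y =', ycur, ' exact =', exp(-t)

  contains
     double precision function f(t, y)   ! right-hand side of y' = f(t,y)
        double precision, intent(in) :: t, y
        f = -y
     end function f
  end program ab2_demo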

29
Stiff problems
  • Definition of stiffness
  • Loosely speaking, the initial value problem is
    referred to as being stiff if the absolute
    stability requirement dictates a much smaller
    time step than is needed to satisfy approximation
    requirements alone.
  • Formally: an IVP is stiff in some interval [0, b]
    if the step size needed to maintain stability of
    the forward Euler method is much smaller than the
    step size required to represent the solution
    accurately.
  • Stability requirements override the accuracy
    requirements
  • Why does this happen?
  • Trying to integrate smooth solutions that are
    surrounded by strongly divergent or oscillatory
    solutions
  • Small deviations away from the true solution lead
    to forward terms being very inaccurate

30
Example
  • Consider y'(t) = -15 y(t), t ≥ 0, y(0) = 1
  • Exact solution y(t) = e^(-15t), so y(t) → 0 as
    t → ∞
  • If we examine the forward Euler method, strong
    oscillatory behaviour forces us to take very
    small steps even though the function looks quite
    smooth (see the sketch below)
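
A short forward Euler sketch for this problem: each step multiplies y by
(1 - 15h), so the numerical solution decays only if |1 - 15h| < 1, i.e.
h < 2/15 ~ 0.133 (the two step sizes below are illustrative choices):

  program stiff_euler_demo
     implicit none
     double precision :: h(2), y, t
     integer :: i, n

     h = (/ 0.05d0, 0.25d0 /)           ! one stable and one unstable step size
     do i = 1, 2
        y = 1.0d0                       ! y(0) = 1
        t = 0.0d0
        do n = 1, nint(2.0d0/h(i))      ! integrate to t = 2
           y = y + h(i)*(-15.0d0*y)     ! forward Euler update
           t = t + h(i)
        end do
        print *, 'h =', h(i), ' y(2) =', y, ' exact =', exp(-30.0d0)
     end do
  end program stiff_euler_demo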

31
Implicit methods in stiff problems
  • Because implicit methods can use longer
    timesteps, they are strongly favoured in
    integrations of stiff systems
  • Consider a two-stage Adams-Moulton integrator
    (a sketch follows below)
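
Interpreting this as the two-step Adams-Moulton formula
y_n+1 = y_n + h (5 f_n+1 + 8 f_n - f_n-1)/12 (an assumption: the slide's own
formula was not reproduced), and using the fact that f = -15y is linear, so
the implicit equation can be solved for y_n+1 in closed form (a nonlinear f
would need the root-finding iteration discussed earlier):

  program am2_stiff_demo
     implicit none
     double precision, parameter :: lam = -15.0d0, h = 0.125d0
     double precision :: yold, ycur, ynew, t
     integer :: n

     yold = 1.0d0                       ! y_0 = y(0)
     ycur = yold/(1.0d0 - h*lam)        ! y_1 from one backward Euler step
     t    = h
     do n = 1, 15                       ! advance to t = 2
        ! y_n+1 (1 - 5 h lam/12) = y_n (1 + 8 h lam/12) - y_n-1 (h lam/12)
        ynew = (ycur*(1.0d0 + 8.0d0*h*lam/12.0d0) - yold*(h*lam/12.0d0)) &
               / (1.0d0 - 5.0d0*h*lam/12.0d0)
        yold = ycur
        ycur = ynew
        t    = t + h
     end do
     print *, 't =', t, ' y =', ycur, ' exact =', exp(lam*t)
  end program am2_stiff_demo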

32
Adams-Moulton solution for h = 0.125
Much better behaviour and convergence
33
Choosing stiff solvers
  • This isn't as easy as you might think
  • Performance of different algorithms can be quite
    dependent upon the specific problem
  • Researchers often write papers comparing the
    performance of different solvers on a given
    problem and then advise on which one to use
  • This is a sensible way to do things
  • I recommend you do the same if you have a stiff
    problem
  • Try solvers from library packages like ODEPACK or
    the Numerical Recipes routines

34
Stability of the Forward Euler method
  • Stability is more important than the truncation
    error
  • Consider y' = λy for some complex λ
  • Provided Re λ < 0 the solution is bounded
  • Substitute into the Euler method:
    y_n+1 = (1 + λΔt) y_n
  • For y_n to remain bounded we must have
    |1 + λΔt| ≤ 1
  • Thus a poorly chosen Δt that breaks the above
    inequality will lead to y_n increasing without
    limit

35
Behaviour of small errors
  • We considered the previous y' = λy equation
    because it describes the behaviour of small
    changes
  • Suppose we have a solution y = ys + ε, where ε is
    a small error.
  • Substitute into y' = f(t,y) and use a Taylor
    expansion:
    ys' + ε' = f(t, ys) + (∂f/∂y) ε + ...

To leading order in ε this gives ε' ≈ (∂f/∂y) ε
36
Next lecture
  • Monte Carlo methods