Introduction: Systems of Linear Equations - PowerPoint PPT Presentation

Provided by: paulhe1
Learn more at: http://www.cs.cmu.edu

Transcript and Presenter's Notes

1
Introduction: Systems of Linear Equations
  • Paul Heckbert
  • Computer Science Department
  • Carnegie Mellon University

2
State of the Art in Scientific Computing
  • aerospace: simulate subsonic and supersonic air
    flow around a full aircraft, no wind tunnel
  • divide space into millions of tetrahedra or
    parallelepipeds, solve sparse linear or nonlinear
    PDEs
  • nuclear: simulate a nuclear explosion!
  • especially important because of nuclear test bans
  • weather prediction: entire planet, 2 weeks into
    the future
  • astrophysics: galactic collisions
  • automotive: simulate car crashes
  • biology: simulate protein folding for drug design

3
Strategies for Simplifying Problems
  • replace infinite process with finite process
  • e.g. integrals to sums
  • replace general matrices with simple matrices
  • e.g. diagonal
  • replace complex functions with simple ones
  • e.g. polynomials
  • replace nonlinear problems with linear problems
  • replace differential equations with algebraic
    equations
  • e.g. linear systems
  • replace high-order systems with low-order systems
  • replace infinite-dimensional spaces with
    finite-dim. ones
  • e.g. all real functions on [0,1] with samples on
    an n-element grid

4
Sources of Error
  • error type / example from a car crash simulation:
  • modeling: approximate car geometry
  • empirical measurements: incorrect tire friction
    coeff.
  • previous computations: error in initial speed of
    car
  • truncation or discretization: numerical solution
    of a diff. eq.
  • rounding: used floats, not doubles
  • Each step introduces some error, but magnitudes
    may differ greatly.
  • Look for the largest source of error, the weak
    link in the chain.

5
Quantifying Error
  • (absolute error) = (approximate value) - (true
    value)
  • Fundamental difficulty with measuring error:
  • For many problems we cannot compute the exact
    answer; we can only approximate it!
  • Often, the best we can do is estimate the error!

6
Significant Digits
    main() {
        float f = 1./3.;
        printf("%.20f\n", f);   /* print to 20 digits */
    }

    0.33333334326744080000
  • we get 7 significant digits; the rest is junk!
  • When reporting results, only show the significant
    digits!

7
IEEE Floating Point Format
  • very widely used standard for floating point
  • C float is 4 bytes: 24-bit mantissa, 8-bit
    exponent
  • about 7 significant digits
  • smallest pos. no. 1.2e-38, largest 3.4e38
  • C double is 8 bytes: 53-bit mantissa, 11-bit
    exponent
  • about 16 significant digits
  • smallest pos. no. 2.2e-308, largest 1.7e308
  • special values:
  • Inf - infinity (e.g. 1/0)
  • NaN - not a number, undefined (e.g. 0/0)

8
C program to test floating point
    #include <math.h>
    main() {
        int i; float f; double d;
        for (i = 0; i < 55; i++) {
            f = 1. + pow(.5, i);
            d = 1. + pow(.5, i);
            printf("%2d %.9f %.18f\n", i, f, d);
        }
    }

9
Output precision of float double
     i  float        double               (value = 1 + 2^(-i))
     0  2.000000000  2.000000000000000000
     1  1.500000000  1.500000000000000000
     2  1.250000000  1.250000000000000000
     3  1.125000000  1.125000000000000000
     4  1.062500000  1.062500000000000000
    ...
    21  1.000000477  1.000000476837158200
    22  1.000000238  1.000000238418579100
    23  1.000000119  1.000000119209289600
    24  1.000000000  1.000000059604644800
    ...
    50  1.000000000  1.000000000000000900
    51  1.000000000  1.000000000000000400
    52  1.000000000  1.000000000000000200
    53  1.000000000  1.000000000000000000

10
Condition Number of a Problem
  • some problems are harder to solve accurately than
    others
  • The condition number is a measure of how
    sensitive a problem is to changes in its input
  • cond < 1 or so: problem is well-conditioned
  • cond >> 1: problem is ill-conditioned

11
Condition Number -- Examples
  • well-conditioned vs. ill-conditioned:
  • taking a step on level ground vs. a step near a cliff
  • tan(x) near x = 45°, say, vs. tan(x) near x = 90°
    (because f' is infinite there)
  • cos(x) not near x = 90° vs. cos(x) near x = 90°
    (because f is zero there)

12
Systems of Linear Equations
  • Solve Ax = b for x
  • A is an n×n matrix
  • x and b are n-vectors (column matrices)
  • Later we'll look at overdetermined and
    underdetermined systems, where the matrix is not
    square (number of equations ≠ number of unknowns)

13
Matrix Properties
  • For a square, n×n matrix:
  • rank is the max. number of linearly independent rows
    or columns
  • full rank: rank is n
  • rank-deficient: rank is less than n
  • singular matrix: determinant zero, no inverse,
    rows/columns linearly dependent, rank-deficient
    (Ax = 0 for some nonzero x)

14
Matrix Rank Examples
  • [example matrices of rank 2, rank 1, and rank 0 shown on slide]

15
Geometric Interpretation - 2×2 System
  • intersection of 2 lines

[figure with a parameter a in the matrix, not included in transcript]
A rank-1 matrix means the lines are parallel. For most
a, the lines are non-coincident, so there is no solution. For
a = -9, the lines are coincident, giving a one-dimensional subspace
of solutions.
16
Gaussian Elimination and LU Decomposition
  • Gaussian Elimination on a square matrix A
  • computes an LU decomposition, A = LU
  • L is unit lower triangular (1s on the diagonal)
  • U is upper triangular

17
Gaussian Elimination - comments
  • G.E. can be done on any square matrix
  • if A is singular then the diagonal of U will contain
    zero(s)
  • usually partial pivoting is used (swapping rows
    during elimination) to reduce errors
  • G.E. is an example of an explicit method for
    solving linear systems: solve for the solution in
    one sweep
  • Other, more efficient algorithms can be used for
    specialized matrix types, as we'll see later

18
Solving Systems with LU Decomposition
  • to solve Ax = b:
  • decompose A into LU -- cost 2n³/3 flops
  • solve Ly = b for y by forward substitution -- cost n²
    flops
  • solve Ux = y for x by back substitution -- cost n²
    flops
  • slower alternative:
  • compute A⁻¹ -- cost 2n³ flops
  • multiply x = A⁻¹b -- cost 2n² flops
  • this costs about 3 times as much as LU
  • lesson:
  • if you see A⁻¹ in a formula, read it as "solve a
    system", not "invert a matrix"

19
Symmetric Positive Definite
  • Symmetric Positive Definite: an important matrix
    class
  • symmetric: A = Aᵀ
  • positive definite: xᵀAx > 0 for x ≠ 0 ⟺ all
    λᵢ > 0
  • if A is s.p.d.,
  • the LU decomposition can be written A = LLᵀ,
  • where L is lower triangular (not unit)
  • this is the Cholesky factorization -- cost n³/3
    flops
  • no pivoting required

20
Cramer's Rule
  • A method for solving n×n linear systems
  • What is its cost?

21
Vector Norms
  • norms differ by at most a constant factor, for
    fixed n

22
Matrix Norm
  • the matrix norm is defined in terms of a vector norm:
    ||A|| = max over x ≠ 0 of ||Ax|| / ||x||
  • geometric meaning: the maximum stretch resulting
    from application of this transformation
  • the exact result depends on whether the 1-, 2-, or ∞-norm
    is used

23
Condition Number of a Matrix
  • A measure of how close a matrix is to singular:
    cond(A) = ||A|| · ||A⁻¹||
  • cond(I) = 1
  • cond(singular matrix) = ∞