1
Vector Spaces and Eigenvalues
  • Lecture XXVII

2
Orthonormal Bases and Projections
  • Suppose that a set of vectors x1, …, xr forms a
    basis for some space S in Rm such that r ≤ m. For
    mathematical simplicity, we may want to form an
    orthogonal basis for this space. One way to form
    such a basis is Gram-Schmidt orthonormalization.
    In this procedure, we want to generate a new set
    of vectors y1, …, yr that are orthonormal.

3
The Gram-Schmidt process is
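  • The recursion itself is not reproduced in the
    transcript; in its standard form (a reconstruction,
    written in LaTeX) it is
      y_1 = x_1, \qquad
      y_k = x_k - \sum_{j=1}^{k-1} \frac{x_k' y_j}{y_j' y_j}\, y_j,
      \quad k = 2, \dots, r,
    after which each y_k is normalized, z_k = y_k / \lVert y_k \rVert.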
4
Example
5
  • The vectors can then be normalized to length one.
    To test for orthogonality, we check that the inner
    product of each pair of distinct vectors is zero.
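  • The worked example on slides 4-5 is not in the
    transcript; the following NumPy sketch (with
    illustrative vectors, not the lecture's) carries out
    the orthonormalization and the orthogonality test:

    import numpy as np

    # Illustrative basis vectors for a 2-dimensional subspace of R^3
    x = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]

    # Gram-Schmidt: subtract from each x_k its projection onto the earlier y_j
    y = []
    for xk in x:
        yk = xk - sum((xk @ yj) / (yj @ yj) * yj for yj in y)
        y.append(yk)

    # Normalize each vector to length one
    z = [yk / np.linalg.norm(yk) for yk in y]

    # Orthogonality test: Z'Z should be the identity matrix
    Z = np.column_stack(z)
    print(np.round(Z.T @ Z, 10))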

6
  • Theorem 2.13 Every r-dimensional vector space,
    except the zero-dimensional space {0}, has an
    orthonormal basis.

7
  • Theorem 2.14 Let z1, …, zr be an orthonormal basis
    for some vector space S of Rm. Then each x ∈ Rm
    can be expressed uniquely as x = u + v,
  • where u ∈ S and v is a vector that is orthogonal
    to every vector in S.

8
  • Definition 2.10 Let S be a vector subspace of Rm.
    The orthogonal complement of S, denoted S⊥, is
    the collection of all vectors in Rm that are
    orthogonal to every vector in S. That is,
    S⊥ = {x : x ∈ Rm and x'y = 0 for all y ∈ S}.
  • Theorem 2.15 If S is a vector subspace of Rm,
    then its orthogonal complement S⊥ is also a
    vector subspace of Rm.

9
Projection Matrices
  • The orthogonal projection of an m x 1 vector x
    onto a vector space S can be expressed in matrix
    form.
  • Let z1, …, zr be any orthonormal basis for S while
    z1, …, zm is an orthonormal basis for Rm. Any
    vector x can be written as x = a1z1 + … + amzm = Za.

10
  • Aggregating a' = (a1', a2'), where a1 = (a1, …, ar)'
    and a2 = (ar+1, …, am)', and assuming a similar
    decomposition Z = [Z1 Z2], the vector x can be
    written as x = Z1a1 + Z2a2.
  • Given orthogonality, we know that Z1'Z1 = Ir and
    Z1'Z2 = 0, and so a1 = Z1'x, which puts the
    component of x lying in S at Z1a1 = Z1Z1'x.

11
  • Theorem 2.17 Suppose the columns of the m x r
    matrix Z1 form an orthonormal basis for the
    vector space S, which is a subspace of Rm. If x ∈
    Rm, the orthogonal projection of x onto S is
    given by Z1Z1'x.
  • Projection matrices allow the division of the
    space into a spanned space and a set of
    orthogonal deviations from the spanning set. One
    such separation involves the Gram-Schmidt system.
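  • A minimal NumPy sketch of Theorem 2.17, using an
    arbitrary illustrative subspace (here the orthonormal
    basis Z1 comes from a QR factorization rather than
    Gram-Schmidt):

    import numpy as np

    rng = np.random.default_rng(0)
    X1 = rng.normal(size=(5, 2))      # columns span an illustrative subspace S of R^5
    Z1, _ = np.linalg.qr(X1)          # m x r matrix whose columns form an orthonormal basis for S

    x = rng.normal(size=5)
    u = Z1 @ Z1.T @ x                 # orthogonal projection of x onto S (Z1 Z1' x)
    v = x - u                         # deviation orthogonal to S

    print(np.round(Z1.T @ v, 10))     # zero vector: v is orthogonal to every vector in S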

12
  • In general, if we define the m x r matrix
    X1 = (x1, …, xr) and define the linear
    transformation of this matrix that produces an
    orthonormal basis as A, so that Z1 = X1A,
  • we are left with the result that Z1Z1' = X1AA'X1'.

13
  • Given that the matrix A is nonsingular, the
    projection matrix that maps any vector x onto the
    spanning set then becomes X1(X1'X1)-1X1'.
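  • The omitted algebra can be reconstructed as follows:
    since Z1 = X1A with A nonsingular, orthonormality gives
      Z_1'Z_1 = A'X_1'X_1A = I_r \;\Rightarrow\; AA' = (X_1'X_1)^{-1},
    so the projection matrix is
      P = Z_1Z_1' = X_1AA'X_1' = X_1(X_1'X_1)^{-1}X_1'.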

14
  • Ordinary least squares is also a spanning
    decomposition. In the traditional linear model
    y = Xb + e,
  • b is chosen to minimize the error between y and
    the estimated y.

15
  • This problem implies minimizing the distance
    between the observed y and the predicted plane
    Xb, which implies orthogonality of the residual to
    the columns of X. If X has full column rank, the
    projection matrix becomes X(X'X)-1X' and the
    projection of y then becomes X(X'X)-1X'y.

16
  • Premultiplying each side by X' yields
    X'y = X'Xb, and hence b = (X'X)-1X'y.
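  • The regression equations on slides 14-16 are not
    reproduced in the transcript; a NumPy sketch with
    simulated data (assumed here purely for illustration)
    shows that solving X'y = X'Xb gives b = (X'X)-1X'y and
    that the fitted values are the projection X(X'X)-1X'y:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # design matrix with intercept
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=n)

    b = np.linalg.solve(X.T @ X, X.T @ y)     # b = (X'X)^{-1} X'y
    P = X @ np.linalg.inv(X.T @ X) @ X.T      # projection matrix X(X'X)^{-1}X'

    print(np.allclose(X @ b, P @ y))          # True: Xb is the projection of y onto the span of X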

17
  • Idempotent matrices can be defined as any matrix
    A such that AA = A.
  • Note that the sum of squared errors under the
    projection can be expressed as
    SSE = (y - Xb)'(y - Xb) = (y - X(X'X)-1X'y)'(y - X(X'X)-1X'y).

18
  • In general, the matrix In - X(X'X)-1X' is referred
    to as an idempotent matrix. An idempotent matrix
    is one for which AA = A.
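  • The verification is omitted from the transcript;
    writing M = In - X(X'X)-1X', direct multiplication gives
      MM = I_n - 2X(X'X)^{-1}X' + X(X'X)^{-1}X'X(X'X)^{-1}X'
         = I_n - X(X'X)^{-1}X' = M.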

19
  • Thus, the SSE can be expressed as
    SSE = y'(In - X(X'X)-1X')y,
  • which is the sum of the squared orthogonal errors
    from the regression.

20
Eigenvalues and Eigenvectors
  • Eigenvalues and eigenvectors (or, more
    appropriately, latent roots and characteristic
    vectors) are defined by the solution of Ax = λx
    for a nonzero x. Mathematically, we can solve
    for the eigenvalue by rearranging the terms to
    obtain (A - λI)x = 0.

21
  • Solving for λ then involves solving the
    characteristic equation implied by det(A - λI) = 0.
  • Again using the matrix in the previous example

22
  • In general, there are m roots to the
    characteristic equation. Some of these roots may
    be the same. In the above case, the roots are
    complex. Turning to another example

23
  • The eigenvectors are then determined by the
    linear dependence in the A - λI matrix. Taking the
    last example,
  • the first and second rows are linearly dependent.
    The reduced system then implies that as long as
    x1 = x2 and x3 = 0, the resulting product
    (A - λI)x is zero.
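  • The example matrices on slides 21-23 are not in the
    transcript; a NumPy sketch with an arbitrary symmetric
    matrix (chosen only for illustration) computes the roots
    of the characteristic equation and confirms that each
    eigenvector lies in the null space of A - λI:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])   # illustrative symmetric matrix, not the lecture's example

    lam, V = np.linalg.eig(A)         # roots of det(A - lambda*I) = 0 and the eigenvectors

    print(np.round(lam, 6))
    for l, v in zip(lam, V.T):
        print(np.round((A - l * np.eye(3)) @ v, 10))   # zero vector for every (lambda, v) pair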

24
  • Theorem 11.5.1 For any symmetric matrix A,
    there exists an orthogonal matrix H (that is, a
    square matrix satisfying H'H = I) such that
    H'AH = Λ,
  • where Λ is a diagonal matrix. The diagonal
    elements of Λ are called the characteristic roots
    (or eigenvalues) of A. The ith column of H is
    called the characteristic vector (or eigenvector)
    of A corresponding to the ith characteristic root
    of A.

25
  • This proof follows directly from the definition
    of eigenvalues. Letting H be a matrix with the
    eigenvectors in its columns, it is obvious that
    AH = HΛ, and premultiplying by H' gives H'AH = Λ.
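  • A NumPy sketch of Theorem 11.5.1 with an arbitrary
    symmetric matrix; np.linalg.eigh returns the eigenvalues
    and an orthogonal H whose columns are the eigenvectors:

    import numpy as np

    rng = np.random.default_rng(2)
    B = rng.normal(size=(4, 4))
    A = B + B.T                               # arbitrary symmetric matrix

    lam, H = np.linalg.eigh(A)                # eigenvalues and orthogonal eigenvector matrix

    print(np.allclose(H.T @ H, np.eye(4)))            # H'H = I
    print(np.allclose(H.T @ A @ H, np.diag(lam)))     # H'AH is the diagonal matrix Lambda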

26
Kronecker Products
  • Two special matrix operations that you will
    encounter are the Kronecker product and the vec()
    operator.
  • The Kronecker product of two matrices is an
    element-by-element multiplication of the elements
    of the first matrix by the entire second matrix.
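  • A short NumPy illustration of this element-by-block
    structure (matrices chosen arbitrarily):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[0, 5],
                  [6, 7]])

    K = np.kron(A, B)     # each element a_ij of A multiplies the entire matrix B
    print(K)
    # [[ 0  5  0 10]
    #  [ 6  7 12 14]
    #  [ 0 15  0 20]
    #  [18 21 24 28]]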

27
(No Transcript)
28
  • The vec(.) operator then involves stacking the
    columns of a matrix on top of one another.
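  • A NumPy sketch of the vec(.) operator, together with
    the standard identity vec(ABC) = (C' ⊗ A) vec(B) that
    links it to the Kronecker product (the identity is not
    stated on the slide):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    C = np.array([[1, 0], [2, 1]])

    vec = lambda M: M.reshape(-1, order='F')   # stack the columns on top of one another
    print(vec(A))                              # [1 3 2 4]

    lhs = vec(A @ B @ C)
    rhs = np.kron(C.T, A) @ vec(B)
    print(np.allclose(lhs, rhs))               # True: vec(ABC) = (C' kron A) vec(B)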