Matrix Computations

Complex numbers

z̄ = x − yi, | z | = √(x^2 + y^2) = √(z z̄), sign(z) = z / | z |, for z = x + yi

Vectors

z* = ( z̄1, …, z̄n ), w* = ( w̄1, …, w̄n ) for z = (z1, …, zn) and w = (w1, …, wn)

z·w = inner product of z and w for z = (z1, …, zn) and w = (w1, …, wn)

= z*w

= z̄1w1 + … + z̄nwn

| z | = √(z·z) = √( | z1 |^2 + … + | zn |^2 ) for z = (z1, …, zn)

z and w are orthogonal (perpendicular) if z·w = 0.

u1, …, un are orthogonal if they are mutually orthogonal, i.e. uj·uk = 0 for j ≠ k.

u1, …, un are orthonormal if they are orthogonal and each uj has length one, i.e. | uj | = 1.
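These definitions can be checked numerically; a sketch in NumPy (the particular vectors are arbitrary examples, not from these notes):

```python
import numpy as np

# z.w = z*w = conj(z1) w1 + ... + conj(zn) wn  (arbitrary example vectors)
z = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])

ip = np.vdot(z, w)                      # np.vdot conjugates its first argument
assert np.isclose(ip, np.conj(z) @ w)

# | z | = sqrt(z.z)
assert np.isclose(np.sqrt(np.vdot(z, z).real), np.linalg.norm(z))

# orthonormal pair: u1.u1 = 1, u1.u2 = 0
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0]) / np.sqrt(2)
assert np.isclose(u1 @ u1, 1.0) and np.isclose(u1 @ u2, 0.0)
```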

Matrices

A is upper triangular if all entries below the main diagonal are zero.

A is lower triangular if all entries above the main diagonal are zero.

A is upper Hessenberg if all entries below the first sub-diagonal are zero.

first subdiagonal = diagonal just below the main diagonal = diagonal of entries of the form ai,i-1

A* = nm matrix whose ijth element is if A = mn matrix whose ijth element is aij

(AB)* = B*A*

jth row of AB = the linear combination of the rows of B with the jth row of A as the coefficients.

jth column of AB = the linear combination of the columns of A with the jth column of B as the coefficients.

For a 2×2 matrix: [ a  b ; c  d ]^{-1} = ( 1 / (ad − bc) ) [ d  −b ; −c  a ], provided ad − bc ≠ 0

A = ( a1 | a2 | … | an ) means the columns of A are a1, a2, …, an.

B( a1 | a2 | … | an ) = ( Ba1 | Ba2 | … | Ban )

Q is orthogonal ⇔ the columns of Q are an orthonormal set ⇔ Q^{-1} = Q*.

Px = orthogonal projection of x on line through v

= ( (v·x) / (v·v) ) v

P = vv* / (v*v)

Qx = orthogonal projection of x on hyperplane perpendicular to v

= x − ( (v·x) / (v·v) ) v

Q = I − vv* / (v*v)

Fx = orthogonal reflection of x across hyperplane perpendicular to v

= x − 2 ( (v·x) / (v·v) ) v

F = I − 2 vv* / (v*v)
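The projection and reflection matrices P = vv*/(v*v), Q = I − vv*/(v*v), and F = I − 2vv*/(v*v) can be verified numerically; a NumPy sketch with an arbitrary example vector:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])                    # arbitrary example vector
x = np.array([3.0, 0.0, 1.0])

P = np.outer(v, v) / (v @ v)                     # P = vv*/(v*v)
Q = np.eye(3) - P                                # Q = I - vv*/(v*v)
F = np.eye(3) - 2 * np.outer(v, v) / (v @ v)     # F = I - 2vv*/(v*v)

assert np.allclose(P @ P, P)                     # projections are idempotent
assert np.isclose(v @ (Q @ x), 0.0)              # Qx is perpendicular to v
assert np.allclose(F @ F, np.eye(3))             # reflecting twice gives the identity
```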

LU decomposition

A  L, U, p where L is lower triangular, U is upper triangular, p is a vector containing a permutation of the integers 1, 2, …, n, and LU = C where the ith row of C is the pith row of A.

Ax = b  LUx = g where the ith component of g is the pith of b.  Using L do the same operations on g as you did on A.  Ux = h.  Solve using back substitution.

QR decomposition

A = QR where Q is orthogonal and R is upper triangular. To obtain Q and R, suppose A = ( a1 | a2 | … | an ) and Q = ( q1 | q2 | … | qn ). Then for j = 1, …, n:
rij = qi·aj for i < j
v = aj − r1jq1 − … − rj−1,j qj−1
rjj = | v |
qj = v / rjj

Ax = b  Rx = Q*b.  Solve using back substitution.

Householder reflection

Let a = (a1, …, an) and 1 ≤ j ≤ n. Let x = (0, …, 0, aj, aj+1, …, an) (a with its first j−1 components replaced by zeros), let v = x + sign(aj) | x | ej, and let F be the reflection across the hyperplane perpendicular to v. Then Fy has the same first j−1 components as y for any vector y, and Fa = (a1, …, aj−1, −sign(aj) | x |, 0, …, 0).
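A sketch in NumPy, assuming aj ≠ 0 (the function name and the example vector are ad hoc):

```python
import numpy as np

def householder_step(a, j):
    """Reflection F that zeroes components j+1, ..., n of a while leaving the
    first j-1 components of any vector unchanged (j is 1-based, as above).
    Sketch: assumes a[j-1] != 0 so np.sign does not return 0."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    x = np.zeros(n)
    x[j - 1:] = a[j - 1:]                                # zero the first j-1 components
    v = x.copy()
    v[j - 1] += np.sign(a[j - 1]) * np.linalg.norm(x)    # v = x + sign(aj)|x| ej
    return np.eye(n) - 2 * np.outer(v, v) / (v @ v)      # F = I - 2vv*/(v*v)
```

For a = (3, 1, 4, 1) and j = 2, Fa keeps the first component, replaces the second by −|x| = −√18, and zeroes the rest.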

Least Squares

Given: A = ( a1 | a2 | … | an ) and a vector b

x = (x1, …, xn) is such that x1a1 + … + xnan = Ax is closest to b

⇔ b − Ax is orthogonal to Ay for every y

⇔ A*Ax = A*b.

Ax = orthogonal projection of b on the subspace spanned by the aj

= A(A*A)^{-1}A*b

= Pb

P = A(A*A)^{-1}A* = matrix for orthogonal projection on the subspace spanned by the aj
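A numerical check of the normal equations A*Ax = A*b and the projection matrix P = A(A*A)^{-1}A* (arbitrary example data):

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.], [1., 2.]])     # columns a1, a2 (arbitrary example)
b = np.array([1., 0., 2.])

x = np.linalg.solve(A.T @ A, A.T @ b)            # normal equations A*Ax = A*b
P = A @ np.linalg.inv(A.T @ A) @ A.T             # P = A(A*A)^{-1}A*

assert np.allclose(A @ x, P @ b)                 # Ax = Pb
assert np.allclose(A.T @ (b - A @ x), 0.0)       # b - Ax is orthogonal to the columns
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```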

Given: Data points (x1, y1), …, (xm, ym) and functions f1(x), …, fn(x)

c = (c1, …, cn) is such that y = c1f1(x) + … + cnfn(x) is the best least squares fit of the data

⇔ c minimizes S = [ y1 − ( c1f1(x1) + … + cnfn(x1) ) ]^2 + … + [ ym − ( c1f1(xm) + … + cnfn(xm) ) ]^2

⇔ c1a1 + … + cnan is closest to b = (y1, …, ym), where aj = ( fj(x1), …, fj(xm) )

Eigenvalues and Eigenvectors

λ is an eigenvalue of A ⇔ Au = λu for some u ≠ 0 ⇔ det(A − λI) = 0.

u is an eigenvector of A corresponding to the eigenvalue λ ⇔ Au = λu ⇔ (A − λI)u = 0.

A = TDT^{-1} where T = matrix whose columns are the eigenvectors of A,

D = diagonal matrix with the eigenvalues of A on the diagonal.

A^k = TD^kT^{-1} where D^k = diagonal matrix with the kth powers of the eigenvalues on the diagonal.

A^{-k} = TD^{-k}T^{-1} where D^{-k} = diagonal matrix with the −kth powers of the eigenvalues on the diagonal.

(A − λI)^{-k} = T(D − λI)^{-k}T^{-1} where (D − λI)^{-k} = diagonal matrix with the −kth powers of λj − λ on the diagonal.

Simultaneous power iteration: Find the QR decomposition of A^k, i.e. A^k = QkRk. Then form Ak = Qk^{-1}AQk = Qk*AQk. If the eigenvalues of A are strictly decreasing in absolute value, then the diagonal elements of Ak approach the eigenvalues of A in decreasing order of absolute value as k → ∞.

QR algorithm: Let A0 = A. Then for k = 1, 2, … do the following. Pick a shift μk (e.g. the entry in the lower right corner of Ak−1, or the eigenvalue of the 2×2 submatrix of Ak−1 in the lower right corner closest to the entry in the lower right corner of Ak−1). Then find the QR factorization of Ak−1 − μkI, i.e. Ak−1 − μkI = UkSk. Then let Ak = SkUk + μkI. The diagonal elements of Ak approach the eigenvalues of A as k → ∞ if the shift is properly chosen.
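A minimal sketch of the shifted QR algorithm (corner-entry shift, fixed iteration count; a real implementation would deflate and test convergence):

```python
import numpy as np

def qr_algorithm(A, iters=200):
    """Shifted QR iteration with the lower-right corner entry as the shift.
    Returns A_k, whose diagonal approximates the eigenvalues of A."""
    Ak = np.array(A, dtype=float)
    n = Ak.shape[0]
    for _ in range(iters):
        mu = Ak[-1, -1]                              # shift = lower-right corner entry
        U, S = np.linalg.qr(Ak - mu * np.eye(n))     # A_{k-1} - mu I = U S
        Ak = S @ U + mu * np.eye(n)                  # A_k = S U + mu I
    return Ak
```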

Singular value decompositions

A = UV*V = orthogonal matrix with columns the normalized eigenvectors, vj, of A*A,

 = diagonal matrix with square roots, j, of the eigenvalues of A*A on diagonal,

U = orthogonal matrix with columns uj = Avj/j if j 0 or A*vj = 0 otherwise.

Alternatively, the uj are the normalized eigenvectors of AA*, the j are the square roots of the eigenvalues of AA* and vj = A*uj/j if j 0 or satisfying Avj = 0 otherwise.

A maps the set of vectors of length 1 to an ellipse whose axes lie along the vectors uj and have half-length σj.
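A numerical check of the SVD relations (arbitrary example matrix):

```python
import numpy as np

A = np.array([[3., 0.], [4., 5.]])               # arbitrary example
U, s, Vt = np.linalg.svd(A)                      # A = U Sigma V*

assert np.allclose(U @ np.diag(s) @ Vt, A)
# sigma_j^2 are the eigenvalues of A*A (eigvalsh returns them in ascending order)
assert np.allclose(np.sort(s**2), np.linalg.eigvalsh(A.T @ A))
# columns of U and V are orthonormal
assert np.allclose(U.T @ U, np.eye(2)) and np.allclose(Vt @ Vt.T, np.eye(2))
```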

Norms & condition numbers

x = (x1, …, xn) = column vector, p = (p1, …, pn) = row vector, A = (aij) = matrix

|| x ||1 = | x1 | + … + | xn |

|| x ||2 = √( | x1 |^2 + … + | xn |^2 )

|| x ||∞ = max{ | x1 |, …, | xn | }

|| x ||1, || x ||2, and || x ||∞ are all norms, i.e.

|| x + y || ≤ || x || + || y ||

|| αx || = | α | || x ||

|| x || ≥ 0

|| x || = 0 ⇔ x = 0

|| x ||1 and || x ||∞ are a pair of dual norms, i.e.

| px | ≤ || p ||1 || x ||∞ if p = (p1, …, pn) is a row vector. Furthermore, for each x there is a nonzero p for which equality holds.

|| x ||2 is dual to itself, i.e.

| px | ≤ || p ||2 || x ||2.  Furthermore, | x^Tx | = || x ||2^2 where x^T = transpose of x, so equality holds with p = x^T.

|| A || = max over x ≠ 0 of || Ax || / || x ||

= max of || Ax || over all x with || x || = 1

= largest stretching done by A

|| x || = || x ||∞ ⇒ || A || = max{ a1, …, an } where ai = | ai1 | + … + | ain | (largest row sum)

|| x || = || x ||1 ⇒ || A || = max{ b1, …, bn } where bj = | a1j | + … + | anj | (largest column sum)

|| x || = || x ||2 ⇒ || A || = largest singular value of A = √( largest eigenvalue of A*A )

= largest | eigenvalue | of A, if A is symmetric
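The three induced norms can be checked against NumPy's built-ins (arbitrary example matrix):

```python
import numpy as np

A = np.array([[1., -2.], [3., 4.]])              # arbitrary example

# infinity-norm: largest row sum of absolute values (here 3 + 4 = 7)
assert np.isclose(np.linalg.norm(A, np.inf), max(abs(A).sum(axis=1)))
# 1-norm: largest column sum of absolute values (here 2 + 4 = 6)
assert np.isclose(np.linalg.norm(A, 1), max(abs(A).sum(axis=0)))
# 2-norm: largest singular value
assert np.isclose(np.linalg.norm(A, 2), np.linalg.svd(A, compute_uv=False)[0])
```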

Properties of a matrix norm

|| A + B || ≤ || A || + || B ||

|| αA || = | α | || A ||

|| A || ≥ 0, || 0 || = 0

|| Ax || ≤ || A || || x ||

|| AB || ≤ || A || || B ||

κ(A) = condition number of A = || A || || A^{-1} || = (largest stretching done by A) / (smallest stretching done by A)

|| x || = || x ||2 ⇒ κ(A) = (largest singular value of A) / (smallest singular value of A)

= (largest | eigenvalue | of A) / (smallest | eigenvalue | of A) if A is symmetric

Error estimates

x = (x1, …, xn) = vector of true values, xa = vector of approximate values

A = matrix of true values, Aa = matrix of approximate values

b = Ax, ba = Aaxa

ε(x) = || x − xa || = norm of vector of errors, ρ(x) = || x − xa || / || x || = relative error in x

|| Ax − Axa || ≤ || A || || x − xa || and || A^{-1}b − A^{-1}ba || ≤ || A^{-1} || || b − ba || if Aa = A

ρ(x) ≤ κ(A) ρ(b) if Aa = A

ρ(x) ≤ ( κ(A) / ( 1 − κ(A) || A − Aa || / || A || ) ) ( || A − Aa || / || A || + || b − ba || / || b || ) in the general case
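A numerical check of the bound ρ(x) ≤ κ(A) ρ(b) for a perturbed right-hand side (arbitrary example):

```python
import numpy as np

A = np.array([[4., 1.], [1., 3.]])               # arbitrary nonsingular example
x = np.array([1., 2.])
b = A @ x
ba = b + np.array([1e-3, -1e-3])                 # perturbed right-hand side (Aa = A)
xa = np.linalg.solve(A, ba)

rho_x = np.linalg.norm(x - xa) / np.linalg.norm(x)
rho_b = np.linalg.norm(b - ba) / np.linalg.norm(b)
kappa = np.linalg.cond(A)                        # 2-norm condition number

assert rho_x <= kappa * rho_b                    # rho(x) <= kappa(A) rho(b)
```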